Updates from: 05/26/2022 01:11:04
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Quickstart Web App Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/quickstart-web-app-dotnet.md
Azure Active Directory B2C provides functionality to allow users to update their
The ASP.NET web application includes an Azure AD access token in the request to the protected web API resource to perform operations on the user's to-do list items.
-You've successfully used your Azure AD B2C user account to make an authorized call an Azure AD B2C protected web API.
+You've successfully used your Azure AD B2C user account to make an authorized call to an Azure AD B2C protected web API.
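For context, the pattern this sentence describes, attaching the Azure AD B2C-issued access token to calls against the protected web API, looks roughly like the sketch below. The quickstart's own sample is an ASP.NET web app, so this TypeScript version is purely illustrative; the endpoint URL and token variable are placeholders, not values from the quickstart.

```typescript
// Minimal sketch (not from the quickstart): calling a protected web API with the
// access token that Azure AD B2C issued for the signed-in user.
async function getToDoItems(apiEndpoint: string, accessToken: string): Promise<unknown> {
  const response = await fetch(apiEndpoint, {
    // The API validates the bearer token before returning the user's to-do list items.
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  if (!response.ok) {
    throw new Error(`API call failed with status ${response.status}`);
  }
  return response.json();
}
```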
## Next steps
active-directory Scenario Spa Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-spa-acquire-token.md
For success and failure of the silent token acquisition, MSAL Angular provides e
import { MsalBroadcastService } from '@azure/msal-angular'; import { EventMessage, EventType } from '@azure/msal-browser';
+import { filter, Subject, takeUntil } from 'rxjs';
+ // In app.component.ts export class AppComponent implements OnInit { private readonly _destroying$ = new Subject<void>();
For success and failure of the silent token acquisition, MSAL Angular provides c
```javascript // In app.component.ts ngOnInit() {
- this.subscription= this.broadcastService.subscribe("msal:acquireTokenFailure", (payload) => {
+ this.subscription = this.broadcastService.subscribe("msal:acquireTokenFailure", (payload) => {
}); } ngOnDestroy() {
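Pieced together from the added rxjs imports and the `_destroying$` subject shown above, the event-subscription pattern this change points toward looks roughly like the following sketch. The constructor injection and the logging inside the subscription are assumptions for illustration, not part of the diff.

```typescript
// In app.component.ts (sketch): subscribe to MSAL Angular events through msalSubject$
// instead of the older string-based broadcastService.subscribe API.
import { Component, OnDestroy, OnInit } from '@angular/core';
import { MsalBroadcastService } from '@azure/msal-angular';
import { EventMessage, EventType } from '@azure/msal-browser';
import { filter, Subject, takeUntil } from 'rxjs';

@Component({ selector: 'app-root', templateUrl: './app.component.html' })
export class AppComponent implements OnInit, OnDestroy {
  private readonly _destroying$ = new Subject<void>();

  constructor(private msalBroadcastService: MsalBroadcastService) {}

  ngOnInit(): void {
    this.msalBroadcastService.msalSubject$
      .pipe(
        // React only to failed token acquisitions.
        filter((msg: EventMessage) => msg.eventType === EventType.ACQUIRE_TOKEN_FAILURE),
        // Complete the subscription when the component is destroyed.
        takeUntil(this._destroying$)
      )
      .subscribe((msg: EventMessage) => {
        console.log('Token acquisition failed:', msg.error);
      });
  }

  ngOnDestroy(): void {
    this._destroying$.next(undefined);
    this._destroying$.complete();
  }
}
```

The `filter` on `EventType.ACQUIRE_TOKEN_FAILURE` mirrors the older `msal:acquireTokenFailure` subscription, and `takeUntil(this._destroying$)` replaces the manual unsubscribe in `ngOnDestroy`.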
You can use optional claims for the following purposes:
To request optional claims in `IdToken`, you can send a stringified claims object to the `claimsRequest` field of the `AuthenticationParameters.ts` class. ```javascript
-"optionalClaims":
- {
- "idToken": [
- {
- "name": "auth_time",
- "essential": true
- }
- ],
-
+var claims = {
+ optionalClaims:
+ {
+ idToken: [
+ {
+ name: "auth_time",
+ essential: true
+ }
+ ],
+ }
+};
+
var request = { scopes: ["user.read"], claimsRequest: JSON.stringify(claims)
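The `request` literal above is shown only partially here. As a rough end-to-end sketch, the stringified claims object is passed along with a token request, for example as follows with MSAL.js 1.x (where the `claimsRequest` field and `AuthenticationParameters` live); the client ID and the fallback-to-popup behavior are assumptions for illustration.

```typescript
// Sketch only: requesting the optional auth_time claim with an MSAL.js 1.x token request.
import { UserAgentApplication } from 'msal';

const msalInstance = new UserAgentApplication({
  auth: { clientId: '<your-client-id>' }, // placeholder
});

// Same shape as the claims object defined above.
const claims = {
  optionalClaims: { idToken: [{ name: 'auth_time', essential: true }] },
};

const request = {
  scopes: ['user.read'],
  // Ask Azure AD to include the optional claim in the ID token.
  claimsRequest: JSON.stringify(claims),
};

// Try silent acquisition first and fall back to an interactive prompt on failure.
msalInstance
  .acquireTokenSilent(request)
  .catch(() => msalInstance.acquireTokenPopup(request))
  .then((response) => console.log('ID token claims:', response.idTokenClaims));
```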
active-directory Cross Cloud Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-cloud-settings.md
After each organization has completed these steps, Azure AD B2B collaboration be
In your Microsoft cloud settings, enable the Microsoft Azure cloud you want to collaborate with.
+> [!NOTE]
+> The admin experience is currently still deploying to national clouds. To access the admin experience in Microsoft Azure Government or Microsoft Azure China, you can use these links:
+>
> Microsoft Azure Government - https://aka.ms/cloudsettingsusgov
+>
> Microsoft Azure China - https://aka.ms/cloudsettingschina
+ 1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator or Security administrator account. Then open the **Azure Active Directory** service. 1. Select **External Identities**, and then select **Cross-tenant access settings (Preview)**.
-1. Select **Cross cloud settings**.
+1. Select **Microsoft cloud settings (Preview)**.
1. Select the checkboxes next to the external Microsoft Azure clouds you want to enable. ![Screenshot showing Microsoft cloud settings.](media/cross-cloud-settings/cross-cloud-settings.png)
active-directory Cross Tenant Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-overview.md
To set up B2B collaboration, both organizations configure their Microsoft cloud
For configuration steps, see [Configure Microsoft cloud settings for B2B collaboration (Preview)](cross-cloud-settings.md).
+> [!NOTE]
+> The admin experience is currently still deploying to national clouds. To access the admin experience in Microsoft Azure Government or Microsoft Azure China, you can use these links:
+>
> Microsoft Azure Government - https://aka.ms/cloudsettingsusgov
+>
> Microsoft Azure China - https://aka.ms/cloudsettingschina
+ ### Default settings in cross-cloud scenarios To collaborate with a partner tenant in a different Microsoft Azure cloud, both organizations need to mutually enable B2B collaboration with each other. The first step is to enable the partner's cloud in your cross-tenant settings. When you first enable another cloud, B2B collaboration is blocked for all tenants in that cloud. You need to add the tenant you want to collaborate with to your Organizational settings, and at that point your default settings go into effect for that tenant only. You can allow the default settings to remain in effect, or you can modify the organizational settings for the tenant.
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
With a recent improvement, Smart Lockout now synchronizes the lockout state acro
-### Public Preview - Enabling customization capabilities for the Self-Service Password Reset (SSPR) hyperlinks, footer hyperlinks and browser icons in Company Branding.
-
-**Type:** New feature
-**Service category:** Authentications (Logins)
-**Product capability:** User Authentication
-
-Updating the Company Branding functionality on the Azure AD/Microsoft 365 sign-in experience to allow customizing Self Service Password Reset (SSPR) hyperlinks, footer hyperlinks and browser icon. For more information, see: [Add branding to your organization's Azure Active Directory sign-in page](customize-branding.md).
-- ### Public Preview - Integration of Microsoft 365 App Certification details into AAD UX and Consent Experiences
Microsoft 365 Certification status for an app is now available in Azure AD conse
-### Public Preview - Organizations can replace all references to Microsoft on the AAD auth experience
-
-**Type:** New feature
-**Service category:** Authentications (Logins)
-**Product capability:** User Authentication
-
-Updating the Company Branding functionality on the Azure AD/Microsoft 365 sign-in experience to allow customizing Self Service Password Reset (SSPR) hyperlinks, footer hyperlinks and browser icon. For more information, see: [Add branding to your organization's Azure Active Directory sign-in page](customize-branding.md).
-- ### Public preview - Use Azure AD access reviews to review access of B2B direct connect users in Teams shared channels
active-directory Datawiza With Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/datawiza-with-azure-ad.md
Title: Secure hybrid access with Datawiza
-description: In this tutorial, learn how to integrate Datawiza with Azure AD for secure hybrid access
+description: Learn how to integrate Datawiza with Azure AD. See how to use Datawiza and Azure AD to authenticate users and give them access to on-premises and cloud apps.
Previously updated : 8/27/2021 Last updated : 05/19/2022 + # Tutorial: Configure Datawiza with Azure Active Directory for secure hybrid access In this sample tutorial, learn how to integrate Azure Active Directory (Azure AD) with [Datawiza](https://www.datawiza.com/) for secure hybrid access.
-Datawiza's [Datawiza Access Broker
-(DAB)](https://www.datawiza.com/access-broker) extends Azure AD to enable Single Sign-on (SSO) and granular access controls to protect on-premise and cloud-hosted applications, such as Oracle E-Business Suite, Microsoft IIS, and SAP.
+Datawiza's [Datawiza Access Broker (DAB)](https://www.datawiza.com/access-broker) extends Azure AD to enable single sign-on (SSO) and provide granular access controls to protect on-premises and cloud-hosted applications, such as Oracle E-Business Suite, Microsoft IIS, and SAP.
-Using this solution enterprises can quickly transition from legacy Web Access Managers (WAMs), such as Symantec SiteMinder, NetIQ, Oracle, and IBM to Azure AD without rewriting applications. Enterprises can also use Datawiza as a no-code or low-code solution to integrate new applications to Azure AD. This saves engineering time, reduces cost significantly and delivers the project in a secured manner.
+By using this solution, enterprises can quickly transition from legacy web access managers (WAMs), such as Symantec SiteMinder, NetIQ, Oracle, and IBM, to Azure AD without rewriting applications. Enterprises can also use Datawiza as a no-code or low-code solution to integrate new applications to Azure AD. This approach saves engineering time, reduces cost significantly, and delivers the project in a secured manner.
## Prerequisites
-To get started, you'll need:
+To get started, you need:
- An Azure subscription. If you don\'t have a subscription, you can get a [trial account](https://azure.microsoft.com/free/). - An [Azure AD tenant](../fundamentals/active-directory-access-create-new-tenant.md) that's linked to your Azure subscription. -- [Docker](https://docs.docker.com/get-docker/) and
-[docker-compose](https://docs.docker.com/compose/install/)
-are required to run DAB. Your applications can run on any platform, such as the virtual machine and bare metal.
+- [Docker](https://docs.docker.com/get-docker/) and [docker-compose](https://docs.docker.com/compose/install/), which are required to run DAB. Your applications can run on any platform, such as a virtual machine and bare metal.
-- An application that you'll transition from a legacy identity system to Azure AD. In this example, DAB is deployed on the same server where the application is. The application will run on localhost: 3001 and DAB proxies traffic to the application via localhost: 9772. The traffic to the application will reach DAB first and then be proxied to the application.
+- An application that you'll transition from a legacy identity system to Azure AD. In this example, DAB is deployed on the same server as the application. The application runs on localhost:3001, and DAB proxies traffic to the application via localhost:9772. The traffic to the application reaches DAB first and is then proxied to the application.
## Scenario description Datawiza integration includes the following components: -- [Azure AD](../fundamentals/active-directory-whatis.md) - Microsoft's cloud-based identity and access management service, which helps users sign in and access external and internal resources.
+- [Azure AD](../fundamentals/active-directory-whatis.md) - A cloud-based identity and access management service from Microsoft. Azure AD helps users sign in and access external and internal resources.
-- Datawiza Access Broker (DAB) - The service user sign on and transparently passes identity to applications through HTTP headers.
+- Datawiza Access Broker (DAB) - The service that users sign on to. DAB transparently passes identity information to applications through HTTP headers.
-- Datawiza Cloud Management Console (DCMC) - A centralized management console that manages DAB. DCMC provides UI and RESTful APIs for administrators to manage the configurations of DAB and its access control policies.
+- Datawiza Cloud Management Console (DCMC) - A centralized management console that manages DAB. DCMC provides UI and RESTful APIs for administrators to manage the DAB configuration and access control policies.
The following architecture diagram shows the implementation.
-![image shows architecture diagram](./media/datawiza-with-azure-active-directory/datawiza-architecture-diagram.png)
+![Architecture diagram that shows the authentication process that gives a user access to an on-premises application.](./media/datawiza-with-azure-active-directory/datawiza-architecture-diagram.png)
-|Steps| Description|
+|Step| Description|
|:-|:--|
-| 1. | The user makes a request to access the on-premises or cloud-hosted application. DAB proxies the request made by the user to the application.|
-| 2. |The DAB checks the user's authentication state. If it doesn't receive a session token, or the supplied session token is invalid, then it sends the user to Azure AD for authentication.|
+| 1. | The user makes a request to access the on-premises or cloud-hosted application. DAB proxies the request made by the user to the application.|
+| 2. | DAB checks the user's authentication state. If it doesn't receive a session token, or the supplied session token is invalid, it sends the user to Azure AD for authentication.|
| 3. | Azure AD sends the user request to the endpoint specified during the DAB application's registration in the Azure AD tenant.|
-| 4. | The DAB evaluates access policies and calculates attribute values to be included in HTTP headers forwarded to the application. During this step, the DAB may call out to the Identity provider to retrieve the information needed to set the header values correctly. The DAB sets the header values and sends the request to the application. |
-| 5. | The user is now authenticated and has access to the application.|
+| 4. | DAB evaluates access policies and calculates attribute values to be included in HTTP headers forwarded to the application. During this step, DAB may call out to the identity provider to retrieve the information needed to set the header values correctly. DAB sets the header values and sends the request to the application. |
+| 5. | The user is authenticated and has access to the application.|
## Onboard with Datawiza
-To integrate your on-premises or cloud-hosted application with Azure AD, login to [Datawiza Cloud Management
+To integrate your on-premises or cloud-hosted application with Azure AD, sign in to [Datawiza Cloud Management
Console](https://console.datawiza.com/) (DCMC). ## Create an application on DCMC
-[Create an application](https://docs.datawiza.com/step-by-step/step2.html) and generate a key pair of `PROVISIONING_KEY` and `PROVISIONING_SECRET` for the application on the DCMC.
+In the next step, you create an application on DCMC and generate a key pair for the app. The key pair consists of a `PROVISIONING_KEY` and `PROVISIONING_SECRET`. To create the app and generate the key pair, follow the instructions in [Datawiza Cloud Management Console](https://docs.datawiza.com/step-by-step/step2.html).
-For Azure AD, Datawiza offers a convenient [One click integration](https://docs.datawiza.com/tutorial/web-app-azure-one-click.html). This method to integrate Azure AD with DCMC can create an application registration on your behalf in your Azure AD tenant.
+For Azure AD, Datawiza offers a convenient [one-click integration](https://docs.datawiza.com/tutorial/web-app-azure-one-click.html). This method to integrate Azure AD with DCMC can create an application registration on your behalf in your Azure AD tenant.
-![image shows configure idp](./media/datawiza-with-azure-active-directory/configure-idp.png)
+![Screenshot of the Datawiza Configure I D P page. Boxes for name, protocol, and other values are visible. An automatic generator option is turned on.](./media/datawiza-with-azure-active-directory/configure-idp.png)
-Instead, if you want to use an existing web application in your Azure AD tenant, you can disable the option and populate the fields of the form. You'll need the tenant ID, client ID, and client secret. [Create a web application and get these values in your tenant](https://docs.datawiza.com/idp/azure.html).
+Instead, if you want to use an existing web application in your Azure AD tenant, you can disable the option and populate the fields of the form. You need the tenant ID, client ID, and client secret. For more information about creating a web application and getting these values, see [Microsoft Azure AD in the Datawiza documentation](https://docs.datawiza.com/idp/azure.html).
-![image shows configure idp using form](./media/datawiza-with-azure-active-directory/use-form.png)
+![Screenshot of the Datawiza Configure I D P page. Boxes for name, protocol, and other values are visible. An automatic generator option is turned off.](./media/datawiza-with-azure-active-directory/use-form.png)
## Run DAB with a header-based application
-1. You can use either Docker or Kubernetes to run DAB. The docker image is needed for users to create a sample header-based application. [Configure DAB and SSO
-integration](https://docs.datawiza.com/step-by-step/step3.html). [Deploy DAB with Kubernetes](https://docs.datawiza.com/tutorial/web-app-AKS.html). A sample docker image `docker-compose.yml` file is provided for you to download and use. [Log in to the container registry](https://docs.datawiza.com/step-by-step/step3.html#important-step) to download the images of DAB and the header-based application.
+You can use either Docker or Kubernetes to run DAB. The Docker image is needed to create a sample header-based application.
- ```yaml
-
+To run DAB with a header-based application, follow these steps:
+
+1. Use either Docker or Kubernetes to run DAB:
+
+ - For Docker-specific instructions, see [Deploy Datawiza Access Broker With Your App](https://docs.datawiza.com/step-by-step/step3.html).
+ - For Kubernetes-specific instructions, see [Deploy Datawiza Access Broker with a Web App using Kubernetes](https://docs.datawiza.com/tutorial/web-app-AKS.html).
+
+   You can use the following sample docker-compose.yml file:
+
+ ```yaml
+
datawiza-access-broker: image: registry.gitlab.com/datawiza/access-broker container_name: datawiza-access-broker
integration](https://docs.datawiza.com/step-by-step/step3.html). [Deploy DAB wit
header-based-app: image: registry.gitlab.com/datawiza/header-based-app restart: always
- ports:
- - "3001:3001"
+ ports:
+ - "3001:3001"
```
-2. After executing `docker-compose -f docker-compose.yml up`, the
-header-based application should have SSO enabled with Azure AD. Open a browser and type in `http://localhost:9772/`.
+1. To sign in to the container registry and download the images of DAB and the header-based application, follow the instructions in [Important Step](https://docs.datawiza.com/step-by-step/step3.html#important-step).
-3. An Azure AD login page will show up.
+1. Run the following command:
-## Pass user attributes to the header-based application
+ `docker-compose -f docker-compose.yml up`
-1. DAB gets user attributes from IdP and can pass the user attributes to the application via header or cookie. See the instructions on how to [pass user attributes](https://docs.datawiza.com/step-by-step/step4.html) such as email address, firstname, and lastname to the header-based application.
+ The header-based application should now have SSO enabled with Azure AD.
-2. After successfully configuring the user attributes, you should see the green check sign for each of the user attributes.
+1. In a browser, go to `http://localhost:9772/`. An Azure AD sign-in page appears.
- ![image shows datawiza application home page](./media/datawiza-with-azure-active-directory/datawiza-application-home-page.png)
+## Pass user attributes to the header-based application
-## Test the flow
+DAB gets user attributes from Azure AD and can pass these attributes to the application via a header or cookie.
+
+To pass user attributes such as an email address, a first name, and a last name to the header-based application, follow the instructions in [Pass User Attributes](https://docs.datawiza.com/step-by-step/step4.html).
-1. Navigate to the application URL.
+After successfully configuring the user attributes, you should see a green check mark next to each attribute.
-2. The DAB should redirect to the Azure AD login page.
+![Screenshot that shows the Datawiza application home page. Green check marks are visible next to the host, email, firstname, and lastname attributes.](./media/datawiza-with-azure-active-directory/datawiza-application-home-page.png)
+
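As a rough illustration of what the header-based application sees after DAB forwards these attributes, here is a minimal TypeScript (Express) sketch. The exact header names depend on how each attribute is configured in the DCMC, so the names used below are assumptions.

```typescript
// Sketch of a header-based app consuming attributes forwarded by DAB.
// The header names are examples; the actual names are whatever you configure
// for each attribute in the Datawiza Cloud Management Console.
import express from 'express';

const app = express();

app.get('/', (req, res) => {
  // DAB authenticates the user against Azure AD, proxies the request,
  // and adds the configured attributes as HTTP headers.
  const email = req.header('email');
  const firstName = req.header('firstname');
  const lastName = req.header('lastname');

  res.send(`Signed in as ${firstName} ${lastName} (${email})`);
});

// The sample app in this tutorial listens on port 3001 behind DAB (port 9772).
app.listen(3001);
```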
+## Test the flow
-3. After successfully authenticating, you should be redirected to DAB.
+1. Go to the application URL. DAB should redirect you to the Azure AD sign-in page.
-4. The DAB evaluates policies, calculates headers, and sends the user to the upstream application.
+1. After successfully authenticating, you should be redirected to DAB.
-5. Your requested application should show up.
+DAB evaluates policies, calculates headers, and sends you to the upstream application. Your requested application should appear.
## Next steps
active-directory Migrate Okta Federation To Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-okta-federation-to-azure-active-directory.md
Title: Tutorial to migrate Okta federation to Azure Active Directory-managed authentication
+ Title: Migrate Okta federation to Azure Active Directory
-description: Learn how to migrate your Okta federated applications to Azure AD-managed authentication.
+description: Learn how to migrate your Okta-federated applications to managed authentication under Azure AD. See how to migrate federation in a staged manner.
- Previously updated : 09/01/2021 Last updated : 05/19/2022 + # Tutorial: Migrate Okta federation to Azure Active Directory-managed authentication
Seamless SSO can be deployed to password hash synchronization or pass-through au
Follow the [deployment guide](../hybrid/how-to-connect-sso-quick-start.md#step-1-check-the-prerequisites) to ensure that you deploy all necessary prerequisites of seamless SSO to your users.
-For our example, we'll configure password hash synchronization and seamless SSO.
+For this example, you configure password hash synchronization and seamless SSO.
### Configure Azure AD Connect for password hash synchronization and seamless SSO
Follow these steps to configure Azure AD Connect for password hash synchronizati
1. On your Azure AD Connect server, open the **Azure AD Connect** app and then select **Configure**.
- ![Screenshot that shows the Azure A D icon and Configure button.](media/migrate-okta-federation-to-azure-active-directory/configure-azure-ad.png)
+ ![Screenshot that shows the Azure A D icon and the Configure button in the Azure A D Connect app.](media/migrate-okta-federation-to-azure-active-directory/configure-azure-ad.png)
-1. Select **Change user sign-in** > **Next**.
+1. Select **Change user sign-in**, and then select **Next**.
- ![Screenshot that shows the page for changing user sign-in.](media/migrate-okta-federation-to-azure-active-directory/change-user-signin.png)
+ ![Screenshot of the Azure A D Connect app that shows the page for changing user sign-in.](media/migrate-okta-federation-to-azure-active-directory/change-user-signin.png)
1. Enter your global administrator credentials.
- ![Screenshot that shows where to enter global admin credentials.](media/migrate-okta-federation-to-azure-active-directory/global-admin-credentials.png)
+ ![Screenshot of the Azure A D Connect app that shows where to enter global admin credentials.](media/migrate-okta-federation-to-azure-active-directory/global-admin-credentials.png)
1. Currently, the server is configured for federation with Okta. Change the selection to **Password Hash Synchronization**. Then select **Enable single sign-on**.
Follow these steps to enable seamless SSO:
1. Enter the domain administrator credentials for the local on-premises system. Then select **Next**.
- ![Screenshot that shows settings for user sign-in.](media/migrate-okta-federation-to-azure-active-directory/domain-admin-credentials.png)
+ ![Screenshot of the Azure A D Connect app that shows settings for user sign-in.](media/migrate-okta-federation-to-azure-active-directory/domain-admin-credentials.png)
1. On the final page, select **Configure** to update the Azure AD Connect server.
- ![Screenshot that shows the configuration page.](media/migrate-okta-federation-to-azure-active-directory/update-azure-ad-connect-server.png)
+ ![Screenshot of the Ready to configure page of the Azure A D Connect app.](media/migrate-okta-federation-to-azure-active-directory/update-azure-ad-connect-server.png)
1. Ignore the warning for hybrid Azure AD join for now. You'll reconfigure the device options after you disable federation from Okta.
- ![Screenshot that shows the link to configure device options.](media/migrate-okta-federation-to-azure-active-directory/reconfigure-device-options.png)
+ ![Screenshot of the Azure A D Connect app. A warning about the hybrid Azure A D join is visible. A link for configuring device options is also visible.](media/migrate-okta-federation-to-azure-active-directory/reconfigure-device-options.png)
## Configure staged rollout features
After you enable password hash sync and seamless SSO on the Azure AD Connect ser
1. In the [Azure portal](https://portal.azure.com/#home), select **View** or **Manage Azure Active Directory**.
- ![Screenshot that shows the Azure portal.](media/migrate-okta-federation-to-azure-active-directory/azure-portal.png)
+ ![Screenshot that shows the Azure portal. A welcome message is visible.](media/migrate-okta-federation-to-azure-active-directory/azure-portal.png)
1. On the **Azure Active Directory** menu, select **Azure AD Connect**. Then confirm that **Password Hash Sync** is enabled in the tenant.
After you enable password hash sync and seamless SSO on the Azure AD Connect ser
1. Your **Password Hash Sync** setting might have changed to **On** after the server was configured. If the setting isn't enabled, enable it now.
- Notice that **Seamless single sign-on** is set to **Off**. If you attempt to enable it, you'll get an error because it's already enabled for users in the tenant.
+ Notice that **Seamless single sign-on** is set to **Off**. If you attempt to enable it, you get an error because it's already enabled for users in the tenant.
1. Select **Manage groups**.
- ![Screenshot that shows the button for managing groups.](media/migrate-okta-federation-to-azure-active-directory/password-hash-sync.png)
+ ![Screenshot of the Enable staged rollout features page in the Azure portal. A Manage groups button is visible.](media/migrate-okta-federation-to-azure-active-directory/password-hash-sync.png)
-Follow the instructions to add a group to the password hash sync rollout. In the following example, the security group starts with 10 members.
+1. Follow the instructions to add a group to the password hash sync rollout. In the following example, the security group starts with 10 members.
-![Screenshot that shows an example of a security group.](media/migrate-okta-federation-to-azure-active-directory/example-security-group.png)
+ ![Screenshot of the Manage groups for Password Hash Sync page in the Azure portal. A group is visible in a table.](media/migrate-okta-federation-to-azure-active-directory/example-security-group.png)
-After you add the group, wait for about 30 minutes while the feature takes effect in your tenant. When the feature has taken effect, your users will no longer be redirected to Okta when they attempt to access Office 365 services.
+1. After you add the group, wait for about 30 minutes while the feature takes effect in your tenant. When the feature has taken effect, your users are no longer redirected to Okta when they attempt to access Office 365 services.
The staged rollout feature has some unsupported scenarios: -- Legacy authentication such as POP3 and SMTP aren't supported.
+- Legacy authentication protocols such as POP3 and SMTP aren't supported.
- If you've configured hybrid Azure AD join for use with Okta, all the hybrid Azure AD join flows go to Okta until the domain is defederated. A sign-on policy should remain in Okta to allow legacy authentication for hybrid Azure AD join Windows clients. ## Create an Okta app in Azure AD
To configure the enterprise application registration for Okta:
1. On the left menu, under **Manage**, select **Enterprise applications**.
- ![Screenshot that shows the "Enterprise applications" selection.](media/migrate-okta-federation-to-azure-active-directory/enterprise-application.png)
+ ![Screenshot that shows the left menu of the Azure portal. Enterprise applications is visible.](media/migrate-okta-federation-to-azure-active-directory/enterprise-application.png)
1. On the **All applications** menu, select **New application**.
- ![Screenshot that shows the "New application" selection.](media/migrate-okta-federation-to-azure-active-directory/new-application.png)
+ ![Screenshot that shows the All applications page in the Azure portal. A new application is visible.](media/migrate-okta-federation-to-azure-active-directory/new-application.png)
1. Select **Create your own application**. On the menu that opens, name the Okta app and select **Register an application you're working on to integrate with Azure AD**. Then select **Create**.
- :::image type="content" source="media/migrate-okta-federation-to-azure-active-directory/register-application.png" alt-text="Screenshot that shows how to register an application." lightbox="media/migrate-okta-federation-to-azure-active-directory/register-application.png":::
+ :::image type="content" source="media/migrate-okta-federation-to-azure-active-directory/register-application.png" alt-text="Screenshot that shows the Create your own application menu. The app name is visible. The option to integrate with Azure A D is turned on." lightbox="media/migrate-okta-federation-to-azure-active-directory/register-application.png":::
-1. Select **Accounts in any organizational directory (Any Azure AD Directory - Multitenant)** > **Register**.
+1. Select **Accounts in any organizational directory (Any Azure AD Directory - Multitenant)**, and then select **Register**.
![Screenshot that shows how to register an application and change the application account.](media/migrate-okta-federation-to-azure-active-directory/register-change-application.png) 1. On the Azure AD menu, select **App registrations**. Then open the newly created registration.
- ![Screenshot that shows the new app registration.](media/migrate-okta-federation-to-azure-active-directory/app-registration.png)
+ ![Screenshot that shows the App registrations page in the Azure portal. The new app registration is visible.](media/migrate-okta-federation-to-azure-active-directory/app-registration.png)
1. Record your tenant ID and application ID. >[!Note] >You'll need the tenant ID and application ID to configure the identity provider in Okta.
- ![Screenshot that shows the tenant ID and application ID.](media/migrate-okta-federation-to-azure-active-directory/record-ids.png)
+ ![Screenshot that shows the Okta Application Access page in the Azure portal. The tenant I D and application I D are called out.](media/migrate-okta-federation-to-azure-active-directory/record-ids.png)
1. On the left menu, select **Certificates & secrets**. Then select **New client secret**. Give the secret a generic name and set its expiration date.
To configure the enterprise application registration for Okta:
>[!NOTE] >The value and ID aren't shown later. If you fail to record this information now, you'll have to regenerate a secret.
- ![Screenshot that shows where to record the secret's value and I D.](media/migrate-okta-federation-to-azure-active-directory/record-secrets.png)
+ ![Screenshot of the Certificates and secrets page. The value and I D of the secret are visible.](media/migrate-okta-federation-to-azure-active-directory/record-secrets.png)
1. On the left menu, select **API permissions**. Grant the application access to the OpenID Connect (OIDC) stack. 1. Select **Add a permission** > **Microsoft Graph** > **Delegated permissions**.
- :::image type="content" source="media/migrate-okta-federation-to-azure-active-directory/delegated-permissions.png" alt-text="Screenshot that shows delegated permissions." lightbox="media/migrate-okta-federation-to-azure-active-directory/delegated-permissions.png":::
+ :::image type="content" source="media/migrate-okta-federation-to-azure-active-directory/delegated-permissions.png" alt-text="Screenshot that shows the A P I permissions page of the Azure portal. A delegated permission for reading is visible." lightbox="media/migrate-okta-federation-to-azure-active-directory/delegated-permissions.png":::
1. In the OpenID permissions section, add **email**, **openid**, and **profile**. Then select **Add permissions**.
- :::image type="content" source="media/migrate-okta-federation-to-azure-active-directory/add-permissions.png" alt-text="Screenshot that shows how to add permissions." lightbox="media/migrate-okta-federation-to-azure-active-directory/add-permissions.png":::
+ :::image type="content" source="media/migrate-okta-federation-to-azure-active-directory/add-permissions.png" alt-text="Screenshot that shows the A P I permissions page of the Azure portal. Permissions for email, openid, profile, and reading are visible." lightbox="media/migrate-okta-federation-to-azure-active-directory/add-permissions.png":::
1. Select **Grant admin consent for \<tenant domain name>** and wait until the **Granted** status appears.
- ![Screenshot that shows granted consent.](media/migrate-okta-federation-to-azure-active-directory/grant-consent.png)
+ ![Screenshot of the A P I permissions page that shows a message about granted consent.](media/migrate-okta-federation-to-azure-active-directory/grant-consent.png)
1. On the left menu, select **Branding**. For **Home page URL**, add your user's application home page.
- ![Screenshot that shows how to add branding.](media/migrate-okta-federation-to-azure-active-directory/add-branding.png)
+ ![Screenshot of the Branding page in the Azure portal. Several input boxes are visible, including one for the home page U R L.](media/migrate-okta-federation-to-azure-active-directory/add-branding.png)
1. In the Okta administration portal, select **Security** > **Identity Providers** to add a new identity provider. Select **Add Microsoft**.
- ![Screenshot that shows how to add the identity provider.](media/migrate-okta-federation-to-azure-active-directory/configure-idp.png)
+ ![Screenshot of the Okta administration portal. Add Microsoft is visible in the Add Identity Provider list.](media/migrate-okta-federation-to-azure-active-directory/configure-idp.png)
1. On the **Identity Provider** page, copy your application ID to the **Client ID** field. Copy the client secret to the **Client Secret** field.
-1. Select **Show Advanced Settings**. By default, this configuration will tie the user principal name (UPN) in Okta to the UPN in Azure AD for reverse-federation access.
+1. Select **Show Advanced Settings**. By default, this configuration ties the user principal name (UPN) in Okta to the UPN in Azure AD for reverse-federation access.
>[!IMPORTANT] >If your UPNs in Okta and Azure AD don't match, select an attribute that's common between users.
-1. Finish your selections for autoprovisioning. By default, if a user doesn't match in Okta, the system will attempt to provision the user in Azure AD. If you've migrated provisioning away from Okta, select **Redirect to Okta sign-in page**.
+1. Finish your selections for autoprovisioning. By default, if no match is found for an Okta user, the system attempts to provision the user in Azure AD. If you've migrated provisioning away from Okta, select **Redirect to Okta sign-in page**.
- ![Screenshot that shows the option for redirecting to the Okta sign-in page.](media/migrate-okta-federation-to-azure-active-directory/redirect-okta.png)
+ ![Screenshot of the General Settings page in the Okta admin portal. The option for redirecting to the Okta sign-in page is visible.](media/migrate-okta-federation-to-azure-active-directory/redirect-okta.png)
Now that you've created the identity provider (IDP), you need to send users to the correct IDP.
To configure the enterprise application registration for Okta:
In this example, the **Division** attribute is unused on all Okta profiles, so it's a good choice for IDP routing.
- ![Screenshot that shows the division attribute for I D P routing.](media/migrate-okta-federation-to-azure-active-directory/division-idp-routing.png)
+ ![Screenshot of the Edit Rule page in the Okta admin portal. A rule definition that involves the division attribute is visible.](media/migrate-okta-federation-to-azure-active-directory/division-idp-routing.png)
1. Now that you've added the routing rule, record the redirect URI so you can add it to the application registration.
To configure the enterprise application registration for Okta:
1. On your application registration, on the left menu, select **Authentication**. Then select **Add a platform** > **Web**.
- :::image type="content" source="media/migrate-okta-federation-to-azure-active-directory/add-platform.png" alt-text="Screenshot that shows how to add a web platform." lightbox="media/migrate-okta-federation-to-azure-active-directory/add-platform.png":::
+ :::image type="content" source="media/migrate-okta-federation-to-azure-active-directory/add-platform.png" alt-text="Screenshot of the Authentication page in the Azure portal. Add a platform and a Configure platforms menu are visible." lightbox="media/migrate-okta-federation-to-azure-active-directory/add-platform.png":::
1. Add the redirect URI that you recorded in the IDP in Okta. Then select **Access tokens** and **ID tokens**.
- ![Screenshot that shows Okta access and I D tokens.](media/migrate-okta-federation-to-azure-active-directory/access-id-tokens.png)
+ ![Screenshot of the Configure Web page in the Azure portal. A redirect U R I is visible. The access and I D tokens are selected.](media/migrate-okta-federation-to-azure-active-directory/access-id-tokens.png)
1. In the admin console, select **Directory** > **People**. Select your first test user to edit the profile. 1. In the profile, add **ToAzureAD** as in the following image. Then select **Save**.
- ![Screenshot that shows how to edit a profile.](media/migrate-okta-federation-to-azure-active-directory/profile-editing.png)
+ ![Screenshot of the Okta admin portal. Profile settings are visible, and the Division box contains ToAzureAD.](media/migrate-okta-federation-to-azure-active-directory/profile-editing.png)
-1. Try to sign in to the [Microsoft 356 portal](https://portal.office.com) as the modified user. If your user isn't a part of the managed authentication pilot, you'll notice that your action loops. To exit the loop, add the user to the managed authentication experience.
+1. Try to sign in to the [Microsoft 365 portal](https://portal.office.com) as the modified user. If your user isn't part of the managed authentication pilot, your action enters a loop. To exit the loop, add the user to the managed authentication experience.
## Test Okta app access on pilot members
-After you configure the Okta app in Azure AD and you configure the IDP in the Okta portal, you must assign the application to users.
+After you configure the Okta app in Azure AD and you configure the IDP in the Okta portal, assign the application to users.
1. In the Azure portal, select **Azure Active Directory** > **Enterprise applications**.
After you configure the Okta app in Azure AD and you configure the IDP in the Ok
>[!NOTE] >You can add users and groups only from the **Enterprise applications** page. You can't add users from the **App registrations** menu.
- ![Screenshot that shows how to add a group.](media/migrate-okta-federation-to-azure-active-directory/add-group.png)
+ ![Screenshot of the Users and groups page of the Azure portal. A group called Managed Authentication Staging Group is visible.](media/migrate-okta-federation-to-azure-active-directory/add-group.png)
1. After about 15 minutes, sign in as one of the managed authentication pilot users and go to [My Apps](https://myapplications.microsoft.com).
- ![Screenshot that shows the My Apps gallery.](media/migrate-okta-federation-to-azure-active-directory/my-applications.png)
+ ![Screenshot that shows the My Apps gallery. An icon for Okta Application Access is visible.](media/migrate-okta-federation-to-azure-active-directory/my-applications.png)
1. Select the **Okta Application Access** tile to return the user to the Okta home page.
-## Test-managed authentication on pilot members
+## Test managed authentication on pilot members
After you configure the Okta reverse-federation app, have your users conduct full testing on the managed authentication experience. We recommend that you set up company branding to help your users recognize the tenant they're signing in to. For more information, see [Add branding to your organization's Azure AD sign-in page](../fundamentals/customize-branding.md).
active-directory Migrate Okta Sync Provisioning To Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-okta-sync-provisioning-to-azure-active-directory.md
Title: Tutorial to migrate Okta sync provisioning to Azure AD Connect-based synchronization
+ Title: Migrate Okta sync provisioning to Azure AD Connect
-description: In this tutorial, you learn how to migrate your Okta sync provisioning to Azure AD Connect-based synchronization.
+description: Learn how to migrate user provisioning from Okta to Azure Active Directory (Azure AD). See how to use Azure AD Connect server or Azure AD cloud provisioning.
- Previously updated : 09/01/2021 Last updated : 05/19/2022 + # Tutorial: Migrate Okta sync provisioning to Azure AD Connect-based synchronization
-In this tutorial, you'll learn how your organization can currently migrate User provisioning from Okta to Azure Active Directory (Azure AD) and migrate either User sync or Universal sync to Azure AD Connect. This capability will enable further provisioning into Azure AD and Office 365.
+In this tutorial, you'll learn how your organization can migrate user provisioning from Okta to Azure Active Directory (Azure AD) and migrate either User Sync or Universal Sync to Azure AD Connect. This capability enables further provisioning into Azure AD and Office 365.
Migrating synchronization platforms isn't a small change. Each step of the process mentioned in this article should be validated against your own environment before you remove Azure AD Connect from staging mode or enable the Azure AD cloud provisioning agent.
Migrating synchronization platforms isn't a small change. Each step of the proce
When you switch from Okta provisioning to Azure AD, you have two choices. You can use either an Azure AD Connect server or Azure AD cloud provisioning. To understand the differences between the two, read the [comparison article from Microsoft](../cloud-sync/what-is-cloud-sync.md#comparison-between-azure-ad-connect-and-cloud-sync).
-Azure AD cloud provisioning will be the most familiar migration path for Okta customers who use Universal or User sync. The cloud provisioning agents are lightweight. They can be installed on or near domain controllers like the Okta directory sync agents. Don't install them on the same server.
+Azure AD cloud provisioning is the most familiar migration path for Okta customers who use Universal Sync or User Sync. The cloud provisioning agents are lightweight. You can install them on or near domain controllers like the Okta directory sync agents. Don't install them on the same server.
Use an Azure AD Connect server if your organization needs to take advantage of any of the following technologies when you synchronize users: - Device synchronization: Hybrid Azure AD join or Hello for Business-- Passthrough authentication-- More than 150,000-object support
+- Pass-through authentication
+- Support for more than 150,000 objects
- Support for writeback >[!NOTE]
->All prerequisites should be taken into consideration when you install Azure AD Connect or Azure AD cloud provisioning. To learn more before you continue with installation, see [Prerequisites for Azure AD Connect](../hybrid/how-to-connect-install-prerequisites.md).
+>Take all prerequisites into consideration when you install Azure AD Connect or Azure AD cloud provisioning. To learn more before you continue with installation, see [Prerequisites for Azure AD Connect](../hybrid/how-to-connect-install-prerequisites.md).
## Confirm ImmutableID attribute synchronized by Okta
-ImmutableID is the core attribute used to tie synchronized objects to their on-premises counterparts. Okta takes the Active Directory objectGUID of an on-premises object and converts it to a Base64 encoded string. Then, by default it stamps that string to the ImmutableID field in Azure AD.
+ImmutableID is the core attribute used to tie synchronized objects to their on-premises counterparts. Okta takes the Active Directory objectGUID of an on-premises object and converts it to a Base64-encoded string. By default, it then stamps that string to the ImmutableID field in Azure AD.
You can connect to Azure AD PowerShell and examine the current ImmutableID value. If you've never used the Azure AD PowerShell module, run `Install-Module AzureAD` in an administrative PowerShell session before you run the following commands:
If you already have the module, you might receive a warning to update to the lat
After the module is installed, import it and follow these steps to connect to the Azure AD service:
-1. Enter your global administrator credentials in the modern authentication window.
+1. Enter your global administrator credentials in the authentication window.
- ![Screenshot that shows import-module.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/import-module.png)
+ ![Screenshot of the Azure A D PowerShell window. The install-module, import-module, and connect commands are visible with their output.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/import-module.png)
-1. After you connect to the tenant, verify the settings for your ImmutableID values. The example shown uses Okta defaults of objectGUID to ImmutableID.
+1. After you connect to the tenant, verify the settings for your ImmutableID values. The following example uses the Okta default approach of converting the objectGUID into the ImmutableID.
- ![Screenshot that shows Okta defaults of objectGUID to ImmutableID.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/okta-default-objectid.png)
+ ![Screenshot of the Azure A D PowerShell window. The Get-AzureADUser command is visible. Its output includes the UserPrincipalName and the ImmutableId.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/okta-default-objectid.png)
-1. There are several ways to manually confirm the objectGUID to Base64 conversion on-premises. For individual validation, use this example:
+1. There are several ways to manually confirm the conversion from objectGUID to Base64 on-premises. To test an individual value, use these commands:
```PowerShell Get-ADUser onpremupn | fl objectguid
After the module is installed, import it and follow these steps to connect to th
[system.convert]::ToBase64String(([GUID]$objectGUID).ToByteArray()) ```
- ![Screenshot that shows how to manually change Okta objectGUID to ImmutableID.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/manual-objectguid.png)
+ ![Screenshot of the Azure A D PowerShell window. The commands that convert an objectGUID to an ImmutableID are visible with their output.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/manual-objectguid.png)
## Mass validation methods for objectGUID
-Before you cut over to Azure AD Connect, it's critical to validate that the ImmutableID values in Azure AD are going to exactly match their on-premises values.
+Before you move to Azure AD Connect, it's critical to validate that the ImmutableID values in Azure AD exactly match their on-premises values.
-The example will grab *all* on-premises Azure AD users and export a list of their objectGUID values and ImmutableID values already calculated to a CSV file.
+The following command gets *all* on-premises Azure AD users and exports a list of their objectGUID values and ImmutableID values already calculated to a CSV file.
-1. Run these commands in PowerShell on a domain controller on-premises:
+1. Run this command in PowerShell on an on-premises domain controller:
```PowerShell
- Get-ADUser -Filter * -Properties objectGUID | Select -Object
+ Get-ADUser -Filter * -Properties objectGUID | Select-Object
UserPrincipalName, Name, objectGUID, @{Name = 'ImmutableID'; Expression = { [system.convert]::ToBase64String(($_.objectGUID).ToByteArray()) } } | export-csv C:\Temp\OnPremIDs.csv ```
- ![Screenshot that shows domain controller on-premises commands.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/domain-controller.png)
+ ![Screenshot of a .csv file that lists sample output data. Columns include UserPrincipalName, Name, objectGUID, and ImmutableID.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/domain-controller.png)
-1. Run these commands in an Azure AD PowerShell session to gather the already synchronized values:
+1. Run this command in an Azure AD PowerShell session to list the already synchronized values:
```powershell Get-AzureADUser -all $true | Where-Object {$_.dirsyncenabled -like
The example will grab *all* on-premises Azure AD users and export a list of thei
ImmutableID | export-csv C:\\temp\\AzureADSyncedIDS.csv ```
- ![Screenshot that shows an Azure AD PowerShell session.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/azure-ad-powershell.png)
+ ![Screenshot of a .csv file that lists sample output data. Columns include UserPrincipalName, objectGUID, and ImmutableID.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/azure-ad-powershell.png)
- After you have both exports, confirm that the ImmutableID for each user matches.
+ After you have both exports, confirm that each user's ImmutableID values match.
>[!IMPORTANT] >If your ImmutableID values in the cloud don't match objectGUID values, you've modified the defaults for Okta sync. You've likely chosen another attribute to determine ImmutableID values. Before you move on to the next section, it's critical to identify which source attribute is populating ImmutableID values. Ensure that you update the attribute Okta is syncing before you disable Okta sync.
The example will grab *all* on-premises Azure AD users and export a list of thei
After you've prepared your list of source and destination targets, it's time to install an Azure AD Connect server. If you've opted to use Azure AD Connect cloud provisioning, skip this section.
-1. Continue with [downloading and installing Azure AD Connect](../hybrid/how-to-connect-install-custom.md) to your chosen server.
+1. Download and install Azure AD Connect on your chosen server by following the instructions in [Custom installation of Azure Active Directory Connect](../hybrid/how-to-connect-install-custom.md).
+
+1. In the left panel, select **Identifying users**.
-1. On the **Identifying users** page, under **Select how users should be identified with Azure AD**, select the **Choose a specific attribute** option. Then, select **mS-DS-ConsistencyGUID** if you haven't modified the Okta defaults.
+1. On the **Uniquely identifying your users** page, under **Select how users should be identified with Azure AD**, select **Choose a specific attribute**. Then select **mS-DS-ConsistencyGUID** if you haven't modified the Okta defaults.
>[!WARNING]
- >This is the most critical step on this page. Before you select **Next**, ensure that the attribute you're selecting for a source anchor is what *currently* populates your existing Azure AD users. If you select the wrong attribute, you must uninstall and reinstall Azure AD Connect to reselect this option.
+ >This step is critical. Ensure that the attribute that you select for a source anchor is what *currently* populates your existing Azure AD users. If you select the wrong attribute, you need to uninstall and reinstall Azure AD Connect to reselect this option.
- ![Screenshot that shows mS-DS-ConsistencyGuid.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/consistency-guid.png)
+ ![Screenshot of the Azure A D Connect window. The page is titled Uniquely identifying your users, and the mS-DS-ConsistencyGuid attribute is selected.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/consistency-guid.png)
+
+1. Select **Next**.
-1. On the **Configure** page, make sure to select the **Enable staging mode** checkbox. Then select **Install**.
+1. In the left panel, select **Configure**.
- ![Screenshot that shows the Enable staging mode checkbox.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/enable-staging-mode.png)
+1. On the **Ready to configure** page, select **Enable staging mode**. Then select **Install**.
+
+ ![Screenshot of the Azure A D Connect window. The page is titled Ready to configure, and the Enable staging mode checkbox is selected.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/enable-staging-mode.png)
1. After the configuration is complete, select **Exit**.
After you've prepared your list of source and destination targets, it's time to
1. Open **Synchronization Service** as an administrator.
- ![Screenshot that shows opening Synchronization Service.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/open-sync-service.png)
+ ![Screenshot that shows the Synchronization Service shortcut menus, with More and Run as administrator selected.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/open-sync-service.png)
-1. Check that **Full Synchronization** to the domain.onmicrosoft.com connector space has users displaying under the **Connectors with Flow Updates** tab.
+1. Find the **Full Synchronization** to the domain.onmicrosoft.com connector space. Check that there are users under the **Connectors with Flow Updates** tab.
- ![Screenshot that shows the Connectors with Flow Updates tab.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/connector-flow-update.png)
+ ![Screenshot of the Synchronization Service window. The Connectors with Flow Updates tab is selected.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/connector-flow-update.png)
1. Verify there are no deletions pending in the export. Select the **Connectors** tab, and then highlight the domain.onmicrosoft.com connector space. Then select **Search Connector Space**.
- ![Screenshot that shows the Search Connector Space action.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/search-connector-space.png)
+ ![Screenshot of the Synchronization Service window. The Search Connector Space action is selected.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/search-connector-space.png)
-1. In the **Search Connector Space** dialog, select the **Scope** dropdown and select **Pending Export**.
+1. In the **Search Connector Space** dialog, under **Scope**, select **Pending Export**.
- ![Screenshot that shows Pending Export.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/pending-export.png)
+ ![Screenshot of the Search Connector Space dialog. In the Scope list, Pending Export is selected.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/pending-export.png)
1. Select **Delete** and then select **Search**. If all objects have matched properly, there should be zero matching records for **Deletes**. Record any objects pending deletion and their on-premises values.
- ![Screenshot that shows deleted matching records.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/delete-matching-records.png)
+ ![Screenshot of the Search Connector Space dialog. In the search results, Text is highlighted that indicates that there were zero matching records.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/delete-matching-records.png)
-1. Clear **Delete**, and select **Add** and **Modify**, followed by a search. You should see update functions for all users currently being synchronized to Azure AD via Okta. Add any new objects that Okta isn't currently syncing, but that exist in the organizational unit (OU) structure that was selected during the Azure AD Connect installation.
+1. Clear **Delete**, and select **Add** and **Modify**. Then select **Search**. You should see update functions for all users currently being synchronized to Azure AD via Okta. Add any new objects that Okta isn't currently syncing, but that exist in the organizational unit (OU) structure that was selected during the Azure AD Connect installation.
- ![Screenshot that shows adding a new object.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/add-new-object.png)
+ ![Screenshot of the Search Connector Space dialog. In the search results, seven records are visible.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/add-new-object.png)
-1. Double-clicking on updates shows what Azure AD Connect will communicate with Azure AD.
+1. To see what Azure AD Connect will communicate with Azure AD, double-click an update.
1. If there are any **add** functions for a user who already exists in Azure AD, their on-premises account doesn't match their cloud account, and Azure AD Connect has determined it will create a new object. Record any unexpected adds, and make sure to correct the ImmutableID value in Azure AD before you exit staging mode.
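One way to correct a mismatched value is to hard-match the cloud account by writing the base64-encoded objectGUID of the on-premises user to the cloud user's `onPremisesImmutableId` property in Microsoft Graph. The following is a minimal sketch that uses `az rest`; the UPN and the base64 value are placeholders, and you should confirm that this hard-match approach is appropriate for your environment before applying it.

```azurecli
# Hypothetical example: hard-match a cloud user to its on-premises account.
# Replace the UPN and the base64-encoded objectGUID with your own values.
az rest --method patch \
  --url "https://graph.microsoft.com/v1.0/users/user@contoso.com" \
  --headers "Content-Type=application/json" \
  --body '{"onPremisesImmutableId": "<base64-encoded objectGUID>"}'
```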
After you've prepared your list of source and destination targets, it's time to
Verify that your updates still include all attributes expected in Azure AD. If multiple attributes are being deleted, you might need to manually populate these on-premises AD values before you remove the staging mode.
- ![Screenshot that shows populating on-premises add values.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/on-premises-ad-values.png)
+ ![Screenshot of the Connector Space Object Properties window. The attributes for user John Smith are visible.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/on-premises-ad-values.png)
>[!NOTE]
- >Before you continue to the next step, ensure all user attributes are syncing properly and show on the **Pending Export** tab as expected. If they're deleted, make sure their ImmutableID values match and the user is in one of the selected OUs for synchronization.
+ >Before you continue to the next step, ensure all user attributes are syncing properly and appear on the **Pending Export** tab as expected. If they're deleted, make sure their ImmutableID values match and the user is in one of the selected OUs for synchronization.
## Install Azure AD cloud sync agents
-After you've prepared your list of source and destination targets, it's time to [install and configure Azure AD cloud sync agents](../cloud-sync/tutorial-single-forest.md). If you've opted to use an Azure AD Connect server, skip this section.
+After you've prepared your list of source and destination targets, install and configure Azure AD cloud sync agents by following the instructions in [Tutorial: Integrate a single forest with a single Azure AD tenant](../cloud-sync/tutorial-single-forest.md). If you've opted to use an Azure AD Connect server, skip this section.
## Disable Okta provisioning to Azure AD
After you've verified the Azure AD Connect installation and your pending exports
1. Go to your Okta portal, select **Applications**, and then select your Okta app used to provision users to Azure AD. Open the **Provisioning** tab and select the **Integration** section.
- ![Screenshot that shows the Integration section in Okta.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/integration-section.png)
+ ![Screenshot that shows the Integration section in the Okta portal.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/integration-section.png)
-1. Select **Edit**, clear the **Enable API integration** option and select **Save**.
+1. Select **Edit**, clear the **Enable API integration** option, and select **Save**.
- ![Screenshot that shows editing the Enable API integration in Okta.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/edit-api-integration.png)
+ ![Screenshot that shows the Integration section in the Okta portal. A message on the page says provisioning is not enabled.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/edit-api-integration.png)
>[!NOTE]
- >If you have multiple Office 365 apps handling provisioning to Azure AD, ensure they're all switched off.
+ >If you have multiple Office 365 apps that handle provisioning to Azure AD, ensure they're all switched off.
## Disable staging mode in Azure AD Connect
After you disable Okta provisioning, the Azure AD Connect server is ready to beg
1. Run the installation wizard from the desktop again and select **Configure**.
- ![Screenshot that shows the Azure AD Connect server.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/azure-ad-connect-server.png)
+ ![Screenshot of the Azure A D Connect window. The welcome page is visible with a Configure button at the bottom.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/azure-ad-connect-server.png)
1. Select **Configure staging mode** and then select **Next**. Enter your global administrator credentials.
- ![Screenshot that shows the Configure staging mode option.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/configure-staging-mode.png)
+ ![Screenshot of the Azure A D Connect window. On the left, Tasks is selected. On the Additional tasks page, Configure staging mode is selected.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/configure-staging-mode.png)
-1. Clear the **Enable staging mode** option and select **Next**.
+1. Clear **Enable staging mode** and select **Next**.
- ![Screenshot that shows clearing the Enable staging mode option.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/uncheck-enable-staging-mode.png)
+ ![Screenshot of the Azure A D Connect window. On the left, Staging Mode is selected. On the Configure staging mode page, nothing is selected.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/uncheck-enable-staging-mode.png)
1. Select **Configure** to continue.
- ![Screenshot that shows selecting the Configure button.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/ready-to-configure.png)
+ ![Screenshot of the Ready to configure page in Azure A D Connect. On the left, Configure is selected. A Configure button is also visible.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/ready-to-configure.png)
-1. After the configuration completes, open the **Synchronization Service** as an administrator. View the **Export** on the domain.onmicrosoft.com connector. Verify that all additions, updates, and deletions are done as expected.
+1. After the configuration finishes, open the **Synchronization Service** as an administrator. View the **Export** on the domain.onmicrosoft.com connector. Verify that all additions, updates, and deletions are done as expected.
- ![Screenshot that shows verifying the sync service.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/verify-sync-service.png)
+ ![Screenshot of the Synchronization Service window. An export line is selected, and export statistics like the number of adds, updates, and deletes are visible.](./media/migrate-okta-sync-provisioning-to-azure-active-directory-connect-based-synchronization/verify-sync-service.png)
-You've now successfully migrated to Azure AD Connect server-based provisioning. Updates and expansions to the feature set of Azure AD Connect can be done by rerunning the installation wizard.
+You've now successfully migrated to Azure AD Connect server-based provisioning. You can update and expand the feature set of Azure AD Connect by rerunning the installation wizard.
## Enable cloud sync agents
-After you disable Okta provisioning, the Azure AD cloud sync agent is ready to begin synchronizing objects. Return to the [Azure AD portal](https://aad.portal.azure.com/).
+After you disable Okta provisioning, the Azure AD cloud sync agent is ready to begin synchronizing objects.
+
+1. Go to the [Azure AD portal](https://aad.portal.azure.com/).
-1. Modify the **Configuration** profile to **Enabled**.
+1. In the **Configuration** profile, select **Enable**.
1. Return to the provisioning menu and select **Logs**.
-1. Evaluate that the provisioning connector has properly updated in-place objects. The cloud sync agents are nondestructive. They'll fail their updates if a match didn't occur properly.
+1. Check that the provisioning connector has properly updated in-place objects. The cloud sync agents are nondestructive. Their updates fail if a match isn't found.
1. If a user is mismatched, make the necessary updates to bind the ImmutableID values. Then restart the cloud provisioning sync. ## Next steps
-For more information about migrating from Okta to Azure AD, see:
+For more information about migrating from Okta to Azure AD, see these resources:
- [Migrate applications from Okta to Azure AD](migrate-applications-from-okta-to-azure-active-directory.md) - [Migrate Okta federation to Azure AD managed authentication](migrate-okta-federation-to-azure-active-directory.md)
active-directory Groups Create Eligible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-create-eligible.md
Add-AzureADGroupMember -ObjectId $roleAssignablegroup.Id -RefObjectId $member.Ob
### Create a role-assignable group in Azure AD ```http
-POST https://graph.microsoft.com/beta/groups
+POST https://graph.microsoft.com/v1.0/groups
{ "description": "This group is assigned to Helpdesk Administrator built-in role of Azure AD.", "displayName": "Contoso_Helpdesk_Administrators",
active-directory Manage Roles Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/manage-roles-portal.md
If PIM is enabled, you have additional capabilities, such as making a user eligi
$roleAssignmentEligible = Open-AzureADMSPrivilegedRoleAssignmentRequest -ProviderId 'aadRoles' -ResourceId $aadTenant.Id -RoleDefinitionId $roleDefinition.Id -SubjectId $user.objectId -Type 'AdminAdd' -AssignmentState 'Eligible' -schedule $schedule -reason "Review billing info" ```
-## Microsoft Graph API
+## Microsoft Graph PIM API
-Follow these instructions to assign a role using the Microsoft Graph API in [Graph Explorer](https://aka.ms/ge).
+Follow these instructions to assign a role using the Microsoft Graph PIM API.
### Assign a role
-In this example, a security principal with objectID `f8ca5a85-489a-49a0-b555-0a6d81e56f0d` is assigned the Billing Administrator role (role definition ID `b0f54661-2d74-4c50-afa3-1ec803f12efe`) at tenant scope. If you want to see the list of immutable role template IDs of all built-in roles, see [Azure AD built-in roles](permissions-reference.md).
-
-1. Sign in to the [Graph Explorer](https://aka.ms/ge).
-2. Select **POST** as the HTTP method from the dropdown.
-3. Select the API version to **v1.0**.
-4. Use the [Create unifiedRoleAssignment](/graph/api/rbacapplication-post-roleassignments) API to assign roles. Add following details to the URL and Request Body and select **Run query**.
+In this example, a security principal with objectID `f8ca5a85-489a-49a0-b555-0a6d81e56f0d` is assigned the Billing Administrator role (role definition ID `b0f54661-2d74-4c50-afa3-1ec803f12efe`) at tenant scope. To see the list of immutable role template IDs of all built-in roles, see [Azure AD built-in roles](permissions-reference.md).
```http POST https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments
Content-type: application/json
### Assign a role using PIM
-In this example, a security principal with objectID `f8ca5a85-489a-49a0-b555-0a6d81e56f0d` is assigned a time-bound eligible role assignment to Billing Administrator (role definition ID `b0f54661-2d74-4c50-afa3-1ec803f12efe`) for 180 days.
+#### Assign a time-bound eligible role assignment
-1. Sign in to the [Graph Explorer](https://aka.ms/ge).
-2. Select **POST** as the HTTP method from the dropdown.
-3. Select the API version to **beta**.
-4. Use the [Create unifiedRoleEligibilityScheduleRequest](/graph/api/unifiedroleeligibilityschedulerequest-post-unifiedroleeligibilityschedulerequests) API to assign roles using PIM. Add following details to the URL and Request Body and select **Run query**.
+In this example, a security principal with objectID `f8ca5a85-489a-49a0-b555-0a6d81e56f0d` is assigned a time-bound eligible role assignment to Billing Administrator (role definition ID `b0f54661-2d74-4c50-afa3-1ec803f12efe`) for 180 days.
```http
-POST https://graph.microsoft.com/beta/rolemanagement/directory/roleEligibilityScheduleRequests
+POST https://graph.microsoft.com/v1.0/rolemanagement/directory/roleEligibilityScheduleRequests
Content-type: application/json {
- "action": "AdminAssign",
+ "action": "adminAssign",
"justification": "for managing admin tasks", "directoryScopeId": "/", "principalId": "f8ca5a85-489a-49a0-b555-0a6d81e56f0d",
Content-type: application/json
"scheduleInfo": { "startDateTime": "2021-07-15T19:15:08.941Z", "expiration": {
- "type": "AfterDuration",
+ "type": "afterDuration",
"duration": "PT180D" } } } ```
+#### Assign a permanent eligible role assignment
+ In the following example, a security principal is assigned a permanent eligible role assignment to Billing Administrator. ```http
-POST https://graph.microsoft.com/beta/rolemanagement/directory/roleEligibilityScheduleRequests
+POST https://graph.microsoft.com/v1.0/rolemanagement/directory/roleEligibilityScheduleRequests
Content-type: application/json {
- "action": "AdminAssign",
+ "action": "adminAssign",
"justification": "for managing admin tasks", "directoryScopeId": "/", "principalId": "f8ca5a85-489a-49a0-b555-0a6d81e56f0d",
Content-type: application/json
"scheduleInfo": { "startDateTime": "2021-07-15T19:15:08.941Z", "expiration": {
- "type": "NoExpiration"
+ "type": "noExpiration"
} } } ```
-To activate the role assignment, use the [Create unifiedRoleAssignmentScheduleRequest](/graph/api/unifiedroleassignmentschedulerequest-post-unifiedroleassignmentschedulerequests) API.
+#### Activate a role assignment
+
+To activate the role assignment, use the [Create roleAssignmentScheduleRequests](/graph/api/rbacapplication-post-roleassignmentschedulerequests) API.
```http
-POST https://graph.microsoft.com/beta/roleManagement/directory/roleAssignmentScheduleRequests
+POST https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignmentScheduleRequests
Content-type: application/json {
- "action": "SelfActivate",
+ "action": "selfActivate",
"justification": "activating role assignment for admin privileges", "roleDefinitionId": "b0f54661-2d74-4c50-afa3-1ec803f12efe", "directoryScopeId": "/",
Content-type: application/json
} ```
+For more information about managing Azure AD roles through the PIM API in Microsoft Graph, see [Overview of role management through the privileged identity management (PIM) API](/graph/api/resources/privilegedidentitymanagementv3-overview).
+ ## Next steps - [List Azure AD role assignments](view-assignments.md)
active-directory Blinq Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/blinq-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Blinq for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Blinq.
++
+writer: twimmers
+
+ms.assetid: 5b076ac0-cd0e-43c3-85ed-8591bfd424ff
++++ Last updated : 05/25/2022+++
+# Tutorial: Configure Blinq for automatic user provisioning
+
+This tutorial describes the steps you need to do in both Blinq and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Blinq](https://blinq.me/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Capabilities supported
+> [!div class="checklist"]
+> * Create users in Blinq.
+> * Remove users in Blinq when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and Blinq.
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A user account in Blinq with Admin permission
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Blinq](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Blinq to support provisioning with Azure AD
+
+1. Navigate to the [Blinq Admin Console](https://dash.blinq.me) in a separate browser tab.
+1. If you aren't signed in to Blinq, sign in now.
+1. Select your workspace in the top-left corner of the screen.
+1. In the dropdown, select **Settings**.
+1. On the **Integrations** page, find **Team Card Provisioning**, which contains a URL and a token. Generate the token by selecting **Generate**.
+Copy the **URL** and **Token**. You'll enter these values in the **Tenant URL** and **Secret Token** fields, respectively, in the Azure portal.
+
+## Step 3. Add Blinq from the Azure AD application gallery
+
+Add Blinq from the Azure AD application gallery to start managing provisioning to Blinq. If you have previously set up Blinq for SSO, you can use the same application. However, it's recommended that you create a separate app when you initially test the integration. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application, based on attributes of the user and group, or both. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
++
+## Step 5. Configure automatic user provisioning to Blinq
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and groups in Blinq based on user and group assignments in Azure AD.
+
+### To configure automatic user provisioning for Blinq in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **Blinq**.
+
+ ![Screenshot of the Blinq link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. In the **Admin Credentials** section, input your Blinq Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Blinq. If the connection fails, ensure your Blinq account has Admin permissions and try again.
+
+ ![Screenshot of Token field.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. In the **Mappings** section, select **Synchronize Azure Active Directory Users to Blinq**.
+
+1. Review the user attributes that are synchronized from Azure AD to Blinq in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Blinq for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Blinq API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Blinq|
+ |---|---|---|---|
+ |userName|String|&check;|&check;
+ |active|Boolean||
+ |displayName|String||
+ |nickName|String||
+ |title|String||
+ |preferredLanguage|String||
+ |locale|String||
+ |timezone|String||
+ |name.givenName|String||
+ |name.familyName|String||
+ |name.formatted|String||
+ |name.middleName|String||
+ |name.honorificPrefix|String||
+ |name.honorificSuffix|String||
+ |externalId|String||
+ |emails[type eq "work"].value|String||
+ |emails[type eq "home"].value|String||
+ |emails[type eq "other"].value|String||
+ |phoneNumbers[type eq "work"].value|String||
+ |phoneNumbers[type eq "mobile"].value|String||
+ |phoneNumbers[type eq "fax"].value|String||
+ |phoneNumbers[type eq "home"].value|String||
+ |phoneNumbers[type eq "other"].value|String||
+ |phoneNumbers[type eq "pager"].value|String||
+ |addresses[type eq "work"].formatted|String||
+ |addresses[type eq "work"].streetAddress|String||
+ |addresses[type eq "work"].locality|String||
+ |addresses[type eq "work"].region|String||
+ |addresses[type eq "work"].postalCode|String||
+ |addresses[type eq "work"].country|String||
+ |addresses[type eq "home"].formatted|String||
+ |addresses[type eq "home"].streetAddress|String||
+ |addresses[type eq "home"].locality|String||
+ |addresses[type eq "home"].region|String||
+ |addresses[type eq "home"].postalCode|String||
+ |addresses[type eq "home"].country|String||
+ |addresses[type eq "other"].formatted|String||
+ |addresses[type eq "other"].streetAddress|String||
+ |addresses[type eq "other"].locality|String||
+ |addresses[type eq "other"].region|String||
+ |addresses[type eq "other"].postalCode|String||
+ |addresses[type eq "other"].country|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:organization|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:division|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String||
++
+1. To configure scoping filters, refer to the instructions in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Blinq, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users and groups that you would like to provision to Blinq by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to complete than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## Change Logs
+05/25/2022 - **Schema Discovery** feature enabled on this app.
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Cerby Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cerby-provisioning-tutorial.md
Once you've configured provisioning, use the following resources to monitor your
* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it's to completion * If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+## Troubleshooting tips
+If you need to regenerate the SCIM API authentication token, complete the following steps:
+
+1. Send an email with your request to [Cerby Support Team](mailto:support@cerby.com). The Cerby team regenerates the SCIM API authentication token.
+1. Receive the response email from Cerby to confirm that the token was successfully regenerated.
+1. Complete the instructions from the [How to Retrieve the SCIM API Authentication Token from Cerby](https://help.cerby.com/en/articles/5638472-how-to-configure-automatic-user-provisioning-for-azure-ad) article to retrieve the new token.
+
+ >[!NOTE]
+ >The Cerby team is currently developing a self-service solution for regenerating the SCIM API authentication token. To regenerate the token, the Cerby team members must validate their identity.
+ ## More resources * [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
active-directory Cisco Umbrella Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cisco-umbrella-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Cisco Umbrella Admin SSO | Microsoft Docs'
+ Title: 'Tutorial: Azure AD integration with Cisco Umbrella Admin SSO'
description: Learn how to configure single sign-on between Azure Active Directory and Cisco Umbrella Admin SSO.
Previously updated : 03/16/2021 Last updated : 05/24/2022
-# Tutorial: Azure Active Directory integration with Cisco Umbrella Admin SSO
+# Tutorial: Azure AD integration with Cisco Umbrella Admin SSO
In this tutorial, you'll learn how to integrate Cisco Umbrella Admin SSO with Azure Active Directory (Azure AD). When you integrate Cisco Umbrella Admin SSO with Azure AD, you can:
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * Cisco Umbrella Admin SSO single sign-on (SSO) enabled subscription.
+> [!NOTE]
+> This integration is also available to use from Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from public cloud.
+ ## Scenario description In this tutorial, you configure and test Azure AD single sign-on in a test environment.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
4. On the **Basic SAML Configuration** section, the user does not have to perform any step as the app is already pre-integrated with Azure.
Follow these steps to enable Azure AD SSO in the Azure portal.
5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Metadata XML** from the given options as per your requirement and save it on your computer.
- ![The Certificate download link](common/metadataxml.png)
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
6. On the **Set up Cisco Umbrella Admin SSO** section, copy the appropriate URL(s) as per your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Attributes")
### Create an Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
2. From the left side of menu, click **Admin** and navigate to **Authentication** and then click on **SAML**.
- ![The Admin](./media/cisco-umbrella-tutorial/admin.png)
+ ![Screenshot shows the Admin menu window.](./media/cisco-umbrella-tutorial/admin.png "Administrator")
3. Choose **Other** and click on **NEXT**.
- ![The Other](./media/cisco-umbrella-tutorial/other.png)
+ ![Screenshot shows the Other menu window.](./media/cisco-umbrella-tutorial/other.png "Folder")
4. On the **Cisco Umbrella Admin SSO Metadata**, page, click **NEXT**.
- ![The metadata](./media/cisco-umbrella-tutorial/metadata.png)
+ ![Screenshot shows the metadata file page.](./media/cisco-umbrella-tutorial/metadata.png "File")
5. On the **Upload Metadata** tab, if you had pre-configured SAML, select **Click here to change them** option and follow the below steps.
- ![The Next](./media/cisco-umbrella-tutorial/next.png)
+ ![Screenshot shows the Next Folder window.](./media/cisco-umbrella-tutorial/next.png "Values")
6. In the **Option A: Upload XML file**, upload the **Federation Metadata XML** file that you downloaded from the Azure portal and after uploading metadata the below values get auto populated automatically then click **NEXT**.
- ![The choosefile](./media/cisco-umbrella-tutorial/choose-file.png)
+ ![Screenshot shows the choosefile from folder.](./media/cisco-umbrella-tutorial/choose-file.png "Federation")
7. Under **Validate SAML Configuration** section, click **TEST YOUR SAML CONFIGURATION**.
- ![The Test](./media/cisco-umbrella-tutorial/test.png)
+ ![Screenshot shows the Test SAML Configuration.](./media/cisco-umbrella-tutorial/test.png "Validate")
8. Click **SAVE**.
In the case of Cisco Umbrella Admin SSO, provisioning is a manual task.
2. From the left side of menu, click **Admin** and navigate to **Accounts**.
- ![The Account](./media/cisco-umbrella-tutorial/account.png)
+ ![Screenshot shows the Account of Cisco Umbrella Admin.](./media/cisco-umbrella-tutorial/account.png "Account")
3. On the **Accounts** page, click on **Add** on the top right side of the page and perform the following steps.
- ![The User](./media/cisco-umbrella-tutorial/create-user.png)
+ ![Screenshot shows the User of Accounts.](./media/cisco-umbrella-tutorial/create-user.png "User")
a. In the **First Name** field, enter the firstname like **Britta**.
active-directory Flexera One Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/flexera-one-tutorial.md
Previously updated : 12/29/2021 Last updated : 05/24/2022
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * Flexera One single sign-on (SSO) enabled subscription.
+> [!NOTE]
+> This integration is also available to use from Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from public cloud.
+ ## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
1. On the **Basic SAML Configuration** section, perform the following steps:
Follow these steps to enable Azure AD SSO in the Azure portal.
1. Flexera One application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
- ![image](common/default-attributes.png)
+ ![Screenshot shows the image of Flexera One application.](common/default-attributes.png "Attributes")
1. In addition to above, Flexera One application expects few more attributes to be passed back in SAML response which are shown below. These attributes are also pre populated but you can review them as per your requirements.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
- ![The Certificate download link](common/certificatebase64.png)
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
1. On the **Set up Flexera One** section, copy the appropriate URL(s) based on your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![Screenshot shows to copy Configuration appropriate U R L.](common/copy-configuration-urls.png "Configuration")
### Create an Azure AD test user
active-directory Iauditor Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/iauditor-tutorial.md
Previously updated : 03/24/2022 Last updated : 05/24/2022
To get started, you need the following items:
* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD. For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+> [!NOTE]
+> This integration is also available to use from Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from public cloud.
+ ## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps:
Follow these steps to enable Azure AD SSO in the Azure portal.
1. iAuditor application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
- ![image](common/default-attributes.png)
+ ![Screenshot shows the image of iAuditor application.](common/default-attributes.png "Attributes")
1. In addition to above, iAuditor application expects few more attributes to be passed back in SAML response which are shown below. These attributes are also pre-populated but you can review them as per your requirements.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (PEM)** and select **Download** to download the certificate and save it on your computer.
- ![The Certificate download link](common/certificate-base64-download.png)
+ ![Screenshot shows the Certificate download link.](common/certificate-base64-download.png "Certificate")
### Create an Azure AD test user
active-directory Nodetrax Project Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/nodetrax-project-tutorial.md
Previously updated : 10/06/2021 Last updated : 05/24/2022
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * Nodetrax Project single sign-on (SSO) enabled subscription.
+> [!NOTE]
+> This integration is also available to use from Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from public cloud.
+ ## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
1. On the **Basic SAML Configuration** section, the user does not have to perform any step as the app is already pre-integrated with Azure.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. Nodetrax Project application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
- ![image](common/default-attributes.png)
+ ![Screenshot shows the image of Nodetrax Project application.](common/default-attributes.png "Attributes")
1. In addition to above, Nodetrax Project application expects few more attributes to be passed back in SAML response which are shown below. These attributes are also pre populated but you can review them as per your requirements.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
- ![The Certificate download link](common/certificatebase64.png)
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
1. On the **Set up Nodetrax Project** section, copy the appropriate URL(s) based on your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![Screenshot shows to copy Configuration appropriate U R L.](common/copy-configuration-urls.png "Configuration")
### Create an Azure AD test user
active-directory Openlearning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/openlearning-tutorial.md
Previously updated : 02/17/2022 Last updated : 05/24/2022
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * OpenLearning single sign-on (SSO) enabled subscription.
+> [!NOTE]
+> This integration is also available to use from Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from public cloud.
+ ## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
1. On the **Basic SAML Configuration** section, if you have **Service Provider metadata file**, perform the following steps: a. Click **Upload metadata file**.
- ![Upload metadata file](common/upload-metadata.png)
+ ![Screenshot shows to upload metadata file.](common/upload-metadata.png "Metadata")
b. Click on **folder logo** to select the metadata file and click **Upload**.
- ![choose metadata file](common/browse-upload-metadata.png)
+ ![Screenshot shows to choose metadata file.](common/browse-upload-metadata.png "Folder")
c. After the metadata file is successfully uploaded, the **Identifier** value gets auto populated in Basic SAML Configuration section.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
- ![The Certificate download link](common/certificatebase64.png)
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
1. On the **Set up OpenLearning** section, copy the appropriate URL(s) based on your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![Screenshot shows to copy Configuration appropriate U R L.](common/copy-configuration-urls.png "Configuration")
1. OpenLearning application expects to enable token encryption in order to make SSO work. To activate token encryption, go to the **Azure Active Directory** > **Enterprise applications** and select **Token encryption**. For more information, please refer this [link](../manage-apps/howto-saml-token-encryption.md).
aks Configure Azure Cni https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-azure-cni.md
The following questions and answers apply to the **Azure CNI network configurati
The entire cluster should use only one type of CNI.
-## AKS Engine
-
-[Azure Kubernetes Service Engine (AKS Engine)][aks-engine] is an open-source project that generates Azure Resource Manager templates you can use for deploying Kubernetes clusters on Azure.
-
-Kubernetes clusters created with AKS Engine support both the [kubenet][kubenet] and [Azure CNI][cni-networking] plugins. As such, both networking scenarios are supported by AKS Engine.
- ## Next steps Learn more about networking in AKS in the following articles:
Learn more about networking in AKS in the following articles:
[portal-01-networking-advanced]: ./media/networking-overview/portal-01-networking-advanced.png <!-- LINKS - External -->
-[aks-engine]: https://github.com/Azure/aks-engine
[services]: https://kubernetes.io/docs/concepts/services-networking/service/ [portal]: https://portal.azure.com [cni-networking]: https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md
aks Csi Secrets Store Driver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-driver.md
az aks disable-addons --addons azure-keyvault-secrets-provider -g myResourceGrou
> When the Azure Key Vault Provider for Secrets Store CSI Driver is enabled, it updates the pod mount and the Kubernetes secret that's defined in the `secretObjects` field of `SecretProviderClass`. It does so by polling for changes periodically, based on the rotation poll interval you've defined. The default rotation poll interval is 2 minutes. >[!NOTE]
-> When the secret/key is updated in external secrets store after the initial pod deployment, the updated secret will be periodically updated in the pod mount and the Kubernetes Secret.
+> When a secret is updated in an external secrets store after the initial pod deployment, the pod mount and the Kubernetes Secret are periodically updated. How the application picks up the new value depends on how it consumes the secret data:
>
-> Depending on how the application consumes the secret data:
+> **Mount the Kubernetes Secret as a volume**: Use the auto rotation and Sync K8s secrets features of Secrets Store CSI Driver. The application will need to watch for changes from the mounted Kubernetes Secret volume. When the Kubernetes Secret is updated by the CSI Driver, the corresponding volume contents are automatically updated.
>
-> 1. Mount Kubernetes secret as a volume: Use auto rotation feature + Sync K8s secrets feature in Secrets Store CSI Driver, application will need to watch for changes from the mounted Kubernetes Secret volume. When the Kubernetes Secret is updated by the CSI Driver, the corresponding volume contents are automatically updated.
-> 2. Application reads the data from containerΓÇÖs filesystem: Use rotation feature in Secrets Store CSI Driver, application will need to watch for the file change from the volume mounted by the CSI driver.
-> 3. Using Kubernetes secret for environment variable: The pod needs to be restarted to get the latest secret as environment variable.
-> Use something like https://github.com/stakater/Reloader to watch for changes on the synced Kubernetes secret and do rolling upgrades on pods
+> **Application reads the data from the container's filesystem**: Use the rotation feature of Secrets Store CSI Driver. The application will need to watch for the file change from the volume mounted by the CSI driver.
+>
+> **Use the Kubernetes Secret for an environment variable**: Restart the pod to get the latest secret as an environment variable.
+> Use a tool such as [Reloader][reloader] to watch for changes on the synced Kubernetes Secret and perform rolling upgrades on pods.
To enable autorotation of secrets, use the `enable-secret-rotation` flag when you create your cluster:
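For reference, here's a minimal sketch of that command; the cluster and resource group names are placeholders.

```azurecli-interactive
# Create a cluster with the Azure Key Vault provider add-on and secret
# autorotation enabled (the default rotation poll interval is 2 minutes).
az aks create --name myAKSCluster --resource-group myResourceGroup \
    --enable-addons azure-keyvault-secrets-provider \
    --enable-secret-rotation
```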
Now that you've learned how to use the Azure Key Vault Provider for Secrets Stor
[kube-csi]: https://kubernetes-csi.github.io/docs/ [key-vault-provider-install]: https://azure.github.io/secrets-store-csi-driver-provider-azure/getting-started/installation [sample-secret-provider-class]: https://azure.github.io/secrets-store-csi-driver-provider-azure/getting-started/usage/#create-your-own-secretproviderclass-object
+[reloader]: https://github.com/stakater/Reloader
+
aks Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr.md
Global Azure cloud is supported with Arc support on the regions listed by [Azure
## Prerequisites - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- Install the latest version of the [Azure CLI](/cli/azure/install-azure-cli-windows).
+- Install the latest version of the [Azure CLI][install-cli].
- If you don't have one already, you need to create an [AKS cluster][deploy-cluster] or connect an [Arc-enabled Kubernetes cluster][arc-k8s-cluster]. ### Set up the Azure CLI extension for cluster extensions
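The Dapr extension is managed through `az k8s-extension` commands, so the `k8s-extension` Azure CLI extension must be available. A minimal sketch of installing or updating it:

```azurecli-interactive
# Install the k8s-extension Azure CLI extension.
az extension add --name k8s-extension

# If it's already installed, update it to the latest version instead.
az extension update --name k8s-extension
```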
az k8s-extension delete --resource-group myResourceGroup --cluster-name myAKSClu
[az-provider-register]: /cli/azure/provider#az-provider-register [sample-application]: ./quickstart-dapr.md [k8s-version-support-policy]: ./supported-kubernetes-versions.md?tabs=azure-cli#kubernetes-version-support-policy
-[arc-k8s-cluster]: /azure-arc/kubernetes/quickstart-connect-cluster.md
+[arc-k8s-cluster]: ../azure-arc/kubernetes/quickstart-connect-cluster.md
[update-extension]: ./cluster-extensions.md#update-extension-instance
+[install-cli]: /cli/azure/install-azure-cli
<!-- LINKS EXTERNAL --> [kubernetes-production]: https://docs.dapr.io/operations/hosting/kubernetes/kubernetes-production
aks Internal Lb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/internal-lb.md
Learn more about Kubernetes services at the [Kubernetes services documentation][
<!-- LINKS - External --> [kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply [kubernetes-services]: https://kubernetes.io/docs/concepts/services-networking/service/
-[aks-engine]: https://github.com/Azure/aks-engine
<!-- LINKS - Internal --> [advanced-networking]: configure-azure-cni.md
aks Intro Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/intro-kubernetes.md
Learn more about deploying and managing AKS with the Azure CLI Quickstart.
> [Deploy an AKS Cluster using Azure CLI][aks-quickstart-cli] <!-- LINKS - external -->
-[aks-engine]: https://github.com/Azure/aks-engine
[kubectl-overview]: https://kubernetes.io/docs/user-guide/kubectl-overview/ [compliance-doc]: https://azure.microsoft.com/overview/trusted-cloud/compliance/
aks Load Balancer Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/load-balancer-standard.md
Learn more about using Internal Load Balancer for Inbound traffic at the [AKS In
[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get [kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply [kubernetes-services]: https://kubernetes.io/docs/concepts/services-networking/service/
-[aks-engine]: https://github.com/Azure/aks-engine
<!-- LINKS - Internal --> [advanced-networking]: configure-azure-cni.md
aks Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/nat-gateway.md
Title: Managed NAT Gateway (preview)
+ Title: Managed NAT Gateway
description: Learn how to create an AKS cluster with managed NAT integration
Last updated 10/26/2021
-# Managed NAT Gateway (preview)
+# Managed NAT Gateway
Whilst AKS customers are able to route egress traffic through an Azure Load Balancer, there are limitations on the amount of outbound flows of traffic that is possible.
Azure NAT Gateway allows up to 64,000 outbound UDP and TCP traffic flows per IP
This article will show you how to create an AKS cluster with a Managed NAT Gateway for egress traffic. ## Before you begin To use Managed NAT gateway, you must have the following: * The latest version of the Azure CLI
-* The `aks-preview` extension version 0.5.31 or later
* Kubernetes version 1.20.x or above
-### Install aks-preview CLI extension
-
-You also need the *aks-preview* Azure CLI extension version 0.5.31 or later. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
-
-```azurecli-interactive
-# Install the aks-preview extension
-az extension add --name aks-preview
-
-# Update the extension to make sure you have the latest version installed
-az extension update --name aks-preview
-```
-
-### Register the `AKS-NATGatewayPreview` feature flag
-
-To use the NAT Gateway feature, you must enable the `AKS-NATGatewayPreview` feature flag on your subscription.
-
-```azurecli
-az feature register --namespace "Microsoft.ContainerService" --name "AKS-NATGatewayPreview"
-```
-You can check on the registration status by using the [az feature list][az-feature-list] command:
-
-```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AKS-NATGatewayPreview')].{Name:name,State:properties.state}"
-```
-
-When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
-
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
-- ## Create an AKS cluster with a Managed NAT Gateway To create an AKS cluster with a new Managed NAT Gateway, use `--outbound-type managedNATGateway` as well as `--nat-gateway-managed-outbound-ip-count` and `--nat-gateway-idle-timeout` when running `az aks create`. The following example creates a *myresourcegroup* resource group, then creates a *natcluster* AKS cluster in *myresourcegroup* with a Managed NAT Gateway, two outbound IPs, and an idle timeout of 30 seconds.
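For reference, here's a minimal sketch of those commands; the location is a placeholder, and the flag values mirror the description above.

```azurecli-interactive
# Create the resource group (replace the location with your own).
az group create --name myresourcegroup --location eastus

# Create an AKS cluster that sends egress traffic through a managed NAT gateway.
az aks create \
    --resource-group myresourcegroup \
    --name natcluster \
    --outbound-type managedNATGateway \
    --nat-gateway-managed-outbound-ip-count 2 \
    --nat-gateway-idle-timeout 30
```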
aks Quickstart Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/quickstart-dapr.md
Title: Deploy an application with the Dapr cluster extension for Azure Kubernetes Service (AKS)
-description: Use the Dapr cluster extension for Azure Kubernetes Service (AKS) to deploy an application
+ Title: Deploy an application with the Dapr cluster extension for Azure Kubernetes Service (AKS) or Arc-enabled Kubernetes
+description: Use the Dapr cluster extension for Azure Kubernetes Service (AKS) or Arc-enabled Kubernetes to deploy an application
Last updated 05/03/2022
-# Quickstart: Deploy an application using the Dapr cluster extension for Azure Kubernetes Service (AKS)
+# Quickstart: Deploy an application using the Dapr cluster extension for Azure Kubernetes Service (AKS) or Arc-enabled Kubernetes
-In this quickstart, you will get familiar with using the [Dapr cluster extension][dapr-overview] in an AKS cluster. You will be deploying a hello world example, consisting of a Python application that generates messages and a Node application that consumes and persists them.
+In this quickstart, you will get familiar with using the [Dapr cluster extension][dapr-overview] in an AKS or Arc-enabled Kubernetes cluster. You will be deploying a hello world example, consisting of a Python application that generates messages and a Node application that consumes and persists them.
## Prerequisites * An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free). * [Azure CLI installed](/cli/azure/install-azure-cli).
-* An AKS cluster with the [Dapr cluster extension][dapr-overview] enabled
+* An AKS or Arc-enabled Kubernetes cluster with the [Dapr cluster extension][dapr-overview] enabled
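If the Dapr extension isn't enabled on your cluster yet, the following is a minimal sketch of enabling it on an AKS cluster; the cluster and resource group names are placeholders, and the [Dapr cluster extension][dapr-overview] article covers the full setup.

```azurecli-interactive
# Enable the Dapr cluster extension on an existing AKS cluster.
az k8s-extension create --cluster-type managedClusters \
    --cluster-name myAKSCluster \
    --resource-group myResourceGroup \
    --name dapr \
    --extension-type Microsoft.Dapr
```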
## Clone the repository
You should see the latest JSON in the response.
## Clean up resources
-Use the [az group delete][az-group-delete] command to remove the resource group, the AKS cluster, namespace, and all related resources.
+Use the [az group delete][az-group-delete] command to remove the resource group, the cluster, the namespace, and all related resources.
```azurecli-interactive az group delete --name MyResourceGroup
aks Support Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/support-policies.md
Microsoft manages and monitors the following components through the control pane
AKS isn't a Platform-as-a-Service (PaaS) solution. Some components, such as agent nodes, have *shared responsibility*, where users must help maintain the AKS cluster. User input is required, for example, to apply an agent node operating system (OS) security patch.
-The services are *managed* in the sense that Microsoft and the AKS team deploys, operates, and is responsible for service availability and functionality. Customers can't alter these managed components. Microsoft limits customization to ensure a consistent and scalable user experience. For a fully customizable solution, see [AKS Engine](https://github.com/Azure/aks-engine).
+The services are *managed* in the sense that Microsoft and the AKS team deploy, operate, and are responsible for service availability and functionality. Customers can't alter these managed components. Microsoft limits customization to ensure a consistent and scalable user experience.
## Shared responsibility
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md
Patches have a two month minimum lifecycle. To keep up to date when new patches
For information on how to upgrade your cluster, see [Upgrade an Azure Kubernetes Service (AKS) cluster][aks-upgrade]. <!-- LINKS - External -->
-[aks-engine]: https://github.com/Azure/aks-engine
[azure-update-channel]: https://azure.microsoft.com/updates/?product=kubernetes-service <!-- LINKS - Internal -->
aks Use Multiple Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-multiple-node-pools.md
A workload may require splitting a cluster's nodes into separate pools for logic
* All subnets assigned to node pools must belong to the same virtual network. * System pods must have access to all nodes/pods in the cluster to provide critical functionality such as DNS resolution and tunneling kubectl logs/exec/port-forward proxy.
-* If you expand your VNET after creating the cluster you must update your cluster (perform any managed cluster operation but node pool operations don't count) before adding a subnet outside the original cidr. AKS will error out on the agent pool add now though we originally allowed it. If you don't know how to reconcile your cluster file a support ticket.
+* If you expand your VNet after creating the cluster, you must update your cluster (perform any managed cluster operation; node pool operations don't count) before adding a subnet outside the original CIDR range. Although it was originally allowed, AKS now errors out when you add an agent pool in such a subnet. The `aks-preview` Azure CLI extension (version 0.5.66+) supports running `az aks update -g <resourceGroup> -n <clusterName>` without any optional arguments, as shown in the sketch after this list. This command performs an update operation without making any changes, which can recover a cluster stuck in a failed state.
* In clusters with Kubernetes version < 1.23.3, kube-proxy will SNAT traffic from new subnets, which can cause Azure Network Policy to drop the packets. * Windows nodes will SNAT traffic to the new subnets until the node pool is reimaged. * Internal load balancers default to one of the node pool subnets (usually the first subnet of the node pool at cluster creation). To override this behavior, you can [specify the load balancer's subnet explicitly using an annotation][internal-lb-different-subnet].
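A minimal sketch of the reconcile operation referenced above (resource group and cluster names are placeholders):

```azurecli
# Requires the aks-preview Azure CLI extension, version 0.5.66 or later
az aks update --resource-group myResourceGroup --name myAKSCluster
```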
aks Virtual Nodes Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/virtual-nodes-cli.md
The pod is assigned an internal IP address from the Azure virtual network subnet
To test the pod running on the virtual node, browse to the demo application with a web client. As the pod is assigned an internal IP address, you can quickly test this connectivity from another pod on the AKS cluster. Create a test pod and attach a terminal session to it: ```console
-kubectl run -it --rm testvk --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
+kubectl run -it --rm testvk --image=mcr.microsoft.com/dotnet/runtime-deps:6.0
``` Install `curl` in the pod using `apt-get`:
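The install command itself isn't shown in this excerpt; inside the test pod's shell, it would typically be the standard Debian/Ubuntu package commands:

```console
apt-get update && apt-get install -y curl
```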
AKS_SUBNET=myVirtualNodeSubnet
NODE_RES_GROUP=$(az aks show --resource-group $RES_GROUP --name $AKS_CLUSTER --query nodeResourceGroup --output tsv) # Get network profile ID
-NETWORK_PROFILE_ID=$(az network profile list --resource-group $NODE_RES_GROUP --query [0].id --output tsv)
+NETWORK_PROFILE_ID=$(az network profile list --resource-group $NODE_RES_GROUP --query "[0].id" --output tsv)
# Delete the network profile az network profile delete --id $NETWORK_PROFILE_ID -y
+# Grab the service association link ID
+SAL_ID=$(az network vnet subnet show --resource-group $RES_GROUP --vnet-name $AKS_VNET --name $AKS_SUBNET --query id --output tsv)/providers/Microsoft.ContainerInstance/serviceAssociationLinks/default
+
+# Delete the service association link for the subnet
+az resource delete --ids $SAL_ID --api-version {api-version}
+ # Delete the subnet delegation to Azure Container Instances
-az network vnet subnet update --resource-group $RES_GROUP --vnet-name $AKS_VNET --name $AKS_SUBNET --remove delegations 0
+az network vnet subnet update --resource-group $RES_GROUP --vnet-name $AKS_VNET --name $AKS_SUBNET --remove delegations
``` ## Next steps
aks Virtual Nodes Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/virtual-nodes-portal.md
The pod is assigned an internal IP address from the Azure virtual network subnet
To test the pod running on the virtual node, browse to the demo application with a web client. As the pod is assigned an internal IP address, you can quickly test this connectivity from another pod on the AKS cluster. Create a test pod and attach a terminal session to it: ```console
-kubectl run -it --rm virtual-node-test --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
+kubectl run -it --rm virtual-node-test --image=mcr.microsoft.com/dotnet/runtime-deps:6.0
``` Install `curl` in the pod using `apt-get`:
aks Windows Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/windows-faq.md
Use the following configuration:
1. In your Kubernetes service configuration, set **externalTrafficPolicy=Local**. This ensures that the Kubernetes service directs traffic only to pods within the local node. 1. In your Kubernetes service configuration, set **sessionAffinity: ClientIP**. This ensures that the Azure Load Balancer gets configured with session affinity.
-## What if I need a feature that's not supported?
-
-If you encounter feature gaps, the open-source [aks-engine][aks-engine] project provides an easy and fully customizable way of running Kubernetes in Azure, including Windows support. For more information, see [AKS roadmap][aks-roadmap].
- ## Next steps To get started with Windows Server containers in AKS, see [Create a node pool that runs Windows Server in AKS][windows-node-cli]. <!-- LINKS - external --> [kubernetes]: https://kubernetes.io
-[aks-engine]: https://github.com/azure/aks-engine
[upstream-limitations]: https://kubernetes.io/docs/setup/production-environment/windows/intro-windows-in-kubernetes/#supported-functionality-and-limitations [intro-windows]: https://kubernetes.io/docs/setup/production-environment/windows/intro-windows-in-kubernetes/ [aks-roadmap]: https://github.com/Azure/AKS/projects/1
api-management Api Management Advanced Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-advanced-policies.md
This sample policy shows an example of using the `send-one-way-request` policy t
<choose> <when condition="@(context.Response.StatusCode >= 500)"> <send-one-way-request mode="new">
- <set-url>https://hooks.slack.com/services/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</set-url>
+ <set-url>https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX</set-url>
<set-method>POST</set-method> <set-body>@{ return new JObject(
app-service Quickstart Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-php.md
Azure CLI has a command [`az webapp up`](/cli/azure/webapp#az_webapp_up) that wi
In the terminal, deploy the code in your local folder using the [`az webapp up`](/cli/azure/webapp#az_webapp_up) command: ```azurecli
-az webapp up \
- --sku F1 \
- --logs
+az webapp up --runtime "php|8.0" --os-type=linux
``` - If the `az` command isn't recognized, be sure you have <a href="/cli/azure/install-azure-cli" target="_blank">Azure CLI</a> installed.--- The `--sku F1` argument creates the web app on the Free pricing tier, which incurs a no cost.-- The `--logs` flag configures default logging required to enable viewing the log stream immediately after launching the webapp.
+- The `--runtime "php|8.0"` argument creates the web app with PHP version 8.0.
+- The `--os-type=linux` argument creates the web app on App Service on Linux.
- You can optionally specify a name with the argument `--name <app-name>`. If you don't provide one, then a name will be automatically generated. - You can optionally include the argument `--location <location-name>` where `<location_name>` is an available Azure region. You can retrieve a list of allowable regions for your Azure account by running the [`az account list-locations`](/cli/azure/appservice#az_appservice_list_locations) command. - If you see the error, "Could not auto-detect the runtime stack of your app," make sure you're running the command in the code directory (See [Troubleshooting auto-detect issues with az webapp up](https://github.com/Azure/app-service-linux-docs/blob/master/AzWebAppUP/runtime_detection.md)).
Resource group creation complete
Creating AppServicePlan '<app-service-plan-name>' ... Creating webapp '<app-name>' ... Configuring default logging for the app, if not already enabled
-Creating zip with contents of dir /home/cephas/myExpressApp ...
+Creating zip with contents of dir /home/msangapu/myPhpApp ...
Getting scm site credentials for zip deployment Starting zip deployment. This operation can take a while to complete ... Deployment endpoint responded with status code 202
Browse to the deployed application in your web browser at the URL `http://<app-n
echo "Hello Azure!"; ```
-1. Save your changes, then redeploy the app using the [az webapp up](/cli/azure/webapp#az-webapp-up) command again with no arguments:
+1. Save your changes, then redeploy the app using the [az webapp up](/cli/azure/webapp#az-webapp-up) command again with these arguments:
```azurecli
- az webapp up
+ az webapp up --runtime "php|8.0" --os-type=linux
``` 1. Once deployment has completed, return to the browser window that opened during the **Browse to the app** step, and refresh the page.
This command may take a minute to run.
> [!div class="nextstepaction"] > [Configure PHP app](configure-language-php.md)
application-gateway Configuration Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-infrastructure.md
The application gateway infrastructure includes the virtual network, subnets, ne
## Virtual network and dedicated subnet
-An application gateway is a dedicated deployment in your virtual network. Within your virtual network, a dedicated subnet is required for the application gateway. You can have multiple instances of a given application gateway deployment in a subnet. You can also deploy other application gateways in the subnet. But you can't deploy any other resource in the application gateway subnet. You can't mix Standard_v2 and Standard Azure Application Gateway on the same subnet.
+An application gateway is a dedicated deployment in your virtual network. Within your virtual network, a dedicated subnet is required for the application gateway. You can have multiple instances of a given application gateway deployment in a subnet. You can also deploy other application gateways in the subnet. But you can't deploy any other resource in the application gateway subnet. You can't mix v1 and v2 Azure Application Gateway SKUs on the same subnet.
> [!NOTE] > [Virtual network service endpoint policies](../virtual-network/virtual-network-service-endpoint-policies-overview.md) are currently not supported in an Application Gateway subnet.
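For illustration, a hedged sketch of adding a dedicated subnet for the application gateway to an existing virtual network (names and address prefix are placeholders):

```azurecli
# Create a subnet reserved for the application gateway deployment
az network vnet subnet create \
    --resource-group MyResourceGroup \
    --vnet-name MyVNet \
    --name AppGatewaySubnet \
    --address-prefixes 10.0.1.0/24
```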
applied-ai-services Resource Customer Stories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/resource-customer-stories.md
Previously updated : 02/04/2022 Last updated : 05/25/2022
The following customers and partners have adopted Form Recognizer across a wide
| Customer/Partner | Description | Link |
||-|-|
-| **Acumatica** | [**Acumatica**](https://www.acumatica.com/) is a technology provider that develops cloud- and browser-based enterprise resource planning (ERP) software for small and medium-sized businesses (SMBs). To bring expense claims into the modern age, Acumatica incorporated Form Recognizer into its native application. The Form Recognizer's prebuilt-receipt API and machine learning capabilities are used to automatically extract data from receipts. Acumatica's customers can file multiple, error-free claims in a matter of seconds, freeing up more time to focus on other important tasks. | [Customer story](https://customers.microsoft.com/story/762684-acumatica-partner-professional-services-azure) |
+| **Acumatica** | [**Acumatica**](https://www.acumatica.com/) is a technology provider that develops cloud and browser-based enterprise resource planning (ERP) software for small and medium-sized businesses (SMBs). To bring expense claims into the modern age, Acumatica incorporated Form Recognizer into its native application. The Form Recognizer's prebuilt-receipt API and machine learning capabilities are used to automatically extract data from receipts. Acumatica's customers can file multiple, error-free claims in a matter of seconds, freeing up more time to focus on other important tasks. | [Customer story](https://customers.microsoft.com/story/762684-acumatica-partner-professional-services-azure) |
+ | **Air Canada** | In September 2021, [**Air Canada**](https://www.aircanada.com/) was tasked with verifying the COVID-19 vaccination status of thousands of worldwide employees in only two months. After realizing manual verification would be too costly and complex within the time constraint, Air Canada turned to its internal AI team for an automated solution. The AI team partnered with Microsoft and used Form Recognizer to roll out a fully functional, accurate solution within weeks. This partnership met the government mandate on time and saved thousands of hours of manual work. | [Customer story](https://customers.microsoft.com/story/1505667713938806113-air-canada-travel-transportation-azure-form-recognizer)|
|**Arkas Logistics** | [**Arkas Logistics**](http://www.arkaslojistik.com.tr/) operates under the umbrella of Arkas Holding, Turkey's leading holding institution, and operates in 23 countries. During the COVID-19 crisis, Arkas Logistics has been able to provide outstanding, complete logistical services thanks to its focus on contactless operation and digitalization steps. Form Recognizer powers a solution that maintains the continuity of the supply chain and allows for uninterrupted service. | [Customer story](https://customers.microsoft.com/story/842149-arkas-logistics-transportation-azure-en-turkey ) |
|**Automation Anywhere**| [**Automation Anywhere**](https://www.automationanywhere.com/) is on a singular and unwavering mission to democratize automation by liberating teams from mundane, repetitive tasks, and allowing more time for innovation and creativity with cloud-native robotic process automation (RPA) software. To protect the citizens of the United Kingdom, healthcare providers must process tens of thousands of COVID-19 tests daily, each one accompanied by a form for the World Health Organization (WHO). Manually completing and processing these forms would potentially slow testing and divert resources away from patient care. In response, Automation Anywhere built an AI-powered bot to help a healthcare provider automatically process and submit the COVID-19 test forms at scale. | [Customer story](https://customers.microsoft.com/story/811346-automation-anywhere-partner-professional-services-azure-cognitive-services) |
|**AvidXchange**| [**AvidXchange**](https://www.avidxchange.com/) has developed an accounts payable automation solution applying Form Recognizer. AvidXchange partners with Azure Cognitive Services to deliver an accounts payable automation solution for the middle market. Customers benefit from faster invoice processing times and increased accuracy to ensure their suppliers are paid the right amount, at the right time. | [Blog](https://techcommunity.microsoft.com/t5/azure-ai/form-recognizer-now-reads-more-languages-processes-ids-and/ba-p/2179428)|
|**Blue Prism**| [**Blue Prism**](https://www.blueprism.com/) Decipher is an AI-powered document processing capability that's directly embedded into the company's connected-RPA platform. Decipher works with Form Recognizer to help organizations process forms faster and with less human effort. One of Blue Prism's customers has been testing the solution to automate invoice handling as part of its procurement process. | [Customer story](https://customers.microsoft.com/story/737482-blue-prism-partner-professional-services-azure) |
|**Chevron**| [**Chevron**](https://www.chevron.com//) Canada Business Unit is now using Form Recognizer with UiPath's robotic process automation platform to automate the extraction of data and move it into back-end systems for analysis. Subject matter experts have more time to focus on higher-value activities and information flows more rapidly. Accelerated operational control enables the company to analyze its business with greater speed, accuracy, and depth. | [Customer story](https://customers.microsoft.com/story/chevron-mining-oil-gas-azure-cognitive-services)|
|**Cross Masters**|[**Cross Masters**](https://crossmasters.com/) uses cutting-edge AI technologies not only as a passion, but as an essential part of a work culture requiring continuous innovation. One of the latest success stories is automation of manual paperwork required to process thousands of invoices. 
Cross Masters used Form Recognizer to develop a unique, customized solution, to provide clients with market insights from a large set of collected invoices. Most impressive is the extraction quality and continuous introduction of new features, such as model composing and table labeling. | [Blog](https://techcommunity.microsoft.com/t5/azure-ai/form-recognizer-now-reads-more-languages-processes-ids-and/ba-p/2179428)|
-|**Element**| [**Element**](https://www.element.com/) is a global business that provides specialist testing, inspection, and certification services to a diverse range of businesses. Element is one of the fastest growing companies in the global testing, inspection and certification sector having over 6,500 engaged experts working in more than 200 facilities across the globe. When the finance team for the Americas was forced to work from home during the COVID-19 pandemic, it needed to digitalize its paper processes fast. The creativity of the team and its use of Azure Form Recognizer delivered more than business as usualΓÇöit delivered significant efficiencies. The Element team used the tools in Microsoft Azure so the next phase could be expedited. Rather than coding from scratch, they saw the opportunity to use the Azure Form Recognizer. This integration quickly gave them the functionality they needed, together with the agility and security of Microsoft Azure. Microsoft Azure Logic Apps is used to automate the process of extracting the documents from email, storing them, and updating the system with the extracted data. Computer Vision, part of Azure Cognitive Services, partners with Azure Form Recognizer to extract the right data points from the invoice documentsΓÇöwhether they're a pdf or scanned images. | [Customer story](https://customers.microsoft.com/story/1414941527887021413-element)|
-|**Emaar Properties**| [**Emaar Properties**](https://www.emaar.com/en/), operates Dubai Mall, the world's most-visited retail and entertainment destination. Each year, the Dubai Mall draws more than 80 million visitors. To enrich the shopping experience, Emaar Properties offers a unique rewards program through a dedicated mobile app. Loyalty program points are earned via submitted receipts. Emaar Properties uses Microsoft Azure Form Recognizer to process submitted receipts and has achieved 92 percent reading accuracy.| [Customer story](https://customers.microsoft.com/story/1459754150957690925-emaar-retailers-azure-en-united-arab-emirates)|
-|**EY**| [**EY**](https://ey.com/) (Ernst & Young Global Limited) is a multinational professional services network that helps to create long-term value for clients and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries to help clients grow, transform, and operate. EY teams work across assurance, consulting, law, strategy, tax, and transactions to find solutions for complex issues facing our world today. The EY Technology team collaborated with Microsoft to build a platform that hastens invoice extraction and contract comparison processes. Azure Form Recognizer and Custom Vision partnered to enable EY teams to automate and improve the OCR and document handling processes for its consulting, tax, audit, and transactions services clients. | [Customer story](https://customers.microsoft.com/story/1404985164224935715-ey-professional-services-azure-form-recognizer)|
-|**Financial Fabric**| [**Financial Fabric**](https://www.financialfabric.com//), a Microsoft Cloud Solution Provider, delivers data architecture, science, and analytics services to investment managers at hedge funds, family offices, and corporate treasuries. Its daily processes involve extracting and normalizing data from thousands of complex financial documents, such as bank statements and legal agreements. The company then provides custom analytics to help its clients make better investment decisions. Extracting this data previously took days or weeks. By using Form Recognizer, Financial Fabric has reduced the time it takes to go from extraction to analysis to just minutes. | [Customer story](https://customers.microsoft.com/story/financial-fabric-banking-capital-markets-azure)|
-|**GEP**| [**GEP**](https://www.gep.com/) has developed an invoice processing solution for a client using Form Recognizer. "At GEP, we're seeing AI and automation make a profound impact on procurement and the supply chain. By combining our AI solution with Microsoft Form Recognizer, we automated the processing of 4,000 invoices a day for a client... It saved them tens of thousands of hours of manual effort, while improving accuracy, controls and compliance on a global scale." Sarateudu Sethi, GEP's Vice President of Artificial Intelligence. | [Blog](https://techcommunity.microsoft.com/t5/azure-ai/form-recognizer-now-reads-more-languages-processes-ids-and/ba-p/2179428)|
+|**Element**| [**Element**](https://www.element.com/) is a global business that provides specialist testing, inspection, and certification services to a diverse range of businesses. Element is one of the fastest growing companies in the global testing, inspection and certification sector, having over 6,500 engaged experts working in more than 200 facilities across the globe. When the finance team for the Americas was forced to work from home during the COVID-19 pandemic, it needed to digitalize its paper processes fast. The creativity of the team and its use of Azure Form Recognizer delivered more than business as usual; it delivered significant efficiencies. The Element team used the tools in Azure so the next phase could be expedited. Rather than coding from scratch, they saw the opportunity to use the Azure Form Recognizer. This integration quickly gave them the functionality they needed, together with the agility and security of Azure. Azure Logic Apps is used to automate the process of extracting the documents from email, storing them, and updating the system with the extracted data. Computer Vision, part of Azure Cognitive Services, partners with Azure Form Recognizer to extract the right data points from the invoice documents, whether they're a PDF or scanned images. | [Customer story](https://customers.microsoft.com/story/1414941527887021413-element)|
+|**Emaar Properties**| [**Emaar Properties**](https://www.emaar.com/en/), operates Dubai Mall, the world's most-visited retail and entertainment destination. Each year, the Dubai Mall draws more than 80 million visitors. To enrich the shopping experience, Emaar Properties offers a unique rewards program through a dedicated mobile app. Loyalty program points are earned via submitted receipts. Emaar Properties uses Azure Form Recognizer to process submitted receipts and has achieved 92 percent reading accuracy.| [Customer story](https://customers.microsoft.com/story/1459754150957690925-emaar-retailers-azure-en-united-arab-emirates)|
+|**EY**| [**EY**](https://ey.com/) (Ernst & Young Global Limited) is a multinational professional services network that helps to create long-term value for clients and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries help clients grow, transform, and operate. EY teams work across assurance, consulting, law, strategy, tax, and transactions to find solutions for complex issues facing our world today. The EY Technology team collaborated with Microsoft to build a platform that hastens invoice extraction and contract comparison processes. Azure Form Recognizer and Custom Vision partnered to enable EY teams to automate and improve the OCR and document handling processes for its transactions services clients. | [Customer story](https://customers.microsoft.com/story/1404985164224935715-ey-professional-services-azure-form-recognizer)|
+|**Financial Fabric**| [**Financial Fabric**](https://www.financialfabric.com/), a Microsoft Cloud Solution Provider, delivers data architecture, science, and analytics services to investment managers at hedge funds, family offices, and corporate treasuries. Its daily processes involve extracting and normalizing data from thousands of complex financial documents, such as bank statements and legal agreements. The company then provides custom analytics to help its clients make better investment decisions. Extracting this data previously took days or weeks. By using Form Recognizer, Financial Fabric has reduced the time it takes to go from extraction to analysis to just minutes. | [Customer story](https://customers.microsoft.com/story/financial-fabric-banking-capital-markets-azure)|
+|**Fujitsu**| [**Fujitsu**](https://scanners.us.fujitsu.com/about-us) is the world leader in document scanning technology, with more than 50 percent of global market share, but that doesn't stop the company from constantly innovating. To improve the performance and accuracy of its cloud scanning solution, Fujitsu incorporated Azure Form Recognizer. It took only a few months to deploy the new technologies, and they have boosted character recognition rates as high as 99.9 percent. This collaboration helps Fujitsu deliver market-leading innovation and give its customers powerful and flexible tools for end-to-end document management. | [Customer story](https://customers.microsoft.com/en-us/story/1504311236437869486-fujitsu-document-scanning-azure-form-recognizer)|
+|**GEP**| [**GEP**](https://www.gep.com/) has developed an invoice processing solution for a client using Form Recognizer. GEP combined its AI solution with Azure Form Recognizer to automate the processing of 4,000 invoices a day for a client, saving them tens of thousands of hours of manual effort. This collaborative effort improved accuracy, controls, and compliance on a global scale, according to Sarateudu Sethi, GEP's Vice President of Artificial Intelligence. | [Blog](https://techcommunity.microsoft.com/t5/azure-ai/form-recognizer-now-reads-more-languages-processes-ids-and/ba-p/2179428)|
|**HCA Healthcare**| [**HCA Healthcare**](https://hcahealthcare.com/) is one of the nation's leading providers of healthcare with over 180 hospitals and 2,000 sites-of-care located throughout the United States and serving approximately 35 million patients each year. Currently, they're using Azure Form Recognizer to simplify and improve the patient onboarding experience and reducing administrative time spent entering repetitive data into the care center's system. | [Customer story](https://customers.microsoft.com/story/1404891793134114534-hca-healthcare-healthcare-provider-azure)| |**Icertis**| [**Icertis**](https://www.icertis.com/), is a Software as a Service (SaaS) provider headquartered in Bellevue, Washington. Icertis digitally transforms the contract management process with a cloud-based, AI-powered, contract lifecycle management solution. Azure Form Recognizer enables Icertis Contract Intelligence to take key-value pairs embedded in contracts and create structured data understood and operated upon by machine algorithms. Through these and other powerful Azure Cognitive and AI services, Icertis empowers customers in every industry to improve business in multiple ways: optimized manufacturing operations, added agility to retail strategies, reduced risk in IT services, and faster delivery of life-saving pharmaceutical products. | [Blog](https://cloudblogs.microsoft.com/industry-blog/en-in/unicorn/2022/01/12/how-icertis-built-a-contract-management-solution-using-azure-form-recognizer/)| |**Instabase**| [**Instabase**](https://instabase.com/) is a horizontal application platform that provides best-in-class machine learning processes to help retrieve, organize, identify, and understand complex masses of unorganized data. Instabase then brings this data into business workflows as organized information. The platform provides a repository of integrative applications to orchestrate and harness that information with the means to rapidly extend and enhance them as required. Instabase applications are fully containerized for widespread, infrastructure-agnostic deployment. | [Customer story](https://customers.microsoft.com/en-gb/story/1376278902865681018-instabase-partner-professional-services-azure)|
-|**Northern Trust**| [**Northern Trust**](https://www.northerntrust.com/) is a leading provider of wealth management, asset servicing, asset management, and banking to corporations, institutions, families, and individuals. As part of its initiative to digitize alternative asset servicing, Northern Trust has launched an AI-powered solution to extract unstructured investment data from alternative asset documents and making it accessible and actionable for asset-owner clients. Microsoft Azure Applied AI services accelerate time-to-value for enterprises building AI solutions. This proprietary solution transforms crucial information such as capital call notices, cash and stock distribution notices, and capital account statements from various unstructured formats into digital, actionable insights for investment teams. | [Customer story](https://www.businesswire.com/news/home/20210914005449/en/Northern-Trust-Automates-Data-Extraction-from-Alternative-Asset-Documentation)|
+|**Northern Trust**| [**Northern Trust**](https://www.northerntrust.com/) is a leading provider of wealth management, asset servicing, asset management, and banking to corporations, institutions, families, and individuals. As part of its initiative to digitize alternative asset servicing, Northern Trust has launched an AI-powered solution to extract unstructured investment data from alternative asset documents and make it accessible and actionable for asset-owner clients. Azure Applied AI services accelerate time-to-value for enterprises building AI solutions. This proprietary solution transforms crucial information from various unstructured formats into digital, actionable insights for investment teams. | [Customer story](https://www.businesswire.com/news/home/20210914005449/en/Northern-Trust-Automates-Data-Extraction-from-Alternative-Asset-Documentation)|
|**Standard Bank**| [**Standard Bank of South Africa**](https://www.standardbank.co.za/southafrica/personal/home) is Africa's largest bank by assets. Standard Bank is headquartered in Johannesburg, South Africa, and has more than 150 years of trade experience in Africa and beyond. When manual due diligence in cross-border transactions began absorbing too much staff time, the bank decided it needed a new way forward. Standard Bank uses Form Recognizer to significantly reduce its cross-border payments registration and processing time. | [Customer story](https://customers.microsoft.com/en-hk/story/1395059149522299983-standard-bank-of-south-africa-banking-capital-markets-azure-en-south-africa)| | **WEX**| [**WEX**](https://www.wexinc.com/) has developed a tool to process Explanation of Benefits documents using Form Recognizer. "The technology is truly amazing. I was initially worried that this type of solution wouldn't be feasible, but I soon realized that Form Recognizer can read virtually any document with accuracy." Matt Dallahan, Senior Vice President of Product Management and Strategy | [Blog](https://techcommunity.microsoft.com/t5/azure-ai/form-recognizer-now-reads-more-languages-processes-ids-and/ba-p/2179428)|
-|**Wilson Allen** | [**Wilson Allen**](https://wilsonallen.com/) took advantage of AI container support for Microsoft Azure Cognitive Services and created a powerful AI solution that help firms around the world find unprecedented levels of insight in previously siloed and unstructured data. Its clients can use this data to support business development and foster client relationships. | [Customer story](https://customers.microsoft.com/story/814361-wilson-allen-partner-professional-services-azure)|
+|**Wilson Allen** | [**Wilson Allen**](https://wilsonallen.com/) took advantage of AI container support for Azure Cognitive Services and created a powerful AI solution that helps firms around the world find unprecedented levels of insight in previously siloed and unstructured data. Its clients can use this data to support business development and foster client relationships. | [Customer story](https://customers.microsoft.com/story/814361-wilson-allen-partner-professional-services-azure)|
|**Zelros**| [**Zelros**](http://www.zelros.com/) offers AI-powered software for the insurance industry. Insurers use the Zelros platform to take in forms and seamlessly manage customer enrollment and claims filing. The company combined its technology with Form Recognizer to automatically pull key-value pairs and text out of documents. When insurers use the Zelros platform, they can quickly process paperwork, ensure high accuracy, and redirect thousands of hours previously spent on manual data extraction toward better service. | [Customer story](https://customers.microsoft.com/story/816397-zelros-insurance-azure)|
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/whats-new.md
Previously updated : 10/14/2020 Last updated : 05/25/2022
Welcome! This page covers what's new in the Metrics Advisor docs. Check back every month for information on service changes, doc additions and updates this month.
+## May 2022
+
+ **Detection configuration auto-tuning** has been released. This feature enables you to customize the service to better surface and personalize anomalies. Instead of the traditional approach of setting configurations for each time series or group of time series, a guided experience captures your detection preferences, such as the level of sensitivity and the types of anomaly patterns, and applies them to the model on the back end. Those preferences can then be applied to all the time series you're monitoring, which reduces configuration costs while achieving better detection results.
+
+Check out [this article](how-tos/configure-metrics.md#tune-the-detection-configuration) to learn how to take advantage of the new feature.
+ ## SDK updates If you want to learn about the latest updates to Metrics Advisor client SDKs see:
automation Extension Based Hybrid Runbook Worker Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/extension-based-hybrid-runbook-worker-install.md
If you use a proxy server for communication between Azure Automation and machine
> [!NOTE] > You can set up the proxy settings by PowerShell cmdlets or API.
+ To install the extension using cmdlets:
+
+1. Get the Automation account details by using the following API call.
+
+ ```http
+ GET https://westcentralus.management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Automation/automationAccounts/{automationAccountName}?api-version=2021-06-22
+
+ ```
+
+ The API call returns the value for the key `AutomationHybridServiceUrl`. Use this URL in the next step to enable the extension on the VM.
+
+1. Install the Hybrid Worker Extension on the VM by running the following PowerShell cmdlet (required module: Az.Compute). Use the `properties.automationHybridServiceUrl` value provided by the preceding API call.
+
+ **Proxy server settings** # [Windows](#tab/windows)
$protectedsettings = @{
"ProxyPassword" = "password"; }; ```
+**Azure VMs**
+
+```powershell
+Set-AzVMExtension -ResourceGroupName <VMResourceGroupName> -Location <VMLocation> -VMName <VMName> -Name "HybridWorkerExtension" -Publisher "Microsoft.Azure.Automation.HybridWorker" -ExtensionType HybridWorkerForWindows -TypeHandlerVersion 0.1 -Settings $settings
+```
+
+**Azure Arc-enabled VMs**
+
+```powershell
+New-AzConnectedMachineExtension -ResourceGroupName <VMResourceGroupName> -Location <VMLocation> -VMName <VMName> -Name "HybridWorkerExtension" -Publisher "Microsoft.Azure.Automation.HybridWorker" -ExtensionType HybridWorkerForWindows -TypeHandlerVersion 0.1 -Settings $settings -NoWait
+```
# [Linux](#tab/linux)
$settings = @{
"AutomationAccountURL" = "<registration-url>/<subscription-id>"; }; ```
+**Azure VMs**
+
+```powershell
+Set-AzVMExtension -ResourceGroupName <VMResourceGroupName> -Location <VMLocation> -VMName <VMName> -Name "HybridWorkerExtension" -Publisher "Microsoft.Azure.Automation.HybridWorker" -ExtensionType HybridWorkerForLinux -TypeHandlerVersion 0.1 -Settings $settings
+```
+
+**Azure Arc-enabled VMs**
+
+```powershell
+New-AzConnectedMachineExtension -ResourceGroupName <VMResourceGroupName> -Location <VMLocation> -VMName <VMName> -Name "HybridWorkerExtension" -Publisher "Microsoft.Azure.Automation.HybridWorker" -ExtensionType HybridWorkerForLinux -TypeHandlerVersion 0.1 -Settings $settings -NoWait
+```
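As a hedged alternative to the REST call in step 1, the same `AutomationHybridServiceUrl` value could be retrieved with the Azure CLI (the placeholders match those in the API call above):

```azurecli
# Query the Automation account and return only the hybrid service URL
az rest --method get \
    --url "https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Automation/automationAccounts/{automationAccountName}?api-version=2021-06-22" \
    --query properties.automationHybridServiceUrl --output tsv
```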
+ ### Firewall use
azure-arc Move Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/move-regions.md
+
+ Title: "Move Arc-enabled Kubernetes clusters between regions"
++ Last updated : 03/03/2021++++
+description: "Manually move your Azure Arc-enabled Kubernetes between regions"
+keywords: "Kubernetes, Arc, Azure, K8s, containers, region, move"
+#Customer intent: As a Kubernetes cluster administrator, I want to move my Arc-enabled Kubernetes cluster to another Azure region.
++
+# Move Arc-enabled Kubernetes clusters across Azure regions
+
+This article describes how to move Arc-enabled Kubernetes clusters (or connected cluster resources) to a different Azure region. You might move your resources to another region for a number of reasons. For example, to take advantage of a new Azure region, to deploy features or services available in specific regions only, to meet internal policy and governance requirements, or in response to capacity planning requirements.
+
+## Prerequisites
+
+- Ensure that the Azure Arc-enabled Kubernetes resource type (Microsoft.Kubernetes/connectedClusters) is supported in the target region.
+- Ensure that the Azure Arc-enabled Kubernetes configuration resource types (Microsoft.KubernetesConfiguration/SourceControlConfigurations, Microsoft.KubernetesConfiguration/Extensions, Microsoft.KubernetesConfiguration/FluxConfigurations) are supported in the target region.
+- Ensure that the Arc-enabled services you've deployed on top of the cluster are supported in the target region.
+- Ensure you have network access to the API server of your underlying Kubernetes cluster.
+
+## Prepare
+
+Before you begin, it's important to understand what moving these resources means.
+
+### Kubernetes configurations
+
+Source control configurations, Flux configurations, and extensions are child resources of the connected cluster resource. To move these resources, you first need to move the parent connected cluster resource.
+
+### Connected cluster
+
+The connectedClusters resource is the ARM representation of your Kubernetes clusters outside of Azure (on-premises, another cloud, edge, and so on). The underlying infrastructure lies in your environment, and Arc provides a first-class representation of the cluster in Azure by installing agents on your cluster.
+
+"Moving" your Arc connected cluster means deleting the ARM resource in the source region, cleaning up the agents on your cluster, and re-onboarding the cluster in the target region.
+
+## Move
+
+### Kubernetes configurations
+
+1. Do a LIST of all configuration resources in the source cluster (the cluster to be moved) and save the response body to be used as the request body when re-creating these resources (see the CLI sketch after this list).
+ - [Microsoft.KubernetesConfiguration/SourceControlConfigurations](/cli/azure/k8s-configuration?view=azure-cli-latest&preserve-view=true#az-k8sconfiguration-list)
+ - [Microsoft.KubernetesConfiguration/Extensions](/cli/azure/k8s-extension?view=azure-cli-latest&preserve-view=true#az-k8s-extension-list)
+ - [Microsoft.KubernetesConfiguration/FluxConfigurations](/cli/azure/k8s-configuration/flux?view=azure-cli-latest&preserve-view=true#az-k8s-configuration-flux-list)
+ > [!NOTE]
+ > LIST/GET of configuration resources **do not** return `ConfigurationProtectedSettings`.
+ > For such cases, the only option is to save the original request body and reuse them while creating the resources in the new region.
+2. [Delete](./move-regions.md#kubernetes-configurations-3) the above configuration resources.
+3. Ensure the Arc connected cluster is up and running in the new region. This is the target cluster.
+4. Re-create each of the configuration resources obtained in the LIST command from the source cluster on the target cluster.
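A hedged sketch of the LIST step for two of the three resource types (cluster and resource group names are placeholders):

```azurecli
# List Flux configurations and extensions on the source cluster
az k8s-configuration flux list --resource-group <source-rg> --cluster-name <source-cluster> --cluster-type connectedClusters
az k8s-extension list --resource-group <source-rg> --cluster-name <source-cluster> --cluster-type connectedClusters
```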
+
+### Connected cluster
+
+1. [Delete](./move-regions.md#connected-cluster-3) the previous Arc deployment from the underlying Kubernetes cluster.
+2. With network access to the underlying Kubernetes cluster, run [this command](./quickstart-connect-cluster.md?tabs=azure-cli#connect-an-existing-kubernetes-cluster) to create the Arc connected cluster in the new region.
+> [!NOTE]
+> The above command creates the cluster by default in the same location as its resource group.
+> Use the `--location` parameter to explicitly provide the target region value.
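A minimal sketch of the delete-and-reconnect sequence (names and region are placeholders):

```azurecli
# Remove the Arc agents and the connected cluster resource in the source region
az connectedk8s delete --name <cluster-name> --resource-group <source-rg>

# Re-onboard the cluster, explicitly targeting the new region
az connectedk8s connect --name <cluster-name> --resource-group <target-rg> --location <target-region>
```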
+
+## Verify
+
+### Kubernetes configurations
+
+Do a LIST of all configuration resources in the target cluster. This should match the LIST response from the source cluster.
+
+### Connected cluster
+
+1. Run `az connectedk8s show -n <connected-cluster-name> -g <resource-group>` and ensure the `connectivityStatus` value is `Connected`.
+2. Run [this command](./quickstart-connect-cluster.md?tabs=azure-cli#view-azure-arc-agents-for-kubernetes) to verify all Arc agents are successfully deployed on the underlying cluster.
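For example, the connectivity check in step 1 can be narrowed to the single property (names are placeholders):

```azurecli
az connectedk8s show -n <connected-cluster-name> -g <resource-group> --query connectivityStatus -o tsv
```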
+
+## Clean up source resources
+
+### Kubernetes configurations
+
+Delete each of the configuration resources returned in the LIST command in the source cluster:
+- [Microsoft.KubernetesConfiguration/SourceControlConfigurations](/cli/azure/k8s-configuration?view=azure-cli-latest&preserve-view=true#az-k8s-configuration-delete)
+- [Microsoft.KubernetesConfiguration/Extensions](/cli/azure/k8s-extension?view=azure-cli-latest&preserve-view=true#az-k8s-extension-delete)
+- [Microsoft.KubernetesConfiguration/FluxConfigurations](/cli/azure/k8s-configuration/flux?view=azure-cli-latest&preserve-view=true#az-k8s-configuration-flux-delete)
+
+> [!NOTE]
+> This step may be skipped if the parent Arc connected cluster is also being deleted, because deleting the cluster automatically removes the configuration resources deployed on it.
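A hedged example of deleting a single extension instance from the source cluster (names are placeholders):

```azurecli
az k8s-extension delete --name <extension-name> --cluster-name <source-cluster> --cluster-type connectedClusters --resource-group <source-rg>
```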
+
+### Connected cluster
+
+With network access to the underlying Kubernetes cluster, run [this command](./quickstart-connect-cluster.md?tabs=azure-cli#clean-up-resources) to delete the Arc connected cluster. This command will clean up the Arc footprint on the underlying cluster as well as on ARM.
azure-arc Onboard Group Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-group-policy.md
Title: Connect machines at scale using group policy description: In this article, you learn how to connect machines to Azure using Azure Arc-enabled servers using group policy. Previously updated : 04/29/2022 Last updated : 05/25/2022
Before you can run the script to connect your machines, you'll need to do the fo
1. Modify and save the following configuration file to the remote share as `ArcConfig.json`. Edit the file with your Azure subscription, resource group, and location details. Use the service principal details from step 1 for the last two fields:
-```
+```json
{
- "tenant-id": "INSERT AZURE TENANTID",
- "subscription-id": "INSERT AZURE SUBSCRIPTION ID",
- "resource-group": "INSERT RESOURCE GROUP NAME",
- "location": "INSERT REGION",
- "service-principal-id": "INSERT SPN ID",
- "service-principal-secret": "INSERT SPN Secret"
- }
+ "tenant-id": "INSERT AZURE TENANTID",
+ "subscription-id": "INSERT AZURE SUBSCRIPTION ID",
+ "resource-group": "INSERT RESOURCE GROUP NAME",
+ "location": "INSERT REGION",
+ "service-principal-id": "INSERT SPN ID",
+ "service-principal-secret": "INSERT SPN Secret"
+ }
``` The group policy will project machines as Arc-enabled servers in the Azure subscription, resource group, and region specified in this configuration file.
azure-cache-for-redis Cache Redis Cache Bicep Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-redis-cache-bicep-provision.md
+
+ Title: Deploy Azure Cache for Redis using Bicep
+description: Learn how to use Bicep to deploy an Azure Cache for Redis resource.
+++++ Last updated : 05/24/2022++
+# Quickstart: Create an Azure Cache for Redis using Bicep
+
+Learn how to use Bicep to deploy a cache using Azure Cache for Redis. After you deploy the cache, use it with an existing storage account to keep diagnostic data. Learn how to define which resources are deployed and how to define parameters that are specified when the deployment is executed. You can use this Bicep file for your own deployments, or customize it to meet your requirements.
++
+## Prerequisites
+
+* **Azure subscription**: If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+* **A storage account**: To create one, see [Create an Azure Storage account](../storage/common/storage-account-create.md?tabs=azure-portal). The storage account is used for diagnostic data. Create the storage account in a new resource group named **exampleRG**.
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/redis-cache/).
++
+The following resources are defined in the Bicep file:
+
+* [Microsoft.Cache/Redis](/azure/templates/microsoft.cache/redis)
+* [Microsoft.Insights/diagnosticsettings](/azure/templates/microsoft.insights/diagnosticsettings)
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters existingDiagnosticsStorageAccountName=<storage-name> existingDiagnosticsStorageAccountResourceGroup=<resource-group>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -existingDiagnosticsStorageAccountName "<storage-name>" -existingDiagnosticsStorageAccountResourceGroup "<resource-group>"
+ ```
+
+
+
+ > [!NOTE]
+ > Replace **\<storage-name\>** with the name of the storage account you created at the beginning of this quickstart. Replace **\<resource-group\>** with the name of the resource group in which your storage account is located.
+
+ When the deployment finishes, you see a message indicating the deployment succeeded.
+
+## Review deployed resources
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+## Clean up resources
+
+When no longer needed, delete the resource group, which deletes the resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+In this quickstart, you learned how to use Bicep to deploy a cache using Azure Cache for Redis. To learn more about Azure Cache for Redis and Bicep, see the following articles:
+
+* Learn more about [Azure Cache for Redis](../azure-cache-for-redis/cache-overview.md).
+* Learn more about [Bicep](../../articles/azure-resource-manager/bicep/overview.md).
azure-cache-for-redis Cache Web App Bicep With Redis Cache Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-web-app-bicep-with-redis-cache-provision.md
+
+ Title: Provision Web App that uses Azure Cache for Redis using Bicep
+description: Use Bicep to deploy web app with Azure Cache for Redis.
+++ Last updated : 05/24/2022++++
+# Create a Web App plus Azure Cache for Redis using Bicep
+
+In this article, you use Bicep to deploy an Azure Web App that uses Azure Cache for Redis, as well as an App Service plan.
++
+You can use this Bicep file for your own deployments. The Bicep file provides unique names for the Azure Web App, the App Service plan, and the Azure Cache for Redis. If you'd like, you can customize the Bicep file after you save it to your local device to meet your requirements.
+
+For more information about creating Bicep files, see [Quickstart: Create Bicep files with Visual Studio Code](../azure-resource-manager/bicep/quickstart-create-bicep-use-visual-studio-code.md). To learn about Bicep syntax, see [Understand the structure and syntax of Bicep files](../azure-resource-manager/bicep/file.md).
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.web/web-app-with-redis-cache/).
++
+With this Bicep file, you deploy:
+
+* [**Microsoft.Cache/Redis**](/azure/templates/microsoft.cache/redis)
+* [**Microsoft.Web/sites**](/azure/templates/microsoft.web/sites)
+* [**Microsoft.Web/serverfarms**](/azure/templates/microsoft.web/serverfarms)
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep
+ ```
+
+
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Review deployed resources
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+## Clean up resources
+
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and its resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+To learn more about Bicep, continue to the following article:
+
+* [Bicep overview](../azure-resource-manager/bicep/overview.md)
azure-functions Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/start-stop-vms/deploy.md
To simplify management and removal, we recommend you deploy Start/Stop VMs v2 (p
> The naming format for the function app and storage account has changed. To guarantee global uniqueness, a random and unique string is now appended to the names of these resources. 1. Open your browser and navigate to the Start/Stop VMs v2 [GitHub organization](https://github.com/microsoft/startstopv2-deployments/blob/main/README.md).
-1. Select the deployment option based on the Azure cloud environment your Azure VMs are created in. This will open the custom Azure Resource Manager deployment page in the Azure portal.
+1. Select the deployment option based on the Azure cloud environment your Azure VMs are created in.
1. If prompted, sign in to the [Azure portal](https://portal.azure.com).
+1. Choose the appropriate **Plan** from the drop-down box. When choosing a Zone Redundant plan (**Start/StopV2-AZ**), you must create your deployment in one of the following regions:
+ + Australia East
+ + Brazil South
+ + Canada Central
+ + Central US
+ + East US
+ + East US 2
+ + France Central
+ + Germany West Central
+ + Japan East
+ + North Europe
+ + Southeast Asia
+ + UK South
+ + West Europe
+ + West US 2
+ + West US 3
+
+1. Select **Create**, which opens the custom Azure Resource Manager deployment page in the Azure portal.
+ 1. Enter the following values: |Name |Value |
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
Previously updated : 05/11/2022 Last updated : 05/24/2022 # Overview of Azure Monitor agents
The following tables list the operating systems that are supported by the Azure
| Operating system | Azure Monitor agent <sup>1</sup> | Log Analytics agent <sup>1</sup> | Dependency agent | Diagnostics extension <sup>2</sup>|
|:|::|::|::|::
+| AlmaLinux | X | | | |
| Amazon Linux 2017.09 | | X | | |
| Amazon Linux 2 | | X | | |
| CentOS Linux 8 | X <sup>3</sup> | X | X | |
The following tables list the operating systems that are supported by the Azure
| Red Hat Enterprise Linux Server 7 | X | X | X | X |
| Red Hat Enterprise Linux Server 6 | | X | X | |
| Red Hat Enterprise Linux Server 6.7+ | | X | X | X |
+| Rocky Linux | X | | | |
| SUSE Linux Enterprise Server 15.2 | X <sup>3</sup> | | | |
| SUSE Linux Enterprise Server 15.1 | X <sup>3</sup> | X | | |
| SUSE Linux Enterprise Server 15 SP1 | X | X | X | |
| SUSE Linux Enterprise Server 15 | X | X | X | |
| SUSE Linux Enterprise Server 12 SP5 | X | X | X | X |
| SUSE Linux Enterprise Server 12 | X | X | X | X |
+| Ubuntu 22.04 LTS | X | | | |
| Ubuntu 20.04 LTS | X | X | X | X |
| Ubuntu 18.04 LTS | X | X | X | X |
| Ubuntu 16.04 LTS | X | X | X | X |
azure-monitor Activity Log Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/activity-log-alerts.md
- Title: Activity log alerts in Azure Monitor
-description: Be notified via SMS, webhook, SMS, email and more, when certain events occur in the activity log.
- Previously updated : 04/04/2022---
-# Alerts on activity log
-
-## Overview
-
-Activity log alerts allow you to be notified on events and operations that are logged in [Azure Activity Log](../essentials/activity-log.md). An alert is fired when a new [activity log event](../essentials/activity-log-schema.md) occurs that matches the conditions specified in the alert rule.
-
-Activity log alert rules are Azure resources, so they can be created by using an Azure Resource Manager template. They also can be created, updated, or deleted in the Azure portal. This article introduces the concepts behind activity log alerts. For more information on creating or usage of activity log alert rules, see [Create and manage activity log alerts](./alerts-activity-log.md).
-
-## Alerting on activity log event categories
-
-You can create activity log alert rules to receive notifications on one of the following activity log event categories:
-
-| Event Category | Category Description | Example |
-|-|-||
-| Administrative | ARM operation (e.g. create, update, delete, or action) was performed on resources in your subscription, resource group, or on a specific Azure resource.| A virtual machine in your resource group is deleted |
-| Service health | Service incidents (e.g. an outage or a maintenance event) occurred that may impact services in your subscription on a specific region.| An outage impacting VMs in your subscription in East US. |
-| Resource health | The health of a specific resource is degraded, or the resource becomes unavailable. | A VM in your subscription transitions to a degraded or unavailable state. |
-| Autoscale | An Azure Autoscale operation has occurred, resulting in success or failure | An autoscale action on a virtual machine scale set in your subscription failed. |
-| Recommendation | A new Azure Advisor recommendation is available for your subscription | A high-impact recommendation for your subscription was received. |
-| Security | Events detected by Microsoft Defender for Cloud | A suspicious double extension file executed was detected in your subscription |
-| Policy | Operations performed by Azure Policy | Policy Deny event occurred in your subscription. |
-
-> [!NOTE]
-> Alert rules **cannot** be created for events in Alert category of activity log.
--
-## Configuring activity log alert rules
-
-You can configure an activity log alert rule based on any top-level property in the JSON object for an activity log event. For more information, see [Categories in the Activity Log](../essentials/activity-log.md#view-the-activity-log).
-
-An alternative simple way for creating conditions for activity log alert rules is to explore or filter events via [Activity log in Azure portal](../essentials/activity-log.md#view-the-activity-log). In Azure Monitor - Activity log, one can filter and locate a required event and then create an alert rule to notify on similar events by using the **New alert rule** button.
-
-> [!NOTE]
-> An activity log alert rule monitors only for events in the subscription in which the alert rule is created.
-
-Activity log events have a few common properties which can be used to define an activity log alert rule condition:
--- **Category**: Administrative, Service Health, Resource Health, Autoscale, Security, Policy, or Recommendation. -- **Scope**: The individual resource or set of resource(s) for which the alert on activity log is defined. Scope for an activity log alert can be defined at various levels:
- - Resource Level: For example, for a specific virtual machine
- - Resource Group Level: For example, all virtual machines in a specific resource group
- - Subscription Level: For example, all virtual machines in a subscription (or) all resources in a subscription
-- **Resource group**: By default, the alert rule is saved in the same resource group as that of the target defined in Scope. The user can also define the Resource Group where the alert rule should be stored.-- **Resource type**: Resource Manager defined namespace for the target of the alert rule.-- **Operation name**: The [Azure resource provider operation](../../role-based-access-control/resource-provider-operations.md) name utilized for Azure role-based access control. Operations not registered with Azure Resource Manager cannot be used in an activity log alert rule.-- **Level**: The severity level of the event (Informational, Warning, Error, or Critical).-- **Status**: The status of the event, typically Started, Failed, or Succeeded.-- **Event initiated by**: Also known as the "caller." The email address or Azure Active Directory identifier of the user (or application) who performed the operation.-
-In addition to these comment properties, different activity log events have category-specific properties that can be used to configure an alert rule for events of each category. For example, when creating a service health alert rule you can configure a condition on the impacted region or service that appear in the event.
-
-## Using action groups
-
-When an activity log alert is fired, it uses an action group to trigger actions or send notifications. An action group is a reusable set of notification receivers, such as email addresses, webhook URLs, or SMS phone numbers. The receivers can be referenced from multiple alerts rules to centralize and group your notification channels. When you define your activity log alert rule, you have two options. You can:
-
-* Use an existing action group in your activity log alert rule.
-* Create a new action group.
-
-To learn more about action groups, see [Create and manage action groups in the Azure portal](./action-groups.md).
-
-## Activity log alert rules limit
-You can create up to 100 active activity log alert rules per subscription (including rules for all activity log event categories, such as resource health or service health). This limit can't be increased.
-If you are reaching near this limit, there are several guidelines you can follow to optimize the use of activity log alerts rules, so that you can cover more resources and events with the same number of rules:
-* A single activity log alert rule can be configured to cover the scope of a single resource, a resource group, or an entire subscription. To reduce the number of rules you're using, consider to replace multiple rules covering a narrow scope with a single rule covering a broad scope. For example, if you have multiple VMs in a subscription, and you want an alert to be triggered whenever one of them is restarted, you can use a single activity log alert rule to cover all the VMs in your subscription. The alert will be triggered whenever any VM in the subscription is restarted.
-* A single service health alert rule can cover all the services and Azure regions used by your subscription. If you're using multiple service health alert rules per subscription, you can replace them with a single rule (or with a small number of rules, if you prefer).
-* A single resource health alert rule can cover multiple resource types and resources in your subscription. If you're using multiple resource health alert rules per subscription, you can replace them with a smaller number of rules (or even a single rule) that covers multiple resource types.
--
-## Next steps
--- Get an [overview of alerts](./alerts-overview.md).-- Learn about [create and modify activity log alerts](alerts-activity-log.md).-- Review the [activity log alert webhook schema](../alerts/activity-log-alerts-webhook.md).-- Learn more about [service health alerts](../../service-health/service-notifications.md).-- Learn more about [Resource health alerts](../../service-health/resource-health-alert-monitor-guide.md).-- Learn more about [Recommendation alerts](../../advisor/advisor-alerts-portal.md).
azure-monitor Alerts Common Schema Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-common-schema-definitions.md
Last updated 07/20/2021
This article describes the [common alert schema definitions](./alerts-common-schema.md) for Azure Monitor, including those for webhooks, Azure Logic Apps, Azure Functions, and Azure Automation runbooks. Any alert instance describes the resource that was affected and the cause of the alert. These instances are described in the common schema in the following sections:
-* **Essentials**: A set of standardized fields, common across all alert types, which describe what resource the alert is on, along with additional common alert metadata (for example, severity or description). Definitions of severity can be found in the [alerts overview](alerts-overview.md#overview).
+* **Essentials**: A set of standardized fields, common across all alert types, which describe what resource the alert is on, along with additional common alert metadata (for example, severity or description).
* **Alert context**: A set of fields that describes the cause of the alert, with fields that vary based on the alert type. For example, a metric alert includes fields like the metric name and metric value in the alert context, whereas an activity log alert has information about the event that generated the alert. **Sample alert payload**
azure-monitor Alerts Common Schema Test Action Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-common-schema-test-action-definitions.md
Last updated 01/14/2022
This article describes the [common alert schema definitions](./alerts-common-schema.md) for Azure Monitor, including those for webhooks, Azure Logic Apps, Azure Functions, and Azure Automation runbooks. Any alert instance describes the resource that was affected and the cause of the alert. These instances are described in the common schema in the following sections:
-* **Essentials**: A set of standardized fields, common across all alert types, which describe what resource the alert is on, along with additional common alert metadata (for example, severity or description). Definitions of severity can be found in the [alerts overview](alerts-overview.md#overview).
+* **Essentials**: A set of standardized fields, common across all alert types, which describe what resource the alert is on, along with additional common alert metadata (for example, severity or description).
* **Alert context**: A set of fields that describes the cause of the alert, with fields that vary based on the alert type. For example, a metric alert includes fields like the metric name and metric value in the alert context, whereas an activity log alert has information about the event that generated the alert. **Sample alert payload**
azure-monitor Alerts Dynamic Thresholds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-dynamic-thresholds.md
To trigger an alert when there was a violation from a Dynamic Thresholds in 20 m
## How do you find out why a Dynamic Thresholds alert was triggered?
-You can explore triggered alert instances in the alerts view either by clicking on the link in the email or text message, or browser to see the alerts view in the Azure portal. [Learn more about the alerts view](./alerts-overview.md#alerts-experience).
+You can explore triggered alert instances by selecting the link in the email or text message, or by browsing to the alerts page in the Azure portal. [Learn more about the alerts view](./alerts-page.md).
The alert view displays:
azure-monitor Alerts Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log.md
Title: Create, view, and manage log alert rules Using Azure Monitor | Microsoft Docs
-description: Use Azure Monitor to create, view, and manage log alert rules
+ Title: Create Azure Monitor log alert rules and manage alert instances | Microsoft Docs
+description: Create Azure Monitor log alert rules and manage your alert instances.
Previously updated : 2/23/2022 Last updated : 05/23/2022 +
-# Create, view, and manage log alerts using Azure Monitor
+# Create Azure Monitor log alert rules and manage alert instances
-This article shows you how to create and manage log alerts. Azure Monitor log alerts allow users to use a [Log Analytics](../logs/log-analytics-tutorial.md) query to evaluate resource logs at a set frequency and fire an alert based on the results. Rules can trigger one or more actions using [Action Groups](./action-groups.md). [Learn more about functionality and terminology of log alerts](./alerts-unified-log.md).
+This article shows you how to create log alert rules and manage your alert instances. Azure Monitor log alerts let you use a [Log Analytics](../logs/log-analytics-tutorial.md) query to evaluate resource logs at a set frequency and fire an alert based on the results. Rules can trigger one or more actions using [alert processing rules](alerts-action-rules.md) and [action groups](./action-groups.md). For the concepts behind log alerts, see [Log alerts](alerts-types.md#log-alerts).
- Alert rules are defined by three components:
+When an alert is triggered by an alert rule,
- Target: A specific Azure resource to monitor. - Criteria: Logic to evaluate. If met, the alert fires. - Action: Notifications or automation - email, SMS, webhook, and so on. You can also [create log alert rules using Azure Resource Manager templates](../alerts/alerts-log-create-templates.md). ## Create a new log alert rule in the Azure portal
-> [!NOTE]
-> This article describes creating alert rules using the new alert rule wizard.
-> The new alert rule experience is a little different than the old experience. Please note these changes:
-> - Previously, search results were included in the payloads of the triggered alert and its associated notifications. This was a limited and error prone solution. To get detailed context information about the alert so that you can decide on the appropriate action :
-> - The recommended best practice it to use [Dimensions](alerts-unified-log.md#split-by-alert-dimensions). Dimensions provide the column value that fired the alert, giving you context for why the alert fired and how to fix the issue.
-> - When you need to investigate in the logs, use the link in the alert to the search results in Logs.
-> - If you need the raw search results or for any other advanced customizations, use Logic Apps.
-> - The new alert rule wizard does not support customization of the JSON payload.
-> - Use custom properties in the [new API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules/create-or-update#actions) to add static parameters and associated values to the webhook actions triggered by the alert.
-> - For more advanced customizations, use Logic Apps.
-> - The new alert rule wizard does not support customization of the email subject.
-> - Customers often use the custom email subject to indicate the resource on which the alert fired, instead of using the Log Analytics workspace. Use the [new API](alerts-unified-log.md#split-by-alert-dimensions) to trigger an alert of the desired resource using the resource id column.
-> - For more advanced customizations, use Logic Apps.
-
-1. In the [portal](https://portal.azure.com/), select the relevant resource. We recommend monitoring at scale by using a subscription or resource group for the alert rule.
+1. In the [portal](https://portal.azure.com/), select the relevant resource. We recommend monitoring at scale by using a subscription or resource group.
1. In the Resource menu, select **Logs**. 1. Write a query that will find the log events for which you want to create an alert. You can use the [alert query examples article](../logs/queries.md) to understand what you can discover or [get started on writing your own query](../logs/log-analytics-tutorial.md). Also, [learn how to create optimized alert queries](alerts-log-query.md). 1. From the top command bar, Select **+ New Alert rule**.
You can also [create log alert rules using Azure Resource Manager templates](../
:::image type="content" source="media/alerts-log/alerts-create-new-alert-rule.png" alt-text="Create new alert rule." lightbox="media/alerts-log/alerts-create-new-alert-rule-expanded.png"::: 1. The **Condition** tab opens, populated with your log query.
+
+ By default, the rule counts the number of results in the last 5 minutes.
+
+ If the system detects summarized query results, the rule is automatically updated with that information.
:::image type="content" source="media/alerts-log/alerts-logs-conditions-tab.png" alt-text="Conditions Tab.":::
-1. In the **Measurement** section, select values for the [**Measure**](./alerts-unified-log.md#measure), [**Aggregation type**](./alerts-unified-log.md#aggregation-type), and [**Aggregation granularity**](./alerts-unified-log.md#aggregation-granularity) fields.
- - By default, the rule counts the number of results in the last 5 minutes.
- - If the system detects summarized query results, the rule is automatically updated to capture that.
-
+1. In the **Measurement** section, select values for these fields:
+
+ |Field |Description |
+ |||
+ |Measure|Log alerts can measure two different things, which can be used for different monitoring scenarios:<br> **Table rows**: The number of rows returned can be used to work with events such as Windows event logs, Syslog, or application exceptions. <br>**Calculation of a numeric column**: Calculations based on any numeric column can be used to include any number of resources. For example, CPU percentage. |
+ |Aggregation type| The calculation performed on multiple records to aggregate them to one numeric value using the aggregation granularity. For example: Total, Average, Minimum, or Maximum. |
+ |Aggregation granularity| The interval for aggregating multiple records to one numeric value.|
+
:::image type="content" source="media/alerts-log/alerts-log-measurements.png" alt-text="Measurements.":::
-1. (Optional) In the **Split by dimensions** section, select [alert splitting by dimensions](./alerts-unified-log.md#split-by-alert-dimensions):
- - If detected, The **Resource ID column** is selected automatically and changes the context of the fired alert to the record's resource.
- - Clear the **Resource ID column** to fire alerts on multiple resources in subscriptions or resource groups. For example, you can create a query that checks if 80% of the resource group's virtual machines are experiencing high CPU usage.
- - You can use the dimensions table to select up to six more splittings for any number or text columns types.
- - Alerts are fired individually for each unique splitting combination. The alert payload includes the combination that triggered the alert.
-1. In the **Alert logic** section, set the **Alert logic**: [**Operator**, **Threshold Value**](./alerts-unified-log.md#threshold-and-operator), and [**Frequency**](./alerts-unified-log.md#frequency).
+1. (Optional) In the **Split by dimensions** section, you can create resource-centric alerts at scale for a subscription or resource group. Splitting by dimensions groups combinations of numerical or string columns to monitor for the same condition on multiple Azure resources.
- :::image type="content" source="media/alerts-log/alerts-rule-preview-agg-params-and-splitting.png" alt-text="Preview alert rule parameters.":::
+ If you select more than one dimension value, each time series that results from the combination triggers its own alert and is charged separately. The alert payload includes the combination that triggered the alert.
-1. (Optional) In the **Advanced options** section, set the [**Number of violations to trigger the alert**](./alerts-unified-log.md#number-of-violations-to-trigger-alert).
+ You can select up to six more splittings on any number or text column type.
+
+ You can also decide **not** to split when you want a condition applied to multiple resources in the scope. For example, you can fire an alert when at least five machines in the resource group scope have CPU usage over 80%.
+
+ Select values for these fields:
+
+ |Field |Description |
+ |||
+ |Dimension name|Dimensions can be either number or string columns. Dimensions are used to monitor specific time series and provide context to a fired alert.<br>Splitting on the Azure Resource ID column makes the specified resource the alert target. If a Resource ID column is detected, it's selected automatically and changes the context of the fired alert to the record's resource. |
+ |Operator|The operator used on the dimension name and value. |
+ |Dimension values|The dimension values are based on data from the last 48 hours. Select **Add custom value** to add custom dimension values. |
+
+ :::image type="content" source="media/alerts-log/alerts-create-log-rule-dimensions.png" alt-text="Screenshot of the splitting by dimensions section of a new log alert rule.":::
- :::image type="content" source="media/alerts-log/alerts-rule-preview-advanced-options.png" alt-text="Advanced options.":::
+1. In the **Alert logic** section, select values for these fields:
+
+ |Field |Description |
+ |||
+ |Operator| The query results are transformed into a number. In this field, select the operator to use to compare the number against the threshold.|
+ |Threshold value| A number value for the threshold. |
+ |Frequency of evaluation|The interval in which the query is run. Can be set from a minute to a day. |
+
+ :::image type="content" source="media/alerts-log/alerts-create-log-rule-logic.png" alt-text="Screenshot of alert logic section of a new log alert rule.":::
+
+1. (Optional) In the **Advanced options** section, you can specify the number of failures and the alert evaluation period required to trigger an alert. For example, if you set the **Aggregation granularity** to 5 minutes, you can specify that you only want to trigger an alert if there were three failures (15 minutes) in the last hour. This setting is defined by your application business policy.
+
+ Select values for these fields under **Number of violations to trigger the alert**:
+
+ |Field |Description |
+ |||
+ |Number of violations|The number of violations that have to occur to trigger the alert.|
+ |Evaluation period|The amount of time within which those violations have to occur. |
+ |Override query time range| Enter a value for this field if the alert evaluation period is different from the query time range.|
+
+ :::image type="content" source="media/alerts-log/alerts-rule-preview-advanced-options.png" alt-text="Screenshot of the advanced options section of a new log alert rule.":::
1. The **Preview** chart shows query evaluation results over time. You can change the chart period or select different time series that resulted from unique alert splitting by dimensions.
- :::image type="content" source="media/alerts-log/alerts-create-alert-rule-preview.png" alt-text="Alert rule preview.":::
+ :::image type="content" source="media/alerts-log/alerts-create-alert-rule-preview.png" alt-text="Screenshot of a preview of a new alert rule.":::
1. From this point on, you can select the **Review + create** button at any time. 1. In the **Actions** tab, select or create the required [action groups](./action-groups.md).
You can also [create log alert rules using Azure Resource Manager templates](../
:::image type="content" source="media/alerts-log/alerts-rule-actions-tab.png" alt-text="Actions tab."::: 1. In the **Details** tab, define the **Project details** and the **Alert rule details**.
-1. (Optional) In the **Advanced options** section, you can set several options, including whether to **Enable upon creation**, or to [**Mute actions**](./alerts-unified-log.md#state-and-resolving-alerts) for a period after the alert rule fires.
+1. (Optional) In the **Advanced options** section, you can set several options, including whether to **Enable upon creation**, or to **Mute actions** for a period of time after the alert rule fires.
:::image type="content" source="media/alerts-log/alerts-rule-details-tab.png" alt-text="Details tab.":::
-> [!NOTE]
-> If you, or your administrator assigned the Azure Policy **Azure Log Search Alerts over Log Analytics workspaces should use customer-managed keys**, you must select **Check workspace linked storage** option in **Advanced options**, or the rule creation will fail as it will not meet the policy requirements.
+ > [!NOTE]
+ > If you or your administrator assigned the Azure Policy **Azure Log Search Alerts over Log Analytics workspaces should use customer-managed keys**, you must select the **Check workspace linked storage** option in **Advanced options**; otherwise, the rule creation will fail because it won't meet the policy requirements.
1. In the **Tags** tab, set any required tags on the alert rule resource.
You can also [create log alert rules using Azure Resource Manager templates](../
:::image type="content" source="media/alerts-log/alerts-rule-review-create.png" alt-text="Review and create tab.":::
+> [!NOTE]
+> The section above describes creating alert rules using the new alert rule wizard.
+> The new alert rule experience is a little different than the old experience. Please note these changes:
+> - Previously, search results were included in the payloads of the triggered alert and its associated notifications. This was a limited and error-prone solution. To get detailed context information about the alert so that you can decide on the appropriate action:
+> - The recommended best practice is to use [Dimensions](alerts-unified-log.md#split-by-alert-dimensions). Dimensions provide the column value that fired the alert, giving you context for why the alert fired and how to fix the issue.
+> - When you need to investigate in the logs, use the link in the alert to the search results in Logs.
+> - If you need the raw search results or for any other advanced customizations, use Logic Apps.
+> - The new alert rule wizard does not support customization of the JSON payload.
+> - Use custom properties in the [new API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules/create-or-update#actions) to add static parameters and associated values to the webhook actions triggered by the alert.
+> - For more advanced customizations, use Logic Apps.
+> - The new alert rule wizard does not support customization of the email subject.
+> - Customers often use the custom email subject to indicate the resource on which the alert fired, instead of using the Log Analytics workspace. Use the [new API](alerts-unified-log.md#split-by-alert-dimensions) to trigger an alert on the desired resource using the resource ID column.
+> - For more advanced customizations, use Logic Apps.
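If you prefer to script log alert rule creation rather than use the portal wizard described above, the `scheduled-query` Azure CLI extension offers an equivalent path. The following is a rough sketch only: the workspace resource ID, rule name, and query are placeholders, and the exact flags and condition grammar should be verified against `az monitor scheduled-query create --help`.

```azurecli
# Rough sketch only - placeholder names and IDs; verify flags with:
#   az monitor scheduled-query create --help
az extension add --name scheduled-query

az monitor scheduled-query create \
  --name "error-events-rule" \
  --resource-group myResourceGroup \
  --scopes "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.OperationalInsights/workspaces/myWorkspace" \
  --condition "count 'ErrorEvents' > 0" \
  --condition-query ErrorEvents="Event | where EventLevelName == 'Error'" \
  --evaluation-frequency 5m \
  --window-size 15m \
  --severity 3 \
  --description "Fires when error events are found in the evaluation window"
```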
+ ## Enable recommended out-of-the-box alert rules in the Azure portal (preview) > [!NOTE] > The alert rule recommendations feature is currently in preview and is only enabled for VMs.
az deployment group create \
On success for creation, 201 is returned. On success for update, 200 is returned. ## Next steps
-* Learn about [log alerts](./alerts-unified-log.md).
+* Learn about [Log alerts](alerts-types.md#log-alerts).
* Create log alerts using [Azure Resource Manager Templates](./alerts-log-create-templates.md). * Understand [webhook actions for log alerts](./alerts-log-webhook.md). * Learn more about [log queries](../logs/log-query-overview.md).
azure-monitor Alerts Metric Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-metric-overview.md
- Title: Understand how metric alerts work in Azure Monitor.
-description: Get an overview of what you can do with metric alerts and how they work in Azure Monitor.
Previously updated : 10/14/2021----
-# Understand how metric alerts work in Azure Monitor
-
-Metric alerts in Azure Monitor work on top of multi-dimensional metrics. These metrics could be [platform metrics](alerts-metric-near-real-time.md#metrics-and-dimensions-supported), [custom metrics](../essentials/metrics-custom-overview.md), [popular logs from Azure Monitor converted to metrics](./alerts-metric-logs.md) and Application Insights metrics. Metric alerts evaluate at regular intervals to check if conditions on one or more metric time-series are true and notify you when the evaluations are met. Metric alerts are stateful by default, that is, they only send out notifications when the state changes (fired, resolved). If you want to make them stateless, see [make metric alerts occur every time my condition is met](alerts-troubleshoot-metric.md#make-metric-alerts-occur-every-time-my-condition-is-met).
-
-## How do metric alerts work?
-
-You can define a metric alert rule by specifying a target resource to be monitored, metric name, condition type (static or dynamic), and the condition (an operator and a threshold/sensitivity) and an action group to be triggered when the alert rule fires. Condition types affect the way thresholds are determined. [Learn more about Dynamic Thresholds condition type and sensitivity options](../alerts/alerts-dynamic-thresholds.md).
-
-### Alert rule with static condition type
-
-Let's say you have created a simple static threshold metric alert rule as follows:
--- Target Resource (the Azure resource you want to monitor): myVM-- Metric: Percentage CPU-- Condition Type: Static-- Aggregation type (a statistic that is run over raw metric values. [Supported aggregation types](../essentials/metrics-aggregation-explained.md#aggregation-types) are Minimum, Maximum, Average, Total, Count): Average-- Period (the look back window over which metric values are checked): Over the last 5 mins-- Frequency (the frequency with which the metric alert checks if the conditions are met): 1 min-- Operator: Greater Than-- Threshold: 70-
-From the time the alert rule is created, the monitor runs every 1 min and looks at metric values for the last 5 minutes and checks if the average of those values exceeds 70. If the condition is met that is, the average Percentage CPU for the last 5 minutes exceeds 70, the alert rule fires an activated notification. If you have configured an email or a web hook action in the action group associated with the alert rule, you will receive an activated notification on both.
-
-When you are using multiple conditions in one rule, the rule "ands" the conditions together. That is, an alert fires when all the conditions in the alert rule evaluate as true and resolve when one of the conditions is no longer true. An example for this type of alert rule would be to monitor an Azure virtual machine and alert when both "Percentage CPU is higher than 90%" and "Queue length is over 300 items".
-
-### Alert rule with dynamic condition type
-
-Let's say you have created a simple Dynamic Thresholds metric alert rule as follows:
--- Target Resource (the Azure resource you want to monitor): myVM-- Metric: Percentage CPU-- Condition Type: Dynamic-- Aggregation Type (a statistic that is run over raw metric values. [Supported aggregation types](../essentials/metrics-aggregation-explained.md#aggregation-types) are Minimum, Maximum, Average, Total, Count): Average-- Period (the look back window over which metric values are checked): Over the last 5 mins-- Frequency (the frequency with which the metric alert checks if the conditions are met): 1 min-- Operator: Greater Than-- Sensitivity: Medium-- Look Back Periods: 4-- Number of Violations: 4-
-Once the alert rule is created, the Dynamic Thresholds machine learning algorithm will acquire historical data that is available, calculate threshold that best fits the metric series behavior pattern and will continuously learn based on new data to make the threshold more accurate.
-
-From the time the alert rule is created, the monitor runs every 1 min and looks at metric values in the last 20 minutes grouped into 5 minutes periods and checks if the average of the period values in each of the 4 periods exceeds the expected threshold. If the condition is met that is, the average Percentage CPU in the last 20 minutes (four 5 minutes periods) deviated from expected behavior four times, the alert rule fires an activated notification. If you have configured an email or a web hook action in the action group associated with the alert rule, you will receive an activated notification on both.
-
-### View and resolution of fired alerts
-
-The above examples of alert rules firing can also be viewed in the Azure portal in the **All Alerts** blade.
-
-Say the usage on "myVM" continues being above the threshold in subsequent checks, the alert rule will not fire again until the conditions are resolved.
-
-After some time, the usage on "myVM" comes back down to normal (goes below the threshold). The alert rule monitors the condition for two more times, to send out a resolved notification. The alert rule sends out a resolved/deactivated message when the alert condition is not met for three consecutive periods to reduce noise in case of flapping conditions.
-
-As the resolved notification is sent out via web hooks or email, the status of the alert instance (called monitor state) in Azure portal is also set to resolved.
-
-> [!NOTE]
->
-> When an alert rule monitors multiple conditions, a fired alert will be resolved if at least one of the conditions is no longer met for three consecutive periods.
-
-### Using dimensions
-
-Metric alerts in Azure Monitor also support monitoring multiple dimensions value combinations with one rule. Let's understand why you might use multiple dimension combinations with the help of an example.
-
-Say you have an App Service plan for your website. You want to monitor CPU usage on multiple instances running your web site/app. You can do that using a metric alert rule as follows:
--- Target resource: myAppServicePlan-- Metric: Percentage CPU-- Condition Type: Static-- Dimensions
- - Instance = InstanceName1, InstanceName2
-- Aggregation Type: Average-- Period: Over the last 5 mins-- Frequency: 1 min-- Operator: GreaterThan-- Threshold: 70-
-Like before, this rule monitors if the average CPU usage for the last 5 minutes exceeds 70%. However, with the same rule you can monitor two instances running your website. Each instance will get monitored individually and you will get notifications individually.
-
-Say you have a web app that is seeing massive demand and you will need to add more instances. The above rule still monitors just two instances. However, you can create a rule as follows:
--- Target resource: myAppServicePlan-- Metric: Percentage CPU-- Condition Type: Static-- Dimensions
- - Instance = *
-- Aggregation Type: Average-- Period: Over the last 5 mins-- Frequency: 1 min-- Operator: GreaterThan-- Threshold: 70-
-This rule will automatically monitor all values for the instance i.e you can monitor your instances as they come up without needing to modify your metric alert rule again.
-
-When monitoring multiple dimensions, Dynamic Thresholds alerts rule can create tailored thresholds for hundreds of metric series at a time. Dynamic Thresholds results in fewer alert rules to manage and significant time saving on management and creation of alerts rules.
-
-Say you have a web app with many instances and you don't know what the most suitable threshold is. The above rules will always use threshold of 70%. However, you can create a rule as follows:
--- Target resource: myAppServicePlan-- Metric: Percentage CPU-- Condition Type: Dynamic-- Dimensions
- - Instance = *
-- Aggregation Type: Average-- Period: Over the last 5 mins-- Frequency: 1 min-- Operator: GreaterThan-- Sensitivity: Medium-- Look Back Periods: 1-- Number of Violations: 1-
-This rule monitors if the average CPU usage for the last 5 minutes exceeds the expected behavior for each instance. The same rule you can monitor instances as they come up without needing to modify your metric alert rule again. Each instance will get a threshold that fits the metric series behavior pattern and will continuously change based on new data to make the threshold more accurate. Like before, each instance will be monitored individually and you will get notifications individually.
-
-Increasing look-back periods and number of violations can also allow filtering alerts to only alert on your definition of a significant deviation. [Learn more about Dynamic Thresholds advanced options](../alerts/alerts-dynamic-thresholds.md#what-do-the-advanced-settings-in-dynamic-thresholds-mean).
-
-> [!NOTE]
->
-> We recommend choosing an *Aggregation granularity (Period)* that is larger than the *Frequency of evaluation*, to reduce the likelihood of missing the first evaluation of added time series in the following cases:
-> - Metric alert rule that monitors multiple dimensions – When a new dimension value combination is added
-> - Metric alert rule that monitors multiple resources – When a new resource is added to the scope
-> - Metric alert rule that monitors a metric that isn't emitted continuously (sparse metric) – When the metric is emitted after a period longer than 24 hours in which it wasn't emitted
-
-## Monitoring at scale using metric alerts in Azure Monitor
-
-So far, you have seen how a single metric alert could be used to monitor one or many metric time-series related to a single Azure resource. Many times, you might want the same alert rule applied to many resources. Azure Monitor also supports monitoring multiple resources (of the same type) with one metric alert rule, for resources that exist in the same Azure region.
-
-This feature is currently supported for platform metrics (not custom metrics) for the following services in the following Azure clouds:
-
-| Service | Public Azure | Government | China |
-|:--|:--|:--|:--|
-| Virtual machines<sup>1</sup> | **Yes** | **Yes** | **Yes** |
-| SQL server databases | **Yes** | **Yes** | **Yes** |
-| SQL server elastic pools | **Yes** | **Yes** | **Yes** |
-| NetApp files capacity pools | **Yes** | **Yes** | **Yes** |
-| NetApp files volumes | **Yes** | **Yes** | **Yes** |
-| Key vaults | **Yes** | **Yes** | **Yes** |
-| Azure Cache for Redis | **Yes** | **Yes** | **Yes** |
-| Data box edge devices | **Yes** | **Yes** | **Yes** |
-| Recovery Services vaults | **Yes** | **No** | **No** |
-
-<sup>1</sup> Not supported for virtual machine network metrics (Network In Total, Network Out Total, Inbound Flows, Outbound Flows, Inbound Flows Maximum Creation Rate, Outbound Flows Maximum Creation Rate).
-
-You can specify the scope of monitoring by a single metric alert rule in one of three ways. For example, with virtual machines you can specify the scope as:
--- a list of virtual machines (in one Azure region) within a subscription-- all virtual machines (in one Azure region) in one or more resource groups in a subscription-- all virtual machines (in one Azure region) in a subscription-
-> [!NOTE]
->
-> The scope of a multi-resource metric alert rule must contain at least one resource of the selected resource type.
-
-Creating metric alert rules that monitor multiple resources is like [creating any other metric alert](../alerts/alerts-metric.md) that monitors a single resource. Only difference is that you would select all the resources you want to monitor. You can also create these rules through [Azure Resource Manager templates](./alerts-metric-create-templates.md#template-for-a-metric-alert-that-monitors-multiple-resources). You will receive individual notifications for each monitored resource.
-
-> [!NOTE]
->
-> In a metric alert rule that monitors multiple resources, only one condition is allowed.
-
-## Typical latency
-
-For metric alerts, typically you will get notified in under 5 minutes if you set the alert rule frequency to be 1 min. In cases of heavy load for notification systems, you might see a longer latency.
-
-## Supported resource types for metric alerts
-
-You can find the full list of supported resource types in this [article](./alerts-metric-near-real-time.md#metrics-and-dimensions-supported).
-
-## Pricing model
-
-Each Metrics Alert rule is billed based for time series monitored. Prices for Metric Alert rules are available on the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/).
-
-## Next steps
--- [Learn how to create, view, and manage metric alerts in Azure](../alerts/alerts-metric.md)-- [Learn how to create alerts within Azure Monitor Metrics Explorer](../essentials/metrics-charts.md#alert-rules)-- [Learn how to deploy metric alerts using Azure Resource Manager templates](./alerts-metric-create-templates.md)-- [Learn more about action groups](./action-groups.md)-- [Learn more about Dynamic Thresholds condition type](../alerts/alerts-dynamic-thresholds.md)-- [Learn more about troubleshooting problems in metric alerts](alerts-troubleshoot-metric.md)
azure-monitor Alerts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-overview.md
Title: Overview of alerting and notification monitoring in Azure
-description: Overview of alerting in Azure Monitor
- Previously updated : 02/14/2021-
+ Title: Overview of Azure Monitor Alerts
+description: Learn about Azure Monitor alerts, alert rules, action processing rules, and action groups. You will learn how all of these work together to monitor your system and notify you if something is wrong.
+++ Last updated : 04/26/2022++
+# What are Azure Monitor Alerts?
-# Overview of alerts in Microsoft Azure
-
-This article describes what alerts are, their benefits, and how to get started using them.
-
-## What are alerts in Microsoft Azure?
-
-Alerts proactively notify you when issues are found with your infrastructure or application using your monitoring data in Azure Monitor. They allow you to identify and address issues before the users of your system notice them.
-
-## Overview
-
-The diagram below represents the flow of alerts.
-
-![Diagram of alert flow](media/alerts-overview/Azure-Monitor-Alerts.svg)
-
-Alert rules are separated from alerts and the actions taken when an alert fires. The alert rule captures the target and criteria for alerting. The alert rule can be in an enabled or a disabled state. Alerts only fire when enabled.
-
-The following are key attributes of an alert rule:
-
-**Target Resource** - Defines the scope and signals available for alerting. A target can be any Azure resource. Example targets:
--- Virtual machines.-- Storage accounts.-- Log Analytics workspace.-- Application Insights. -
-For certain resources (like virtual machines), you can specify multiple resources as the target of the alert rule.
+Alerts help you detect and address issues before users notice them by proactively notifying you when Azure Monitor data indicates that there may be a problem with your infrastructure or application.
-**Signal** - Emitted by the target resource. Signals can be of the following types: metric, activity log, Application Insights, and log.
+You can alert on any metric or log data source in the Azure Monitor data platform.
-**Criteria** - A combination of signal and logic applied on a target resource. Examples:
+This diagram shows you how alerts work:
-- Percentage CPU > 70%-- Server Response Time > 4 ms -- Result count of a log query > 100
-**Alert Name** - A specific name for the alert rule configured by the user.
-
-**Alert Description** - A description for the alert rule configured by the user.
-
-**Severity** - The severity of the alert after the criteria specified in the alert rule is met. Severity can range from 0 to 4.
--- Sev 0 = Critical-- Sev 1 = Error-- Sev 2 = Warning-- Sev 3 = Informational-- Sev 4 = Verbose -
-**Action** - A specific action taken when the alert is fired. For more information, see [Action Groups](../alerts/action-groups.md).
-
-## What you can alert on
-
-You can alert on metrics and logs, as described in [monitoring data sources](./../agents/data-sources.md). Signals include but aren't limited to:
+An **alert rule** monitors your telemetry and captures a signal that indicates something is happening on the specified target. The alert rule then checks whether the signal meets the criteria of the condition. If the conditions are met, an alert is triggered, which initiates the associated action group and updates the state of the alert.
+
+You create an alert rule by combining:
+ - The resource(s) to be monitored.
+ - The signal or telemetry from the resource
+ - Conditions
+
+If you're monitoring more than one resource, the condition is evaluated separately for each of the resources and alerts are fired for each resource separately.
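For example, the following Azure CLI sketch shows how a resource (the scope), a signal (the Percentage CPU metric), and a condition (average above 70) combine into a metric alert rule. The resource IDs and names are placeholders, not values from this article.

```azurecli
# Illustrative sketch: scope (resource) + signal (Percentage CPU) + condition (avg > 70).
# All IDs and names are placeholders.
az monitor metrics alert create \
  --name "cpu-over-70" \
  --resource-group myResourceGroup \
  --scopes "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM" \
  --condition "avg Percentage CPU > 70" \
  --window-size 5m \
  --evaluation-frequency 1m \
  --severity 2 \
  --action "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/microsoft.insights/actionGroups/myActionGroup" \
  --description "Average CPU greater than 70 percent over the last 5 minutes"
```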
+
+Once an alert is triggered, the alert is made up of:
+ - An **alert processing rule** lets you modify fired alerts as they are being fired. You can use alert processing rules to add or suppress action groups, apply filters, or have the rule processed on a predefined schedule.
+ - An **action group** can trigger notifications or an automated workflow to let users know that an alert has been triggered (a CLI sketch for creating one follows this list). Action groups can include:
+ - Notification methods such as email, SMS, and push notifications.
+ - Automation Runbooks
+ - Azure functions
+ - ITSM incidents
+ - Logic Apps
+ - Secure webhooks
+ - Webhooks
+ - Event hubs
+- The **alert condition** is set by the system. When an alert fires, the alert's monitor condition is set to 'fired', and when the underlying condition that caused the alert to fire clears, the monitor condition is set to 'resolved'.
+- The **user response** is set by the user and doesn't change until the user changes it.
+
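As a rough sketch, an action group with a single email receiver could be created from the command line as follows; the group name, short name, and email address are placeholders.

```azurecli
# Illustrative sketch: create an action group with one email receiver.
# Names and the email address are placeholders.
az monitor action-group create \
  --name "ops-team-alerts" \
  --resource-group myResourceGroup \
  --short-name opsteam \
  --action email ops-oncall ops-oncall@contoso.com
```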
+You can see all alert instances in all your Azure resources generated in the last 30 days on the **[Alerts page](alerts-page.md)** in the Azure portal.
+## Types of alerts
+
+There are four types of alerts. This table provides a brief description of each alert type.
+See [this article](alerts-types.md) for detailed information about each alert type and how to choose which alert type best suits your needs.
+
+|Alert type|Description|
+|:|:|
+|[Metric alerts](alerts-types.md#metric-alerts)|Metric alerts evaluate resource metrics at regular intervals. Metrics can be platform metrics, custom metrics, logs from Azure Monitor converted to metrics, or Application Insights metrics. Metric alerts have several additional features, such as the ability to apply multiple conditions and dynamic thresholds.|
+|[Log alerts](alerts-types.md#log-alerts)|Log alerts allow users to use a Log Analytics query to evaluate resource logs at a predefined frequency.|
+|[Activity log alerts](alerts-types.md#activity-log-alerts)|Activity log alerts are triggered when a new activity log event occurs that matches the defined conditions.|
+|[Smart detection alerts](alerts-types.md#smart-detection-alerts)|Smart detection on an Application Insights resource automatically warns you of potential performance problems and failure anomalies in your web application. You can migrate smart detection on your Application Insights resource to create alert rules for the different smart detection modules.|
+## Out-of-the-box alert rules (preview)
+
+If you don't have alert rules defined for the selected resource, you can [enable recommended out-of-the-box alert rules in the Azure portal](alerts-log.md#enable-recommended-out-of-the-box-alert-rules-in-the-azure-portal-preview).
-- Metric values-- Log search queries-- Activity log events-- Health of the underlying Azure platform-- Tests for website availability
-## Alerts experience
-### Alerts page
-The Alerts page provides a summary of the alerts created in the last 24 hours.
-### Alert Recommendations (preview)
> [!NOTE] > The alert rule recommendations feature is currently in preview and is only enabled for VMs.
-If you don't have alert rules defined for the selected resource, either individually or as part of a resource group or subscription, you can [create a new alert rule](alerts-log.md#create-a-new-log-alert-rule-in-the-azure-portal), or [enable recommended out-of-the-box alert rules in the Azure portal (preview)](alerts-log.md#enable-recommended-out-of-the-box-alert-rules-in-the-azure-portal-preview).
-
-### Alerts summary pane
-If you have alerts configured for this resource, the alerts summary pane summarizes the alerts fired in the last 24 hours. You can filter the list by the subscription or any of the filter parameters at the top of the page. The page displays the total alerts for each severity. Select a severity to filter the alerts by that severity.
-> [!NOTE]
- > You can only access alerts generated in the last 30 days.
-
-You can also [programmatically enumerate the alert instances generated on your subscriptions by using REST APIs](#manage-your-alert-instances-programmatically).
-
+## Azure role-based access control (Azure RBAC) for alerts
-You can narrow down the list by selecting values from any of these filters at the top of the page:
+You can only access, create, or manage alerts for resources for which you have permissions.
+To create an alert rule, you need to have the following permissions:
+ - Read permission on the target resource of the alert rule
+ - Write permission on the resource group in which the alert rule is created (if you're creating the alert rule from the Azure portal, the alert rule is created by default in the same resource group in which the target resource resides)
+ - Read permission on any action group associated with the alert rule (if applicable)
+These built-in Azure roles, supported at all Azure Resource Manager scopes, have permission to access alert information and create alert rules (a sample role assignment follows this list):
+ - Monitoring Contributor
+ - Monitoring Reader
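For example, a role assignment at the resource group scope might look like the following sketch; the assignee and scope are placeholders.

```azurecli
# Illustrative sketch: grant Monitoring Reader at a resource group scope.
# The assignee and scope are placeholders.
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Monitoring Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup"
```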
-| Column | Description |
-|:|:|
-| Subscription | Select the Azure subscriptions for which you want to view the alerts. You can optionally choose to select all your subscriptions. Only alerts that you have access to in the selected subscriptions are included in the view. |
-| Resource group | Select a single resource group. Only alerts with targets in the selected resource group are included in the view. |
-| Resource type | Select one or more resource types. Only alerts with targets of the selected type are included in the view. This column is only available after a resource group has been specified. |
-| Resource | Select a resource. Only alerts with that resource as a target are included in the view. This column is only available after a resource type has been specified. |
-| Severity | Select an alert severity, or select **All** to include alerts of all severities. |
-| Alert condition | Select an alert condition, or select **All** to include alerts of all conditions. |
-| User response | Select a user response, or select **All** to include alerts of all user responses. |
-| Monitor service | Select a service, or select **All** to include all services. Only alerts created by rules that use service as a target are included. |
-| Time range | Only alerts fired within the selected time range are included in the view. Supported values are the past hour, the past 24 hours, the past seven days, and the past 30 days. |
+## Alerts and state
-Select **Columns** at the top of the page to select which columns to show.
-### Alert details pane
+You can configure whether log or metric alerts are stateful or stateless. Activity log alerts are stateless.
+- Stateless alerts fire each time the condition is met, even if fired previously.
+- Stateful alerts fire when the condition is met and then don't fire again or trigger any more actions until the conditions are resolved.
+For stateful alerts, the alert is considered resolved when:
-When you select an alert, this alert details pane provides details of the alert and enables you to change how you want to respond to the alert.
--
-The Alert details pane includes:
--
-|Section |Description |
+|Alert type |The alert is resolved when |
|||
-|Summary | Displays the properties and other significant information about the alert. |
-|History | Lists all actions on the alert and any changes made to the alert. |
-## Manage alerts
-
-You can set the user response of an alert to specify where it is in the resolution process. When the criteria specified in the alert rule is met, an alert is created or fired, and it has a status of *New*. You can change the status when you acknowledge an alert and when you close it. All user response changes are stored in the history of the alert.
-
-The following user responses are supported.
-
-| User Response | Description |
-|:|:|
-| New | The issue has been detected and hasn't yet been reviewed. |
-| Acknowledged | An administrator has reviewed the alert and started working on it. |
-| Closed | The issue has been resolved. After an alert has been closed, you can reopen it by changing it to another user response. |
-
-The *user response* is different and independent of the *alert condition*. The response is set by the user, while the alert condition is set by the system. When an alert fires, the alert's alert condition is set to *'fired'*, and when the underlying condition that caused the alert to fire clears, the alert condition is set to *'resolved'*.
-## Manage alert rules
-
-To show the **Rules** page, select **Manage alert rules**. The Rules page is a single place for managing all alert rules across your Azure subscriptions. It lists all alert rules and can be sorted based on target resources, resource groups, rule name, or status. You can also edit, enable, or disable alert rules from this page.
-
- :::image type="content" source="media/alerts-overview/alerts-rules.png" alt-text="Screenshot of alert rules page.":::
-## Create an alert rule
-You can author alert rules in a consistent manner, whatever of the monitoring service or signal type.
-
-> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE4tflw]
-
-
-Here's how to create a new alert rule:
-1. Pick the _target_ for the alert.
-1. Select the _signal_ from the available signals for the target.
-1. Specify the _logic_ to be applied to data from the signal.
-
-This simplified authoring process no longer requires you to know the monitoring source or signals that are supported before selecting an Azure resource. The list of available signals is automatically filtered based on the target resource that you select. Also based on that target, you're guided through defining the logic of the alert rule automatically.
-
-You can learn more about how to create alert rules in [Create, view, and manage alerts using Azure Monitor](../alerts/alerts-metric.md).
-
-Alerts are available across several Azure monitoring services. For information about how and when to use each of these services, see [Monitoring Azure applications and resources](../overview.md).
-
-## Azure role-based access control (Azure RBAC) for your alert instances
-
-The consumption and management of alert instances requires the user to have the Azure built-in roles of either [monitoring contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor) or [monitoring reader](../../role-based-access-control/built-in-roles.md#monitoring-reader). These roles are supported at any Azure Resource Manager scope, from the subscription level to granular assignments at a resource level. For example, if a user only has monitoring contributor access for virtual machine `ContosoVM1`, that user can consume and manage only alerts generated on `ContosoVM1`.
-
-## Manage your alert instances programmatically
-
-You might want to query programmatically for alerts generated against your subscription. Queries might be to create custom views outside of the Azure portal, or to analyze your alerts to identify patterns and trends.
-
-We recommended that you use [Azure Resource Graph](../../governance/resource-graph/overview.md) with the `AlertsManagementResources` schema for querying fired alerts. Resource Graph is recommended when you have to manage alerts generated across multiple subscriptions.
-
-The following sample request to the Resource Graph REST API returns alerts within one subscription in the last day:
-
-```json
-{
- "subscriptions": [
- <subscriptionId>
- ],
- "query": "alertsmanagementresources | where properties.essentials.lastModifiedDateTime > ago(1d) | project alertInstanceId = id, parentRuleId = tolower(tostring(properties['essentials']['alertRule'])), sourceId = properties['essentials']['sourceCreatedId'], alertName = name, severity = properties.essentials.severity, status = properties.essentials.monitorCondition, state = properties.essentials.alertState, affectedResource = properties.essentials.targetResourceName, monitorService = properties.essentials.monitorService, signalType = properties.essentials.signalType, firedTime = properties['essentials']['startDateTime'], lastModifiedDate = properties.essentials.lastModifiedDateTime, lastModifiedBy = properties.essentials.lastModifiedUserName"
-}
-```
+|Metric alerts|The alert condition isn't met for three consecutive checks.|
+|Log alerts|The alert condition isn't met for 30 minutes for a specific evaluation period (to account for log ingestion delay), and <br>the alert condition isn't met for three consecutive checks.|
-You can also see the result of this Resource Graph query in the portal with Azure Resource Graph Explorer: [portal.azure.com](https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/alertsmanagementresources%0A%7C%20where%20properties.essentials.lastModifiedDateTime%20%3E%20ago(1d)%0A%7C%20project%20alertInstanceId%20%3D%20id%2C%20parentRuleId%20%3D%20tolower(tostring(properties%5B'essentials'%5D%5B'alertRule'%5D))%2C%20sourceId%20%3D%20properties%5B'essentials'%5D%5B'sourceCreatedId'%5D%2C%20alertName%20%3D%20name%2C%20severity%20%3D%20properties.essentials.severity%2C%20status%20%3D%20properties.essentials.monitorCondition%2C%20state%20%3D%20properties.essentials.alertState%2C%20affectedResource%20%3D%20properties.essentials.targetResourceName%2C%20monitorService%20%3D%20properties.essentials.monitorService%2C%20signalType%20%3D%20properties.essentials.signalType%2C%20firedTime%20%3D%20properties%5B'essentials'%5D%5B'startDateTime'%5D%2C%20lastModifiedDate%20%3D%20properties.essentials.lastModifiedDateTime%2C%20lastModifiedBy%20%3D%20properties.essentials.lastModifiedUserName)
+When the alert is considered resolved, the alert rule sends out a resolved notification using webhooks or email, and the monitor state in the Azure portal is set to resolved.
-You can also use the [Alert Management REST API](/rest/api/monitor/alertsmanagement/alerts) in lower scale querying scenarios or to update fired alerts.
+## Manage your alerts programmatically
-## Smart groups
+You can programmatically query for alerts using:
+ - [Azure PowerShell](/powershell/module/az.monitor/)
+ - [The Azure CLI](/cli/azure/monitor?view=azure-cli-latest&preserve-view=true)
+ - The [Alert Management REST API](/rest/api/monitor/alertsmanagement/alerts)
+You can also use [Azure Resource Graph](https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade). Resource Graph is recommended when you need to manage alerts across multiple subscriptions; a sample query is shown below.
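For example, the following Resource Graph query is a minimal sketch against the `alertsmanagementresources` schema; the projected column names are only illustrative and can be adjusted to your needs. It lists alerts that changed in the last day:

```Kusto
// Alerts modified in the last day, with a few key properties
alertsmanagementresources
| where properties.essentials.lastModifiedDateTime > ago(1d)
| project alertName = name,
          severity = properties.essentials.severity,
          monitorCondition = properties.essentials.monitorCondition,
          affectedResource = properties.essentials.targetResourceName,
          firedTime = properties.essentials.startDateTime
```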
-Smart groups are aggregations of alerts based on machine learning algorithms, which can help reduce alert noise and aid in troubleshooting. [Learn more about Smart Groups](./alerts-smartgroups-overview.md?toc=%2fazure%2fazure-monitor%2ftoc.json) and [how to manage your smart groups](./alerts-managing-smart-groups.md?toc=%2fazure%2fazure-monitor%2ftoc.json).
+## Pricing
+See the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/) for information about pricing.
## Next steps -- [Learn more about Smart Groups](./alerts-smartgroups-overview.md?toc=%2fazure%2fazure-monitor%2ftoc.json)
+- [See your alert instances](./alerts-page.md)
+- [Create a new alert rule](alerts-log.md)
- [Learn about action groups](../alerts/action-groups.md)-- [Managing your alert instances in Azure](./alerts-managing-alert-instances.md?toc=%2fazure%2fazure-monitor%2ftoc.json)-- [Managing Smart Groups](./alerts-managing-smart-groups.md?toc=%2fazure%2fazure-monitor%2ftoc.json)-- [Learn more about Azure alerts pricing](https://azure.microsoft.com/pricing/details/monitor/)
+- [Learn about alert processing rules](alerts-action-rules.md)
azure-monitor Alerts Page https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-page.md
+
+ Title: View and manage your alert instances
+description: The alerts page summarizes all alert instances in all your Azure resources generated in the last 30 days.
+ Last updated : 2/23/2022+++
+# View and manage your alert instances
+
+The alerts page summarizes all alert instances in all your Azure resources generated in the last 30 days. You can see all your different types of alerts from multiple subscriptions in a single pane, and you can find specific alert instances for troubleshooting purposes.
+
+You can get to the alerts page in any of the following ways:
+
+- From the home page in the [Azure portal](https://portal.azure.com/), select **Monitor** > **Alerts**.
+
+ :::image type="content" source="media/alerts-managing-alert-instances/alerts-monitor-menu.png" alt-text="Screenshot of alerts link on monitor menu. ":::
+
+- From a specific resource, go to the **Monitoring** section, and choose **Alerts**. The landing page is pre-filtered for alerts on that specific resource.
+
+ :::image type="content" source="media/alerts-managing-alert-instances/alerts-resource-menu.png" alt-text="Screenshot of alerts link on a resource's menu.":::
+## Alert rule recommendations (preview)
+
+> [!NOTE]
+> The alert rule recommendations feature is currently in preview and is only enabled for VMs.
+
+If you don't have alert rules defined for the selected resource, either individually or as part of a resource group or subscription, you can [create a new alert rule](alerts-log.md#create-a-new-log-alert-rule-in-the-azure-portal), or [enable recommended out-of-the-box alert rules in the Azure portal (preview)](alerts-log.md#enable-recommended-out-of-the-box-alert-rules-in-the-azure-portal-preview).
++
+## The alerts summary pane
+
+If you have alerts configured for this resource, the alerts summary pane summarizes the alerts fired in the last 24 hours. You can modify the list of alert instances by selecting filters such as **time range**, **subscription**, **alert condition**, **severity**, and more.
+
+To see more details about a specific alert instance, select the alert instance to open the **Alert Details** page.
+> [!NOTE]
+> If you navigated to the alerts page by selecting a specific alert severity, the list is pre-filtered for that severity.
+
+
+## The alerts details page
+
+ The **Alerts details** page provides details about the selected alert. Select **Change user response** to change the user response to the alert. You can see all closed alerts in the **History** tab.
++
+## Next steps
+
+- [Learn about Azure Monitor alerts](./alerts-overview.md)
+- [Create a new alert rule](alerts-log.md)
azure-monitor Alerts Troubleshoot Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot-metric.md
description: Common issues with Azure Monitor metric alerts and possible solutio
Previously updated : 2/23/2022 Last updated : 5/25/2022 # Troubleshooting problems in Azure Monitor metric alerts
To create a metric alert rule, you'll need to have the following permissions:
- Write permission on the resource group in which the alert rule is created (if you're creating the alert rule from the Azure portal, the alert rule is created by default in the same resource group in which the target resource resides)
- Read permission on any action group associated to the alert rule (if applicable)
+## Subscription registration to the Microsoft.Insights resource provider
+
+Metric alerts can only access resources in subscriptions registered to the Microsoft.Insights resource provider.
+Therefore, to create a metric alert rule, all involved subscriptions must be registered to this resource provider:
+
+- The subscription containing the alert rule's target resource (scope)
+- The subscription containing the action groups associated with the alert rule (if defined)
+- The subscription in which the alert rule is saved
+
+Learn more about [registering resource providers](https://docs.microsoft.com/azure/azure-resource-manager/management/resource-providers-and-types).
## Naming restrictions for metric alert rules
azure-monitor Alerts Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-types.md
+
+ Title: Types of Azure Monitor Alerts
+description: This article explains the different types of Azure Monitor alerts and when to use each type.
+++ Last updated : 04/26/2022++++
+# Types of Azure Monitor alerts
+
+This article describes the kinds of Azure Monitor alerts you can create, and helps you understand when to use each type of alert.
+
+There are four types of alerts:
+- [Metric alerts](#metric-alerts)
+- [Log alerts](#log-alerts)
+- [Activity log alerts](#activity-log-alerts)
+- [Smart detection alerts](#smart-detection-alerts)
+
+## Choosing the right alert type
+
+This table can help you decide when to use which type of alert. For more detailed information about pricing, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/).
+
+|Alert Type |When to Use |Pricing Information|
+||||
+|Metric alert|Metric alerts are useful when you want to be alerted about data that requires little or no manipulation. Metric data is stored in the system already pre-computed, so metric alerts are less expensive than log alerts. If the data you want to monitor is available in metric data, you would want to use metric alerts.|Each metric alert rule is charged based on the number of time-series that are monitored. |
+|Log alert|Log alerts allow you to perform advanced logic operations on your data. If the data you want to monitor is available in logs, or requires advanced logic, you can use the robust features of KQL for data manipulation using log alerts. Log alerts are more expensive than metric alerts.|Each log alert rule is billed based on the interval at which the log query is evaluated (more frequent query evaluation results in a higher cost). Additionally, for log alerts configured for [at scale monitoring](#splitting-by-dimensions-in-log-alert-rules), the cost will also depend on the number of time series created by the dimensions resulting from your query. |
+|Activity Log alert|Activity logs provide auditing of all actions that occurred on resources. Use activity log alerts if you want to be alerted when a specific event happens to a resource, for example, a restart, a shutdown, or the creation or deletion of a resource.|For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/).|
+
+## Metric alerts
+
+A metric alert rule monitors a resource by evaluating conditions on the resource metrics at regular intervals. If the conditions are met, an alert is fired. A metric time-series is a series of metric values captured over a period of time.
+
+You can create rules using these metrics:
+- [Platform metrics](alerts-metric-near-real-time.md#metrics-and-dimensions-supported)
+- [Custom metrics](../essentials/metrics-custom-overview.md)
+- [Application Insights custom metrics](../app/api-custom-events-metrics.md)
+- [Selected logs from a Log Analytics workspace converted to metrics](alerts-metric-logs.md)
+
+Metric alert rules include these features:
+- You can use multiple conditions on an alert rule for a single resource.
+- You can add granularity by [monitoring multiple metric dimensions](#narrow-the-target-using-dimensions).
+- You can use [Dynamic thresholds](#dynamic-thresholds) driven by machine learning.
+- You can configure if metric alerts are [stateful or stateless](alerts-overview.md#alerts-and-state). Metric alerts are stateful by default.
+
+The target of the metric alert rule can be:
+- A single resource, such as a VM. See this article for supported resource types.
+- [Multiple resources](#monitor-multiple-resources) of the same type in the same Azure region, such as a resource group.
+
+### Multiple conditions
+
+When you create an alert rule for a single resource, you can apply multiple conditions. For example, you could create an alert rule to monitor an Azure virtual machine and alert when both "Percentage CPU is higher than 90%" and "Queue length is over 300 items". When an alert rule has multiple conditions, the alert fires when all the conditions in the alert rule are true and is resolved when at least one of the conditions is no longer true for three consecutive checks.
+### Narrow the target using Dimensions
+
+Dimensions are name-value pairs that contain additional data about the metric value. Using dimensions allows you to filter the metrics and monitor specific time-series, instead of monitoring the aggregate of all the dimensional values.
+For example, the Transactions metric of a storage account can have an API name dimension that contains the name of the API called by each transaction (for example, GetBlob, DeleteBlob, PutPage). You can choose to have an alert fired when there is a high number of transactions in any API name (which is the aggregated data), or you can use dimensions to further break it down to alert only when the number of transactions is high for specific API names.
+If you use more than one dimension, the metric alert rule can monitor multiple dimension values from different dimensions of a metric.
+The alert rule separately monitors all the dimension value combinations.
+See [this article](alerts-metric-multiple-time-series-single-rule.md) for detailed instructions on using dimensions in metric alert rules.
+
+### Create resource-centric alerts using splitting by dimensions
+
+To monitor for the same condition on multiple Azure resources, you can use splitting by dimensions. Splitting by dimensions allows you to create resource-centric alerts at scale for a subscription or resource group. Alerts are split into separate alerts by grouping combinations. Splitting on the Azure resource ID column makes the specified resource the alert target.
+
+You may also decide not to split when you want a condition applied to multiple resources in the scope. For example, you might want to fire an alert if at least five machines in the resource group scope have CPU usage over 80%.
+
+### Monitor multiple resources
+
+You can monitor at scale by applying the same metric alert rule to multiple resources of the same type that exist in the same Azure region. Individual notifications are sent for each monitored resource.
+
+Platform metrics for these services are supported in the following Azure clouds:
+
+| Service | Global Azure | Government | China |
+|:--|:-|:--|:--|
+| Virtual machines* | Yes |Yes | Yes |
+| SQL server databases | Yes | Yes | Yes |
+| SQL server elastic pools | Yes | Yes | Yes |
+| NetApp files capacity pools | Yes | Yes | Yes |
+| NetApp files volumes | Yes | Yes | Yes |
+| Key vaults | Yes | Yes | Yes |
+| Azure Cache for Redis | Yes | Yes | Yes |
+| Azure Stack Edge devices | Yes | Yes | Yes |
+| Recovery Services vaults | Yes | No | No |
+
+ > [!NOTE]
+ > Platform metrics are not supported for virtual machine network metrics (Network In Total, Network Out Total, Inbound Flows, Outbound Flows, Inbound Flows Maximum Creation Rate, Outbound Flows Maximum Creation Rate).
+
+You can specify the scope of monitoring with a single metric alert rule in one of three ways. For example, with virtual machines you can specify the scope as:
+
+- a list of virtual machines (in one Azure region) within a subscription
+- all virtual machines (in one Azure region) in one or more resource groups in a subscription
+- all virtual machines (in one Azure region) in a subscription
+
+### Dynamic thresholds
+
+Dynamic thresholds use advanced machine learning (ML) to:
+- Learn the historical behavior of metrics
+- Identify patterns and adapt to metric changes over time, such as hourly, daily or weekly patterns.
+- Recognize anomalies that indicate possible service issues
+- Calculate the most appropriate threshold for the metric
+
+Machine Learning continuously uses new data to learn more and make the threshold more accurate. Because the system adapts to the metrics' behavior over time and alerts based on deviations from its pattern, you don't have to know the "right" threshold for each metric.
+
+Dynamic thresholds help you:
+- Create scalable alerts for hundreds of metric series with one alert rule. Fewer alert rules lead to less time spent creating and managing alert rules.
+- Create rules without having to know what threshold to configure
+- Configure metric alerts using high-level concepts without extensive domain knowledge about the metric
+- Prevent noisy (low precision) or wide (low recall) thresholds that don't have an expected pattern
+- Handle noisy metrics (such as machine CPU or memory) and metrics with low dispersion (such as availability and error rate).
+
+See [this article](alerts-dynamic-thresholds.md) for detailed instructions on using dynamic thresholds in metric alert rules.
+
+## Log alerts
+A log alert rule monitors a resource by using a Log Analytics query to evaluate resource logs at a set frequency. If the conditions are met, an alert is fired. Because they're based on Log Analytics queries, log alerts allow you to perform advanced logic operations on your data and to use the robust data manipulation features of KQL.
+
+The target of the log alert rule can be:
+- A single resource, such as a VM.
+- Multiple resources of the same type in the same Azure region, such as a resource group. This is currently available for selected resource types.
+- Multiple resources using [cross-resource query](../logs/cross-workspace-query.md#querying-across-log-analytics-workspaces-and-from-application-insights).
+
+Log alerts can measure two different things, which can be used for different monitoring scenarios (a query sketch for both follows this list):
+- Table rows: The number of rows returned can be used to work with events such as Windows event logs, syslog, application exceptions.
+- Calculation of a numeric column: Calculations based on any numeric column can be used to include any number of resources. For example, CPU percentage.
+
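As a rough illustration of the two measures, the queries below are minimal sketches only: the `requests` example assumes an Application Insights resource, and the `Perf` example assumes Windows performance counters are being collected. In the alert rule you would then pick either the row count, or a numeric column such as `CounterValue` with an aggregation like Average.

```Kusto
// Table rows: each matching row is an event; the rule alerts on the row count
requests
| where resultCode == "500"

// Calculation of a numeric column: the rule aggregates a numeric column, such as CounterValue
Perf
| where ObjectName == "Processor" and CounterName == "% Processor Time"
```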
+You can configure if log alerts are [stateful or stateless](alerts-overview.md#alerts-and-state) (currently in preview).
+
+> [!NOTE]
+> Log alerts work best when you are trying to detect specific data in the logs, as opposed to when you are trying to detect a **lack** of data in the logs. Since logs are semi-structured data, they are inherently more latent than metric data on information like a VM heartbeat. To avoid misfires when you are trying to detect a lack of data in the logs, consider using [metric alerts](#metric-alerts). You can send data to the metric store from logs using [metric alerts for logs](alerts-metric-logs.md).
+
+### Dimensions in log alert rules
+You can use dimensions when creating log alert rules to monitor the values of multiple instances of a resource with one rule. For example, you can monitor CPU usage on multiple instances running your website or app. Each instance is monitored individually, and notifications are sent for each instance.
+
+### Splitting by dimensions in log alert rules
+To monitor for the same condition on multiple Azure resources, you can use splitting by dimensions. Splitting by dimensions allows you to create resource-centric alerts at scale for a subscription or resource group. Alerts are split into separate alerts by grouping combinations using numerical or string columns. Splitting on the Azure resource ID column makes the specified resource the alert target. A query sketch is shown below.
+You may also decide not to split when you want a condition applied to multiple resources in the scope. For example, you might want to fire an alert if at least five machines in the resource group scope have CPU usage over 80%.
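For instance, a rule that reports errors across several virtual machines might use a query like the following minimal sketch (assuming the `Event` and `Syslog` tables are populated by your agents), with the built-in `_ResourceId` column chosen as the resource ID column so that each resource becomes its own alert target:

```Kusto
// Reported errors from both Windows (Event) and Linux (Syslog) records
union Event, Syslog
| where EventLevelName == "Error"   // Windows event records
    or SeverityLevel == "err"       // Linux syslog records
```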
+
+### Using the API
+Manage new rules in your workspaces using the [ScheduledQueryRules](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules) API.
+
+> [!NOTE]
+> Log alerts for Log Analytics used to be managed using the legacy [Log Analytics Alert API](api-alerts.md). Learn more about [switching to the current ScheduledQueryRules API](alerts-log-api-switch.md).
+## Log alerts on your Azure bill
+Log Alerts are listed under resource provider microsoft.insights/scheduledqueryrules with:
+- Log Alerts on Application Insights shown with exact resource name along with resource group and alert properties.
+- Log Alerts on Log Analytics shown with exact resource name along with resource group and alert properties; when created using scheduledQueryRules API.
+- Log alerts created from the [legacy Log Analytics API](./api-alerts.md) aren't tracked as [Azure Resources](../../azure-resource-manager/management/overview.md) and don't have enforced unique resource names. These alerts are still created on `microsoft.insights/scheduledqueryrules` as hidden resources, which have this resource naming structure `<WorkspaceName>|<savedSearchId>|<scheduleId>|<ActionId>`. Log alerts on the legacy API are shown with the above hidden resource name along with resource group and alert properties.
+> [!Note]
+> Unsupported resource characters such as <, >, %, &, \, ?, / are replaced with _ in the hidden resource names and this will also reflect in the billing information.
+## Activity log alerts
+An activity log alert monitors a resource by checking the activity logs for a new activity log event that matches the defined conditions.
+
+You may want to use activity log alerts for these types of scenarios:
+- When a specific operation occurs on resources in a specific resource group or subscription. For example, you may want to be notified when:
+ - Any virtual machine in a production resource group is deleted.
+ - Any new roles are assigned to a user in your subscription.
+- A service health event occurs. Service health events include notifications of incidents and maintenance events that apply to resources in your subscription.
+
+You can create an activity log alert on:
+- Any of the activity log [event categories](../essentials/activity-log-schema.md), other than alert events.
+- Any top-level property of the activity log event in the JSON object.
+
+Activity log alert rules are Azure resources, so they can be created by using an Azure Resource Manager template. They also can be created, updated, or deleted in the Azure portal.
+
+An activity log alert only monitors events in the subscription in which the alert is created.
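If you also route the activity log to a Log Analytics workspace through diagnostic settings, you can explore which operations occur before you define an alert. The query below is only a sketch; the `/delete` filter and projected columns are illustrative:

```Kusto
// Recent delete operations recorded in the activity log
AzureActivity
| where TimeGenerated > ago(1d)
| where OperationNameValue endswith "/delete"
| project TimeGenerated, Caller, ResourceGroup, OperationNameValue, ActivityStatusValue
```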
+
+## Smart Detection alerts
+After setting up Application Insights for your project, when your app generates a certain minimum amount of data, Smart Detection takes 24 hours to learn the normal behavior of your app. Your app's performance has a typical pattern of behavior. Some requests or dependency calls will be more prone to failure than others; and the overall failure rate may go up as load increases. Smart Detection uses machine learning to find these anomalies. Smart Detection monitors the data received from your app, and in particular the failure rates. Application Insights automatically alerts you in near real time if your web app experiences an abnormal rise in the rate of failed requests.
+
+As data comes into Application Insights from your web app, Smart Detection compares the current behavior with the patterns seen over the past few days. If there is an abnormal rise in failure rate compared to previous performance, an analysis is triggered. To help you triage and diagnose the problem, an analysis of the characteristics of the failures and related application data is provided in the alert details. There are also links to the Application Insights portal for further diagnosis. The feature needs no set-up nor configuration, as it uses machine learning algorithms to predict the normal failure rate.
+
+While metric alerts tell you there might be a problem, Smart Detection starts the diagnostic work for you, performing much of the analysis you would otherwise have to do yourself. You get the results neatly packaged, helping you to get quickly to the root of the problem.
+
+Smart detection works for any web app, hosted in the cloud or on your own servers, that generates application request or dependency data.
+
+## Next steps
+- Get an [overview of alerts](alerts-overview.md).
+- [Create an alert rule](alerts-log.md).
+- Learn more about [Smart Detection](../app/proactive-failure-diagnostics.md).
azure-monitor Alerts Unified Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-unified-log.md
- Title: Log alerts in Azure Monitor
-description: Trigger emails, notifications, call websites URLs (webhooks), or automation when the log query condition you specify is met
--- Previously updated : 2/23/2022--
-# Log alerts in Azure Monitor
-
-## Overview
-
-Log alerts are one of the alert types that are supported in [Azure Alerts](./alerts-overview.md). Log alerts allow users to use a [Log Analytics](../logs/log-analytics-tutorial.md) query to evaluate resources logs every set frequency, and fire an alert based on the results. Rules can trigger one or more actions using [Action Groups](./action-groups.md).
-
-> [!NOTE]
-> Log data from a [Log Analytics workspace](../logs/log-analytics-tutorial.md) can be sent to the Azure Monitor metrics store. Metrics alerts have [different behavior](alerts-metric-overview.md), which may be more desirable depending on the data you are working with. For information on what and how you can route logs to metrics, see [Metric Alert for Logs](alerts-metric-logs.md).
-
-## Prerequisites
-
-Log alerts run queries on Log Analytics data. First you should start [collecting log data](../essentials/resource-logs.md) and query the log data for issues. You can use the [alert query examples topic](../logs/queries.md) in Log Analytics to understand what you can discover or [get started on writing your own query](../logs/log-analytics-tutorial.md).
-
-[Azure Monitoring Contributor](../roles-permissions-security.md) is a common role that is needed for creating, modifying, and updating log alerts. Access & query execution rights for the resource logs are also needed. Partial access to resource logs can fail queries or return partial results. [Learn more about configuring log alerts in Azure](./alerts-log.md).
-
-> [!NOTE]
-> Log alerts for Log Analytics used to be managed using the legacy [Log Analytics Alert API](./api-alerts.md). [Learn more about switching to the current ScheduledQueryRules API](../alerts/alerts-log-api-switch.md).
-
-## Query evaluation definition
-
-Log search rules condition definition starts from:
--- What query to run?-- How to use the results?-
-The following sections describe the different parameters you can use to set the above logic.
-
-### Log query
-The [Log Analytics](../logs/log-analytics-tutorial.md) query used to evaluate the rule. The results returned by this query are used to determine whether an alert is to be triggered. The query can be scoped to:
--- A specific resource, such as a virtual machine.-- An at scale resource, such as a subscription or resource group.-- Multiple resources using [cross-resource query](../logs/cross-workspace-query.md#querying-across-log-analytics-workspaces-and-from-application-insights).
-
-> [!IMPORTANT]
-> Alert queries have constraints to ensure optimal performance and the relevance of the results. [Learn more here](./alerts-log-query.md).
-
-> [!IMPORTANT]
-> Resource centric and [cross-resource query](../logs/cross-workspace-query.md#querying-across-log-analytics-workspaces-and-from-application-insights) are only supported using the current scheduledQueryRules API. If you use the legacy [Log Analytics Alert API](./api-alerts.md), you will need to switch. [Learn more about switching](./alerts-log-api-switch.md)
-
-#### Query time Range
-
-Time range is set in the rule condition definition. It's called **Override query time range** in the advance settings section.
-
-Unlike log analytics, the time range in alerts is limited to a maximum of two days of data. Even if longer range **ago** command is used in the query, the time range will apply. For example, a query scans up to 2 days, even if the text contains **ago(7d)**.
-
-If you use **ago** command in the query, the range is automatically set to two days. You can also change time range manually in cases the query requires more data than the alert evaluation even if there is no **ago** command in the query.
-
-### Measure
-
-Log alerts turn log into numeric values that can be evaluated. You can measure two different things:
-* Result count
-* Calculation of a value
-
-#### Result count
-
-Count of results is the default measure and is used when you set a **Measure** with a selection of **Table rows**. Ideal for working with events such as Windows event logs, syslog, application exceptions. Triggers when log records happen or doesn't happen in the evaluated time window.
-
-Log alerts work best when you try to detect data in the log. It works less well when you try to detect lack of data in the logs. For example, alerting on virtual machine heartbeat.
-
-> [!NOTE]
-> Since logs are semi-structured data, they are inherently more latent than metric, you may experience misfires when trying to detect lack of data in the logs, and you should consider using [metric alerts](alerts-metric-overview.md). You can send data to the metric store from logs using [metric alerts for logs](alerts-metric-logs.md).
-
-##### Example of result count use case
-
-You want to know when your application responded with error code 500 (Internal Server Error). You would create an alert rule with the following details:
--- **Query:** -
-```Kusto
-requests
-| where resultCode == "500"
-```
--- **Aggregation granularity:** 15 minutes-- **Alert frequency:** 15 minutes-- **Threshold value:** Greater than 0-
-Then alert rules monitors for any requests ending with 500 error code. The query runs every 15 minutes, over the last 15 minutes. If even one record is found, it fires the alert and triggers the actions configured.
-
-### Calculation of a value
-
-Calculation of a value is used when you select a column name of a numeric column for the **Measure**, and the result is a calculation that you perform on the values in that column. This would be used, for example, as CPU counter value.
-### Aggregation type
-
-The calculation that is done on multiple records to aggregate them to one numeric value using the [**Aggregation granularity**](#aggregation-granularity) defined. For example:
-- **Sum** returns the sum of measure column.-- **Average** returns the average of the measure column.-
-### Aggregation granularity
-
-Determines the interval that is used to aggregate multiple records to one numeric value. For example, if you specified **5 minutes**, records would be grouped by 5-minute intervals using the **Aggregation type** specified.
-
-> [!NOTE]
-> As [bin()](/azure/kusto/query/binfunction) can result in uneven time intervals, the alert service will automatically convert [bin()](/azure/kusto/query/binfunction) function to [bin_at()](/azure/kusto/query/binatfunction) function with appropriate time at runtime, to ensure results with a fixed point.
-
-### Split by alert dimensions
-
-Split alerts by number or string columns into separate alerts by grouping into unique combinations. It's configured in **Split by dimensions** section of the condition (limited to six splits). When creating resource-centric alerts at scale (subscription or resource group scope), you can split by Azure resource ID column. Splitting on Azure resource ID column will change the target of the alert to the specified resource.
-
-Splitting by Azure resource ID column is recommended when you want to monitor the same condition on multiple Azure resources. For example, monitoring all virtual machines for CPU usage over 80%. You may also decide not to split when you want a condition on multiple resources in the scope. Such as monitoring that at least five machines in the resource group scope have CPU usage over 80%.
-#### Example of splitting by alert dimensions
-
-For example, you want to monitor errors for multiple virtual machines running your web site/app in a specific resource group. You can do that using a log alert rule as follows:
--- **Query:** -
- ```Kusto
- // Reported errors
- union Event, Syslog // Event table stores Windows event records, Syslog stores Linux records
- | where EventLevelName == "Error" // EventLevelName is used in the Event (Windows) records
- or SeverityLevel== "err" // SeverityLevel is used in Syslog (Linux) records
- ```
--- **Resource ID Column:** _ResourceId-- **Dimensions:**
- - Computer = VM1, VM2 (Filtering values in alert rules definition isn't available currently for workspaces and Application Insights. Filter in the query text.)
-- **Aggregation granularity:** 15 minutes-- **Alert frequency:** 15 minutes-- **Threshold value:** Greater than 0-
-This rule monitors if any virtual machine had error events in the last 15 minutes. Each virtual machine is monitored separately and will trigger actions individually.
-
-> [!NOTE]
-> Split by alert dimensions is only available for the current scheduledQueryRules API. If you use the legacy [Log Analytics Alert API](./api-alerts.md), you will need to switch. [Learn more about switching](./alerts-log-api-switch.md). Resource centric alerting at scale is only supported in the API version `2021-08-01` and above.
-
-## Alert logic definition
-
-Once you define the query to run and evaluation of the results, you need to define the alerting logic and when to fire actions. The following sections describe the different parameters you can use:
-
-### Threshold and operator
-
-The query results are transformed into a number that is compared against the threshold and operator.
-
-### Frequency
-
-The interval in which the query is run. Can be set from a minute to a day.
-
-### Number of violations to trigger alert
-
-You can specify the alert evaluation period and the number of failures needed to trigger an alert. Allowing you to better define an impact time to trigger an alert.
-
-For example, if your rule [**Aggregation granularity**](#aggregation-granularity) is defined as '5 minutes', you can trigger an alert only if three failures (15 minutes) of the last hour occurred. This setting is defined by your application business policy.
-
-## State and resolving alerts
-
-Log alerts can either be stateless or stateful (currently in preview).
-
-Stateless alerts fire each time the condition is met, even if fired previously. You can [mark the alert as closed](../alerts/alerts-managing-alert-states.md) once the alert instance is resolved. You can also mute actions to prevent them from triggering for a period after an alert rule fired using the **Mute Actions** option in the alert details section.
-
-See this alert stateless evaluation example:
-
-| Time | Log condition evaluation | Result
-| - | -| -| -
-| 00:05 | FALSE | Alert doesn't fire. No actions called.
-| 00:10 | TRUE | Alert fires and action groups called. New alert state ACTIVE.
-| 00:15 | TRUE | Alert fires and action groups called. New alert state ACTIVE.
-| 00:20 | FALSE | Alert doesn't fire. No actions called. Pervious alerts state remains ACTIVE.
-
-Stateful alerts fire once per incident and resolve. The alert rule resolves when the alert condition isn't met for 30 minutes for a specific evaluation period (to account for [log ingestion delay](../alerts/alerts-troubleshoot-log.md#data-ingestion-time-for-logs)), and for three consecutive evaluations to reduce noise if there is flapping conditions. For example, with a frequency of 5 minutes, the alert resolve after 40 minutes or with a frequency of 1 minute, the alert resolve after 32 minutes. The resolved notification is sent out via web-hooks or email, the status of the alert instance (called monitor state) in Azure portal is also set to resolved.
-
-Stateful alerts feature is currently in preview. You can set this using **Automatically resolve alerts** in the alert details section.
-
-## Location selection in log alerts
-
-Log alerts allow you to set a location for alert rules. You can select any of the supported locations, which align to [Log Analytics supported region list](https://azure.microsoft.com/global-infrastructure/services/?products=monitor).
-
-Location affects which region the alert rule is evaluated in. Queries are executed on the log data in the selected region, that said, the alert service end to end is global. Meaning alert rule definition, fired alerts, notifications, and actions aren't bound to the location in the alert rule. Data is transfer from the set region since the Azure Monitor alerts service is a [non-regional service](https://azure.microsoft.com/global-infrastructure/services/?products=monitor&regions=non-regional).
-
-## Pricing model
-
-Each Log Alert rule is billed based the interval at which the log query is evaluated (more frequent query evaluation results in a higher cost). Additionally, for Log Alerts configured for [at scale monitoring](#split-by-alert-dimensions), the cost will also depend on the number of time series created by the dimensions resulting from your query.
-
-Prices for Log Alert rules are available on the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/).
-
-### Calculating the price for a Log Alert rule without dimensions
-
-The price of an alert rule which queries 1 resource event every 15-minutes can be calculated as:
-
-Total monthly price = 1 resource * 1 log alert rule * price per 15-minute internal log alert rule per month.
-
-### Calculating the price for a Log Alert rule with dimensions
-
-The price of an alert rule which monitors 10 VM resources at 1-minute frequency, using resource centric log monitoring, can be calculated as Price of alert rule + Price of number of dimensions. For example:
-
-Total monthly price = price per 1-minute log alert rule per month + ( 10 time series - 1 included free time series ) * price per 1-min interval monitored per month.
-
-Pricing of at scale log monitoring is applicable from Scheduled Query Rules API version 2021-02-01.
-
-## View log alerts usage on your Azure bill
-
-Log Alerts are listed under resource provider `microsoft.insights/scheduledqueryrules` with:
--- Log Alerts on Application Insights shown with exact resource name along with resource group and alert properties.-- Log Alerts on Log Analytics shown with exact resource name along with resource group and alert properties; when created using [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules).-- Log alerts created from [legacy Log Analytics API](./api-alerts.md) aren't tracked [Azure Resources](../../azure-resource-manager/management/overview.md) and don't have enforced unique resource names. These alerts are still created on `microsoft.insights/scheduledqueryrules` as hidden resources, which have this resource naming structure `<WorkspaceName>|<savedSearchId>|<scheduleId>|<ActionId>`. Log Alerts on legacy API are shown with above hidden resource name along with resource group and alert properties.-
-> [!NOTE]
-> Unsupported resource characters such as `<, >, %, &, \, ?, /` are replaced with `_` in the hidden resource names and this will also reflect in the billing information.
-
-> [!NOTE]
-> Log alerts for Log Analytics used to be managed using the legacy [Log Analytics Alert API](./api-alerts.md) and legacy templates of [Log Analytics saved searches and alerts](../insights/solutions.md). [Learn more about switching to the current ScheduledQueryRules API](../alerts/alerts-log-api-switch.md). Any alert rule management should be done using [legacy Log Analytics API](./api-alerts.md) until you decide to switch and you can't use the hidden resources.
-
-## Next steps
-
-* Learn about [creating in log alerts in Azure](./alerts-log.md).
-* Understand [webhooks in log alerts in Azure](../alerts/alerts-log-webhook.md).
-* Learn about [Azure Alerts](./alerts-overview.md).
-* Learn more about [Log Analytics](../logs/log-query-overview.md).
azure-monitor App Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-map.md
Adjust sensitivity to achieve the desired confidence level in highlighted edges.
### Limitations of Intelligent View
-The Intelligent View works well for large distributed applications but sometimes it can take around one minute to load.
+* Large distributed applications may take a minute to load Intelligent View.
+* Timeframes of up to seven days are supported.
+
+We would love to hear your feedback. ([Portal feedback](#portal-feedback))
## Troubleshooting
In a case where an edge is highlighted the explanation from the model should poi
#### Intelligent View doesn't load
-If Intelligent View doesn't load, ensure that you've opted into the preview on Application Map.
+Follow these steps if Intelligent View doesn't load.
+
+1. Set the configured time frame to six days or less.
+1. The `Try preview` button must be selected to opt in.
:::image type="content" source="media/app-map/intelligent-view-try-preview.png" alt-text="Screenshot of the Application Map user interface preview opt-in button." lightbox="media/app-map/intelligent-view-try-preview.png":::
azure-monitor Azure Web Apps Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-java.md
You can apply additional configurations, and then based on your specific scenari
### Auto-instrumentation through Azure portal
-You can turn on monitoring for your Java apps running in Azure App Service just with one click, no code change required.
-Application Insights for Java is integrated with Azure App Service on Linux - both code-based and custom containers, and with App Service on Windows for code-based apps.
-The integration adds [Application Insights Java 3.x](./java-in-process-agent.md) and you will get the telemetry auto-collected.
+You can turn on monitoring for your Java apps running in Azure App Service with just one click, no code change required. The integration adds [Application Insights Java 3.x](./java-in-process-agent.md), and telemetry is auto-collected.
1. **Select Application Insights** in the Azure control panel for your app service, then select **Enable**.
For the latest updates and bug fixes, [consult the release notes](web-app-extens
* [Monitor service health metrics](../data-platform.md) to make sure your service is available and responsive. * [Receive alert notifications](../alerts/alerts-overview.md) whenever operational events happen or metrics cross a threshold. * Use [Application Insights for JavaScript apps and web pages](javascript.md) to get client telemetry from the browsers that visit a web page.
-* [Set up Availability web tests](monitor-web-app-availability.md) to be alerted if your site is down.
+* [Set up Availability web tests](monitor-web-app-availability.md) to be alerted if your site is down.
azure-monitor Codeless Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/codeless-overview.md
As we're adding new integrations, the auto-instrumentation capability matrix bec
|Environment/Resource Provider | .NET | .NET Core | Java | Node.js | Python | ||--|--|--|--|--|
-|Azure App Service on Windows | GA, OnBD* | GA, opt-in | Public Preview, Container and Custom Containers are GA | Public Preview | Not supported |
+|Azure App Service on Windows - Publish as Code | GA, OnBD* | GA | GA | GA, OnBD* | Not supported |
+|Azure App Service on Windows - Publish as Docker | Public Preview | Public Preview | Public Preview | Not supported | Not supported |
|Azure App Service on Linux | N/A | Public Preview | GA | GA | Not supported | |Azure Functions - basic | GA, OnBD* | GA, OnBD* | GA, OnBD* | GA, OnBD* | GA, OnBD* | |Azure Functions - dependencies | Not supported | Not supported | Public Preview | Not supported | Through [extension](monitor-functions.md#distributed-tracing-for-python-function-apps) |
azure-monitor Proactive Performance Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/proactive-performance-diagnostics.md
Last updated 05/04/2017
# Smart detection - Performance Anomalies >[!NOTE]
->You can migrate your Application Insight resources to alerts-bases smart detection (preview). The migration creates alert rules for the different smart detection modules. Once created, you can manage and configure these rules just like any other Azure Monitor alert rules. You can also configure action groups for these rules, thus enabling multiple methods of taking actions or triggering notification on new detections.
+>You can migrate your Application Insight resources to alerts-based smart detection (preview). The migration creates alert rules for the different smart detection modules. Once created, you can manage and configure these rules just like any other Azure Monitor alert rules. You can also configure action groups for these rules, thus enabling multiple methods of taking actions or triggering notification on new detections.
> > For more information on the migration process, see [Smart Detection Alerts migration](../alerts/alerts-smart-detections-migration.md).
azure-monitor Container Insights Logging V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-logging-v2.md
Last updated 05/11/2022
# Enable ContainerLogV2 schema (preview)
-Azure Monitor Container Insights is now in Public Preview of new schema for container logs called ContainerLogV2. As part of this schema, there new fields to make common queries to view AKS (Azure Kubernetes Service) and Azure Arc enabled Kubernetes data. In addition, this schema is compatible as a part of [Basic Logs](../logs/basic-logs-configure.md), which offer a low cost alternative to standard analytics logs.
+Azure Monitor Container Insights now offers a public preview of a new schema for container logs, called ContainerLogV2. This schema adds new fields that make it easier to run common queries for viewing AKS (Azure Kubernetes Service) and Azure Arc-enabled Kubernetes data. In addition, this schema is compatible with [Basic Logs](../logs/basic-logs-configure.md), which offer a low cost alternative to standard analytics logs.
> [!NOTE]
-> The ContainerLogv2 schema is currently a preview feature, some features may be limited in the Portal experience from Container Insights
+> The ContainerLogV2 schema is currently a preview feature. Container Insights does not yet support the "View in Analytics" option; however, the data is still available when queried directly from the [Log Analytics](./container-insights-log-query.md) interface.
>[!NOTE] >The new fields are:
Azure Monitor Container Insights is now in Public Preview of new schema for cont
3. Follow the instructions accordingly when configuring an existing ConfigMap or using a new one.

### Configuring an existing ConfigMap
-When configuring an existing ConfigMap, we have to append the following section in your existing ConfigMap yaml file:
+If your ConfigMap doesn't yet have the "[log_collection_settings.schema]" field, you'll need to append the following section to your existing ConfigMap yaml file:
```yaml
[log_collection_settings.schema]
- # In the absense of this configmap, default value for containerlog_schema_version is "v1"
+ # In the absence of this configmap, default value for containerlog_schema_version is "v1"
# Supported values for this setting are "v1","v2"
- # See documentation for benefits of v2 schema over v1 schema before opting for "v2" schema
+ # See documentation at https://aka.ms/ContainerLogv2 for benefits of v2 schema over v1 schema before opting for "v2" schema
  containerlog_schema_version = "v2"
```

### Configuring a new ConfigMap
-1. Download the new ConfigMap from [here](https://aka.ms/container-azm-ms-agentconfig). For new downloaded configmapdefault the value for containerlog_schema_version is "v1"
+1. Download the new ConfigMap from [here](https://aka.ms/container-azm-ms-agentconfig). For the newly downloaded configmap, the default value for containerlog_schema_version is "v1".
+1. Update the value to `containerlog_schema_version = "v2"`:
- ```yaml
- [log_collection_settings.schema]
- # In the absense of this configmap, default value for containerlog_schema_version is "v1"
- # Supported values for this setting are "v1","v2"
- # See documentation for benefits of v2 schema over v1 schema before opting for "v2" schema
- containerlog_schema_version = "v2"
- ```
-1. Once you have finished configuring the configmap Run the following kubectl command: kubectl apply -f `<configname>`
+```yaml
+[log_collection_settings.schema]
+ # In the absence of this configmap, default value for containerlog_schema_version is "v1"
+ # Supported values for this setting are "v1","v2"
+ # See documentation at https://aka.ms/ContainerLogv2 for benefits of v2 schema over v1 schema before opting for "v2" schema
+ containerlog_schema_version = "v2"
+```
+
+1. Once you have finished configuring the configmap, run the following kubectl command: `kubectl apply -f <configname>`.
>[!TIP] >Example: kubectl apply -f container-azm-ms-agentconfig.yaml.
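Once the v2 schema is enabled, the data lands in the `ContainerLogV2` table and can be queried directly from Log Analytics. The query below is a minimal sketch for inspecting the new fields; the namespace filter is only an example:

```Kusto
// Recent container log records using the ContainerLogV2 schema
ContainerLogV2
| where TimeGenerated > ago(1h)
| where PodNamespace != "kube-system"   // example filter; adjust or remove as needed
| project TimeGenerated, Computer, PodName, PodNamespace, ContainerName, LogSource, LogMessage
| take 100
```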
azure-monitor Alert Management Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/alert-management-solution.md
Last updated 01/02/2022
![Alert Management icon](media/alert-management-solution/icon.png) > [!CAUTION]
-> This solution is no longer in active development and may not work as expected. We suggest you try using [Azure Resource Graph to query Azure Monitor alerts](../alerts/alerts-overview.md#manage-your-alert-instances-programmatically).
+> This solution is no longer in active development and may not work as expected. We suggest you try using [Azure Resource Graph to query Azure Monitor alerts](../alerts/alerts-overview.md#manage-your-alerts-programmatically).
-The Alert Management solution helps you analyze all of the alerts in your Log Analytics repository. These alerts may have come from a variety of sources including those sources [created by Log Analytics](../alerts/alerts-overview.md) or [imported from Nagios or Zabbix](../vm/monitor-virtual-machine.md). The solution also imports alerts from any [connected System Center Operations Manager management groups](../agents/om-agents.md).
+The Alert Management solution helps you analyze all of the alerts in your Log Analytics repository. These alerts may have come from a variety of sources including those sources [created by Log Analytics](../alerts/alerts-types.md#log-alerts) or [imported from Nagios or Zabbix](../vm/monitor-virtual-machine.md). The solution also imports alerts from any [connected System Center Operations Manager management groups](../agents/om-agents.md).
## Prerequisites The solution works with any records in the Log Analytics repository with a type of **Alert**, so you must perform whatever configuration is required to collect these records. -- For Log Analytics alerts, [create alert rules](../alerts/alerts-overview.md) to create alert records directly in the repository.
+- For Log Analytics alerts, [create alert rules](../alerts/alerts-log.md) to create alert records directly in the repository.
- For Nagios and Zabbix alerts, [configure those servers](../vm/monitor-virtual-machine.md) to send alerts to Log Analytics. - For System Center Operations Manager alerts, [connect your Operations Manager management group to your Log Analytics workspace](../agents/om-agents.md). Any alerts created in System Center Operations Manager are imported into Log Analytics.
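Because the solution works with records of type **Alert**, you can inspect them directly with a log query once collection is configured. The following is a minimal sketch, assuming the standard `Alert` table in your workspace; adjust the time range as needed:

```Kusto
// Summarize alert records collected in the last day
Alert
| where TimeGenerated > ago(1d)
| summarize AlertCount = count() by AlertName, AlertSeverity, SourceSystem
| sort by AlertCount desc
```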
azure-monitor Cost Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md
Last updated 03/24/2022
The most significant charges for most Azure Monitor implementations will typically be ingestion and retention of data in your Log Analytics workspaces. Several features in Azure Monitor do not have a direct cost but add to the workspace data that's collected. This article describes how data charges are calculated for your Log Analytics workspaces and Application Insights resources and the different configuration options that affect your costs. ## Pricing model
-The default pricing for Log Analytics is a Pay-As-You-Go model that's based on ingested data volume and data retention. Each Log Analytics workspace is charged as a separate service and contributes to the bill for your Azure subscription. The amount of data ingestion can be considerable, depending on the following factors:
+The default pricing for Log Analytics is a Pay-As-You-Go model that's based on ingested data volume and data retention. Each Log Analytics workspace is charged as a separate service and contributes to the bill for your Azure subscription. [Pricing for Log Analytics](https://azure.microsoft.com/pricing/details/monitor/) is set regionally. The amount of data ingestion can be considerable, depending on the following factors:
- The set of management solutions enabled and their configuration - The number and type of monitored resources-- Type of data collected from each monitored resource
+- The types of data collected from each monitored resource
## Data size calculation Data volume is measured as the size of the data that will be stored in GB (10^9 bytes). The data size of a single record is calculated from a string representation of the columns that are stored in the Log Analytics workspace for that record, regardless of whether the data is sent from an agent or added during the ingestion process. This includes any custom columns added by the [custom logs API](custom-logs-overview.md), [ingestion-time transformations](ingestion-time-transformations.md), or [custom fields](custom-fields.md) that are added as data is collected and then stored in the workspace. >[!NOTE]
->The billable data volume calculation is substantially smaller than the size of the entire incoming JSON-packaged event, often less than 50%. It is essential to understand this calculation of billed data size when estimating costs and comparing to other pricing models.
+>The billable data volume calculation is substantially smaller than the size of the entire incoming JSON-packaged event, often less than 50% for small events. It is essential to understand this calculation of billed data size when estimating costs and comparing to other pricing models.
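As a rough way to see how much billable data each table contributes, you can run a query such as the following against your workspace. This is a sketch that uses the standard `Usage` table, where `Quantity` is reported in MB; adjust the time range as needed:

```Kusto
// Billable data volume per table over the last 31 days
Usage
| where TimeGenerated > startofday(ago(31d))
| where IsBillable == true
| summarize BillableDataGB = sum(Quantity) / 1000 by DataType
| sort by BillableDataGB desc
```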
### Excluded columns The following [standard columns](log-standard-columns.md) that are common to all tables, are excluded in the calculation of the record size. All other columns stored in Log Analytics are included in the calculation of the record size.
azure-monitor Tutorial Ingestion Time Transformations Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-ingestion-time-transformations-api.md
In this tutorial, you learn to:
## Prerequisites To complete this tutorial, you need the following: -- Log Analytics workspace where you have at least [contributor rights](manage-access.md#manage-access-using-azure-permissions) .
+- A Log Analytics workspace where you have at least [contributor rights](manage-access.md#manage-access-using-azure-permissions).
+- A [supported Azure table](../logs/tables-feature-support.md) in the workspace.
+
+
+ To configure this table for ingestion-time transformations, the table must already have some data.
+
+ The table can't be linked to the workspace's default DCR.
+ - [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.
azure-monitor Tutorial Ingestion Time Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-ingestion-time-transformations.md
In this tutorial, you learn to:
## Prerequisites To complete this tutorial, you need the following: -- Log Analytics workspace where you have at least [contributor rights](manage-access.md#manage-access-using-azure-permissions) .
+- A Log Analytics workspace where you have at least [contributor rights](manage-access.md#manage-access-using-azure-permissions).
+- A [supported Azure table](../logs/tables-feature-support.md) in the workspace.
+
+ To configure this table for ingestion-time transformations, the table must already have some data.
+
+ The table can't be linked to the workspace's default DCR.
+
- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.
azure-monitor Usage Estimated Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/usage-estimated-costs.md
Several other features don't have a direct cost, but you instead pay for the ing
|:|:| | Logs | Ingestion, retention, and export of data in Log Analytics workspaces and legacy Application insights resources. This will typically be the bulk of Azure Monitor charges for most customers. There is no charge for querying this data except in the case of [Basic Logs](logs/basic-logs-configure.md) or [Archived Logs](logs/data-retention-archive.md).<br><br>Charges for Logs can vary significantly on the configuration that you choose. See [Azure Monitor Logs pricing details](logs/cost-logs.md) for details on how charges for Logs data are calculated and the different pricing tiers available. | | Platform Logs | Processing of [diagnostic and auditing information](essentials/resource-logs.md) is charged for [certain services](essentials/resource-logs-categories.md#costs) when sent to destinations other than a Log Analytics workspace. There's no direct charge when this data is sent to a Log Analytics workspace, but there is a charge for the workspace data ingestion and collection. |
-| Metrics | There is no charge for [standard metrics](essentials/metrics-supported.md) collected from Azure resources. There is a cost for cost for collecting [custom metrics](essentials/metrics-custom-overview.md) and for retrieving metrics from the [REST API](essentials/rest-api-walkthrough.md#retrieve-metric-values). |
-| Alerts | Charged based on the type and number of [signals](alerts/alerts-overview.md#what-you-can-alert-on) used by the alert rule, its frequency, and the type of [notification](alerts/action-groups.md) used in response. For [log alerts](alerts/alerts-unified-log.md) configured for [at scale monitoring](alerts/alerts-unified-log.md#split-by-alert-dimensions), the cost will also depend on the number of time series created by the dimensions resulting from your query. |
+| Metrics | There is no charge for [standard metrics](essentials/metrics-supported.md) collected from Azure resources. There is a cost for collecting [custom metrics](essentials/metrics-custom-overview.md) and for retrieving metrics from the [REST API](essentials/rest-api-walkthrough.md#retrieve-metric-values). |
+| Alerts | Charged based on the type and number of signals used by the alert rule, its frequency, and the type of [notification](alerts/action-groups.md) used in response. For [Log alerts](alerts/alerts-types.md#log-alerts) configured for [at scale monitoring](alerts/alerts-types.md#splitting-by-dimensions-in-log-alert-rules), the cost will also depend on the number of time series created by the dimensions resulting from your query. |
| Web tests | There is a cost for [standard web tests](app/availability-standard-tests.md) and [multi-step web tests](app/availability-multistep.md) in Application Insights. Multi-step web tests have been deprecated. ## Data transfer charges
To view data allocation benefits from sources such as [Microsoft Defender for Se
Customers who purchased Microsoft Operations Management Suite E1 and E2 are eligible for per-node data ingestion entitlements for Log Analytics and Application Insights. Each Application Insights node includes up to 200 MB of data ingested per day (separate from Log Analytics data ingestion), with 90-day data retention at no extra cost.
-To receive these entitlements for Log Analytics workspaces or Application Insights resources in a subscription, they must be use the Per-Node (OMS) pricing tier. This entitlement isn't visible in the estimated costs shown in the Usage and estimated cost pane.
+To receive these entitlements for Log Analytics workspaces or Application Insights resources in a subscription, they must use the Per-Node (OMS) pricing tier. This entitlement isn't visible in the estimated costs shown in the Usage and estimated cost pane.
Depending on the number of nodes of the suite that your organization purchased, moving some subscriptions into a Per GB (pay-as-you-go) pricing tier might be advantageous, but this requires careful consideration.
azure-monitor Vminsights Health Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-health-alerts.md
An [Azure alert](../alerts/alerts-overview.md) will be created for each virtual
If an alert is already in **Fired** state when the virtual machine state changes, then a second alert won't be created, but the severity of the same alert will be changed to match the state of the virtual machine. For example, if the virtual machine changes to **Critical** state when a **Warning** alert was already in **Fired** state, that alert's severity will be changed to **Sev1**. If the virtual machine changes to a **Warning** state when a **Sev1** alert was already in **Fired** state, that alert's severity will be changed to **Sev2**. If the virtual machine moves back to a **Healthy** state, then the alert will be resolved with severity changed to **Sev4**. ## Viewing alerts
-View alerts created by VM insights guest health with other [alerts in the Azure portal](../alerts/alerts-overview.md#alerts-experience). You can select **Alerts** from the **Azure Monitor** menu to view alerts for all monitored resources, or select **Alerts** from a virtual machine's menu to view alerts for just that virtual machine.
+View alerts created by VM insights guest health with other [alerts in the Azure portal](../alerts/alerts-page.md). You can select **Alerts** from the **Azure Monitor** menu to view alerts for all monitored resources, or select **Alerts** from a virtual machine's menu to view alerts for just that virtual machine.
## Alert properties
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-active-directory-connections.md
na Previously updated : 04/29/2022 Last updated : 05/24/2022 # Create and manage Active Directory connections for Azure NetApp Files
This setting is configured in the **Active Directory Connections** under **NetAp
![Active Directory AES encryption](../media/azure-netapp-files/active-directory-aes-encryption.png)
- * **LDAP Signing**
+ * <a name="ldap-signing"></a>**LDAP Signing**
Select this checkbox to enable LDAP signing. This functionality enables secure LDAP lookups between the Azure NetApp Files service and the user-specified [Active Directory Domain Services domain controllers](/windows/win32/ad/active-directory-domain-services). For more information, see [ADV190023 | Microsoft Guidance for Enabling LDAP Channel Binding and LDAP Signing](https://portal.msrc.microsoft.com/en-us/security-guidance/advisory/ADV190023). ![Active Directory LDAP signing](../media/azure-netapp-files/active-directory-ldap-signing.png)
- The **LDAP Signing** feature is currently in preview. If this is your first time using this feature, register the feature before using it:
-
- ```azurepowershell-interactive
- Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFLdapSigning
- ```
-
- Check the status of the feature registration:
-
- > [!NOTE]
- > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to`Registered`. Wait until the status is `Registered` before continuing.
-
- ```azurepowershell-interactive
- Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFLdapSigning
- ```
-
- You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status.
--- * **LDAP over TLS** See [Enable Active Directory Domain Services (AD DS) LDAP authentication for NFS volumes](configure-ldap-over-tls.md) for information about this option.
azure-netapp-files Snapshots Manage Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-manage-policy.md
na Previously updated : 01/05/2022 Last updated : 05/25/2022
A snapshot policy enables you to specify the snapshot creation frequency in hourly, daily, weekly, or monthly cycles. You also need to specify the maximum number of snapshots to retain for the volume.
-1. From the NetApp Account view, click **Snapshot policy**.
+1. From the NetApp Account view, select **Snapshot policy**.
![Screenshot that shows how to navigate to Snapshot Policy.](../media/azure-netapp-files/snapshot-policy-navigation.png) 2. In the Snapshot Policy window, set Policy State to **Enabled**.
-3. Click the **Hourly**, **Daily**, **Weekly**, or **Monthly** tab to create hourly, daily, weekly, or monthly snapshot policies. Specify the **Number of snapshots to keep**.
+3. Select the **Hourly**, **Daily**, **Weekly**, or **Monthly** tab to create hourly, daily, weekly, or monthly snapshot policies. Specify the **Number of snapshots to keep**.
> [!IMPORTANT] > For *monthly* snapshot policy definition, be sure to specify a day that will work for all intended months. If you intend for the monthly snapshot configuration to work for all months in the year, pick a day of the month between 1 and 28. For example, if you specify `31` (day of the month), the monthly snapshot configuration is skipped for the months that have less than 31 days.
A snapshot policy enables you to specify the snapshot creation frequency in hour
![Screenshot that shows the monthly snapshot policy.](../media/azure-netapp-files/snapshot-policy-monthly.png)
-4. Click **Save**.
+4. Select **Save**.
If you need to create additional snapshot policies, repeat Step 3. The policies you created appear in the Snapshot policy page.
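If you script your Azure NetApp Files configuration instead of using the portal, a snapshot policy can also be created by calling the Resource Manager API directly. The sketch below uses `Invoke-AzRestMethod` against the `Microsoft.NetApp/netAppAccounts/snapshotPolicies` resource type; the api-version and the schedule property names are assumptions based on the ARM schema, so verify them against the current Microsoft.NetApp reference and replace the placeholder names with your own.

```azurepowershell-interactive
# Placeholders: substitute your subscription ID, resource group, NetApp account, and policy name.
$path = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.NetApp/netAppAccounts/<account-name>/snapshotPolicies/dailypolicy?api-version=2022-01-01"

# A daily policy that takes one snapshot at 00:30 and keeps the last 7 (property names assumed from the ARM schema).
$payload = @{
    location   = "eastus"
    properties = @{
        enabled       = $true
        dailySchedule = @{
            hour            = 0
            minute          = 30
            snapshotsToKeep = 7
        }
    }
} | ConvertTo-Json -Depth 5

Invoke-AzRestMethod -Method PUT -Path $path -Payload $payload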
You cannot apply a snapshot policy to a destination volume in cross-region repli
![Screenshot that shows the Volumes right-click menu.](../media/azure-netapp-files/volume-right-cick-menu.png)
-2. In the Edit window, under **Snapshot policy**, select a policy to use for the volume. Click **OK** to apply the policy.
+2. In the Edit window, under **Snapshot policy**, select a policy to use for the volume. Select **OK** to apply the policy.
![Screenshot that shows the Snapshot policy menu.](../media/azure-netapp-files/snapshot-policy-edit.png) ## Modify a snapshot policy
-You can modify an existing snapshot policy to change the policy state, snapshot frequency (hourly, daily, weekly, or monthly), or number of snapshots to keep.
+You can modify an existing snapshot policy to change the policy state, snapshot frequency (hourly, daily, weekly, or monthly), or number of snapshots to keep.
+
+When you modify a snapshot policy, snapshots created under the old schedule aren't deleted or overwritten by the new schedule or by disabling the schedule. If you proceed with the update, you'll have to delete the old snapshots manually.
-1. From the NetApp Account view, click **Snapshot policy**.
+1. From the NetApp Account view, select **Snapshot policy**.
2. Right-click the snapshot policy you want to modify, then select **Edit**. ![Screenshot that shows the Snapshot policy right-click menu.](../media/azure-netapp-files/snapshot-policy-right-click-menu.png)
-3. Make the changes in the Snapshot Policy window that appears, then click **Save**.
+3. Make the changes in the Snapshot Policy window that appears, then select **Save**.
+
+4. You will receive a prompt asking you to confirm that you want to update the Snapshot Policy. Select **Yes** to confirm your choice.
## Delete a snapshot policy You can delete a snapshot policy that you no longer want to keep.
-1. From the NetApp Account view, click **Snapshot policy**.
+1. From the NetApp Account view, select **Snapshot policy**.
2. Right-click the snapshot policy you want to modify, then select **Delete**. ![Screenshot that shows the Delete menu item.](../media/azure-netapp-files/snapshot-policy-right-click-menu.png)
-3. Click **Yes** to confirm that you want to delete the snapshot policy.
+3. Select **Yes** to confirm that you want to delete the snapshot policy.
![Screenshot that shows snapshot policy delete confirmation.](../media/azure-netapp-files/snapshot-policy-delete-confirm.png)
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
na Previously updated : 05/18/2022 Last updated : 05/25/2022
Azure NetApp Files is updated regularly. This article provides a summary about t
## May 2022
+* [LDAP signing](create-active-directory-connections.md#ldap-signing) now generally available (GA)
+
+ The LDAP signing feature is now generally available. You no longer need to register the feature before using it.
+ * [SMB Continuous Availability (CA) shares support for Citrix App Layering](enable-continuous-availability-existing-smb.md) (Preview) [Citrix App Layering](https://docs.citrix.com/en-us/citrix-app-layering/4.html) radically reduces the time it takes to manage Windows applications and images. App Layering separates the management of your OS and apps from your infrastructure. You can install each app and OS patch once, update the associated templates, and redeploy your images. You can publish layered images as open standard virtual disks, usable in any environment. App Layering can be used to provide dynamic access application layer virtual disks stored on SMB shared networked storage, including Azure NetApp Files. To enhance App Layering resiliency to events of storage service maintenance, Azure NetApp Files has extended support for [SMB Transparent Failover via SMB Continuous Availability (CA) shares on Azure NetApp Files](azure-netapp-files-create-volumes-smb.md#continuous-availability) for App Layering virtual disks. For more information, see [Azure NetApp Files Azure Virtual Desktop Infrastructure solutions | Citrix](azure-netapp-files-solution-architectures.md#citrix).
azure-relay Ip Firewall Virtual Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/ip-firewall-virtual-networks.md
By default, Relay namespaces are accessible from internet as long as the request
This feature is helpful in scenarios in which Azure Relay should be only accessible from certain well-known sites. Firewall rules enable you to configure rules to accept traffic originating from specific IPv4 addresses. For example, if you use Relay with [Azure Express Route](../expressroute/expressroute-faqs.md#supported-services), you can create a **firewall rule** to allow traffic from only your on-premises infrastructure IP addresses.
-> [!IMPORTANT]
-> This feature is currently in preview.
-- ## Enable IP firewall rules The IP firewall rules are applied at the namespace level. Therefore, the rules apply to all connections from clients using any supported protocol. Any connection attempt from an IP address that does not match an allowed IP rule on the namespace is rejected as unauthorized. The response does not mention the IP rule. IP filter rules are applied in order, and the first rule that matches the IP address determines the accept or reject action.
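Because the rules live on the namespace, they can also be set outside the portal. The sketch below uses `Invoke-AzRestMethod` to write the namespace's `networkRuleSets/default` child resource; the api-version and property names follow the ARM pattern used for this feature and should be treated as assumptions to verify against the Microsoft.Relay reference. The IP addresses and resource names are placeholders.

```azurepowershell-interactive
# Placeholders: substitute your subscription ID, resource group, and Relay namespace name.
$path = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Relay/namespaces/<namespace-name>/networkRuleSets/default?api-version=2021-11-01"

# Deny by default, then allow two well-known on-premises IPv4 addresses.
# Rules are evaluated in order; the first rule that matches decides accept or reject.
$payload = @{
    properties = @{
        defaultAction = "Deny"
        ipRules       = @(
            @{ ipMask = "203.0.113.10"; action = "Allow" }
            @{ ipMask = "203.0.113.11"; action = "Allow" }
        )
    }
} | ConvertTo-Json -Depth 5

Invoke-AzRestMethod -Method PUT -Path $path -Payload $payload
```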
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md
description: Shows the rules and restrictions for naming Azure resources.
Previously updated : 05/17/2022 Last updated : 05/25/2022 # Naming rules and restrictions for Azure resources
In the following tables, the term alphanumeric refers to:
> [!div class="mx-tableFixed"] > | Entity | Scope | Length | Valid Characters | > | | | | |
-> | mediaservices | resource group | 3-24 | Lowercase letters and numbers. |
+> | mediaservices | Azure region | 3-24 | Lowercase letters and numbers. |
> | mediaservices / liveEvents | Media service | 1-32 | Alphanumerics and hyphens.<br><br>Start with alphanumeric. | > | mediaservices / liveEvents / liveOutputs | Live event | 1-256 | Alphanumerics and hyphens.<br><br>Start with alphanumeric. | > | mediaservices / streamingEndpoints | Media service | 1-24 | Alphanumerics and hyphens.<br><br>Start with alphanumeric. |
In the following tables, the term alphanumeric refers to:
> | | | | | > | certificates | resource group | 1-260 | Can't use:<br>`/` <br><br>Can't end with space or period. | > | serverfarms | resource group | 1-40 | Alphanumeric, hyphens and Unicode characters that can be mapped to Punycode |
-> | sites / functions / slots | global or per domain. See note below. | 2-60 | Alphanumeric, hyphens and Unicode characters that can be mapped to Punycode<br><br>Can't start or end with hyphen. |
+> | sites | global or per domain. See note below. | 2-60 | Alphanumeric, hyphens and Unicode characters that can be mapped to Punycode<br><br>Can't start or end with hyphen. |
+> | sites / slots | site | 2-59 | Alphanumeric, hyphens and Unicode characters that can be mapped to Punycode |
> [!NOTE] > A web site must have a globally unique URL. When you create a web site that uses a hosting plan, the URL is `http://<app-name>.azurewebsites.net`. The app name must be globally unique. When you create a web site that uses an App Service Environment, the app name must be unique within the [domain for the App Service Environment](../../app-service/environment/using-an-ase.md#app-access). For both cases, the URL of the site is globally unique. > > Azure Functions has the same naming rules and restrictions as Microsoft.Web/sites. When generating the host ID, the function app name is truncated to 32 characters. This can cause host ID collision when a shared storage account is used. For more information, see [Host ID considerations](../../azure-functions/storage-considerations.md#host-id-considerations). >
-> Unicode characters are parsed to Punycode using the following method: https://docs.microsoft.com/dotnet/api/system.globalization.idnmapping.getascii
+> Unicode characters are parsed to Punycode using the [IdnMapping.GetAscii method](/dotnet/api/system.globalization.idnmapping.getascii)
## Next steps
azure-resource-manager Template Spec Convert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-spec-convert.md
Title: Convert portal template to template spec description: Describes how to convert an existing template in the Azure portal gallery to a template specs. Previously updated : 02/04/2021 Last updated : 05/25/2022
The Azure portal provides a way to store Azure Resource Manager templates (ARM t
To see if you have any templates to convert, view the [template gallery in the portal](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Gallery%2Fmyareas%2Fgalleryitems). These templates have the resource type `Microsoft.Gallery/myareas/galleryitems`.
+## Deprecation of portal feature
+
+**The template gallery in the portal is being deprecated on March 31, 2025**. To continue using a template in the template gallery, you need to migrate it to a template spec. Use one of the methods shown in this article to migrate the template.
+ ## Convert with PowerShell script To simplify converting templates in the template gallery, use a PowerShell script from the Azure Quickstart Templates repo. When you run the script, you can either create a new template spec for each template or download a template that creates the template spec. The script doesn't delete the template from the template gallery.
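If you prefer to handle that step yourself after downloading a gallery template, the core of the conversion is a single cmdlet. A minimal sketch, where the template spec name, version, resource group, location, and file path are placeholders for your own values:

```azurepowershell-interactive
# Create (or update) a template spec from an ARM template file saved from the gallery.
New-AzTemplateSpec `
  -Name "my-gallery-template" `
  -Version "1.0" `
  -ResourceGroupName "templateSpecsRG" `
  -Location "westus2" `
  -TemplateFile "./my-gallery-template.json"
```

Rerunning the command with a different `-Version` value publishes a new version of the same template spec.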
azure-video-indexer Concepts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/concepts-overview.md
The confidence score indicates the confidence in an insight. It is a number betw
Use textual and visual content moderation models to keep your users safe from inappropriate content and validate that the content you publish matches your organization's values. You can automatically block certain videos or alert your users about the content. For more information, see [Insights: visual and textual content moderation](video-indexer-output-json-v2.md#visualcontentmoderation).
-## Blocks
-
-Blocks are meant to make it easier to go through the data. For example, block might be broken down based on when speakers change or there is a long pause.
- ## Project and editor The [Azure Video Indexer](https://www.videoindexer.ai/) website enables you to use your video's deep insights to: find the right media content, locate the parts that youΓÇÖre interested in, and use the results to create an entirely new project. Once created, the project can be rendered and downloaded from Azure Video Indexer and be used in your own editing applications or downstream workflows.
azure-video-indexer Deploy With Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/deploy-with-arm-template.md
Title: Deploy Azure Video Indexer with ARM template description: In this tutorial you will create an Azure Video Indexer account by using Azure Resource Manager (ARM) template. Previously updated : 12/01/2021 Last updated : 05/23/2022
The resource will be deployed to your subscription and will create the Azure Vid
### Option 1: Click the "Deploy To Azure Button", and fill in the missing parameters
-[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fmedia-services-video-indexer%2Fmaster%2FARM-Samples%2FCreate-Account%2Favam.template.json)
+[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fmedia-services-video-indexer%2Fmaster%2FARM-Quick-Start%2Favam.template.json)
- ### Option 2 : Deploy using PowerShell Script
-1. Open The [template file](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/ARM-Samples/Create-Account/avam.template.json) file and inspect its content.
+1. Open The [template file](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/ARM-Quick-Start/avam.template.json) file and inspect its content.
2. Fill in the required parameters (see below) 3. Run the Following PowerShell commands:
If you're new to template deployment, see:
## Next steps
-[Connect an existing classic paid Azure Video Indexer account to ARM-based account](connect-classic-account-to-arm.md)
+[Connect an existing classic paid Azure Video Indexer account to ARM-based account](connect-classic-account-to-arm.md)
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
var uploadRequestResult = await client.PostAsync($"{apiUrl}/{accountInfo.Loc
### Line breaking in transcripts
-Improved line break logic to better split transcript into sentences. New editing capabilities are now available through the Azure Video Indexer portal, such as adding a new line and editing the line's timestamp.
+Improved line break logic to better split transcript into sentences. New editing capabilities are now available through the Azure Video Indexer portal, such as adding a new line and editing the line's timestamp. For more information, see [Insert or remove transcript lines](edit-transcript-lines-portal.md).
### Azure Monitor integration
azure-video-indexer Upload Index Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/upload-index-videos.md
When you're creating an Azure Video Indexer account, you choose between:
For more information about account types, see [Media Services pricing](https://azure.microsoft.com/pricing/details/azure/media-services/).
+After you upload and index a video, you can use [Azure Video Indexer website](video-indexer-view-edit.md) or [Azure Video Indexer Developer Portal](video-indexer-use-apis.md) to see the insights of the video (see [Examine the Azure Video Indexer output](video-indexer-output-json-v2.md)).
+ When you're uploading videos by using the API, you have the following options: * Upload your video from a URL (preferred).
When you're uploading videos by using the API, you have the following options:
## Supported file formats
-For a list of file formats that you can use with Azure Video Indexer, see [Standard Encoder formats and codecs](/azure/azure/media-services/latest/encode-media-encoder-standard-formats-reference).
+For a list of file formats that you can use with Azure Video Indexer, see [Standard Encoder formats and codecs](/azure/media-services/latest/encode-media-encoder-standard-formats-reference).
## Storage of video files
azure-video-indexer Video Indexer Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-get-started.md
See the [input container/file formats](/azure/media-services/latest/encode-media
> [!div class="mx-imgBorder"] > :::image type="content" source="./media/video-indexer-get-started/video-indexer-upload.png" alt-text="Upload":::
-1. Once your video has been uploaded, Azure Video Indexer starts indexing and analyzing the video. You see the progress.
+1. Once your video has been uploaded, Azure Video Indexer starts indexing and analyzing the video. As a result, a JSON output with insights is produced.
+
+ You see the progress.
> [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/video-indexer-get-started/progress.png" alt-text="Progress of the upload":::
+ > :::image type="content" source="./media/video-indexer-get-started/progress.png" alt-text="Progress of the upload":::
+
+ The produced JSON output contains `Insights` and `SummarizedInsights` elements. We highly recommend using `Insights` and not using `SummarizedInsights` (which is present for backward compatibility).
1. Once Azure Video Indexer is done analyzing, you'll get an email with a link to your video and a short description of what was found in your video. For example: people, spoken and written words, topics, and named entities. 1. You can later find your video in the library list and perform different operations. For example: search, reindex, edit. > [!div class="mx-imgBorder"] > :::image type="content" source="./media/video-indexer-get-started/uploaded.png" alt-text="Uploaded the upload":::
-## Supported browsers
+After you upload and index a video, you can continue using [Azure Video Indexer website](video-indexer-view-edit.md) or [Azure Video Indexer Developer Portal](video-indexer-use-apis.md) to see the insights of the video (see [Examine the Azure Video Indexer output](video-indexer-output-json-v2.md)).
-For more information, see [supported browsers](video-indexer-overview.md#supported-browsers).
+For more details, see [Upload and index videos](upload-index-videos.md).
-## See also
+To start using the APIs, see [use APIs](video-indexer-use-apis.md).
-See [Upload and index videos](upload-index-videos.md) for more details.
-
-After you upload and index a video, you can start using [Azure Video Indexer website](video-indexer-view-edit.md) or [Azure Video Indexer Developer Portal](video-indexer-use-apis.md) to see the insights of the video.
+## Supported browsers
-[Start using APIs](video-indexer-use-apis.md)
+For more information, see [supported browsers](video-indexer-overview.md#supported-browsers).
## Next steps
azure-video-indexer Video Indexer Output Json V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-output-json-v2.md
Previously updated : 11/16/2020 Last updated : 05/19/2022
When a video is indexed, Azure Video Indexer produces the JSON content that contains details of the specified video insights. The insights include transcripts, optical character recognition elements (OCRs), faces, topics, blocks, and similar details. Each insight type includes instances of time ranges that show when the insight appears in the video.
-You can visually examine the video's summarized insights by pressing the **Play** button on the video on the [Azure Video Indexer](https://www.videoindexer.ai/) website.
+The produced JSON output contains `Insights` and `SummarizedInsights` elements. We highly recommend using `Insights` and not using `SummarizedInsights` (which is present for backward compatibility).
-You can also use the Get Video Index API. If the response status is `OK`, you get a detailed JSON output as the response content.
+To visually examine the video's insights, press the **Play** button on the video on the [Azure Video Indexer](https://www.videoindexer.ai/) website.
![Screenshot of the Insights tab in Azure Video Indexer.](./media/video-indexer-output-json/video-indexer-summarized-insights.png)
+When indexing with an API and the response status is OK, you get a detailed JSON output as the response content. When calling the [Get Video Index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Index) API, we recommend passing `&includeSummarizedInsights=false` to save time and reduce response length.
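As a rough illustration of that recommendation, the call below retrieves a video's index with `Invoke-RestMethod` and turns off the summarized insights. The location, account ID, video ID, and access token are placeholders that you obtain from your own account, for example through the Azure Video Indexer Developer Portal.

```powershell
# Placeholders for your account details and a previously obtained access token.
$location    = "trial"
$accountId   = "<account-id>"
$videoId     = "<video-id>"
$accessToken = "<access-token>"

# Request the full Insights payload but skip SummarizedInsights to keep the response smaller.
$uri = "https://api.videoindexer.ai/$location/Accounts/$accountId/Videos/$videoId/Index" +
       "?accessToken=$accessToken&includeSummarizedInsights=false"

$index = Invoke-RestMethod -Method Get -Uri $uri

# The per-video insights live under videos[].insights in the returned JSON.
$index.videos[0].insights | ConvertTo-Json -Depth 3
```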
+ This article examines the Azure Video Indexer output (JSON content). For information about what features and insights are available to you, see [Azure Video Indexer insights](video-indexer-overview.md#video-insights). > [!NOTE]
A face might have an ID, a name, a thumbnail, other metadata, and a list of its
|`transcript`|The [transcript](#transcript) insight.| |`ocr`|The [OCR](#ocr) insight.| |`keywords`|The [keywords](#keywords) insight.|
-|`blocks`|Might contain one or more [blocks](#blocks).|
+|`transcripts`|Might contain one or more [transcript](#transcript).|
|`faces/animatedCharacters`|The [faces/animatedCharacters](#facesanimatedcharacters) insight.| |`labels`|The [labels](#labels) insight.| |`shots`|The [shots](#shots) insight.|
Example:
} ```
-#### blocks
-
-Attribute | Description
-|
-`id`|The ID of the block.|
-`instances`|A list of time ranges for this block.|
- #### transcript |Name|Description|
Sentiments are aggregated by their `sentimentType` field (`Positive`, `Neutral`,
#### visualContentModeration
-The `visualContentModeration` block contains time ranges that Azure Video Indexer found to potentially have adult content. If `visualContentModeration` is empty, no adult content was identified.
+The `visualContentModeration` transcript contains time ranges that Azure Video Indexer found to potentially have adult content. If `visualContentModeration` is empty, no adult content was identified.
Videos that contain adult or racy content might be available for private view only. Users have the option to submit a request for a human review of the content. In that case, the `IsAdult` attribute will contain the result of the human review.
backup Backup Support Matrix Mars Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-mars-agent.md
The operating systems must be 64 bit and should be running the latest services p
**Operating system** | **Files/folders** | **System state** | **Software/Module requirements** | | |
+Windows 11 (Enterprise, Pro, Home) | Yes | No | Check the corresponding server version for software/module requirements
Windows 10 (Enterprise, Pro, Home) | Yes | No | Check the corresponding server version for software/module requirements Windows Server 2022 (Standard, Datacenter, Essentials) | Yes | Yes | Check the corresponding server version for software/module requirements Windows 8.1 (Enterprise, Pro)| Yes |No | Check the corresponding server version for software/module requirements
backup Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/whats-new.md
Title: What's new in Azure Backup description: Learn about new features in Azure Backup. Previously updated : 05/24/2022 Last updated : 05/26/2022
You can learn more about the new releases by bookmarking this page or by [subscr
## Updates summary - May 2022
- - [Multi-user authorization using Resource Guard is now generally available](#multi-user-authorization-using-resource-guard-is-now-generally-available)
- [Archive tier support for Azure Virtual Machines is now generally available](#archive-tier-support-for-azure-virtual-machines-is-now-generally-available) - February 2022 - [Multiple backups per day for Azure Files is now generally available](#multiple-backups-per-day-for-azure-files-is-now-generally-available)
You can learn more about the new releases by bookmarking this page or by [subscr
- February 2021 - [Backup for Azure Blobs (in preview)](#backup-for-azure-blobs-in-preview)
-## Multi-user authorization using Resource Guard is now generally available
-
-Azure Backup now supports multi-user authorization (MUA) that allows you to add an additional layer of protection to critical operations on your Recovery Services vaults. For MUA, Azure Backup uses the Azure resource, Resource Guard, to ensure critical operations are performed only with applicable authorization.
-
-For more information, see [how to protect Recovery Services vault and manage critical operations with MUA](multi-user-authorization.md).
- ## Archive tier support for Azure Virtual Machines is now generally available Azure Backup now supports the movement of recovery points to the Vault-archive tier for Azure Virtual Machines from the Azure portal. This allows you to move the archivable/recommended recovery points (corresponding to a backup item) to the Vault-archive tier at one go.
cloud-services-extended-support Deploy Prerequisite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/deploy-prerequisite.md
To ensure a successful Cloud Services (extended support) deployment review the b
Cloud Service (extended support) deployments must be in a virtual network. A virtual network can be created through the [Azure portal](../virtual-network/quick-create-portal.md), [PowerShell](../virtual-network/quick-create-powershell.md), [Azure CLI](../virtual-network/quick-create-cli.md) or an [ARM template](../virtual-network/quick-create-template.md). The virtual network and subnets must also be referenced in the Service Configuration (.cscfg) under the [NetworkConfiguration](schema-cscfg-networkconfiguration.md) section. For a virtual network belonging to the same resource group as the cloud service, referencing only the virtual network name in the Service Configuration (.cscfg) file is sufficient. If the virtual network and cloud service are in two different resource groups, then the complete Azure Resource Manager ID of the virtual network needs to be specified in the Service Configuration (.cscfg) file.
+> [!NOTE]
+> Having the virtual network and cloud service in different resource groups is not supported in Visual Studio 2019. Consider using the ARM template or the Azure portal for successful deployments in such scenarios.
#### Virtual Network located in same resource group ```xml
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/language-support.md
> [!NOTE] > Language code `pt` will default to `pt-br`, Portuguese (Brazil).
->
-> ☼ Indicates the language is not available for scanned PDF document translation.
-|Language | Language code | ☼ Cloud – Text Translation and Document Translation | Containers – Text Translation|Custom Translator|Auto Language Detection|Dictionary
+|Language | Language code | Cloud – Text Translation and Document Translation | Containers – Text Translation|Custom Translator|Auto Language Detection|Dictionary
|:-|:-:|:-:|:-:|:-:|:-:|:-:| | Afrikaans | `af` |✔|✔|✔|✔|✔| | Albanian | `sq` |✔|✔||✔||
-| Amharic ☼ | `am` |✔|✔||||
+| Amharic | `am` |✔|✔||||
| Arabic | `ar` |✔|✔|✔|✔|✔|
-| Armenian ☼ | `hy` |✔|✔||✔||
-| Assamese ☼ | `as` |✔|✔|✔|||
+| Armenian | `hy` |✔|✔||✔||
+| Assamese | `as` |✔|✔|✔|||
| Azerbaijani (Latin) | `az` |✔|✔||||
-| Bangla ☼ | `bn` |✔|✔|✔||✔|
-| Bashkir ☼ | `ba` |✔|||||
+| Bangla | `bn` |✔|✔|✔||✔|
+| Bashkir | `ba` |✔|||||
| Basque | `eu` |✔||||| | Bosnian (Latin) | `bs` |✔|✔|✔||✔| | Bulgarian | `bg` |✔|✔|✔|✔|✔|
-| Cantonese (Traditional) ☼ | `yue` |✔|✔||||
+| Cantonese (Traditional) | `yue` |✔|✔||||
| Catalan | `ca` |✔|✔|✔|✔|✔| | Chinese (Literary) | `lzh` |✔||||| | Chinese Simplified | `zh-Hans` |✔|✔|✔|✔|✔|
| Czech | `cs` |✔|✔|✔|✔|✔| | Danish | `da` |✔|✔|✔|✔|✔| | Dari | `prs` |✔|✔||||
-| Divehi ☼ | `dv` |✔|||✔||
+| Divehi | `dv` |✔|||✔||
| Dutch | `nl` |✔|✔|✔|✔|✔| | English | `en` |✔|✔|✔|✔|✔| | Estonian | `et` |✔|✔|✔|✔||
| French | `fr` |✔|✔|✔|✔|✔| | French (Canada) | `fr-ca` |✔|✔|||| | Galician | `gl` |✔|||||
-| Georgian ☼ | `ka` |✔|||✔||
+| Georgian | `ka` |✔|||✔||
| German | `de` |✔|✔|✔|✔|✔|
-| Greek ☼ | `el` |✔|✔|✔|✔|✔|
-| Gujarati ☼ | `gu` |✔|✔|✔|✔||
+| Greek | `el` |✔|✔|✔|✔|✔|
+| Gujarati | `gu` |✔|✔|✔|✔||
| Haitian Creole | `ht` |✔|✔||✔|✔|
-| Hebrew ☼ | `he` |✔|✔|✔|✔|✔|
+| Hebrew | `he` |✔|✔|✔|✔|✔|
| Hindi | `hi` |✔|✔|✔|✔|✔| | Hmong Daw (Latin) | `mww` |✔|✔|||✔| | Hungarian | `hu` |✔|✔|✔|✔|✔| | Icelandic | `is` |✔|✔|✔|✔|✔| | Indonesian | `id` |✔|✔|✔|✔|✔|
-| Inuinnaqtun ☼ | `ikt` |✔|||||
-| Inuktitut ☼ | `iu` |✔|✔|✔|✔||
+| Inuinnaqtun | `ikt` |✔|||||
+| Inuktitut | `iu` |✔|✔|✔|✔||
| Inuktitut (Latin) | `iu-Latn` |✔||||| | Irish | `ga` |✔|✔|✔|✔|| | Italian | `it` |✔|✔|✔|✔|✔| | Japanese | `ja` |✔|✔|✔|✔|✔|
-| Kannada ☼ | `kn` |✔|✔|✔|||
+| Kannada | `kn` |✔|✔|✔|||
| Kazakh | `kk` |✔|✔||||
-| Khmer ☼ | `km` |✔|✔||✔||
+| Khmer | `km` |✔|✔||✔||
| Klingon | `tlh-Latn` |✔| ||✔|✔|
-| Klingon (plqaD) ☼ | `tlh-Piqd` |✔| ||✔||
+| Klingon (plqaD) | `tlh-Piqd` |✔| ||✔||
| Korean | `ko` |✔|✔|✔|✔|✔| | Kurdish (Central) | `ku` |✔|✔||✔||
-| Kurdish (Northern) ☼ | `kmr` |✔|✔||||
+| Kurdish (Northern) | `kmr` |✔|✔||||
| Kyrgyz (Cyrillic) | `ky` |✔|||||
-| Lao ☼ | `lo` |✔|✔||✔||
-| Latvian ☼| `lv` |✔|✔|✔|✔|✔|
+| Lao | `lo` |✔|✔||✔||
+| Latvian | `lv` |✔|✔|✔|✔|✔|
| Lithuanian | `lt` |✔|✔|✔|✔|✔|
-| Macedonian ☼ | `mk` |✔|||✔||
-| Malagasy ☼ | `mg` |✔|✔|✔|||
+| Macedonian | `mk` |✔|||✔||
+| Malagasy | `mg` |✔|✔|✔|||
| Malay (Latin) | `ms` |✔|✔|✔|✔|✔|
-| Malayalam ☼ | `ml` |✔|✔|✔|||
+| Malayalam | `ml` |✔|✔|✔|||
| Maltese | `mt` |✔|✔|✔|✔|✔| | Maori | `mi` |✔|✔|✔||| | Marathi | `mr` |✔|✔|✔|||
-| Mongolian (Cyrillic) ☼| `mn-Cyrl` |✔|||||
-| Mongolian (Traditional) ☼ | `mn-Mong` |✔|||✔||
-| Myanmar ☼ | `my` |✔|✔||✔||
+| Mongolian (Cyrillic) | `mn-Cyrl` |✔|||||
+| Mongolian (Traditional) | `mn-Mong` |✔|||✔||
+| Myanmar | `my` |✔|✔||✔||
| Nepali | `ne` |✔|✔|||| | Norwegian | `nb` |✔|✔|✔|✔|✔|
-| Odia ☼ | `or` |✔|✔|✔|||
+| Odia | `or` |✔|✔|✔|||
| Pashto | `ps` |✔|✔||✔|| | Persian | `fa` |✔|✔|✔|✔|✔| | Polish | `pl` |✔|✔|✔|✔|✔| | Portuguese (Brazil) | `pt` |✔|✔|✔|✔|✔| | Portuguese (Portugal) | `pt-pt` |✔|✔|||| | Punjabi | `pa` |✔|✔|✔|||
-| Queretaro Otomi ☼ | `otq` |✔|✔||||
+| Queretaro Otomi | `otq` |✔|✔||||
| Romanian | `ro` |✔|✔|✔|✔|✔| | Russian | `ru` |✔|✔|✔|✔|✔| | Samoan (Latin) | `sm` |✔| |✔|||
| Spanish | `es` |✔|✔|✔|✔|✔| | Swahili (Latin) | `sw` |✔|✔|✔|✔|✔| | Swedish | `sv` |✔|✔|✔|✔|✔|
-| Tahitian ☼ | `ty` |✔| |✔|✔||
-| Tamil ☼ | `ta` |✔|✔|✔||✔|
+| Tahitian | `ty` |✔| |✔|✔||
+| Tamil | `ta` |✔|✔|✔||✔|
| Tatar (Latin) | `tt` |✔|||||
-| Telugu ☼ | `te` |✔|✔|✔|||
-| Thai ☼ | `th` |✔| |✔|✔|✔|
-| Tibetan ☼ | `bo` |✔||||
-| Tigrinya ☼ | `ti` |✔|✔||||
+| Telugu | `te` |✔|✔|✔|||
+| Thai | `th` |✔| |✔|✔|✔|
+| Tibetan | `bo` |✔||||
+| Tigrinya | `ti` |✔|✔||||
| Tongan | `to` |✔|✔|✔||| | Turkish | `tr` |✔|✔|✔|✔|✔| | Turkmen (Latin) | `tk` |✔||||
| Urdu | `ur` |✔|✔|✔|✔|✔| | Uyghur (Arabic) | `ug` |✔|||| | Uzbek (Latin | `uz` |✔|||✔||
-| Vietnamese ☼ | `vi` |✔|✔|✔|✔|✔|
+| Vietnamese | `vi` |✔|✔|✔|✔|✔|
| Welsh | `cy` |✔|✔|✔|✔|✔| | Yucatec Maya | `yua` |✔|✔||✔|| | Zulu | `zu` |✔|||||
cognitive-services Cognitive Services And Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-and-machine-learning.md
Last updated 10/28/2021
# Cognitive Services and machine learning
-Cognitive Services provides machine learning capabilities to solve general problems such as analyzing text for emotional sentiment or analyzing images to recognize objects or faces. You don't need special machine learning or data science knowledge to use these services.
+Cognitive Services provides machine learning capabilities to solve general problems such as analyzing text for emotional sentiment or analyzing images to recognize objects or faces. You don't need special machine learning or data science knowledge to use these services.
[Cognitive Services](./what-are-cognitive-services.md) is a group of services, each supporting different, generalized prediction capabilities. The services are divided into different categories to help you find the right service.
Cognitive Services provides machine learning capabilities to solve general probl
Use Cognitive Services when you: * Can use a generalized solution.
-* Access solution from a programming REST API or SDK.
+* Access solution from a programming REST API or SDK.
-Use another machine-learning solution when you:
+Use other machine-learning solutions when you:
* Need to choose the algorithm and need to train on very specific data.
Both have the end-goal of applying artificial intelligence (AI) to enhance busin
Generally, the audiences are different: * Cognitive Services are for developers without machine-learning experience.
-* Azure Machine Learning is tailored for data scientists.
+* Azure Machine Learning is tailored for data scientists.
## How is a Cognitive Service different from machine learning?
cognitive-services Cognitive Services Apis Create Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-apis-create-account.md
keywords: cognitive services, cognitive intelligence, cognitive solutions, ai services Previously updated : 03/03/2022 Last updated : 05/24/2022
cognitive-services Create Account Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/create-account-bicep.md
# Quickstart: Create a Cognitive Services resource using Bicep
-This quickstart describes how to use Bicep to create Cognitive Services.
+Follow this quickstart to create a Cognitive Services resource using Bicep.
Azure Cognitive Services are cloud-based services with REST APIs and client library SDKs available to help developers build cognitive intelligence into applications without having direct artificial intelligence (AI) or data science skills or knowledge. Azure Cognitive Services enables developers to easily add cognitive features into their applications with cognitive solutions that can see, hear, speak, understand, and even begin to reason.
-Create a resource using Bicep. This multi-service resource lets you:
+
+## Things to consider
+
+Using Bicep to create a Cognitive Service resource lets you create a multi-service resource. This enables you to:
* Access multiple Azure Cognitive Services with a single key and endpoint. * Consolidate billing from the services you use. * [!INCLUDE [terms-azure-portal](./includes/quickstarts/terms-azure-portal.md)] - ## Prerequisites * If you don't have an Azure subscription, [create one for free](https://azure.microsoft.com/free/cognitive-services).
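Once the prerequisites are in place, one way to deploy a Bicep file that declares a `Microsoft.CognitiveServices/accounts` resource is with Azure PowerShell. A minimal sketch, assuming a file named `main.bicep` in the current folder and placeholder resource group and location values (deploying a `.bicep` file requires the Bicep CLI to be available):

```azurepowershell-interactive
# Create a resource group (skip if you already have one) and deploy the Bicep file into it.
New-AzResourceGroup -Name "exampleRG" -Location "eastus"

New-AzResourceGroupDeployment `
  -ResourceGroupName "exampleRG" `
  -TemplateFile "./main.bicep"
```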
cognitive-services Bot Framework https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/tutorials/bot-framework.md
Previously updated : 05/17/2022 Last updated : 05/25/2022 # Integrate conversational language understanding with Bot Framework
This tutorial will explain how to integrate your own conversational language und
- Create a [Language resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics) in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**. - You will need the key and endpoint from the resource you create to connect your bot to the API. You'll paste your key and endpoint into the code below later in the tutorial.-- Download the **Core Bot** for CLU [sample in C#](https://aka.ms/clu-botframework-overview).
- - Clone the entire Bot Framework Samples repository to get access to this sample project.
-
+- Download the **CoreBotWithCLU** [sample](https://aka.ms/clu-botframework-overview).
+ - Clone the entire samples repository to get access to this solution.
## Import a project in conversational language understanding
-1. Copy the [FlightBooking.json](https://aka.ms/clu-botframework-json) file in the **Core Bot** for CLU sample.
+1. Download the [FlightBooking.json](https://aka.ms/clu-botframework-json) file in the **Core Bot with CLU** sample, in the _Cognitive Models_ folder.
2. Sign into the [Language Studio](https://language.cognitive.azure.com/) and select your Language resource. 3. Navigate to [Conversational Language Understanding](https://language.cognitive.azure.com/clu/projects) and click on the service. This will route you the projects page. Click the Import button next to the Create New Project button. Import the FlightBooking.json file with the project name as **FlightBooking**. This will automatically import the CLU project with all the intents, entities, and utterances. :::image type="content" source="../media/import.png" alt-text="A screenshot showing where to import a J son file." lightbox="../media/import.png":::
-4. Once the project is loaded, click on **Training** on the left. Press on Start a training job, provide the model name **v1** and press Train. All other settings such as **Standard Training** and the evaluation settings can be left as is.
+4. Once the project is loaded, click on **Training jobs** on the left. Press on Start a training job, provide the model name **v1** and press Train. All other settings such as **Standard Training** and the evaluation settings can be left as is.
:::image type="content" source="../media/train-model.png" alt-text="A screenshot of the training page in C L U." lightbox="../media/train-model.png":::
-5. Once training is complete, click to **Deployments** on the left. Click on Add Deployment and create a new deployment with the name **Testing**, and assign model **v1** to the deployment.
+5. Once training is complete, click to **Deploying a model** on the left. Click on Add Deployment and create a new deployment with the name **Testing**, and assign model **v1** to the deployment.
:::image type="content" source="../media/deploy-model-tutorial.png" alt-text="A screenshot of the deployment page within the deploy model screen in C L U." lightbox="../media/deploy-model-tutorial.png":::
In the **Core Bot** sample, update your [appsettings.json](https://aka.ms/clu-bo
- The _CluProjectName_ is **FlightBooking**. - The _CluDeploymentName_ is **Testing** - The _CluAPIKey_ can be either of the keys in the **Keys and Endpoint** section for your Language resource in the [Azure portal](https://portal.azure.com). You can also copy your key from the Project Settings tab in CLU. -- The _CluAPIHostName_ is the endpoint found in the **Keys and Endpoint** section for your Language resource in the Azure portal. Note the format should be ```<Language_Resource_Name>.cognitiveservices.azure.com``` without `https://`
+- The _CluAPIHostName_ is the endpoint found in the **Keys and Endpoint** section for your Language resource in the Azure portal. Note the format should be ```<Language_Resource_Name>.cognitiveservices.azure.com``` without `https://`.
```json {
In the **Core Bot** sample, update your [appsettings.json](https://aka.ms/clu-bo
## Identify integration points
-In the Core Bot sample, under the CLU folder, you can check out the **FlightBookingRecognizer.cs** file. Here is where the CLU API call to the deployed endpoint is made to retrieve the CLU prediction for intents and entities.
+In the Core Bot sample, you can check out the **FlightBookingRecognizer.cs** file. Here is where the CLU API call to the deployed endpoint is made to retrieve the CLU prediction for intents and entities.
```csharp public FlightBookingRecognizer(IConfiguration configuration)
In the Core Bot sample, under the CLU folder, you can check out the **FlightBook
```
-Under the folder Dialogs folder, find the **MainDialog** which uses the following to make a CLU prediction.
+Under the Dialogs folder, find the **MainDialog** which uses the following to make a CLU prediction.
```csharp var cluResult = await _cluRecognizer.RecognizeAsync<FlightBooking>(stepContext.Context, cancellationToken);
Run the sample locally on your machine **OR** run the bot from a terminal or fro
### Run the bot from a terminal
-From a terminal, navigate to `samples/csharp_dotnetcore/90.core-bot-with-clu/90.core-bot-with-clu`
+From a terminal, navigate to the `cognitive-service-language-samples/CoreBotWithCLU` folder.
Then run the following command
dotnet run
1. Launch Visual Studio 1. From the top navigation menu, select **File**, **Open**, then **Project/Solution**
-1. Navigate to the `samples/csharp_dotnetcore/90.core-bot-with-clu/90.core-bot-with-clu` folder
-1. Select the `CoreBotWithCLU.csproj` file
+1. Navigate to the `cognitive-service-language-samples/CoreBotWithCLU` folder
+1. Select the `CoreBotCLU.csproj` file
1. Press `F5` to run the project
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/overview.md
As you use custom NER, see the following reference documentation and samples for
## Responsible AI
-An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the [transparency note for custom NER]() to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
+An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the [transparency note for custom NER](/legal/cognitive-services/language-service/cner-transparency-note?context=/azure/cognitive-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
[!INCLUDE [Responsible AI links](../includes/overview-responsible-ai-links.md)]
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/overview.md
As you use custom text classification, see the following reference documentation
## Responsible AI
-An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the [transparency note for custom text classification]() to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
+An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the [transparency note for custom text classification](/legal/cognitive-services/language-service/ctc-transparency-note?context=/azure/cognitive-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
[!INCLUDE [Responsible AI links](../includes/overview-responsible-ai-links.md)]
cognitive-services Connect Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/tutorials/connect-services.md
Title: Intergate custom question answering and conversational language understanding into orchestration workflows
+ Title: Integrate custom question answering and conversational language understanding with orchestration workflow
description: Learn how to connect different projects with orchestration workflow. keywords: conversational language understanding, bot framework, bot, language understanding, nlu
Previously updated : 05/17/2022 Last updated : 05/25/2022
-# Connect different services with orchestration workflow
+# Connect different services with Orchestration workflow
Orchestration workflow is a feature that allows you to connect different projects from LUIS, conversational language understanding, and custom question answering in one project. You can then use this project for predictions under one endpoint. The orchestration project makes a prediction on which project should be called and automatically routes the request to that project, and returns with its response. In this tutorial, you will learn how to connect a custom question answering knowledge base with a conversational language understanding project. You will then call the project using the .NET SDK sample for orchestration.
-This tutorial will include creating a **chit chat** knowledge base and **email commands** project. Chit chat will deal with common niceties and greetings with static responses.
+This tutorial will include creating a **chit chat** knowledge base and **email commands** project. Chit chat will deal with common niceties and greetings with static responses. Email commands will predict among a few simple actions for an email assistant. The tutorial will then teach you to call the Orchestrator using the SDK in a .NET environment using a sample solution.
## Prerequisites
This tutorial will include creating a **chit chat** knowledge base and **email c
- You will need the key and endpoint from the resource you create to connect your bot to the API. You'll paste your key and endpoint into the code below later in the tutorial. Copy them from the **Keys and Endpoint** tab in your resource. - When you enable custom question answering, you must select an Azure search resource to connect to. - Make sure the region of your resource is supported by [conversational language understanding](../../conversational-language-understanding/service-limits.md#regional-availability).-- Download the **OrchestrationWorkflowSample** sample in [**.NET**](https://aka.ms/orchestration-sample).
+- Download the **OrchestrationWorkflowSample** [sample](https://aka.ms/orchestration-sample).
## Create a custom question answering knowledge base 1. Sign into the [Language Studio](https://language.cognitive.azure.com/) and select your Language resource.
-2. Find and select the [custom question answering](https://language.cognitive.azure.com/questionAnswering/projects/) card in the homepage.
+2. Find and select the [Custom question answering](https://language.cognitive.azure.com/questionAnswering/projects/) card in the homepage.
3. Click on **Create new project** and add the name **chitchat** with the language _English_ before clicking on **Create project**. 4. When the project loads, click on **Add source** and select _Chit chat_. Select the professional personality for chit chat before
This tutorial will include creating a **chit chat** knowledge base and **email c
5. Go to **Deploy knowledge base** from the left navigation menu and click on **Deploy** and confirm the popup that shows up.
-You are now done with deploying your knowledge base for chit chat. You can explore the type of questions and answers to expect in the **Edit knowledge base** tab.
+You are now done with deploying your knowledge base for chit chat. You can explore the type of questions and answers to expect in the **Edit knowledge base** page.
## Create a conversational language understanding project
-1. In Language Studio, go to the [conversational language understanding](https://language.cognitive.azure.com/clu/projects) service.
+1. In Language Studio, go to the [Conversational language understanding](https://language.cognitive.azure.com/clu/projects) service.
2. Download the **EmailProject.json** sample file [here](https://aka.ms/clu-sample-json).
-3. Click on the arrow next to **Create new project** and select **Import**. Browse to the downloaded EmailProject.json file you downloaded and press Done.
+3. Click on the **Import** button. Browse to the EmailProject.json file you downloaded and press Done.
:::image type="content" source="../media/import-export.png" alt-text="A screenshot showing where to import a J son file." lightbox="../media/import-export.png":::
-4. Once the project is loaded, click on **Training** on the left. Press on Start a training job, provide the model name **v1** and press Train. All other settings such as **Standard Training** and the evaluation settings can be left as is.
+4. Once the project is loaded, click on **Training jobs** on the left. Press on Start a training job, provide the model name **v1** and press Train.
:::image type="content" source="../media/train-model.png" alt-text="A screenshot of the training page." lightbox="../media/train-model.png":::
-5. Once training is complete, click to **Deployments** on the left. Click on Add Deployment and create a new deployment with the name **Testing**, and assign model **v1** to the deployment.
+5. Once training is complete, click to **Deploying a model** on the left. Click on Add Deployment and create a new deployment with the name **Testing**, and assign model **v1** to the deployment.
:::image type="content" source="../media/deploy-model-tutorial.png" alt-text="A screenshot showing the model deployment page." lightbox="../media/deploy-model-tutorial.png":::
-You are now done with deploying a conversational language understanding project for email commands. You can explore the different commands in the **Utterances** page.
+You are now done with deploying a conversational language understanding project for email commands. You can explore the different commands in the **Data labeling** page.
-## Create an orchestration workflow project
+## Create an Orchestration workflow project
-1. In Language Studio, go to the [orchestration workflow](https://language.cognitive.azure.com/orchestration/projects) service.
+1. In Language Studio, go to the [Orchestration workflow](https://language.cognitive.azure.com/orchestration/projects) service.
2. Click on **Create new project**. Use the name **Orchestrator** and the language _English_ before clicking next then done.
-3. Once the project is created, click on **Add** in the **Build schema** page.
+3. Once the project is created, click on **Add** in the **Schema definition** page.
4. Select _Yes, I want to connect it to an existing project_. Add the intent name **EmailIntent** and select **Conversational Language Understanding** as the connected service. Select the recently created **EmailProject** project for the project name before clicking on **Add Intent**.
    :::image type="content" source="../media/connect-intent-tutorial.png" alt-text="A screenshot of the connect intent popup in orchestration workflow." lightbox="../media/connect-intent-tutorial.png":::
5. Add another intent but now select **Question Answering** as the service and select **chitchat** as the project name.
-6. Similar to conversational language understanding, go to **Training** and start a new training job with the name **v1** and press Train.
-7. Once training is complete, click to **Deployments** on the left. Click on Add deployment and create a new deployment with the name **Testing**, and assign model **v1** to the deployment and press Next.
+6. Similar to conversational language understanding, go to **Training jobs** and start a new training job with the name **v1** and press Train.
+7. Once training is complete, click on **Deploying a model** on the left. Click on **Add deployment**, create a new deployment with the name **Testing**, assign model **v1** to the deployment, and press **Next**.
8. On the next page, select the deployment name **Testing** for the **EmailIntent**. This tells the orchestrator to call the **Testing** deployment in **EmailProject** when it routes to it. Custom question answering projects only have one deployment by default.
    :::image type="content" source="../media/deployment-orchestrator-tutorial.png" alt-text="A screenshot of the deployment popup for orchestration workflow." lightbox="../media/deployment-orchestrator-tutorial.png":::
Now your orchestration project is ready to be used. Any incoming request will be
## Call the orchestration project with the Conversations SDK
-1. In the downloaded **OrchestrationWorkflowSample** solution, make sure to install all the required packages. In Visual Studio, go to _Tools_, _NuGet Package Manager_ and select _Package Manager Console_ and run the following command.
+1. In the downloaded sample, open OrchestrationWorkflowSample.sln in Visual Studio.
+
+2. In the OrchestrationWorkflowSample solution, make sure to install all the required packages. In Visual Studio, go to _Tools_, _NuGet Package Manager_ and select _Package Manager Console_ and run the following command.
```powershell
dotnet add package Azure.AI.Language.Conversations
```
-2. In `Program.cs`, replace `{api-key}` and the placeholder endpoint. Use the key and endpoint for the Language resource you created earlier. You can find them in the **Keys and Endpoint** tab in your Language resource in Azure.
+3. In `Program.cs`, replace the `{api-key}` and `{endpoint}` placeholder values. Use the key and endpoint for the Language resource you created earlier. You can find them in the **Keys and Endpoint** tab in your Language resource in Azure.
```csharp
-Uri endpoint = new Uri("https://myaccount.api.cognitive.microsoft.com");
+Uri endpoint = new Uri("{endpoint}");
AzureKeyCredential credential = new AzureKeyCredential("{api-key}");
```
-3. Replace the orchestrationProject parameters to **Orchestrator** and **Testing** as below if they are not set already.
+4. Set the `orchestrationProject` parameters to **Orchestrator** and **Testing** as shown below, if they aren't set already.
```csharp
ConversationsProject orchestrationProject = new ConversationsProject("Orchestrator", "Testing");
```
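For orientation, the following is a minimal sketch of how these pieces fit together, assuming the preview `Azure.AI.Language.Conversations` client surface that the sample targets; exact type and method names can vary between SDK versions, so treat the sample's `Program.cs` as the reference.

```csharp
using System;
using Azure;
using Azure.AI.Language.Conversations;

// Placeholder values; use the key and endpoint of the Language resource you created earlier.
Uri endpoint = new Uri("{endpoint}");
AzureKeyCredential credential = new AzureKeyCredential("{api-key}");

ConversationAnalysisClient client = new ConversationAnalysisClient(endpoint, credential);
ConversationsProject orchestrationProject = new ConversationsProject("Orchestrator", "Testing");

// The Testing deployment of the Orchestrator project routes the query to either
// EmailProject (conversational language understanding) or chitchat (question answering).
var response = client.AnalyzeConversation("read the email from matt", orchestrationProject);

// Print the raw JSON response to see which connected project produced the answer.
Console.WriteLine(response.GetRawResponse().Content.ToString());
```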
-4. Run the project or press F5 in Visual Studio.
-5. Input a query such as "read the email from matt" or "hello how are you". You'll now observe different responses for each, a conversational language understanding **EmailProject** response from the first, and the answer from the **chitchat** for the second query.
+5. Run the project or press F5 in Visual Studio.
+6. Input a query such as "read the email from matt" or "hello how are you". You'll now observe different responses for each: a conversational language understanding **EmailProject** response for the first query, and the answer from the **chitchat** knowledge base for the second query.
**Conversational Language Understanding**: :::image type="content" source="../media/clu-response-orchestration.png" alt-text="A screenshot showing the sample response from conversational language understanding." lightbox="../media/clu-response-orchestration.png":::
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/overview.md
Conversation summarization feature would simplify the text into the following:
-## Get started with text summarization
+## Get started with summarization
# [Document summarization](#tab/document-summarization)
-To use this feature, you submit raw unstructured text for analysis and handle the API output in your application. Analysis is performed as-is, with no additional customization to the model used on your data. There are two ways to use text summarization:
+To use this feature, you submit raw unstructured text for analysis and handle the API output in your application. Analysis is performed as-is, with no additional customization to the model used on your data. There are two ways to use summarization:
|Development option |Description | Links |
To use this feature, you submit raw text for analysis and handle the API output
# [Document summarization](#tab/document-summarization)
-* Text summarization takes raw unstructured text for analysis. See [Data and service limits](../concepts/data-limits.md) in the how-to guide for more information.
-* Text summarization works with a variety of written languages. See [language support](language-support.md) for more information.
+* Summarization takes raw unstructured text for analysis. See [Data and service limits](../concepts/data-limits.md) in the how-to guide for more information.
+* Summarization works with a variety of written languages. See [language support](language-support.md) for more information.
# [Conversation summarization](#tab/conversation-summarization)
cognitive-services Responsible Characteristics And Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/responsible-characteristics-and-limitations.md
+
+ Title: Characteristics and limitations of Personalizer
+
+description: Characteristics and limitations of Personalizer
+Last updated : 05/23/2022
+# Characteristics and limitations of Personalizer
+
+Azure Personalizer can work in many scenarios. To understand where you can apply Personalizer, make sure the requirements of your scenario meet the [expectations for Personalizer to work](where-can-you-use-personalizer.md#expectations-required-to-use-personalizer). To understand whether Personalizer should be used and how to integrate it into your applications, see [Use Cases for Personalizer](responsible-use-cases.md). You'll find criteria and guidance on choosing use cases, designing features, and reward functions for your uses of Personalizer.
+
+Before you read this article, it's helpful to understand some background information about [how Personalizer works](how-personalizer-works.md).
++
+## Select features for Personalizer
+
+Personalizing content depends on having useful information about the content and the user. For some applications and industries, some user features can be directly or indirectly considered discriminatory and potentially illegal. See the [Personalizer integration and responsible use guidelines](responsible-guidance-integration.md) on assessing features to use with Personalizer.
++
+## Computing rewards for Personalizer
+
+Personalizer learns to improve action choices based on the reward score provided by your application business logic.
+A well-built reward score will act as a short-term proxy to a business goal that's tied to an organization's mission.
+For example, rewarding on clicks will make Personalizer seek clicks at the expense of everything else, even if what's clicked is distracting to the user or not tied to a business outcome.
+In contrast, a news site might want to set rewards tied to something more meaningful than clicks, such as "Did the user spend enough time to read the content?" or "Did the user click relevant articles or references?" With Personalizer, it's easy to tie metrics closely to rewards. However, you will need to be careful not to confound short-term user engagement with desired outcomes.
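To make this concrete, here is a minimal, hypothetical sketch of application-side reward logic; the signal names, thresholds, and weights are illustrative assumptions, not part of the Personalizer service or SDK.

```csharp
using System;

class RewardLogic
{
    // Hypothetical weights and thresholds; tune them to your own business goal.
    static double ComputeReward(bool clicked, double secondsReading, bool openedRelatedArticle)
    {
        if (!clicked) return 0.0;

        double reward = 0.2;                        // a click alone is a weak signal
        if (secondsReading >= 30) reward += 0.5;    // the user spent enough time to read the content
        if (openedRelatedArticle) reward += 0.3;    // engagement tied to the business goal

        return Math.Min(reward, 1.0);               // reward scores must stay between 0 and 1
    }

    static void Main()
    {
        // Clicked and read for 45 seconds, but didn't open a related article: prints 0.7
        Console.WriteLine(ComputeReward(clicked: true, secondsReading: 45, openedRelatedArticle: false));
    }
}
```

Your application would then report the resulting score through the Reward API for the event ID returned by the corresponding Rank call.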
++
+## Unintended consequences from reward scores
+
+Even if built with the best intentions, reward scores might create unexpected consequences or unintended results because of how Personalizer ranks content.
+
+Consider the following examples:
+
+- Rewarding video content personalization on the percentage of the video length watched will probably tend to rank shorter videos higher than longer videos.
+- Rewarding social media shares, without sentiment analysis of how it's shared or the content itself, might lead to ranking offensive, unmoderated, or inflammatory content. This type of content tends to incite a lot of engagement but is often damaging.
+- Rewarding the action on user interface elements that users don't expect to change might interfere with the usability and predictability of the user interface. For example, buttons that change location or purpose without warning might make it harder for certain groups of users to stay productive.
+
+Implement these best practices:
+
+- Run offline experiments with your system by using different reward approaches to understand impact and side effects.
+- Evaluate your reward functions, and ask yourself how a naïve person might alter their interpretation in ways that result in unintentional or undesirable outcomes.
+- Archive information and assets, such as models, learning policies, and other data, that Personalizer uses to function, so that results can be reproducible.
++
+## General guidelines to understand and improve performance
+
+Because Personalizer is based on Reinforcement Learning and learns from rewards to make better choices over time, performance isn't measured in traditional supervised learning terms used in classifiers, such as precision and recall. The performance of Personalizer is directly measured as the sum of reward scores it receives from your application via the Reward API.
+
+When you use Personalizer, the product user interface in the Azure portal provides performance information so you can monitor and act on it. The performance can be seen in the following ways:
+
+- If Personalizer is in Online Learning mode, you can perform [offline evaluations](concepts-offline-evaluation.md).
+- If Personalizer is in [Apprentice mode](concept-apprentice-mode.md), you can see the performance metrics (events imitated and rewards imitated) in the Evaluation pane in the Azure portal.
+
+We recommend you perform frequent offline evaluations to maintain oversight. This task will help you monitor trends and ensure effectiveness. For example, you could decide to temporarily put Personalizer in Apprentice Mode if reward performance has a dip.
+
+### Personalizer performance estimates shown in Offline Evaluations: Limitations
+
+We define the "performance" of Personalizer as the total rewards it obtains during use. Personalizer performance estimates shown in Offline Evaluations are computed instead of measured. It is important to understand the limitations of these estimates:
+
+- The estimates are based on past data, so future performance may vary as the world and your users change.
+- The estimates for baseline performance are computed probabilistically. For this reason, the confidence band for the baseline average reward is important. The estimate will get more precise with more events. If you use a smaller number of actions in each Rank call, the performance estimate may increase in confidence, because there's a higher probability that Personalizer may choose any one of them (including the baseline action) for every event.
+- Personalizer constantly trains a model in near real time to improve the actions chosen for each event, and as a result, it will affect the total rewards obtained. The model performance will vary over time, depending on the recent past training data.
+- Exploration and action choice are stochastic processes guided by the Personalizer model. The random numbers used for these stochastic processes are seeded from the Event Id. To ensure reproducibility of explore-exploit and other stochastic processes, use the same Event Id.
+- Online performance may be capped by [exploration](concepts-exploration.md). Lowering exploration settings will limit how much information is harvested to stay on top of changing trends and usage patterns, so the balance depends on each use case. Some use cases merit starting off with higher exploration settings and reducing them over time (e.g., start with 30% and reduce to 10%).
++
+### Check existing models that might accidentally bias Personalizer
+
+Existing recommendations, customer segmentation, and propensity model outputs can be used by your application as inputs to Personalizer. Personalizer learns to disregard features that don't contribute to rewards. Review and evaluate any propensity models to determine whether they're good at predicting rewards and whether they contain strong biases that might generate harm as a side effect. For example, look for recommendations that might be based on harmful stereotypes. Consider using tools such as [FairLearn](https://fairlearn.org/) to facilitate the process.
++
+## Proactive assessments during your project lifecycle
+
+Consider creating methods for team members, users, and business owners to report concerns regarding responsible use and a process that prioritizes their resolution. Consider treating tasks for responsible use just like other crosscutting tasks in the application lifecycle, such as tasks related to user experience, security, or DevOps. Tasks related to responsible use and their requirements shouldn't be afterthoughts. Responsible use should be discussed and implemented throughout the application lifecycle.
++
+## Next steps
+
+- [Responsible use and integration](responsible-guidance-integration.md)
+- [Offline evaluations](concepts-offline-evaluation.md)
+- [Features for context and actions](concepts-features.md)
cognitive-services Responsible Data And Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/responsible-data-and-privacy.md
+
+ Title: Data and privacy for Personalizer
+
+description: Data and privacy for Personalizer
+Last updated : 05/23/2022
+# Data and privacy for Personalizer
+
+This article provides information about what data Azure Personalizer uses to work, how it processes that data, and how you can control that data. It assumes basic familiarity with [what Personalizer is](what-is-personalizer.md) and [how Personalizer works](how-personalizer-works.md). Specific terms can be found in [Terminology](terminology.md).
++
+## What data does Personalizer process?
+
+Personalizer processes the following types of data:
+- **Context features and Action features**: Your application sends information about users, and the products or content to personalize, in aggregated form. This data is sent to Personalizer in each Rank API call in arguments for Context and Actions. You decide what to send to the API and how to aggregate it. The data is expressed as attributes or features. You provide information about your users, such as their device and their environment, as Context features. You shouldn't send features that are specific to a user, such as a phone number, an email address, or a user ID. Action features include information about your content and product, such as movie genre or product price. For more information, see [Features for Actions and Context](concepts-features.md).
+- **Reward information**: A reward score (a number between 0 and 1) ranks how well the user interaction resulting from the personalization choice mapped to a business goal. For example, an event might get a reward of "1" if a recommended article was clicked on. For more information, see [Rewards](concept-rewards.md).
+
+To understand more about what information you typically use with Personalizer, see [Features are information about Actions and Context](concepts-features.md).
+
+> [!TIP]
+> You decide which features to use, how to aggregate them, and where the information comes from when you call the Personalizer Rank API in your application. You also determine how to create reward scores. To make informed decisions about what information to use with Personalizer, see the [Personalizer responsible use guidelines](responsible-use-cases.md).
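As an illustration of the shape of this data, the following minimal sketch uses the Personalizer .NET client library (`Microsoft.Azure.CognitiveServices.Personalizer`) to send aggregated Context and Action features in a Rank call and then report a reward score. The endpoint, feature names, and reward value are placeholders, not prescriptive choices.

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Azure.CognitiveServices.Personalizer;
using Microsoft.Azure.CognitiveServices.Personalizer.Models;

class RankAndRewardSketch
{
    static void Main()
    {
        // Placeholder resource values; use your own Personalizer key and endpoint.
        var client = new PersonalizerClient(new ApiKeyServiceClientCredentials("{api-key}"))
        {
            Endpoint = "https://{your-resource-name}.cognitiveservices.azure.com/"
        };

        // Context features: aggregated information about the user and environment,
        // with no direct identifiers such as email addresses or user IDs.
        IList<object> contextFeatures = new List<object>
        {
            new { device = "mobile", timeOfDay = "evening" }
        };

        // Action features: information about the content or products being personalized.
        IList<RankableAction> actions = new List<RankableAction>
        {
            new RankableAction { Id = "article-sports", Features = new List<object> { new { topic = "sports", length = "short" } } },
            new RankableAction { Id = "article-finance", Features = new List<object> { new { topic = "finance", length = "long" } } }
        };

        // Rank: Personalizer returns the ID of the action to show for this event.
        RankResponse response = client.Rank(new RankRequest(actions, contextFeatures));
        Console.WriteLine($"Show action: {response.RewardActionId}");

        // Reward: report how well the choice met your business goal (a score between 0 and 1).
        client.Reward(response.EventId, new RewardRequest(1.0));
    }
}
```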
++
+## How does Personalizer process data?
+
+The following diagram illustrates how your data is processed.
+
+![Diagram that shows how Personalizer processes data.](media/how-personalizer-works/personalization-how-it-works.png)
+
+Personalizer processes data as follows:
+
+1. Personalizer receives data each time the application calls the Rank API for a personalization event. The data is sent via the arguments for the Context and Actions.
+
+2. Personalizer uses the information in the Context and Actions, its internal AI models, and service configuration to return the rank response for the ID of the action to use. The contents of the Context and Actions are stored for no more than 48 hours in transient caches with the EventID used or generated in the Rank API.
+3. The application then calls the Reward API with one or more reward scores. This information is also stored in transient caches and matched with the Actions and Context information.
+4. After the rank and reward information for events is correlated, it's removed from transient caches and placed in more permanent storage. It remains in permanent storage until the number of days specified in the Data Retention setting has gone by, at which time the information is deleted. If you choose not to specify a number of days in the Data Retention setting, this data will be saved as long as the Personalizer Azure Resource is not deleted or until you choose to Clear Data via the UI or APIs. You can change the Data Retention setting at any time.
+5. Personalizer continuously trains internal Personalizer AI models specific to this Personalizer loop by using the data in the permanent storage and machine learning configuration parameters in [Learning settings](concept-active-learning.md).
+6. Personalizer creates [offline evaluations](concepts-offline-evaluation.md) either automatically or on demand.
+Offline evaluations contain a report of rewards obtained by Personalizer models during a past time period. An offline evaluation embeds the models active at the time of their creation, and the learning settings used to create them, as well as a historical aggregate of average reward per event for that time window. Evaluations also include [feature importance](concept-feature-evaluation.md), which is a list of features observed in the time period, and their relative importance in the model.
++
+### Independence of Personalizer loops
+
+Each Personalizer loop is separate and independent from others, as follows:
+
+- **No external data augmentation**: Each Personalizer loop only uses the data supplied to it by you via Rank and Reward API calls to train models. Personalizer doesn't use any additional information from any origin, such as other Personalizer loops in your own Azure subscription, Microsoft, third-party sources or subprocessors.
+- **No data, model, or information sharing**: A Personalizer loop won't share information about events, features, and models with any other Personalizer loop in your subscription, Microsoft, third parties or subprocessors.
++
+## How is data retained and what customer controls are available?
+
+Personalizer retains different types of data in different ways and provides the following controls for each.
++
+### Personalizer rank and reward data
+
+Personalizer stores the features about Actions and Context sent via rank and reward calls for the number of days specified in configuration under Data Retention.
+To control this data retention, you can:
+
+1. Specify the number of days to retain log storage in the [Azure portal for the Personalizer resource](how-to-settings.md) under **Configuration** > **Data Retention** or via the API. The default **Data Retention** setting is seven days. Personalizer deletes all Rank and Reward data older than this number of days automatically.
+
+2. Clear data for logged personalization and reward data in the Azure portal under **Model and learning settings** > **Clear data** > **Logged personalization and reward data** or via the API.
+
+3. Delete the Personalizer loop from your subscription in the Azure portal or via Azure resource management APIs.
+
+You can't access past data from Rank and Reward API calls in the Personalizer resource directly. If you want to see all the data that's being saved, configure log mirroring to create a copy of this data on an Azure Blob Storage resource you've created and are responsible for managing.
++
+### Personalizer transient cache
+
+Personalizer stores partial data about an event separate from rank and reward calls in transient caches. Events are automatically purged from the transient cache 48 hours from the time the event occurred.
+
+To delete transient data, you can:
+
+1. Clear data for logged personalization and reward data in the Azure portal under **Model and learning settings** > **Clear data** or via the API.
+
+2. Delete the Personalizer loop from your subscription in the Azure portal or via Azure resource management APIs.
++
+### Personalizer models and learning settings
+
+A Personalizer loop trains models with data from Rank and Reward API calls, driven by the hyperparameters and configuration specified in **Model and learning settings** in the Azure portal. Models are volatile. They're constantly changing and being trained on additional data in near real time. Personalizer doesn't automatically save older models and keeps overwriting them with the latest models. For more information, see [How to manage models and learning settings](how-to-manage-model.md). To clear models and learning settings:
+
+1. Reset them in the Azure portal under **Model and learning settings** > **Clear data** or via the API.
+
+2. Delete the Personalizer loop from your subscription in the Azure portal or via Azure resource management APIs.
++
+### Personalizer evaluation reports
+
+Personalizer also retains the information generated in [offline evaluations](concepts-offline-evaluation.md) for reports.
+
+To delete offline evaluation reports, you can:
+
+1. Go to the Personalizer loop in the Azure portal. Go to **Evaluations** and delete the relevant evaluation.
+
+2. Delete evaluations via the Evaluations API.
+
+3. Delete the Personalizer loop from your subscription in the Azure portal or via Azure resource management APIs.
++
+### Further storage considerations
+
+- **Customer managed keys**: Customers can configure the service to encrypt data at rest with their own managed keys. This second layer of encryption is on top of Microsoft's own encryption.
+- **Geography**: In all cases, the incoming data, models, and evaluations are processed and stored in the same geography where the Personalizer resource was created.
+
+Also see:
+
+- [How to manage model and learning settings](how-to-manage-model.md)
+- [Configure Personalizer learning loop](how-to-settings.md)
++
+## Next steps
+
+- [See Responsible use guidelines for Personalizer](responsible-use-cases.md).
+
+To learn more about Microsoft's privacy and security commitments, see the [Microsoft Trust Center](https://www.microsoft.com/trust-center).
cognitive-services Responsible Guidance Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/responsible-guidance-integration.md
+
+ Title: Guidance for integration and responsible use of Personalizer
+
+description: Guidance for integration and responsible use of Personalizer
+Last updated : 05/23/2022
+# Guidance for integration and responsible use of Personalizer
+
+Microsoft works to help customers responsibly develop and deploy solutions by using Azure Personalizer. Our principled approach upholds personal agency and dignity by considering the AI system's:
+
+- Fairness, reliability, and safety.
+- Privacy and security.
+- Inclusiveness.
+- Transparency.
+- Human accountability.
+
+These considerations reflect our commitment to developing responsible AI.
++
+## General guidelines for integration and responsible use principles
+
+When you get ready to integrate and responsibly use AI-powered products or features, the following activities will help to set you up for success:
+
+- **Understand what it can do**. Fully assess the potential of Personalizer to understand its capabilities and limitations. Understand how it will perform in your particular scenario and context by thoroughly testing it with real-life conditions and data.
+
+- **Respect an individual's right to privacy**. Only collect data and information from individuals for lawful and justifiable purposes. Only use data and information that you have consent to use for this purpose.
+
+- **Obtain legal review**. Obtain appropriate legal advice to review Personalizer and how you are using it in your solution, particularly if you will use it in sensitive or high-risk applications. Understand what restrictions you might need to work within and your responsibility to resolve any issues that might come up in the future.
+
+- **Have a human in the loop**. Include human oversight as a consistent pattern area to explore. Ensure constant human oversight of the AI-powered product or feature. Maintain the role of humans in decision making. Make sure you can have real-time human intervention in the solution to prevent harm and manage situations when the AI system doesn't perform as expected.
+
+- **Build trust with affected stakeholders**. Communicate the expected benefits and potential risks to affected stakeholders. Help people understand why the data is needed and how the use of the data will lead to their benefit. Describe data handling in an understandable way.
+
+- **Create a customer feedback loop**. Provide a feedback channel that allows users and individuals to report issues with the service after it's deployed. After you've deployed an AI-powered product or feature, it requires ongoing monitoring and improvement. Be ready to implement any feedback and suggestions for improvement. Establish channels to collect questions and concerns from affected stakeholders. People who might be directly or indirectly affected by the system include employees, visitors, and the general public.
+
+- **Feedback**: Seek feedback from a diverse sampling of the community during the development and evaluation process (for example, historically marginalized groups, people with disabilities, and service workers). For more information, see Community jury.
+
+- **User Study**: Any consent or disclosure recommendations should be framed in a user study. Evaluate the first and continuous-use experience with a representative sample of the community to validate that the design choices lead to effective disclosure. Conduct user research with 10-20 community members (affected stakeholders) to evaluate their comprehension of the information and to determine if their expectations are met.
+
+- **Transparency**: Consider providing users with information about how the content was personalized. For example, you can give your users a button labeled Why These Suggestions? that shows which top features of the user and actions played a role in producing the Personalizer results.
+
+- **Adversarial use**: Consider establishing a process to detect and act on malicious manipulation. There are actors that will take advantage of machine learning and AI systems' ability to learn from their environment. With coordinated attacks, they can artificially fake patterns of behavior that shift the data and AI models toward their goals. If your use of Personalizer could influence important choices, make sure you have the appropriate means to detect and mitigate these types of attacks in place.
++
+## Your responsibility
+
+All guidelines for responsible implementation build on the foundation that developers and businesses using Personalizer are responsible and accountable for the effects of using these algorithms in society. If you're developing an application that your organization will deploy, you should recognize your role and responsibility for its operation and how it affects people. If you're designing an application to be deployed by a third party, come to a shared understanding of who is ultimately responsible for the behavior of the application. Make sure to document that understanding.
++
+## Questions and feedback
+
+Microsoft is continuously upgrading tools and documents to help you act on these responsibilities. Our team invites you to [provide feedback to Microsoft](mailto:cogsvcs-RL-feedback@microsoft.com?subject%3DPersonalizer%20Responsible%20Use%20Feedback&body%3D%5BPlease%20share%20any%20question%2C%20idea%20or%20concern%5D) if you believe other tools, product features, and documents would help you implement these guidelines for using Personalizer.
++
+## Recommended reading
+- See Microsoft's six principles for the responsible development of AI published in the January 2018 book, [The Future Computed](https://news.microsoft.com/futurecomputed/).
++
+## Next steps
+
+Understand how the Personalizer API receives features: [Features: Action and Context](concepts-features.md)
cognitive-services Responsible Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/responsible-use-cases.md
+
+ Title: Transparency note for Personalizer
+
+description: Transparency Note for Personalizer
+Last updated : 05/23/2022
+# Use cases for Personalizer
+
+## What is a Transparency Note?
+
+An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Creating a system that is fit for its intended purpose requires an understanding of how the technology works, its capabilities and limitations, and how to achieve the best performance.
+
+Microsoft provides *Transparency Notes* to help you understand how our AI technology works. This includes the choices system owners can make that influence system performance and behavior, and the importance of thinking about the whole system, including the technology, the people, and the environment. You can use Transparency Notes when developing or deploying your own system, or share them with the people who will use or be affected by your system.
+
+Transparency Notes are part of a broader effort at Microsoft to put our AI principles into practice. To find out more, see [Microsoft AI Principles](https://www.microsoft.com/ai/responsible-ai).
+
+## Introduction to Personalizer
+
+Azure Personalizer is a cloud-based service that helps your applications choose the best content item to show your users. You can use Personalizer to determine what product to suggest to shoppers or to figure out the optimal position for an advertisement. After the content is shown to the user, your application monitors the user's reaction and reports a reward score back to Personalizer. The reward score is used to continuously improve the machine learning model using reinforcement learning. This enhances the ability of Personalizer to select the best content item in subsequent interactions based on the contextual information it receives for each.
+
+For more information, see:
+
+- [What is Personalizer?](what-is-personalizer.md)
+- [Where can you use Personalizer](where-can-you-use-personalizer.md)
+- [How Personalizer works](how-personalizer-works.md)
+
+## Key terms
+
+|Term| Definition|
+|:--|:-|
+|**Learning Loop** | You create a Personalizer resource, called a learning loop, for every part of your application that can benefit from personalization. If you have more than one experience to personalize, create a loop for each. |
+|**Online model** | The default [learning behavior](terminology.md#learning-behavior) for Personalizer, where your learning loop uses machine learning to build the model that predicts the **top action** for your content. |
+|**Apprentice mode** | A [learning behavior](terminology.md#learning-behavior) that helps warm-start a Personalizer model to train without impacting the application's outcomes and actions. |
+|**Rewards**| A measure of how the user responded to the Rank API's returned reward action ID, as a score between 0 and 1. The 0 to 1 value is set by your business logic, based on how the choice helped achieve your business goals of personalization. The learning loop doesn't store this reward as individual user history. |
+|**Exploration**| The Personalizer service is exploring when, instead of returning the best action, it chooses a different action for the user. By exploring, the Personalizer service avoids drift and stagnation, and can adapt to ongoing user behavior. |
+
+For more information and additional key terms, see the [Personalizer terminology](terminology.md) and [conceptual documentation](how-personalizer-works.md).
+
+## Example use cases
+
+Some common customer motivations for using Personalizer are to:
+
+- **User engagement**: Capture user interest by choosing content to increase clickthrough, or to prioritize the next best action to improve average revenue. Other mechanisms to increase user engagement might include selecting videos or music in a dynamic channel or playlist.
+- **Content optimization**: Images can be optimized for a product (such as selecting a movie poster from a set of options) to optimize clickthrough, or the UI layout, colors, images, and blurbs can be optimized on a web page to increase conversion and purchase.
+- **Maximize conversions using discounts and coupons**: To get the best balance of margin and conversion, choose which discounts the application will provide to users, or decide which product to highlight from the results of a recommendation engine to maximize conversion.
+- **Maximize positive behavior change**: Select which wellness tip question to send in a notification, messaging, or SMS push to maximize positive behavior change.
+- **Increase productivity** in customer service and technical support by highlighting the most relevant next best actions or the appropriate content when users are looking for documents, manuals, or database items.
+
+## Considerations when choosing a use case
+
+- Using a service that learns to personalize content and user interfaces is useful. However, it can also be misapplied if the personalization creates harmful side effects in the real world. Consider how personalization also helps your users achieve their goals.
+- Consider what the negative consequences in the real world might be if Personalizer isn't suggesting particular items because the system is trained with a bias toward the behavior patterns of the majority of the system's users.
+- Consider situations where the exploration behavior of Personalizer might cause harm.
+- Carefully consider personalizing choices that are consequential or irreversible, and that should not be determined by short-term signals and rewards.
+- Don't provide actions to Personalizer that shouldn't be chosen. For example, inappropriate movies should be filtered out of the actions to personalize if making a recommendation for an anonymous or underage user.
+
+Here are some scenarios where the above guidance will play a role in whether, and how, to apply Personalizer:
+
+- Avoid using Personalizer for ranking offers on specific loan, financial, and insurance products, where personalization features are regulated, based on data the individuals don't know about, can't obtain, or can't dispute; and choices needing years and information "beyond the click" to truly assess how good recommendations were for the business and the users.
+- Carefully consider personalizing highlights of school courses and education institutions where recommendations without enough exploration might propagate biases and reduce users' awareness of other options.
+- Avoid using Personalizer to synthesize content algorithmically with the goal of influencing opinions in democracy and civic participation, as it is consequential in the long term, and can be manipulative if the user's goal for the visit is to be informed, not influenced.
++
+## Next steps
+
+* [Characteristics and limitations for Personalizer](responsible-characteristics-and-limitations.md)
+* [Where can you use Personalizer?](where-can-you-use-personalizer.md)
communication-services Real Time Inspection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/developer-tools/real-time-inspection.md
communicationMonitoring.close()
```
+## Download logs
+
+The tool includes the ability to download the captured logs by using the `Download logs` button at the top right. The tool will generate a compressed log file that you can provide to our customer support team for debugging.
+ ## Next Steps - [Explore User-Facing Diagnostic APIs](../voice-video-calling/user-facing-diagnostics.md)
communication-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/reference.md
For each area, we have external pages to track and review our SDKs. You can cons
| Calling | [npm](https://www.npmjs.com/package/@azure/communication-calling) | - | - | - | [GitHub](https://github.com/Azure/Communication/releases) ([docs](/objectivec/communication-services/calling/)) | [Maven](https://search.maven.org/artifact/com.azure.android/azure-communication-calling/) | - |
| Chat | [npm](https://www.npmjs.com/package/@azure/communication-chat) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Chat) | [PyPi](https://pypi.org/project/azure-communication-chat/) | [Maven](https://search.maven.org/search?q=a:azure-communication-chat) | [GitHub](https://github.com/Azure/azure-sdk-for-ios/releases) | [Maven](https://search.maven.org/search?q=a:azure-communication-chat) | - |
| Common | [npm](https://www.npmjs.com/package/@azure/communication-common) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Common/) | N/A | [Maven](https://search.maven.org/search?q=a:azure-communication-common) | [GitHub](https://github.com/Azure/azure-sdk-for-ios/releases) | [Maven](https://search.maven.org/artifact/com.azure.android/azure-communication-common) | - |
+| Email | [npm](https://www.npmjs.com/package/@azure/communication-email) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Email) | - | - | - | - | - |
| Identity | [npm](https://www.npmjs.com/package/@azure/communication-identity) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Identity) | [PyPi](https://pypi.org/project/azure-communication-identity/) | [Maven](https://search.maven.org/search?q=a:azure-communication-identity) | - | - | - | | Network Traversal | [npm](https://www.npmjs.com/package/@azure/communication-network-traversal) | [NuGet](https://www.nuget.org/packages/Azure.Communication.NetworkTraversal) | [PyPi](https://pypi.org/project/azure-communication-networktraversal/) | [Maven](https://search.maven.org/search?q=a:azure-communication-networktraversal) | - | - | - | | Phone numbers | [npm](https://www.npmjs.com/package/@azure/communication-phone-numbers) | [NuGet](https://www.nuget.org/packages/Azure.Communication.phonenumbers) | [PyPi](https://pypi.org/project/azure-communication-phonenumbers/) | [Maven](https://search.maven.org/search?q=a:azure-communication-phonenumbers) | - | - | - |
communication-services Calling Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md
The maximum call duration is 30 hours, participants that reach the maximum call
## JavaScript Calling SDK support by OS and browser
-The following table represents the set of supported browsers which are currently available. **We support the most recent three versions of the browser** unless otherwise indicated.
+The following table represents the set of supported browsers which are currently available. **We support the most recent three major versions of the browser (most recent three minor versions for Safari)** unless otherwise indicated.
| Platform | Chrome | Safari | Edge (Chromium) |
| -------- | ------ | ------ | --------------- |
communication-services Create Email Communication Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/create-email-communication-resource.md
[!INCLUDE [Public Preview Notice](../../includes/public-preview-include.md)]
-Get started with Email by provisioning your first Email Communication Services resource. Communication services resources can be provisioned through the [Azure portal](https://portal.azure.com) or with the .NET management client library. The management client library and the Azure portal allow you to create, configure, update and delete your resources and interface with [Azure Resource Manager](../../../azure-resource-manager/management/overview.md), Azure's deployment and management service. All functionality available in the client libraries is available in the Azure portal.
+Get started with Email by provisioning your first Email Communication Services resource. Communication services resources can be provisioned through the [Azure portal](https://portal.azure.com/) or with the .NET management client library. The management client library and the Azure portal allow you to create, configure, update and delete your resources and interface with [Azure Resource Manager](../../../azure-resource-manager/management/overview.md), Azure's deployment and management service. All functionality available in the client libraries is available in the Azure portal.
## Create the Email Communications Service resource using portal
communication-services Handle Sms Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/sms/handle-sms-events.md
Title: Quickstart - Handle SMS events for Delivery Reports and Inbound Messages-
-description: Learn how to handle SMS events using Azure Communication Services.
+ Title: Quickstart - Handle SMS and delivery report events
+
+description: "In this quickstart, you'll learn how to handle Azure Communication Services events. See how to create, receive, and subscribe to SMS and delivery report events."
Previously updated : 06/30/2021 Last updated : 05/25/2022
+ - mode-other
+ - kr2b-contr-experiment
-# Quickstart: Handle SMS events for Delivery Reports and Inbound Messages
+# Quickstart: Handle SMS and delivery report events
+
+Get started with Azure Communication Services by using Azure Event Grid to handle Communication Services SMS events. After subscribing to SMS events such as inbound messages and delivery reports, you generate and receive these events. Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
[!INCLUDE [Regional Availability Notice](../../includes/regional-availability-include.md)]
-Get started with Azure Communication Services by using Azure Event Grid to handle Communication Services SMS events.
+## Prerequisites
-## About Azure Event Grid
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A Communication Services resource. For detailed information, see [Create an Azure Communication Services resource](../create-communication-resource.md).
+- An SMS-enabled telephone number. [Get a phone number](../telephony/get-phone-number.md).
-[Azure Event Grid](../../../event-grid/overview.md) is a cloud-based eventing service. In this article, you'll learn how to subscribe to events for [communication service events](../../../event-grid/event-schema-communication-services.md), and trigger an event to view the result. Typically, you send events to an endpoint that processes the event data and takes actions. In this article, we'll send the events to a web app that collects and displays the messages.
+## About Event Grid
-## Prerequisites
-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- An Azure Communication Service resource. Further details can be found in the [Create an Azure Communication Services resource](../create-communication-resource.md) quickstart.-- An SMS enabled telephone number. [Get a phone number](../telephony/get-phone-number.md).
+[Event Grid](../../../event-grid/overview.md) is a cloud-based eventing service. In this article, you'll learn how to subscribe to [communication service events](../../../event-grid/event-schema-communication-services.md), and trigger an event to view the result. Typically, you send events to an endpoint that processes the event data and takes actions. In this article, we'll send the events to a web app that collects and displays the messages.
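The Event Grid viewer deployed later in this quickstart already implements such an endpoint for you. Purely as a rough sketch of what any custom webhook would need (the route, port, and project setup here are assumptions, not part of this quickstart), an ASP.NET Core minimal API endpoint must answer Event Grid's subscription validation handshake before it can receive SMS events:

```csharp
using System.Text.Json;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Endpoint that Event Grid posts batches of events to.
app.MapPost("/api/updates", async (HttpRequest request) =>
{
    using var reader = new StreamReader(request.Body);
    JsonElement events = JsonDocument.Parse(await reader.ReadToEndAsync()).RootElement;

    foreach (JsonElement gridEvent in events.EnumerateArray())
    {
        string? eventType = gridEvent.GetProperty("eventType").GetString();

        // Event Grid validates a new webhook subscription with a handshake event.
        if (eventType == "Microsoft.EventGrid.SubscriptionValidationEvent")
        {
            string? code = gridEvent.GetProperty("data").GetProperty("validationCode").GetString();
            return Results.Ok(new { validationResponse = code });
        }

        // SMS Received and SMS Delivery Report Received events land here.
        Console.WriteLine($"{eventType}: {gridEvent.GetProperty("data").GetRawText()}");
    }

    return Results.Ok();
});

app.Run();
```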
+
+## Set up the environment
+
+To set up the environment that we'll use to generate and receive events, take the steps in the following sections.
+
+### Register an Event Grid resource provider
-## Setting up
+If you haven't previously used Event Grid in your Azure subscription, you might need to register your Event Grid resource provider. To register the provider, follow these steps:
-### Enable Event Grid resource provider
+1. Go to the Azure portal.
+1. On the left menu, select **Subscriptions**.
+1. Select the subscription that you use for Event Grid.
+1. On the left menu, under **Settings**, select **Resource providers**.
+1. Find **Microsoft.EventGrid**.
+1. If your resource provider isn't registered, select **Register**.
-If you haven't previously used Event Grid in your Azure subscription, you may need to register the Event Grid resource provider following the steps below:
+It might take a moment for the registration to finish. Select **Refresh** to update the status. When **Registered** appears under **Status**, you're ready to continue.
-In the Azure portal:
+### Deploy the Event Grid viewer
-1. Select **Subscriptions** on the left menu.
-2. Select the subscription you're using for Event Grid.
-3. On the left menu, under **Settings**, select **Resource providers**.
-4. Find **Microsoft.EventGrid**.
-5. If not registered, select **Register**.
+For this quickstart, we'll use an Event Grid viewer to view events in near-real time. The viewer provides the user with the experience of a real-time feed. Also, the payload of each event should be available for inspection.
-It may take a moment for the registration to finish. Select **Refresh** to update the status. When **Status** is **Registered**, you're ready to continue.
+To set up the viewer, follow the steps in [Azure Event Grid Viewer](/samples/azure-samples/azure-event-grid-viewer/azure-event-grid-viewer/).
-### Event Grid Viewer deployment
+## Subscribe to SMS events by using web hooks
-For this quickstart, we will use the [Azure Event Grid Viewer Sample](/samples/azure-samples/azure-event-grid-viewer/azure-event-grid-viewer/) to view events in near-real time. This will provide the user with the experience of a real-time feed. In addition, the payload of each event should be available for inspection as well.
+You can subscribe to specific events to provide Event Grid with information about where to send the events that you want to track.
-## Subscribe to the SMS events using web hooks
+1. In the portal, go to the Communication Services resource that you created.
-In the portal, navigate to your Azure Communication Services Resource that you created. Inside the Communication Service resource, select **Events** from the left menu of the **Communication Services** page.
+1. Inside the Communication Services resource, on the left menu of the **Communication Services** page, select **Events**.
+1. Select **Add Event Subscription**.
-Press **Add Event Subscription** to enter the creation wizard.
+ :::image type="content" source="./media/handle-sms-events/select-events.png" alt-text="Screenshot that shows the Events page of an Azure Communication Services resource. The Event Subscription button is called out.":::
-On the **Create Event Subscription** page, Enter a **name** for the event subscription.
+1. On the **Create Event Subscription** page, enter a **name** for the event subscription.
-You can subscribe to specific events to tell Event Grid which of the SMS events you want to track, and where to send the events. Select the events you'd like to subscribe to from the dropdown menu. For SMS you'll have the option to choose `SMS Received` and `SMS Delivery Report Received`.
+1. Under **Event Types**, select the events that you'd like to subscribe to. For SMS, you can choose `SMS Received` and `SMS Delivery Report Received`.
-If you're prompted to provide a **System Topic Name**, feel free to provide a unique string. This field has no impact on your experience and is used for internal telemetry purposes.
+1. If you're prompted to provide a **System Topic Name**, feel free to provide a unique string. This field has no impact on your experience and is used for internal telemetry purposes.
-Check out the full list of [events supported by Azure Communication Services](../../../event-grid/event-schema-communication-services.md).
+ :::image type="content" source="./media/handle-sms-events/select-events-create-eventsub.png" alt-text="Screenshot that shows the Create Event Subscription dialog. Under Event Types, SMS Received and SMS Delivery Report Received are selected.":::
+1. For **Endpoint type**, select **Web Hook**.
-Select **Web Hook** for **Endpoint type**.
+ :::image type="content" source="./media/handle-sms-events/select-events-create-linkwebhook.png" alt-text="Screenshot that shows a detail of the Create Event Subscription dialog. In the Endpoint Type list, Web Hook is selected.":::
+1. For **Endpoint**, select **Select an endpoint**, and then enter the URL of your web app.
-For **Endpoint**, click **Select an endpoint**, and enter the URL of your web app.
+ In this case, we'll use the URL from the [Event Grid viewer](/samples/azure-samples/azure-event-grid-viewer/azure-event-grid-viewer/) that we set up earlier in the quickstart. The URL for the sample has this format: `https://{{site-name}}.azurewebsites.net/api/updates`
-In this case, we will use the URL from the [Azure Event Grid Viewer Sample](/samples/azure-samples/azure-event-grid-viewer/azure-event-grid-viewer/) we set up earlier in the quickstart. The URL for the sample will be in the format: `https://{{site-name}}.azurewebsites.net/api/updates`
+1. Select **Confirm Selection**.
-Then select **Confirm Selection**.
+ :::image type="content" source="./media/handle-sms-events/select-events-create-selectwebhook-epadd.png" alt-text="Screenshot that shows the Select Web Hook dialog. The Subscriber Endpoint box contains a U R L, and a Confirm Selection button is visible.":::
+## View SMS events
-## Viewing SMS events
+To generate and receive SMS events, take the steps in the following sections.
-### Triggering SMS events
+### Trigger SMS events
-To view event triggers, we must generate events in the first place.
+To view event triggers, we need to generate some events.
-- `SMS Received` events are generated when the Communication Services phone number receives a text message. To trigger an event, just send a message from your phone to the phone number attached to your Communication Services resource.-- `SMS Delivery Report Received` events are generated when you send an SMS to a user using a Communication Services phone number. To trigger an event, you are required to enable `Delivery Report` in the options of the [sent SMS](../sms/send.md). Try sending a message to your phone with `Delivery Report`. Completing this action incurs a small cost of a few USD cents or less in your Azure account.
+- `SMS Received` events are generated when the Communication Services phone number receives a text message. To trigger an event, send a message from your phone to the phone number that's attached to your Communication Services resource.
+- `SMS Delivery Report Received` events are generated when you send an SMS to a user by using a Communication Services phone number. To trigger an event, you need to turn on the `Delivery Report` option of the [SMS that you send](../sms/send.md), as shown in the sketch after this list. Try sending a message to your phone with `Delivery Report` turned on. Completing this action incurs a small cost of a few USD cents or less in your Azure account.
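For reference, here's a minimal sketch of sending that test message with the `Azure.Communication.Sms` client library and delivery reporting turned on; the connection string and phone numbers are placeholders.

```csharp
using System;
using Azure.Communication.Sms;

// Placeholder values; use your Communication Services connection string and phone numbers.
SmsClient smsClient = new SmsClient("<connection-string>");

SmsSendResult sendResult = smsClient.Send(
    from: "+18005551234",   // your SMS-enabled Communication Services phone number
    to: "+14255550123",     // your own phone number
    message: "Delivery report test",
    options: new SmsSendOptions(enableDeliveryReport: true)); // required to trigger the delivery report event

Console.WriteLine($"MessageId: {sendResult.MessageId}");
```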
-Check out the full list of [events supported by Azure Communication Services](../../../event-grid/event-schema-communication-services.md).
+Check out the full list of [events that Communication Services supports](../../../event-grid/event-schema-communication-services.md).
-### Receiving SMS events
+### Receive SMS events
-Once you complete either action above you will notice that `SMS Received` and `SMS Delivery Report Received` events are sent to your endpoint. These events will show up in the [Azure Event Grid Viewer Sample](/samples/azure-samples/azure-event-grid-viewer/azure-event-grid-viewer/) we set up at the beginning. You can press the eye icon next to the event to see the entire payload. Events will look like this:
+After you generate an event, you'll notice that `SMS Received` and `SMS Delivery Report Received` events are sent to your endpoint. These events show up in the [Event Grid viewer](/samples/azure-samples/azure-event-grid-viewer/azure-event-grid-viewer/) that we set up at the beginning of this quickstart. Select the eye icon next to the event to see the entire payload. Events should look similar to the following data:
Learn more about the [event schemas and other eventing concepts](../../../event-grid/event-schema-communication-services.md).
In this quickstart, you learned how to consume SMS events. You can receive SMS m
> [!div class="nextstepaction"] > [Send SMS](../sms/send.md)
-You may also want to:
+You might also want to:
- [Learn about event handling concepts](../../../event-grid/event-schema-communication-services.md) - [Learn about Event Grid](../../../event-grid/overview.md)
communication-services Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/sms/send.md
Title: Quickstart - Send an SMS message-
-description: Learn how to send an SMS message using Azure Communication Services.
+
+description: "In this quickstart, you'll learn how to send an SMS message by using Azure Communication Services. See code examples in C#, JavaScript, Java, and Python."
Previously updated : 06/30/2021 Last updated : 05/25/2022
+ - tracking-python
+ - devx-track-js
+ - mode-other
+ - kr2b-contr-experiment
zone_pivot_groups: acs-js-csharp-java-python

# Quickstart: Send an SMS message
zone_pivot_groups: acs-js-csharp-java-python
[!INCLUDE [Regional Availability Notice](../../includes/regional-availability-include.md)] > [!IMPORTANT]
-> SMS messages can be sent to and received from United States phone numbers. Phone numbers located in other geographies are not yet supported by Communication Services SMS.
-> For more information, see **[Phone number types](../../concepts/telephony/plan-solution.md)**.
+> SMS messages can be sent to and received from United States phone numbers. Phone numbers that are located in other geographies are not yet supported by Azure Communication Services SMS.
+> For more information, see [Phone number types](../../concepts/telephony/plan-solution.md).
<br/> <br/> >[!VIDEO https://www.youtube.com/embed/YEyxSZqzF4o]
If you want to clean up and remove a Communication Services subscription, you ca
## Next steps
-In this quickstart, you learned how to send SMS messages using Azure Communication Services.
+In this quickstart, you learned how to send SMS messages by using Communication Services.
> [!div class="nextstepaction"] > [Receive SMS and Delivery Report Events](./handle-sms-events.md)
communication-services File Sharing Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/file-sharing-tutorial.md
# Enable file sharing using UI Library and Azure Blob Storage

In this tutorial, we'll be configuring the Azure Communication Services UI Library Chat Composite to enable file sharing. The UI Library Chat Composite provides a set of rich components and UI controls that can be used to enable file sharing. We will be leveraging Azure Blob Storage to enable the storage of the files that are shared through the chat thread.

>[!IMPORTANT]
container-apps Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/networking.md
# Networking architecture in Azure Container Apps
-Azure Container Apps run in the context of an [environment](environment.md), which is supported by a virtual network (VNET). When you create an environment, you can provide a custom VNET, otherwise a VNET is automatically generated for you. Generated VNETs are inaccessible to you as they're created in Microsoft's tenent. To take full control over your VNET, provide an existing VNET to Container Apps as you create your environment.
+Azure Container Apps run in the context of an [environment](environment.md), which is supported by a virtual network (VNET). When you create an environment, you can provide a custom VNET; otherwise, a VNET is automatically generated for you. Generated VNETs are inaccessible to you because they're created in Microsoft's tenant. To take full control over your VNET, provide an existing VNET to Container Apps as you create your environment.
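As a hedged illustration of what providing your own VNET can look like, the fragment below shows a `vnetConfiguration` section on a Container Apps environment pointing at an existing subnet. The property names are assumptions based on the managed environment ARM schema and may differ; the subnet ID is a placeholder:

```json
{
  "properties": {
    "vnetConfiguration": {
      "internal": false,
      "infrastructureSubnetId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/virtualNetworks/<vnet-name>/subnets/<subnet-name>"
    }
  }
}
```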
The following articles feature step-by-step instructions for creating Container Apps environments with different accessibility levels.
Once you're satisfied with the latest revision, you can lock traffic to that rev
#### Update existing revision
-Consider a situation where you have a known good revision that's serving 100% of your traffic, but you want to issue and update to your app. You can deploy and test new revisions using their direct endpoints without affecting the main revision serving the app.
+Consider a situation where you have a known good revision that's serving 100% of your traffic, but you want to issue an update to your app. You can deploy and test new revisions using their direct endpoints without affecting the main revision serving the app.
Once you're satisfied with the updated revision, you can shift a portion of traffic to the new revision for testing and verification.
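As a rough, hedged sketch of what that split can look like, the fragment below shows an ingress `traffic` section weighting requests between a known-good revision and an updated one. The property names follow the Container Apps ARM schema as we understand it, and the revision names are hypothetical:

```json
{
  "configuration": {
    "ingress": {
      "external": true,
      "targetPort": 80,
      "traffic": [
        { "revisionName": "my-app--known-good", "weight": 80 },
        { "revisionName": "my-app--updated", "weight": 20 }
      ]
    }
  }
}
```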
container-apps Storage Mounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/storage-mounts.md
A container app has access to different types of storage. A single app can take
| [Temporary storage](#temporary-storage) | Temporary storage scoped to an individual replica | Sharing files between containers in a replica. For instance, the main app container can write log files that are processed by a sidecar container. |
| [Azure Files](#azure-files) | Permanent storage | Writing files to a file share to make data accessible by other systems. |
+> [!NOTE]
+> The volume mounting features in Azure Container Apps are in preview.
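To make the Azure Files option above more concrete, here's a hedged sketch of how a file share might be mounted in a container app template. The property names are our best reading of the Container Apps schema while the feature is in preview, and the image, storage, volume, and path names are hypothetical:

```json
{
  "template": {
    "containers": [
      {
        "name": "main-app",
        "image": "myregistry.azurecr.io/my-app:latest",
        "volumeMounts": [
          { "volumeName": "azure-files-volume", "mountPath": "/app/data" }
        ]
      }
    ],
    "volumes": [
      {
        "name": "azure-files-volume",
        "storageType": "AzureFile",
        "storageName": "my-environment-storage"
      }
    ]
  }
}
```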
+ ## Container file system A container can write to its own file system.
container-instances Container Instances Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-troubleshooting.md
This error indicates that due to heavy load in the region in which you are attem
## Issues during container group runtime ### Container had an isolated restart without explicit user input
-There are two broad categories for why a container group may restart without explicit user input. First, containers may experience restarts caused by an application process crash. The ACI service recommends leveraging observability solutions such as Application Insights SDK, container group metrics, and container group logs to determine why the application experienced issues. Second, customers may experience restarts initiated by the ACI infrastructure due to maintenance events. To increase the availability of your application, run multiple container groups behind an ingress component such as an Application Gateway or Traffic Manager.
+There are two broad reasons why a container group may restart without explicit user input. First, containers may restart because an application process crashed. The ACI service recommends using observability solutions such as the [Application Insights SDK](../azure-monitor/app/app-insights-overview.md), [container group metrics](container-instances-monitor.md), and [container group logs](container-instances-get-logs.md) to determine why the application experienced issues. Second, customers may experience restarts initiated by the ACI infrastructure due to maintenance events. To increase the availability of your application, run multiple container groups behind an ingress component such as an [Application Gateway](../application-gateway/overview.md) or [Traffic Manager](../traffic-manager/traffic-manager-overview.md).
### Container continually exits and restarts (no long-running process)
cost-management-billing Pay Bill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/pay-bill.md
On 1 October 2021, automatic payments in India may block some credit card transa
[Learn more about the Reserve Bank of India regulation for recurring payments](https://www.rbi.org.in/Scripts/NotificationUser.aspx?Id=11668&Mode=0)
+On 1 July 2022, Microsoft and other online merchants will no longer store credit card information. To comply with this regulation, Microsoft will remove all stored card details from Microsoft Azure. To avoid service interruption, you'll need to add a payment method and make a one-time payment for all invoices.
+
+[Learn about the Reserve Bank of India regulation for card storage](https://www.rbi.org.in/Scripts/NotificationUser.aspx?Id=12211)
## Pay by default payment method The default payment method of your billing profile can be a credit card, debit card, or check wire transfer.
data-factory Connector Asana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-asana.md
+
+ Title: Transform data in Asana (Preview)
+
+description: Learn how to transform data in Asana (Preview) by using Data Factory or Azure Synapse Analytics.
++++++ Last updated : 05/20/2022++
+# Transform data in Asana (Preview) using Azure Data Factory or Synapse Analytics
++
+This article outlines how to use Data Flow to transform data in Asana (Preview). To learn more, read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md).
+
+> [!IMPORTANT]
+> This connector is currently in preview. You can try it out and give us feedback. If you want to take a dependency on preview connectors in your solution, please contact [Azure support](https://azure.microsoft.com/support/).
+
+## Supported capabilities
+
+This Asana connector is supported for the following activities:
+
+- [Mapping data flow](concepts-data-flow-overview.md)
+
+## Create an Asana linked service using UI
+
+Use the following steps to create an Asana linked service in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then select New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory U I.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse U I.":::
+
+2. Search for Asana (Preview) and select the Asana (Preview) connector.
+
+ :::image type="content" source="media/connector-asana/asana-connector.png" alt-text="Screenshot showing selecting Asana connector.":::
+
+3. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-asana/configure-asana-linked-service.png" alt-text="Screenshot of configuration for Asana linked service.":::
+
+## Connector configuration details
+
+The following sections provide information about properties that are used to define Data Factory and Synapse pipeline entities specific to Asana.
+
+## Linked service properties
+
+The following properties are supported for the Asana linked service:
+
+| Property | Description | Required |
+|: |: |: |
+| type | The type property must be set to **Asana**. |Yes |
+| apiToken | Specify an API token for Asana. Mark this field as **SecureString** to store it securely. Or, you can [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). |Yes |
+
+**Example:**
+
+```json
+{
+ "name": "AsanaLinkedService",
+ "properties": {
+ "type": "Asana",
+ "typeProperties": {
+ "apiToken": {
+ "type": "SecureString",
+ "value": "<API token>"
+ }
+ }
+ }
+}
+```
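If you'd rather not keep the token inline, the same linked service can reference a secret in Azure Key Vault, as mentioned in the table above. The sketch below assumes a Key Vault linked service named `AzureKeyVaultLinkedService` and a secret named `asana-api-token`; both names are hypothetical, and the reference pattern shown follows the Data Factory Key Vault secret reference described in the linked article:

```json
{
    "name": "AsanaLinkedService",
    "properties": {
        "type": "Asana",
        "typeProperties": {
            "apiToken": {
                "type": "AzureKeyVaultSecret",
                "store": {
                    "referenceName": "AzureKeyVaultLinkedService",
                    "type": "LinkedServiceReference"
                },
                "secretName": "asana-api-token"
            }
        }
    }
}
```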
+
+## Mapping data flow properties
+
+When transforming data in mapping data flow, you can read tables from Asana. For more information, see the [source transformation](data-flow-source.md) in mapping data flows. You can only use an [inline dataset](data-flow-source.md#inline-datasets) as source type.
+
+### Source transformation
+
+The following table lists the properties supported by the Asana source. You can edit these properties in the **Source options** tab.
+
+| Name | Description | Required | Allowed values | Data flow script property |
+| - | -- | -- | -- | - |
+| Workspace | The ID of the workspace in Asana. | Yes | String | workspaceId |
+| Entity | The ID of the entity in Asana.| Yes | String | entityId |
+| Entity Type | The type of the entity in Asana. | Yes | `teams`<br>`portfolios`<br>`projects` | entityType |
++
+#### Asana source script examples
+
+When you use Asana as the source type, the associated data flow script is:
+
+```
+source(allowSchemaDrift: true,
+ validateSchema: false,
+ store: 'asana',
+ format: 'rest',
+ workspaceId: '9876543210',
+ entityId: '1234567890',
+ entityType: 'teams') ~> AsanaSource
+```
+
+## Next steps
+
+For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Continuous Integration Delivery Improvements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-improvements.md
Follow these steps to get started:
inputs: command: 'custom' workingDir: '$(Build.Repository.LocalPath)/<folder-of-the-package.json-file>' #replace with the package.json folder
- customCommand: 'run build validate $(Build.Repository.LocalPath)/<Root-folder-from-Git-configuration-settings-in-ADF> /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testResourceGroup/providers/Microsoft.DataFactory/factories/<Your-Factory-Name>'
+ customCommand: 'run build validate $(Build.Repository.LocalPath)/<Root-folder-from-Git-configuration-settings-in-ADF> /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/<Your-ResourceGroup-Name>/providers/Microsoft.DataFactory/factories/<Your-Factory-Name>'
displayName: 'Validate' # Validate and then generate the ARM template into the destination folder, which is the same as selecting "Publish" from the UX.
Follow these steps to get started:
inputs: command: 'custom' workingDir: '$(Build.Repository.LocalPath)/<folder-of-the-package.json-file>' #replace with the package.json folder
- customCommand: 'run build export $(Build.Repository.LocalPath)/<Root-folder-from-Git-configuration-settings-in-ADF> /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testResourceGroup/providers/Microsoft.DataFactory/factories/<Your-Factory-Name> "ArmTemplate"'
+ customCommand: 'run build export $(Build.Repository.LocalPath)/<Root-folder-from-Git-configuration-settings-in-ADF> /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/<Your-ResourceGroup-Name>/providers/Microsoft.DataFactory/factories/<Your-Factory-Name> "ArmTemplate"'
displayName: 'Validate and Generate ARM template' # Publish the artifact to be used as a source for a release pipeline.
databox-online Azure Stack Edge Gpu Deploy Set Up Device Update Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-set-up-device-update-time.md
Previously updated : 02/15/2022 Last updated : 05/24/2022
-zone_pivot_groups: azure-stack-edge-device-deployment
# Customer intent: As an IT admin, I need to understand how to connect and activate Azure Stack Edge Pro so I can use it to transfer data to Azure. # Tutorial: Configure the device settings for Azure Stack Edge Pro GPU -
-This tutorial describes how to configure device related settings for your 1-node Azure Stack Edge Pro GPU device. You can set up your device name, update server, and time server via the local web UI.
+This tutorial describes how to configure device-related settings for your Azure Stack Edge Pro GPU device. You can set up your device name, update server, and time server via the local web UI.
The device settings can take around 5-7 minutes to complete. --
-This tutorial describes how to configure device related settings for your 2-node Azure Stack Edge Pro GPU device. You can set up your device name, update server, and time server via the local web UI.
-
-The device settings can take around 5-7 minutes to complete.
-- In this tutorial, you learn about: > [!div class="checklist"]
Follow these steps to configure device related settings:
![Local web UI "Device" page 3](./media/azure-stack-edge-gpu-deploy-set-up-device-update-time/device-4.png) -
-Repeat all the above steps for the second node of your device. Make sure that the same DNS domain is used for both the nodes.
--- ## Configure update 1. On the **Update** page, you can now configure the location from where to download the updates for your device.
Repeat all the above steps for the second node of your device. Make sure that th
1. Select **Apply**. 1. After the update server is configured, select **Next: Time**. -
-Repeat all the above steps for the second node of your device. Make sure that the same update server is used for both the nodes.
-
-
- ## Configure time Follow these steps to configure time settings on your device.
NTP servers are required because your device must synchronize time so that it ca
1. After the settings are applied, select **Next: Certificates**. -
-Repeat all the above steps for the second node of your device. Make sure that the same NTP server is used for both the nodes.
-- ## Next steps In this tutorial, you learn about:
defender-for-cloud Defender For Cloud Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-introduction.md
Previously updated : 05/11/2022 Last updated : 05/19/2022 # What is Microsoft Defender for Cloud?
Use the advanced protection tiles in the [workload protections dashboard](worklo
> [!TIP] > Microsoft Defender for IoT is a separate product. You'll find all the details in [Introducing Microsoft Defender for IoT](../defender-for-iot/overview.md).
+## Learn more
+
+If you would like to learn more about Defender for Cloud from a cybersecurity expert, check out [Lessons Learned from the Field](episode-six.md).
+
+You can also check out the following blogs:
+
+- [A new name for multi-cloud security: Microsoft Defender for Cloud](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/a-new-name-for-multi-cloud-security-microsoft-defender-for-cloud/ba-p/2943020)
+- [Microsoft Defender for Cloud - Use cases](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/microsoft-defender-for-cloud-use-cases/ba-p/2953619)
+- [Microsoft Defender for Cloud PoC Series - Microsoft Defender for Containers](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/microsoft-defender-for-cloud-poc-series-microsoft-defender-for/ba-p/3064644)
+ ## Next steps - To get started with Defender for Cloud, you need a subscription to Microsoft Azure. If you don't have a subscription, [sign up for a free trial](https://azure.microsoft.com/free/).
defender-for-cloud Defender For Containers Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-enable.md
Learn about this plan in [Overview of Microsoft Defender for Containers](defende
::: zone pivot="defender-for-container-arc,defender-for-container-eks,defender-for-container-gke" > [!NOTE] > Defender for Containers' support for Arc-enabled Kubernetes clusters, AWS EKS, and GCP GKE is a preview feature.
->
+>
> [!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)] ::: zone-end
A full list of supported alerts is available in the [reference table of all Defe
1. In the Azure portal, open Microsoft Defender for Cloud's security alerts page and look for the alert on the relevant resource: :::image type="content" source="media/defender-for-kubernetes-azure-arc/sample-kubernetes-security-alert.png" alt-text="Sample alert from Microsoft Defender for Kubernetes." lightbox="media/defender-for-kubernetes-azure-arc/sample-kubernetes-security-alert.png":::
-
+ ::: zone pivot="defender-for-container-arc,defender-for-container-eks,defender-for-container-gke" [!INCLUDE [Remove the extension](./includes/defender-for-containers-remove-extension.md)] ::: zone-end
A full list of supported alerts is available in the [reference table of all Defe
[!INCLUDE [FAQ](./includes/defender-for-containers-override-faq.md)] ::: zone-end
+## Learn more
+
+Learn more from the product manager about [Microsoft Defender for Containers in a multi-cloud environment](episode-nine.md).
+You can also learn how to [Protect Containers in GCP with Defender for Containers](episode-ten.md).
+
+You can also check out the following blogs:
+
+- [Protect your Google Cloud workloads with Microsoft Defender for Cloud](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/protect-your-google-cloud-workloads-with-microsoft-defender-for/ba-p/3073360)
+- [Introducing Microsoft Defender for Containers](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/introducing-microsoft-defender-for-containers/ba-p/2952317)
+- [A new name for multi-cloud security: Microsoft Defender for Cloud](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/a-new-name-for-multi-cloud-security-microsoft-defender-for-cloud/ba-p/2943020)
+
## Next steps
-[Use Defender for Containers to scan your ACR images for vulnerabilities](defender-for-container-registries-usage.md).
+[Use Defender for Containers to scan your ACR images for vulnerabilities](defender-for-container-registries-usage.md).
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
Title: Container security with Microsoft Defender for Cloud description: Learn about Microsoft Defender for Containers Previously updated : 05/15/2022 Last updated : 05/25/2022 # Overview of Microsoft Defender for Containers
On this page, you'll learn how you can use Defender for Containers to improve, m
Defender for Containers helps with the core aspects of container security: -- **Environment hardening** - Defender for Containers protects your Kubernetes clusters whether they're running on Azure Kubernetes Service, Kubernetes on-prem / IaaS, or Amazon EKS. By continuously assessing clusters, Defender for Containers provides visibility into misconfigurations and guidelines to help mitigate identified threats. Learn more in [Hardening](#hardening).
+- **Environment hardening** - Defender for Containers protects your Kubernetes clusters whether they're running on Azure Kubernetes Service, Kubernetes on-premises / IaaS, or Amazon EKS. By continuously assessing clusters, Defender for Containers provides visibility into misconfigurations and guidelines to help mitigate identified threats. Learn more in [Hardening](#hardening).
- **Vulnerability assessment** - Vulnerability assessment and management tools for images **stored** in ACR registries and **running** in Azure Kubernetes Service. Learn more in [Vulnerability assessment](#vulnerability-assessment).
The following describes the components necessary in order to receive the full pr
## FAQ - Defender for Containers - [What are the options to enable the new plan at scale?](#what-are-the-options-to-enable-the-new-plan-at-scale)-- [Does Microsoft Defender for Containers support AKS clusters with virtual machines scale set (VMSS)?](#does-microsoft-defender-for-containers-support-aks-clusters-with-virtual-machines-scale-set-vmss)
+- [Does Microsoft Defender for Containers support AKS clusters with virtual machines scale set?](#does-microsoft-defender-for-containers-support-aks-clusters-with-virtual-machines-scale-set)
- [Does Microsoft Defender for Containers support AKS without scale set (default)?](#does-microsoft-defender-for-containers-support-aks-without-scale-set-default) - [Do I need to install the Log Analytics VM extension on my AKS nodes for security protection?](#do-i-need-to-install-the-log-analytics-vm-extension-on-my-aks-nodes-for-security-protection) ### What are the options to enable the new plan at scale? WeΓÇÖve rolled out a new policy in Azure Policy, **Configure Microsoft Defender for Containers to be enabled**, to make it easier to enable the new plan at scale.
-### Does Microsoft Defender for Containers support AKS clusters with virtual machines scale set (VMSS)?
-Yes.
+### Does Microsoft Defender for Containers support AKS clusters with virtual machines scale set?
+Yes
### Does Microsoft Defender for Containers support AKS without scale set (default)? No. Only Azure Kubernetes Service (AKS) clusters that use virtual machine scale sets for the nodes are supported.
No. Only Azure Kubernetes Service (AKS) clusters that use virtual machine scale
### Do I need to install the Log Analytics VM extension on my AKS nodes for security protection? No, AKS is a managed service, and manipulation of the IaaS resources isn't supported. The Log Analytics VM extension is not needed and may result in additional charges.
+## Learn more
+
+If you would like to learn more from the product manager about Microsoft Defender for Containers, check out [Microsoft Defender for Containers](episode-three.md).
+
+You can also check out the following blogs:
+
+- [How to demonstrate the new containers features in Microsoft Defender for Cloud](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/how-to-demonstrate-the-new-containers-features-in-microsoft/ba-p/3281172)
+- [Introducing Microsoft Defender for Containers](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/introducing-microsoft-defender-for-containers/ba-p/2952317)
+ ## Next steps In this overview, you learned about the core elements of container security in Microsoft Defender for Cloud. To enable the plan, see:
defender-for-cloud Defender For Servers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-servers-introduction.md
Title: Microsoft Defender for Servers - the benefits and features description: Learn about the benefits and features of Microsoft Defender for Servers. Previously updated : 03/28/2022 Last updated : 05/11/2022 -- # Introduction to Microsoft Defender for Servers
You can simulate alerts by downloading one of the following playbooks:
- For Linux: [Microsoft Defender for Cloud Playbook: Linux Detections](https://github.com/Azure/Azure-Security-Center/blob/master/Simulations/Azure%20Security%20Center%20Linux%20Detections_v2.pdf).
+## Learn more
+If you would like to learn more from the product manager about Defender for Servers, check out [Microsoft Defender for Servers](episode-five.md). You can also learn about the [Enhanced workload protection features in Defender for Servers](episode-twelve.md).
+You can also check out the following blogs:
+
+- [Security posture management and server protection for AWS and GCP are now generally available](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/security-posture-management-and-server-protection-for-aws-and/ba-p/3271388)
+
+- [Microsoft Defender for Cloud Server Monitoring Dashboard](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/microsoft-defender-for-cloud-server-monitoring-dashboard/ba-p/2869658)
## Next steps
defender-for-cloud Deploy Vulnerability Assessment Tvm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-tvm.md
Title: Use Microsoft Defender for Endpoint's threat and vulnerability management capabilities with Microsoft Defender for Cloud description: Enable, deploy, and use Microsoft Defender for Endpoint's threat and vulnerability management capabilities with Microsoft Defender for Cloud to discover weaknesses in your Azure and hybrid machines -- Previously updated : 03/23/2022 Last updated : 05/11/2022 # Investigate weaknesses with Microsoft Defender for Endpoint's threat and vulnerability management
For a quick overview of threat and vulnerability management, watch this video:
|Required roles and permissions:|[Owner](../role-based-access-control/built-in-roles.md#owner) (resource group level) can deploy the scanner<br>[Security Reader](../role-based-access-control/built-in-roles.md#security-reader) can view findings|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)|
--
## Onboarding your machines to threat and vulnerability management
The integration between Microsoft Defender for Endpoint and Microsoft Defender for Cloud takes place in the background, so it doesn't involve any changes at the endpoint level.
The integration between Microsoft Defender for Endpoint and Microsoft Defender f
The findings for **all** vulnerability assessment tools are in the Defender for Cloud recommendation **Vulnerabilities in your virtual machines should be remediated**. Learn about how to [view and remediate findings from vulnerability assessment solutions on your VMs](remediate-vulnerability-findings-vm.md)
+## Learn more
+
+If you would like to learn more from the product manager about security posture, check out [Microsoft Defender for Servers](episode-five.md).
+
+You can also check out the following blogs:
+
+- [Security posture management and server protection for AWS and GCP are now generally available](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/security-posture-management-and-server-protection-for-aws-and/ba-p/3271388)
+- [Microsoft Defender for Cloud Server Monitoring Dashboard](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/microsoft-defender-for-cloud-server-monitoring-dashboard/ba-p/2869658)
## Next steps > [!div class="nextstepaction"]
defender-for-cloud Episode Eight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-eight.md
+
+ Title: Microsoft Defender for IoT
+description: Learn how Defender for IoT discovers devices to monitor and how it fits in the Microsoft Security portfolio.
+ Last updated : 05/25/2022++
+# Microsoft Defender for IoT
+
+**Episode description**: In this episode of Defender for Cloud in the Field, Dolev Zemer joins Yuri Diogenes to talk about how Defender for IoT works. Dolev explains the difference between OT Security and IT Security and how Defender for IoT fills this gap. Dolev also demonstrates how Defender for IoT discovers devices to monitor and how it fits in the Microsoft Security portfolio.
+
+<br>
+<br>
+<iframe src="https://aka.ms/docs/player?id=05fdecf5-f6a1-4162-b95d-1e34478d1d60" width="1080" height="530" style="max-width: 100%; min-width: 100%;"></iframe>
+
+- [1:20](/shows/mdc-in-the-field/defender-for-iot#time=01m20s) - Overview of the Defender for IoT solution
+
+- [2:15](/shows/mdc-in-the-field/defender-for-iot#time=02m15s) - Difference between OT and IoT
+
+- [3:30](/shows/mdc-in-the-field/defender-for-iot#time=03m30s) - Prerequisites to use Defender for IoT
+
+- [4:30](/shows/mdc-in-the-field/defender-for-iot#time=04m30s) - Security posture and threat detection
+
+- [5:17](/shows/mdc-in-the-field/defender-for-iot#time=05m17s) - Automating alert response
+
+- [6:15](/shows/mdc-in-the-field/defender-for-iot#time=06m15s) - Integration with Microsoft Sentinel
+
+- [6:50](/shows/mdc-in-the-field/defender-for-iot#time=06m50s) - Architecture
+
+- [8:40](/shows/mdc-in-the-field/defender-for-iot#time=08m40s) - Demonstration
+
+## Recommended resources
+
+Learn more about [Defender for IoT](../defender-for-iot/index.yml).
+
+- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqa0ZoTml2Qm9kZ2pjRzNMUXFqVUwyNl80YVNtd3xBQ3Jtc0trVm9QM2Z0NlpOeC1KSUE2UEd1cVJ5aHQ0MTN6WjJEYmNlOG9rWC1KZ1ZqaTNmcHdOOHMtWXRLSGhUTVBhQlhhYzlUc2xmTHZtaUpkd1c4LUQzLWt1YmRTbkVQVE5EcTJIM0Foc042SGdQZU5acVRJbw&q=https%3A%2F%2Faka.ms%2FSubscribeMicrosoftSecurity)
+
+- Follow us on social media:
+ [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
+ [Twitter](https://twitter.com/msftsecurity)
+
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+
+- For more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Microsoft Defender for Containers in a Multi-Cloud Environment](episode-nine.md)
defender-for-cloud Episode Eleven https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-eleven.md
+
+ Title: Threat landscape for Defender for Containers
+description: Learn about the new detections that are available for different attacks and how Defender for Containers can help to quickly identify malicious activities in containers.
+ Last updated : 05/25/2022++
+# Threat landscape for Defender for Containers
+
+**Episode description**: In this episode of Defender for Cloud in the Field, Yossi Weizman joins Yuri Diogenes to talk about the evolution of the threat matrix for Containers and how attacks against Kubernetes have evolved. Yossi also demonstrates new detections that are available for different attacks and how Defender for Containers can help to quickly identify malicious activities in containers.
+
+<br>
+<br>
+<iframe src="https://aka.ms/docs/player?id=646c2b9a-3f15-4705-af23-7802bd9549c5" width="1080" height="530" style="max-width: 100%; min-width: 100%;"></iframe>
+
+- [01:15](/shows/mdc-in-the-field/threat-landscape-containers#time=01m15s) - The evolution of attacks against Kubernetes
+
+- [02:50](/shows/mdc-in-the-field/threat-landscape-containers#time=02m50s) - Identity related attacks against Kubernetes
+
+- [04:00](/shows/mdc-in-the-field/threat-landscape-containers#time=04m00s) - Threat detection beyond audit logs
+
+- [05:48](/shows/mdc-in-the-field/threat-landscape-containers#time=5m48s) - Demonstration
+
+## Recommended resources
+
+Learn how to [detect identity attacks in Kubernetes](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/detecting-identity-attacks-in-kubernetes/ba-p/3232340).
+
+- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqa0ZoTml2Qm9kZ2pjRzNMUXFqVUwyNl80YVNtd3xBQ3Jtc0trVm9QM2Z0NlpOeC1KSUE2UEd1cVJ5aHQ0MTN6WjJEYmNlOG9rWC1KZ1ZqaTNmcHdOOHMtWXRLSGhUTVBhQlhhYzlUc2xmTHZtaUpkd1c4LUQzLWt1YmRTbkVQVE5EcTJIM0Foc042SGdQZU5acVRJbw&q=https%3A%2F%2Faka.ms%2FSubscribeMicrosoftSecurity)
+
+- Follow us on social media:
+ [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
+ [Twitter](https://twitter.com/msftsecurity)
+
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+
+- For more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Enhanced workload protection features in Defender for Servers](episode-twelve.md)
defender-for-cloud Episode Five https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-five.md
+
+ Title: Microsoft Defender for Servers
+description: Learn all about Microsoft Defender for Servers from the product manager.
+ Last updated : 05/25/2022++
+# Microsoft Defender for Servers
+
+**Episode description**: In this episode of Defender for Cloud in the field, Aviv Mor joins Yuri Diogenes to talk about Microsoft Defender for Servers updates, including the new integration with TVM. Aviv explains how this new integration with TVM works and its advantages, which include software inventory and an easy onboarding experience. Aviv also covers the integration with MDE for Linux and the Defender for Servers support for the new multi-cloud connector for AWS.
+
+<br>
+<br>
+<iframe src="https://aka.ms/docs/player?id=f62e1199-d0a8-4801-9793-5318fde27497" width="1080" height="530" style="max-width: 100%; min-width: 100%;"></iframe>
+
+- [1:22](/shows/mdc-in-the-field/defender-for-containers#time=01m22s) - Overview of the announcements for Microsoft Defender for Servers
+
+- [5:50](/shows/mdc-in-the-field/defender-for-containers#time=05m50s) - Migration path from Qualys VA to TVM
+
+- [7:12](/shows/mdc-in-the-field/defender-for-containers#time=07m12s) - TVM capabilities in Defender for Servers
+
+- [8:38](/shows/mdc-in-the-field/defender-for-containers#time=08m38s) - Threat detections for Defender for Servers
+
+- [9:52](/shows/mdc-in-the-field/defender-for-containers#time=09m52s) - Defender for Servers in AWS
+
+- [12:23](/shows/mdc-in-the-field/defender-for-containers#time=12m23s) - Onboard process for TVM in an on-premises scenario
+
+- [13:20](/shows/mdc-in-the-field/defender-for-containers#time=13m20s) - Demonstration
+
+## Recommended resources
+
+Learn how to [Investigate weaknesses with Microsoft Defender for Endpoint's threat and vulnerability management](deploy-vulnerability-assessment-tvm.md).
+
+- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqa0ZoTml2Qm9kZ2pjRzNMUXFqVUwyNl80YVNtd3xBQ3Jtc0trVm9QM2Z0NlpOeC1KSUE2UEd1cVJ5aHQ0MTN6WjJEYmNlOG9rWC1KZ1ZqaTNmcHdOOHMtWXRLSGhUTVBhQlhhYzlUc2xmTHZtaUpkd1c4LUQzLWt1YmRTbkVQVE5EcTJIM0Foc042SGdQZU5acVRJbw&q=https%3A%2F%2Faka.ms%2FSubscribeMicrosoftSecurity)
+
+- Follow us on social media:
+ [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
+ [Twitter](https://twitter.com/msftsecurity)
+
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+
+- For more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Lessons Learned from the Field](episode-six.md)
defender-for-cloud Episode Four https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-four.md
+
+ Title: Security posture management improvements in Microsoft Defender for Cloud
+description: Learn how to manage your security posture with Microsoft Defender for Cloud.
+ Last updated : 05/25/2022++
+# Security posture management improvements in Microsoft Defender for Cloud
+
+**Episode description**: In this episode of Defender for Cloud in the field, Lior Arviv joins Yuri Diogenes to talk about the cloud security posture management improvements in Microsoft Defender for Cloud. Lior explains the MITRE ATT&CK Framework integration with recommendations, the overall improvements to recommendations, and the other fields added to the API. Lior also demonstrates the different ways to access the MITRE ATT&CK integration via filters and recommendations.
+
+<br>
+<br>
+<iframe src="https://aka.ms/docs/player?id=845108fd-e57d-40e0-808a-1239e78a7390" width="1080" height="530" style="max-width: 100%; min-width: 100%;"></iframe>
+
+- [1:24](/shows/mdc-in-the-field/defender-for-containers#time=01m24s) - Security recommendation refresh time changes
+
+- [3:50](/shows/mdc-in-the-field/defender-for-containers#time=03m50s) - MITRE ATT&CK Framework mapping to recommendations
+
+- [6:14](/shows/mdc-in-the-field/defender-for-containers#time=06m14s) - Demonstration
+
+- [14:44](/shows/mdc-in-the-field/defender-for-containers#time=14m44s) - Secure Score API updates
+
+- [18:54](/shows/mdc-in-the-field/defender-for-containers#time=18m54s) - What's coming next
+
+## Recommended resources
+
+Learn how to [Review your security recommendations](review-security-recommendations.md).
+
+- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqa0ZoTml2Qm9kZ2pjRzNMUXFqVUwyNl80YVNtd3xBQ3Jtc0trVm9QM2Z0NlpOeC1KSUE2UEd1cVJ5aHQ0MTN6WjJEYmNlOG9rWC1KZ1ZqaTNmcHdOOHMtWXRLSGhUTVBhQlhhYzlUc2xmTHZtaUpkd1c4LUQzLWt1YmRTbkVQVE5EcTJIM0Foc042SGdQZU5acVRJbw&q=https%3A%2F%2Faka.ms%2FSubscribeMicrosoftSecurity)
+
+- Follow us on social media:
+ [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
+ [Twitter](https://twitter.com/msftsecurity)
+
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+
+- For more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Microsoft Defender for Servers](episode-five.md)
defender-for-cloud Episode Nine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-nine.md
+
+ Title: Microsoft Defender for Containers in a multi-cloud environment
+description: Learn about Microsoft Defender for Containers implementation in AWS and GCP.
+ Last updated : 05/25/2022++
+# Microsoft Defender for Containers in a Multi-Cloud Environment
+
+**Episode description**: In this episode of Defender for Cloud in the field, Maya Herskovic joins Yuri Diogenes to talk about Microsoft Defender for Containers implementation in AWS and GCP.
+
+Maya explains the new workload protection capabilities related to Containers when they're deployed in a multi-cloud environment. Maya also demonstrates the onboarding experience in GCP and how to visualize security recommendations across AWS, GCP, and Azure in a single dashboard.
+
+<br>
+<br>
+<iframe src="https://aka.ms/docs/player?id=f9470496-abe3-4344-8160-d6a6b65c077f" width="1080" height="530" style="max-width: 100%; min-width: 100%;"></iframe>
+
+- [01:12](/shows/mdc-in-the-field/containers-multi-cloud#time=01m12s) - Container protection in a multi-cloud environment
+
+- [05:03](/shows/mdc-in-the-field/containers-multi-cloud#time=05m03s) - Workload protection capabilities for GCP
+
+- [06:18](/shows/mdc-in-the-field/containers-multi-cloud#time=06m18s) - Single dashboard for multi-cloud
+
+- [10:25](/shows/mdc-in-the-field/containers-multi-cloud#time=10m25s) - Demonstration
+
+## Recommended resources
+
+Learn how to [Enable Microsoft Defender for Containers](defender-for-containers-enable.md).
+
+- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqa0ZoTml2Qm9kZ2pjRzNMUXFqVUwyNl80YVNtd3xBQ3Jtc0trVm9QM2Z0NlpOeC1KSUE2UEd1cVJ5aHQ0MTN6WjJEYmNlOG9rWC1KZ1ZqaTNmcHdOOHMtWXRLSGhUTVBhQlhhYzlUc2xmTHZtaUpkd1c4LUQzLWt1YmRTbkVQVE5EcTJIM0Foc042SGdQZU5acVRJbw&q=https%3A%2F%2Faka.ms%2FSubscribeMicrosoftSecurity)
+
+- Follow us on social media:
+ [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
+ [Twitter](https://twitter.com/msftsecurity)
+
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+
+- For more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Protecting Containers in GCP with Defender for Containers](episode-ten.md)
defender-for-cloud Episode One https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-one.md
+
+ Title: New AWS connector in Microsoft Defender for Cloud
+description: Learn all about the new AWS connector in Microsoft Defender for Cloud.
+ Last updated : 05/25/2022++
+# New AWS connector in Microsoft Defender for Cloud
+
+**Episode description**: In this episode of Defender for Cloud in the field, Or Serok joins Yuri Diogenes to share the new AWS connector in Microsoft Defender for Cloud, which was released at Ignite 2021. Or explains the use case scenarios for the new connector and how the new connector works. She demonstrates the onboarding process to connect AWS with Microsoft Defender for Cloud and talks about the centralized management of all security recommendations.
+
+<br>
+<br>
+<iframe src="https://aka.ms/docs/player?id=26cbaec8-0f3f-4bb1-9918-1bf7d912db57" width="1080" height="530" style="max-width: 100%; min-width: 100%;"></iframe>
+
+- [00:00](/shows/mdc-in-the-field/aws-connector) - Introduction
+
+- [2:20](/shows/mdc-in-the-field/aws-connector) - Understanding the new AWS connector.
+
+- [3:45](/shows/mdc-in-the-field/aws-connector) - Overview of the new onboarding experience.
+
+- [4:30](/shows/mdc-in-the-field/aws-connector) - Customizing recommendations for AWS workloads.
+
+- [7:03](/shows/mdc-in-the-field/aws-connector) - Beyond CSPM capabilities.
+
+- [11:14](/shows/mdc-in-the-field/aws-connector) - Demonstration of the recommendations and onboarding process.
+
+- [23:20](/shows/mdc-in-the-field/aws-connector) - Demonstration of how to customize AWS assessments.
+
+## Recommended resources
+
+Learn more about the new [AWS connector](quickstart-onboard-aws.md)
+
+- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqa0ZoTml2Qm9kZ2pjRzNMUXFqVUwyNl80YVNtd3xBQ3Jtc0trVm9QM2Z0NlpOeC1KSUE2UEd1cVJ5aHQ0MTN6WjJEYmNlOG9rWC1KZ1ZqaTNmcHdOOHMtWXRLSGhUTVBhQlhhYzlUc2xmTHZtaUpkd1c4LUQzLWt1YmRTbkVQVE5EcTJIM0Foc042SGdQZU5acVRJbw&q=https%3A%2F%2Faka.ms%2FSubscribeMicrosoftSecurity)
+
+- Follow us on social media:
+ [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
+ [Twitter](https://twitter.com/msftsecurity)
+
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+
+- For more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Integrate Azure Purview with Microsoft Defender for Cloud](episode-two.md)
defender-for-cloud Episode Seven https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-seven.md
+
+ Title: New GCP connector in Microsoft Defender for Cloud
+description: Learn all about the new GCP connector in Microsoft Defender for Cloud.
+ Last updated : 05/25/2022++
+# New GCP connector in Microsoft Defender for Cloud
+
+**Episode description**: In this episode of Defender for Cloud in the field, Or Serok joins Yuri Diogenes to share the new GCP Connector in Microsoft Defender for Cloud. Or explains the use case scenarios for the new connector and how the new connector works. She demonstrates the onboarding process to connect GCP with Microsoft Defender for Cloud and talks about custom assessment and the CSPM experience for multi-cloud.
+
+<br>
+<br>
+<iframe src="https://aka.ms/docs/player?id=80ba04f0-1551-48f3-94a2-d2e82e7073c9" width="1080" height="530" style="max-width: 100%; min-width: 100%;"></iframe>
+
+- [1:23](/shows/mdc-in-the-field/gcp-connector#time=01m23s) - Overview of the new GCP connector
+
+- [4:05](/shows/mdc-in-the-field/gcp-connector#time=04m05s) - Migration path from the old GCP connector to the new one
+
+- [5:10](/shows/mdc-in-the-field/gcp-connector#time=05m10s) - Type of assessment utilized by the new GCP connector
+
+- [5:51](/shows/mdc-in-the-field/gcp-connector#time=05m51s) - Custom assessments
+
+- [6:52](/shows/mdc-in-the-field/gcp-connector#time=06m52s) - Demonstration
+
+- [15:05](/shows/mdc-in-the-field/gcp-connector#time=15m05s) - Recommendation experience
+
+- [18:00](/shows/mdc-in-the-field/gcp-connector#time=18m00s) - Final considerations
+
+## Recommended resources
+
+Learn how to [Connect your GCP projects to Microsoft Defender for Cloud](quickstart-onboard-gcp.md).
+
+- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqa0ZoTml2Qm9kZ2pjRzNMUXFqVUwyNl80YVNtd3xBQ3Jtc0trVm9QM2Z0NlpOeC1KSUE2UEd1cVJ5aHQ0MTN6WjJEYmNlOG9rWC1KZ1ZqaTNmcHdOOHMtWXRLSGhUTVBhQlhhYzlUc2xmTHZtaUpkd1c4LUQzLWt1YmRTbkVQVE5EcTJIM0Foc042SGdQZU5acVRJbw&q=https%3A%2F%2Faka.ms%2FSubscribeMicrosoftSecurity)
+
+- Follow us on social media:
+ [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
+ [Twitter](https://twitter.com/msftsecurity)
+
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+
+- For more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Microsoft Defender for IoT](episode-eight.md)
defender-for-cloud Episode Six https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-six.md
+
+ Title: Lessons learned from the field with Microsoft Defender for Cloud
+description: Learn how Microsoft Defender for Cloud is used to fill the gap between cloud security posture management and cloud workload protection.
+ Last updated : 05/25/2022++
+# Lessons learned from the field with Microsoft Defender for Cloud
+
+**Episode description**: In this episode, Carlos Faria, Microsoft Cybersecurity Consultant, joins Yuri to talk about lessons from the field and how customers are using Microsoft Defender for Cloud to improve their security posture and protect their workloads in a multi-cloud environment.
+
+Carlos also covers how Microsoft Defender for Cloud is used to fill the gap between cloud security posture management and cloud workload protection, and demonstrates some features related to this scenario.
+
+<br>
+<br>
+<iframe src="https://aka.ms/docs/player?id=3811455b-cc20-4ee0-b1bf-9d4df5ee4eaf" width="1080" height="530" style="max-width: 100%; min-width: 100%;"></iframe>
+
+- [1:30](/shows/mdc-in-the-field/lessons-from-the-field#time=01m30s) - Why is Microsoft Defender for Cloud a unique solution compared with other competitors?
+
+- [2:58](/shows/mdc-in-the-field/lessons-from-the-field#time=02m58s) - How to fill the gap between CSPM and CWPP
+
+- [4:42](/shows/mdc-in-the-field/lessons-from-the-field#time=04m42s) - How a multi-cloud environment affects the CSPM lifecycle and how Defender for Cloud fits in
+
+- [8:05](/shows/mdc-in-the-field/lessons-from-the-field#time=08m05s) - Demonstration
+
+- [12:34](/shows/mdc-in-the-field/lessons-from-the-field#time=12m34s) - Final considerations
+
+## Recommended resources
+
+Learn more in [What is Microsoft Defender for Cloud?](defender-for-cloud-introduction.md).
+
+- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqa0ZoTml2Qm9kZ2pjRzNMUXFqVUwyNl80YVNtd3xBQ3Jtc0trVm9QM2Z0NlpOeC1KSUE2UEd1cVJ5aHQ0MTN6WjJEYmNlOG9rWC1KZ1ZqaTNmcHdOOHMtWXRLSGhUTVBhQlhhYzlUc2xmTHZtaUpkd1c4LUQzLWt1YmRTbkVQVE5EcTJIM0Foc042SGdQZU5acVRJbw&q=https%3A%2F%2Faka.ms%2FSubscribeMicrosoftSecurity)
+
+- Follow us on social media:
+ [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
+ [Twitter](https://twitter.com/msftsecurity)
+
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+
+- For more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [New GCP Connector in Microsoft Defender for Cloud](episode-seven.md)
defender-for-cloud Episode Ten https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-ten.md
+
+ Title: Protecting containers in GCP with Defender for Containers
+description: Learn how to use Defender for Containers, to protect Containers that are located in Google Cloud Projects.
+ Last updated : 05/25/2022++
+# Protecting containers in GCP with Defender for Containers
+
+**Episode description**: In this episode of Defender for Cloud in the field, Nadav Wolfin joins Yuri Diogenes to talk about how to use Defender for Containers to protect containers that are located in Google Cloud (GCP).
+
+Nadav gives insights about workload protection for GKE and how to obtain visibility of this type of workload across Azure and AWS. Nadav also demonstrates the overall onboarding experience and provides an overview of the architecture of this solution.
+
+<br>
+<br>
+<iframe src="https://aka.ms/docs/player?id=078af1f2-1f12-4030-bd3f-3e7616150562" width="1080" height="530" style="max-width: 100%; min-width: 100%;"></iframe>
+
+- [00:55](/shows/mdc-in-the-field/gcp-containers#time=00m55s) - Architecture solution for Defender for Containers and support for GKE
+
+- [06:42](/shows/mdc-in-the-field/gcp-containers#time=06m42s) - How the onboard process works
+
+- [08:46](/shows/mdc-in-the-field/gcp-containers#time=08m46s) - Demonstration
+
+- [26:18](/shows/mdc-in-the-field/gcp-containers#time=26m18s) - Integration with Azure Arc
+
+## Recommended resources
+
+Learn how to [Enable Microsoft Defender for Containers](defender-for-containers-enable.md).
+
+- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqa0ZoTml2Qm9kZ2pjRzNMUXFqVUwyNl80YVNtd3xBQ3Jtc0trVm9QM2Z0NlpOeC1KSUE2UEd1cVJ5aHQ0MTN6WjJEYmNlOG9rWC1KZ1ZqaTNmcHdOOHMtWXRLSGhUTVBhQlhhYzlUc2xmTHZtaUpkd1c4LUQzLWt1YmRTbkVQVE5EcTJIM0Foc042SGdQZU5acVRJbw&q=https%3A%2F%2Faka.ms%2FSubscribeMicrosoftSecurity)
+
+- Follow us on social media:
+ [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
+ [Twitter](https://twitter.com/msftsecurity)
+
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+
+- For more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Threat landscape for Containers](episode-eleven.md)
defender-for-cloud Episode Three https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-three.md
+
+ Title: Microsoft Defender for Containers
+description: Learn about Microsoft Defender for Containers.
+ Last updated : 05/25/2022++
+# Microsoft Defender for Containers
+
+**Episode description**: In this episode of Defender for Cloud in the field, Maya Herskovic joins Yuri Diogenes to talk about Microsoft Defender for Containers. Maya explains what's new in Microsoft Defender for Containers, the new capabilities that are available, the new pricing model, and the multi-cloud coverage. Maya also demonstrates the overall experience of Microsoft Defender for Containers from the recommendations to the alerts that you may receive.
+
+<br>
+<br>
+<iframe src="https://aka.ms/docs/player?id=b8624912-ef9e-4fc6-8c0c-ea65e86d9128" width="1080" height="530" style="max-width: 100%; min-width: 100%;"></iframe>
+
+- [1:09](/shows/mdc-in-the-field/defender-for-containers#time=01m09s) - What's new in the Defender for Containers plan?
+
+- [4:42](/shows/mdc-in-the-field/defender-for-containers#time=04m42s) - Change in the host level protection
+
+- [8:08](/shows/mdc-in-the-field/defender-for-containers#time=08m08s) - How to migrate to the new plan?
+
+- [9:28](/shows/mdc-in-the-field/defender-for-containers#time=09m28s) - Onboarding requirements
+
+- [11:45](/shows/mdc-in-the-field/defender-for-containers#time=11m45s) - Improvements in the anomaly detection
+
+- [13:27](/shows/mdc-in-the-field/defender-for-containers#time=13m27s) - Demonstration
+
+- [22:17](/shows/mdc-in-the-field/defender-for-containers#time=22m17s) - Final considerations
+
+## Recommended resources
+
+Learn more about [Microsoft Defender for Containers](defender-for-containers-introduction.md).
+
+- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqa0ZoTml2Qm9kZ2pjRzNMUXFqVUwyNl80YVNtd3xBQ3Jtc0trVm9QM2Z0NlpOeC1KSUE2UEd1cVJ5aHQ0MTN6WjJEYmNlOG9rWC1KZ1ZqaTNmcHdOOHMtWXRLSGhUTVBhQlhhYzlUc2xmTHZtaUpkd1c4LUQzLWt1YmRTbkVQVE5EcTJIM0Foc042SGdQZU5acVRJbw&q=https%3A%2F%2Faka.ms%2FSubscribeMicrosoftSecurity)
+
+- Follow us on social media:
+ [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
+ [Twitter](https://twitter.com/msftsecurity)
+
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+
+- For more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Security posture management improvements](episode-four.md)
defender-for-cloud Episode Twelve https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twelve.md
+
+ Title: Enhanced workload protection features in Defender for Servers
+description: Learn about the enhanced capabilities available in Defender for Servers, for VMs that are located in GCP, AWS and on-premises.
+ Last updated : 05/25/2022++
+# Enhanced workload protection features in Defender for Servers
+
+**Episode description**: In this episode of Defender for Cloud in the Field, Netta Norman joins Yuri Diogenes to talk about the enhanced capabilities available in Defender for Servers for VMs that are located in GCP, AWS, and on-premises.
+
+Netta explains how Defender for Servers uses Azure Arc as a bridge to onboard non-Azure VMs, and demonstrates what the experience looks like.
+
+<br>
+<br>
+<iframe src="https://aka.ms/docs/player?id=18fdbe74-4399-44fe-81e7-3e3ce92df451" width="1080" height="530" style="max-width: 100%; min-width: 100%;"></iframe>
+
+- [00:55](/shows/mdc-in-the-field/enhanced-workload-protection#time=00m55s) - Arc Auto-provisioning in GCP
+
+- [2:57](/shows/mdc-in-the-field/enhanced-workload-protection#time=02m57s) - Prerequisites to Arc auto-provisioning
+
+- [3:50](/shows/mdc-in-the-field/enhanced-workload-protection#time=03m50s) - Considerations when enabling Defender for Server plan in GCP
+
+- [5:20](/shows/mdc-in-the-field/enhanced-workload-protection#time=05m20s) - Dashboard refresh time interval
+
+- [7:00](/shows/mdc-in-the-field/enhanced-workload-protection#time=07m00s) - Security value for non-Azure workloads
+
+- [9:06](/shows/mdc-in-the-field/enhanced-workload-protection#time=09m06s) - Demonstration
+
+## Recommended resources
+
+Introduce yourself to [Microsoft Defender for Servers](defender-for-servers-introduction.md).
+
+- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqa0ZoTml2Qm9kZ2pjRzNMUXFqVUwyNl80YVNtd3xBQ3Jtc0trVm9QM2Z0NlpOeC1KSUE2UEd1cVJ5aHQ0MTN6WjJEYmNlOG9rWC1KZ1ZqaTNmcHdOOHMtWXRLSGhUTVBhQlhhYzlUc2xmTHZtaUpkd1c4LUQzLWt1YmRTbkVQVE5EcTJIM0Foc042SGdQZU5acVRJbw&q=https%3A%2F%2Faka.ms%2FSubscribeMicrosoftSecurity)
+
+- Follow us on social media:
+ [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
+ [Twitter](https://twitter.com/msftsecurity)
+
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+
+- For more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
defender-for-cloud Episode Two https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-two.md
+
+ Title: Integrate Azure Purview with Microsoft Defender for Cloud
+description: Learn how to integrate Azure Purview with Microsoft Defender for Cloud.
+ Last updated : 05/25/2022++
+# Integrate Azure Purview with Microsoft Defender for Cloud
+
+**Episode description**: In this episode of Defender for Cloud in the field, David Trigano joins Yuri Diogenes to share the new integration of Microsoft Defender for Cloud with Azure Purview, which was released at Ignite 2021.
+
+David explains the use case scenarios for this integration and how the data classification done by Azure Purview can help prioritize recommendations and alerts in Defender for Cloud. David also demonstrates the overall experience of data enrichment based on the information that flows from Azure Purview to Defender for Cloud.
+
+<br>
+<br>
+<iframe src="https://aka.ms/docs/player?id=9b911e9c-e933-4b7b-908a-5fd614f822c7" width="1080" height="530" style="max-width: 100%; min-width: 100%;"></iframe>
+
+- [1:36](/shows/mdc-in-the-field/integrate-with-purview) - Overview of Azure Purview
+
+- [2:40](/shows/mdc-in-the-field/integrate-with-purview) - Integration with Microsoft Defender for Cloud
+
+- [3:48](/shows/mdc-in-the-field/integrate-with-purview) - How the integration with Azure Purview helps to prioritize Recommendations in Microsoft Defender for Cloud
+
+- [5:26](/shows/mdc-in-the-field/integrate-with-purview) - How the integration with Azure Purview helps to prioritize Alerts in Microsoft Defender for Cloud
+
+- [8:54](/shows/mdc-in-the-field/integrate-with-purview) - Demonstration
+
+- [16:50](/shows/mdc-in-the-field/integrate-with-purview) - Final considerations
+
+## Recommended resources
+
+Learn more about the [integration with Azure Purview](information-protection.md).
+
+- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqa0ZoTml2Qm9kZ2pjRzNMUXFqVUwyNl80YVNtd3xBQ3Jtc0trVm9QM2Z0NlpOeC1KSUE2UEd1cVJ5aHQ0MTN6WjJEYmNlOG9rWC1KZ1ZqaTNmcHdOOHMtWXRLSGhUTVBhQlhhYzlUc2xmTHZtaUpkd1c4LUQzLWt1YmRTbkVQVE5EcTJIM0Foc042SGdQZU5acVRJbw&q=https%3A%2F%2Faka.ms%2FSubscribeMicrosoftSecurity)
+
+- Follow us on social media:
+ [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
+ [Twitter](https://twitter.com/msftsecurity)
+
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+
+- For more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Watch Episode 3](episode-three.md)
defender-for-cloud Information Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/information-protection.md
Title: Prioritize security actions by data sensitivity - Microsoft Defender for Cloud description: Use Microsoft Purview's data sensitivity classifications in Microsoft Defender for Cloud Previously updated : 11/09/2021 Last updated : 04/27/2022 # Prioritize security actions by data sensitivity
A graph shows the number of recommendations and alerts by classified resource ty
:::image type="content" source="./media/information-protection/overview-dashboard-information-protection.png" alt-text="Screenshot of the information protection tile in Microsoft Defender for Cloud's overview dashboard." lightbox="./media/information-protection/overview-dashboard-information-protection.png":::
+## Learn more
+
+If you would like to learn more from the product manager about Microsoft Defender for Cloud's integration with Azure Purview, check out [this episode of Defender for Cloud in the Field](episode-two.md).
+
+You can also check out the following blog:
+
+- [Secure sensitive data in your cloud resources](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/secure-sensitive-data-in-your-cloud-resources/ba-p/2918646).
## Next steps
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
For other operating systems, the SSM Agent should be installed manually using th
- [Install SSM Agent for a hybrid environment (Windows)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-win.html) - [Install SSM Agent for a hybrid environment (Linux)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-linux.html)
+## Learn more
+
+If you would like to learn more from the product manager about Microsoft Defender for Cloud's new AWS connector, check out [Microsoft Defender for Cloud in the Field](episode-one.md).
+
+You can also check out the following blogs:
+
+- [Ignite 2021: Microsoft Defender for Cloud news](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/ignite-2021-microsoft-defender-for-cloud-news/ba-p/2882807).
+- [Custom assessments and standards in Microsoft Defender for Cloud for AWS workloads (Preview)](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/custom-assessments-and-standards-in-microsoft-defender-for-cloud/ba-p/3066575).
+- [Security posture management and server protection for AWS and GCP](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/security-posture-management-and-server-protection-for-aws-and/ba-p/3271388)
## Next steps
defender-for-cloud Recommendations Reference Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference-aws.md
Title: Reference table for all Microsoft Defender for Cloud recommendations for AWS resources description: This article lists Microsoft Defender for Cloud's security recommendations that help you harden and protect your AWS resources. Previously updated : 03/13/2022 Last updated : 05/25/2022 # Security recommendations for AWS resources - a reference guide
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Updates in May include:
- [Multicloud settings of Servers plan are now available in connector level](#multicloud-settings-of-servers-plan-are-now-available-in-connector-level) - [JIT (Just-in-time) access for VMs is now available for AWS EC2 instances (Preview)](#jit-just-in-time-access-for-vms-is-now-available-for-aws-ec2-instances-preview)
+- [Add and remove the Defender profile for AKS clusters from the CLI](#add-and-remove-the-defender-profile-for-aks-clusters-from-the-cli)
### Multicloud settings of Servers plan are now available in connector level
There are now connector-level settings for Defender for Servers in multicloud.
The new connector-level settings provide granularity for pricing and auto-provisioning configuration per connector, independently of the subscription. All auto-provisioning components available in the connector level (Azure Arc, MDE, and vulnerability assessments) are enabled by default, and the new configuration supports both [Plan 1 and Plan 2 pricing tiers](defender-for-servers-introduction.md#what-are-the-microsoft-defender-for-server-plans).
-
+ Updates in the UI include a reflection of the selected pricing tier and the required components configured. :::image type="content" source="media/release-notes/main-page.png" alt-text="Screenshot of the main plan page with the Server plan multicloud settings." lightbox="media/release-notes/main-page.png":::
When you [connect AWS accounts](quickstart-onboard-aws.md), JIT will automatical
Learn how [JIT protects your AWS EC2 instances](just-in-time-access-overview.md#how-jit-operates-with-network-resources-in-azure-and-aws)
+### Add and remove the Defender profile for AKS clusters from the CLI
+
+The Defender profile (preview) is required for Defender for Containers to provide runtime protections and collect signals from nodes. You can now use the Azure CLI to [add and remove the Defender profile](/includes/defender-for-containers-enable-plan-aks.md#deploy-the-defender-profile) for an AKS cluster.
+
+> [!NOTE]
+> This option is included in [Azure CLI 3.7 and above](/cli/azure/update-azure-cli.md).
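+
+As an illustration only (not part of the original release note), here's a minimal sketch of enabling and disabling the profile from the CLI, assuming the `--enable-defender` and `--disable-defender` flags on `az aks update` and placeholder resource names:
+
+```azurecli-interactive
+# Hypothetical example: enable the Defender profile on an existing AKS cluster
+az aks update --resource-group myResourceGroup --name myAKSCluster --enable-defender
+
+# Hypothetical example: remove the Defender profile from the same cluster
+az aks update --resource-group myResourceGroup --name myAKSCluster --disable-defender
+```
+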
+
## April 2022
Updates in April include:
defender-for-cloud Review Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/review-security-recommendations.md
Title: Security recommendations in Microsoft Defender for Cloud description: This document walks you through how recommendations in Microsoft Defender for Cloud help you protect your Azure resources and stay in compliance with security policies. Previously updated : 04/03/2022 Last updated : 05/11/2022 # Review your security recommendations
When the report is ready, you'll be notified by a second pop-up.
:::image type="content" source="media/review-security-recommendations/downloaded-csv.png" alt-text="Screenshot letting you know your download has completed.":::
+## Learn more
+
+If you would like to learn more from the product manager about security posture, check out [Security posture management improvements](episode-four.md).
+
+You can also check out the following blogs:
+
+- [Security posture management and server protection for AWS and GCP are now generally available](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/security-posture-management-and-server-protection-for-aws-and/ba-p/3271388)
+- [Custom assessments and standards in Microsoft Defender for Cloud for AWS workloads (Preview)](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/custom-assessments-and-standards-in-microsoft-defender-for-cloud/ba-p/3066575)
+- [New enhancements added to network security dashboard](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/new-enhancements-added-to-network-security-dashboard/ba-p/2896021)
+
## Next steps
In this document, you were introduced to security recommendations in Defender for Cloud. For related information:
digital-twins How To Authenticate Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-authenticate-client.md
The [ManagedIdentityCredential](/dotnet/api/azure.identity.managedidentitycreden
This means that you may use `ManagedIdentityCredential` in the same project as `DefaultAzureCredential` or `InteractiveBrowserCredential`, to authenticate a different part of the project.
-To use the default Azure credentials, you'll need the Azure Digital Twins instance's URL ([instructions to find](how-to-set-up-instance-portal.md#verify-success-and-collect-important-values)). You may also need an [app registration](./how-to-create-app-registration-portal.md) and the registration's [Application (client) ID](./how-to-create-app-registration-portal.md#collect-client-id-and-tenant-id).
+To use the default Azure credentials, you'll need the Azure Digital Twins instance's URL ([instructions to find](how-to-set-up-instance-portal.md#verify-success-and-collect-important-values)). You may also need an [app registration](./how-to-create-app-registration.md) and the registration's [Application (client) ID](./how-to-create-app-registration.md#collect-client-id-and-tenant-id).
In an Azure function, you can use the managed identity credentials like this:
In an Azure function, you can use the managed identity credentials like this:
The [InteractiveBrowserCredential](/dotnet/api/azure.identity.interactivebrowsercredential?view=azure-dotnet&preserve-view=true) method is intended for interactive applications and will bring up a web browser for authentication. You can use this method instead of `DefaultAzureCredential` in cases where you require interactive authentication.
-To use the interactive browser credentials, you'll need an **app registration** that has permissions to the Azure Digital Twins APIs. For steps on how to set up this app registration, see [Create an app registration with Azure Digital Twins access](./how-to-create-app-registration-portal.md). Once the app registration is set up, you'll need...
-* [the app registration's Application (client) ID](./how-to-create-app-registration-portal.md#collect-client-id-and-tenant-id)
-* [the app registration's Directory (tenant) ID](./how-to-create-app-registration-portal.md#collect-client-id-and-tenant-id)
+To use the interactive browser credentials, you'll need an **app registration** that has permissions to the Azure Digital Twins APIs. For steps on how to set up this app registration, see [Create an app registration with Azure Digital Twins access](./how-to-create-app-registration.md). Once the app registration is set up, you'll need...
+* [the app registration's Application (client) ID](./how-to-create-app-registration.md#collect-client-id-and-tenant-id)
+* [the app registration's Directory (tenant) ID](./how-to-create-app-registration.md#collect-client-id-and-tenant-id)
* [the Azure Digital Twins instance's URL](how-to-set-up-instance-portal.md#verify-success-and-collect-important-values) Here's an example of the code to create an authenticated SDK client using `InteractiveBrowserCredential`.
digital-twins How To Create App Registration Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-create-app-registration-cli.md
-
-# Mandatory fields.
Title: Create an app registration with Azure Digital Twins access (CLI)-
-description: Use the CLI to create an Azure AD app registration that can access Azure Digital Twins resources.
-- Previously updated : 2/24/2022----
-# Optional fields. Don't forget to remove # if you need a field.
-#
-#
-#
--
-# Create an app registration to use with Azure Digital Twins (CLI)
--
-This article describes how to use the Azure CLI to create an [Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md) *app registration* that can access Azure Digital Twins.
-
-When working with Azure Digital Twins, it's common to interact with your instance through client applications. Those applications need to authenticate with Azure Digital Twins, and some of the [authentication mechanisms](how-to-authenticate-client.md) that apps can use involve an app registration.
-
-The app registration isn't required for all authentication scenarios. However, if you're using an authentication strategy or code sample that does require an app registration, this article shows you how to set one up and grant it permissions to the Azure Digital Twins APIs. It also covers how to collect important values that you'll need to use the app registration when authenticating.
-
->[!TIP]
-> You may prefer to set up a new app registration every time you need one, or to do this only once, establishing a single app registration that will be shared among all scenarios that require it.
--
-## Create manifest
-
-First, create a file containing certain service information that your app registration will need to access the Azure Digital Twins APIs. Later, you'll pass in this file when creating the app registration, to set up the Azure Digital Twins permissions.
-
-Create a new .json file on your computer called *manifest.json*. Copy this text into the file:
-
-```json
-[
- {
- "resourceAppId": "0b07f429-9f4b-4714-9392-cc5e8e80c8b0",
- "resourceAccess": [
- {
- "id": "4589bd03-58cb-4e6c-b17f-b580e39652f8",
- "type": "Scope"
- }
- ]
- }
-]
-```
-
-The static value `0b07f429-9f4b-4714-9392-cc5e8e80c8b0` is the resource ID for the Azure Digital Twins service endpoint, which your app registration will need to access the Azure Digital Twins APIs.
-
-Save the finished file.
-
-### Cloud Shell users: Upload manifest
-
-If you're using Cloud Shell for this tutorial, you'll need to upload the manifest file you created to the Cloud Shell, so that you can access it in Cloud Shell commands when configuring the app registration. If you're using a local installation of the Azure CLI, you can skip this step.
-
-To upload the file, go to the Cloud Shell window in your browser. Select the "Upload/Download files" icon and choose "Upload".
--
-Navigate to the *manifest.json* file on your machine and select **Open**. Doing so will upload the file to the root of your Cloud Shell storage.
-
-## Create the registration
-
-In this section, you'll run a CLI command to create an app registration with the following settings:
-* Name of your choice
-* Available only to accounts in the default directory (single tenant)
-* A web reply URL of `http://localhost`
-* Read/write permissions to the Azure Digital Twins APIs
-
-Run the following command to create the registration. If you're using Cloud Shell, the path to the manifest.json file is `@manifest.json`.
-
-```azurecli-interactive
-az ad app create --display-name <app-registration-name> --available-to-other-tenants false --reply-urls http://localhost --native-app --required-resource-accesses "<path-to-manifest.json>"
-```
-
-The output of the command is information about the app registration you've created.
-
-## Verify success
-
-You can confirm that the Azure Digital Twins permissions were granted by looking for the following fields in the output of the `az ad app create` command, and confirming their values match what's shown in the screenshot below:
--
-You can also verify the app registration was successfully created with the necessary API permissions by using the Azure portal. For portal instructions, see [Verify API permissions (portal)](how-to-create-app-registration-portal.md#verify-api-permissions).
-
-## Collect important values
-
-Next, collect some important values about the app registration that you'll need to use the app registration to authenticate a client application. These values include:
-* resource name
-* client ID
-* tenant ID
-* client secret
-
-To work with Azure Digital Twins, the resource name is `http://digitaltwins.azure.net`.
-
-The following sections describe how to find the other values.
-
-### Collect client ID and tenant ID
-
-To use the app registration for authentication, you may need to provide its **Application (client) ID** and **Directory (tenant) ID**. In this section, you'll collect these values so you can save them and use them whenever they're needed.
-
-You can find both of these values in the output from the `az ad app create` command.
-
-Application (client) ID:
--
-Directory (tenant) ID:
--
-### Collect client secret
-
-To create a client secret for your app registration, you'll need your app registration's client ID value from the previous section. Use the value in the following CLI command to create a new secret:
-
-```azurecli-interactive
-az ad app credential reset --id <client-ID> --append
-```
-
-You can also add optional parameters to this command to specify a credential description, end date, and other details. For more information about the command and its parameters, see [az ad app credential reset documentation](/cli/azure/ad/app/credential#az-ad-app-credential-reset).
-
-The output of this command is information about the client secret that you've created. Copy the value for `password` to use when you need the client secret for authentication.
--
->[!IMPORTANT]
->Make sure to copy the value now and store it in a safe place, as it cannot be retrieved again. If you can't find the value later, you'll have to create a new secret.
-
-## Create Azure Digital Twins role assignment
-
-In this section, you'll create a role assignment for the app registration to set its permissions on the Azure Digital Twins instance. This role will determine what permissions the app registration holds on the instance, so you should select the role that matches the appropriate level of permission for your situation. One possible role is [Azure Digital Twins Data Owner](../role-based-access-control/built-in-roles.md#azure-digital-twins-data-owner). For a full list of roles and their descriptions, see [Azure built-in roles](../role-based-access-control/built-in-roles.md).
-
-Use the following command to assign the role (must be run by a user with [sufficient permissions](how-to-set-up-instance-cli.md#prerequisites-permission-requirements) in the Azure subscription). The command requires you to pass in the name of the app registration.
-
-```azurecli-interactive
-az dt role-assignment create --dt-name <your-Azure-Digital-Twins-instance> --assignee "<name-of-app-registration>" --role "<appropriate-role-name>"
-```
-
-The result of this command is outputted information about the role assignment that's been created for the app registration.
-
-### Verify role assignment
-
-To further verify the role assignment, you can look for it in the Azure portal. Follow the instructions in [Verify role assignment (portal)](how-to-create-app-registration-portal.md#verify-role-assignment).
-
-## Other possible steps for your organization
-
-It's possible that your organization requires more actions from subscription Owners/administrators to successfully set up an app registration. The steps required may vary depending on your organization's specific settings.
-
-Here are some common potential activities that an Owner or administrator on the subscription may need to do.
-* Grant admin consent for the app registration. Your organization may have **Admin Consent Required** globally turned on in Azure AD for all app registrations within your subscription. If so, the Owner/administrator may need to grant additional delegated or application permissions.
-* Activate public client access by appending `--set publicClient=true` to a create or update command for the registration.
-* Set specific reply URLs for web and desktop access using the `--reply-urls` parameter. For more information on using this parameter with `az ad` commands, see the [az ad app documentation](/cli/azure/ad/app).
-* Allow for implicit OAuth2 authentication flows using the `--oauth2-allow-implicit-flow` parameter. For more information on using this parameter with `az ad` commands, see the [az ad app documentation](/cli/azure/ad/app).
-
-For more information about app registration and its different setup options, see [Register an application with the Microsoft identity platform](/graph/auth-register-app-v2).
-
-## Next steps
-
-In this article, you set up an Azure AD app registration that can be used to authenticate client applications with the Azure Digital Twins APIs.
-
-Next, read about authentication mechanisms, including one that uses app registrations and others that don't:
-* [Write app authentication code](how-to-authenticate-client.md)
digital-twins How To Create App Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-create-app-registration.md
+
+# Mandatory fields.
+ Title: Create an app registration with Azure Digital Twins access
+
+description: Create an Azure Active Directory app registration that can access Azure Digital Twins resources.
++ Last updated : 5/25/2022++++
+# Optional fields. Don't forget to remove # if you need a field.
+#
+#
+#
++
+# Create an app registration to use with Azure Digital Twins
+
+This article describes how to create an [Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md) *app registration* that can access Azure Digital Twins. This article includes steps for the [Azure portal](https://portal.azure.com) and the [Azure CLI](/cli/azure/what-is-azure-cli).
+
+When working with Azure Digital Twins, it's common to interact with your instance through client applications. Those applications need to authenticate with Azure Digital Twins, and some of the [authentication mechanisms](how-to-authenticate-client.md) that apps can use involve an app registration.
+
+The app registration isn't required for all authentication scenarios. However, if you're using an authentication strategy or code sample that does require an app registration, this article shows you how to set one up and grant it permissions to the Azure Digital Twins APIs. It also covers how to collect important values that you'll need to use the app registration when authenticating.
+
+>[!TIP]
+> You may prefer to set up a new app registration every time you need one, or to do this only once, establishing a single app registration that will be shared among all scenarios that require it.
+
+## Create the registration
+
+Start by selecting the tab below for your preferred interface.
+
+# [Portal](#tab/portal)
+
+Navigate to [Azure Active Directory](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) in the Azure portal (you can use this link or find it with the portal search bar). Select **App registrations** from the service menu, and then **+ New registration**.
++
+In the **Register an application** page that follows, fill in the requested values:
+* **Name**: An Azure AD application display name to associate with the registration
+* **Supported account types**: Select **Accounts in this organizational directory only (Default Directory only - Single tenant)**
+* **Redirect URI**: An **Azure AD application reply URL** for the Azure AD application. Add a **Public client/native (mobile & desktop)** URI for `http://localhost`.
+
+When you're finished, select the **Register** button.
++
+When the registration is finished setting up, the portal will redirect you to its details page.
+
+# [CLI](#tab/cli)
+
+Start by creating a manifest file, which contains service information that your app registration will need to access the Azure Digital Twins APIs. Afterwards, you'll pass this file into a CLI command to create the registration.
+
+### Create manifest
+
+Create a new .json file on your computer called *manifest.json*. Copy this text into the file:
+
+```json
+[
+ {
+ "resourceAppId": "0b07f429-9f4b-4714-9392-cc5e8e80c8b0",
+ "resourceAccess": [
+ {
+ "id": "4589bd03-58cb-4e6c-b17f-b580e39652f8",
+ "type": "Scope"
+ }
+ ]
+ }
+]
+```
+
+The static value `0b07f429-9f4b-4714-9392-cc5e8e80c8b0` is the resource ID for the Azure Digital Twins service endpoint, which your app registration will need to access the Azure Digital Twins APIs.
+
+Save the finished file.
+
+### Cloud Shell users: Upload manifest
+
+If you're using Azure Cloud Shell for this tutorial, you'll need to upload the manifest file you created to the Cloud Shell, so that you can access it in Cloud Shell commands when configuring the app registration. If you're using a local installation of the Azure CLI, you can skip ahead to the next step, [Run the creation command](#run-the-creation-command).
+
+To upload the file, go to the Cloud Shell window in your browser. Select the "Upload/Download files" icon and choose "Upload".
++
+Navigate to the *manifest.json* file on your machine and select **Open**. Doing so will upload the file to the root of your Cloud Shell storage.
+
+### Run the creation command
+
+In this section, you'll run a CLI command to create an app registration with the following settings:
+* Name of your choice
+* Available only to accounts in the default directory (single tenant)
+* A web reply URL of `http://localhost`
+* Read/write permissions to the Azure Digital Twins APIs
+
+Run the following command to create the registration. If you're using Cloud Shell, the path to the manifest.json file is `@manifest.json`.
+
+```azurecli-interactive
+az ad app create --display-name <app-registration-name> --available-to-other-tenants false --reply-urls http://localhost --native-app --required-resource-accesses "<path-to-manifest.json>"
+```
+
+The output of the command is information about the app registration you've created.
+
+### Verify success
+
+You can confirm that the Azure Digital Twins permissions were granted by looking for the following fields in the output of the creation command, under `requiredResourceAccess`. Confirm their values match what's listed below.
+* `resourceAccess > id` is *4589bd03-58cb-4e6c-b17f-b580e39652f8*
+* `resourceAppId` is *0b07f429-9f4b-4714-9392-cc5e8e80c8b0*
++++
+## Collect important values
+
+Next, collect some important values about the app registration that you'll need to use the app registration to authenticate a client application. These values include:
+* resource name: When working with Azure Digital Twins, the **resource name** is `http://digitaltwins.azure.net`.
+* client ID
+* tenant ID
+* client secret
+
+The following sections describe how to find the remaining values.
+
+### Collect client ID and tenant ID
+
+To use the app registration for authentication, you may need to provide its **Application (client) ID** and **Directory (tenant) ID**. Here, you'll collect these values so you can save them and use them whenever they're needed.
+
+# [Portal](#tab/portal)
+
+The client ID and tenant ID values can be collected from the app registration's details page in the Azure portal:
++
+Take note of the **Application (client) ID** and **Directory (tenant) ID** shown on your page.
+
+# [CLI](#tab/cli)
+
+You can find both of these values in the output from the `az ad app create` command that you ran [earlier](#run-the-creation-command). (You can also bring up the app registration's information again using [az ad app show](/cli/azure/ad/app#az-ad-app-show).)
+
+Look for these values in the result:
+
+Application (client) ID:
++
+Directory (tenant) ID:
++++
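+
+If you need to look up these values again later from the CLI, here's a minimal sketch; the display name `my-adt-app` is a placeholder, and the tenant ID shown is that of your signed-in account:
+
+```azurecli-interactive
+# Hypothetical example: look up the Application (client) ID by the registration's display name
+az ad app list --display-name my-adt-app --query "[].appId" --output tsv
+
+# Directory (tenant) ID of the signed-in account
+az account show --query tenantId --output tsv
+```
+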
+### Collect client secret
+
+Set up a client secret for your app registration, which other applications can use to authenticate through it.
+
+# [Portal](#tab/portal)
+
+Start on your app registration page in the Azure portal.
+
+1. Select **Certificates & secrets** from the registration's menu, and then select **+ New client secret**.
+
+ :::image type="content" source="media/how-to-create-app-registration/client-secret.png" alt-text="Screenshot of the Azure portal showing an Azure AD app registration and a highlight around 'New client secret'.":::
+
+1. Enter whatever values you want for Description and Expires, and select **Add**.
+
+ :::row:::
+ :::column:::
+ :::image type="content" source="media/how-to-create-app-registration/add-client-secret.png" alt-text="Screenshot of the Azure portal while adding a client secret.":::
+ :::column-end:::
+ :::column:::
+ :::column-end:::
+ :::row-end:::
+
+1. Verify that the client secret is visible on the **Certificates & secrets** page with Expires and Value fields.
+
+1. Take note of its **Secret ID** and **Value** to use later (you can also copy them to the clipboard with the Copy icons).
+
+ :::image type="content" source="media/how-to-create-app-registration/client-secret-value.png" alt-text="Screenshot of the Azure portal showing how to copy the client secret value.":::
+
+>[!IMPORTANT]
+>Make sure to copy the values now and store them in a safe place, as they can't be retrieved again. If you can't find them later, you'll have to create a new secret.
+
+# [CLI](#tab/cli)
+
+To create a client secret for your app registration, you'll need your app registration's client ID value that you collected in the [previous step](#collect-client-id-and-tenant-id). Use the value in the following CLI command to create a new secret:
+
+```azurecli-interactive
+az ad app credential reset --id <client-ID> --append
+```
+
+You can also add optional parameters to this command to specify a credential description, end date, and other details. For more information about the command and its parameters, see [az ad app credential reset documentation](/cli/azure/ad/app/credential#az-ad-app-credential-reset).
+
+The output of this command is information about the client secret that you've created.
+
+Copy the value for `password` to use when you need the client secret for authentication.
++
+>[!IMPORTANT]
+>Make sure to copy the value now and store it in a safe place, as it cannot be retrieved again. If you can't find the value later, you'll have to create a new secret.
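+
+One option (not part of the original steps) for keeping the secret in a safe place is Azure Key Vault. A minimal sketch, assuming you already have a vault and using placeholder names:
+
+```azurecli-interactive
+# Hypothetical example: store the client secret in an existing Azure Key Vault
+az keyvault secret set --vault-name my-key-vault --name adt-app-registration-secret --value "<client-secret-value>"
+```
+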
+++
+## Provide Azure Digital Twins permissions
+
+Next, configure the app registration you've created with permissions to access Azure Digital Twins. There are two types of permissions that are required:
+* A role assignment for the app registration within the Azure Digital Twins instance
+* API permissions for the app to read and write to the Azure Digital Twins APIs
+
+### Create role assignment
+
+In this section, you'll create a role assignment for the app registration on the Azure Digital Twins instance. This role will determine what permissions the app registration holds on the instance, so you should select the role that matches the appropriate level of permission for your situation. One possible role is [Azure Digital Twins Data Owner](../role-based-access-control/built-in-roles.md#azure-digital-twins-data-owner). For a full list of roles and their descriptions, see [Azure built-in roles](../role-based-access-control/built-in-roles.md).
+
+# [Portal](#tab/portal)
+
+Use these steps to create the role assignment for your registration.
+
+1. Open the page for your Azure Digital Twins instance in the Azure portal.
+
+1. Select **Access control (IAM)**.
+
+1. Select **Add** > **Add role assignment** to open the Add role assignment page.
+
+1. Assign the appropriate role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+
+ | Setting | Value |
+ | | |
+ | Role | Select as appropriate |
+ | Assign access to | User, group, or service principal |
+ | Members | Search for the name or [client ID](#collect-client-id-and-tenant-id) of the app registration |
+
+ ![Add role assignment page](../../includes/role-based-access-control/media/add-role-assignment-page.png)
+
+#### Verify role assignment
+
+You can view the role assignment you've set up under **Access control (IAM) > Role assignments**.
++
+The app registration should show up in the list along with the role you assigned to it.
+
+# [CLI](#tab/cli)
+
+Use the [az dt role-assignment create](/cli/azure/dt/role-assignment#az-dt-role-assignment-create) command to assign the role (it must be run by a user with [sufficient permissions](how-to-set-up-instance-cli.md#prerequisites-permission-requirements) in the Azure subscription). The command requires you to pass in the name of the role you want to assign, the name of your Azure Digital Twins instance, and either the name or the object ID of the app registration.
+
+```azurecli-interactive
+az dt role-assignment create --dt-name <your-Azure-Digital-Twins-instance> --assignee "<name-or-ID-of-app-registration>" --role "<appropriate-role-name>"
+```
+
+The result of this command is outputted information about the role assignment that's been created for the app registration.
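+
+For example, a role assignment that grants the registration the Azure Digital Twins Data Owner role could look like the following sketch (the instance and registration names are placeholders):
+
+```azurecli-interactive
+# Hypothetical example: grant the app registration the Azure Digital Twins Data Owner role on the instance
+az dt role-assignment create --dt-name my-digital-twins-instance --assignee "my-adt-app" --role "Azure Digital Twins Data Owner"
+```
+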
+
+To further verify the role assignment, you can look for it in the Azure portal (switch to the [Portal instruction tab](?tabs=portal#verify-role-assignment)).
+++
+### Provide API permissions
+
+In this section, you'll grant your app baseline read/write permissions to the Azure Digital Twins APIs.
+
+If you're using the Azure CLI and set up your app registration [earlier](#create-the-registration) with a manifest file, this step is already done. If you're using the Azure portal to create your app registration, continue through the rest of this section to set up API permissions.
+
+# [Portal](#tab/portal)
+
+From the portal page for your app registration, select **API permissions** from the menu. On the following permissions page, select the **+ Add a permission** button.
++
+In the **Request API permissions** page that follows, switch to the **APIs my organization uses** tab and search for *Azure digital twins*. Select **Azure Digital Twins** from the search results to continue with assigning permissions for the Azure Digital Twins APIs.
++
+>[!NOTE]
+> If your subscription still has an existing Azure Digital Twins instance from the previous public preview of the service (before July 2020), you'll need to search for and select **Azure Smart Spaces Service** instead. This is an older name for the same set of APIs (notice that the **Application (client) ID** is the same as in the screenshot above), and your experience won't be changed beyond this step.
+> :::image type="content" source="media/how-to-create-app-registration/request-api-permissions-1-smart-spaces.png" alt-text="Screenshot of the 'Request API Permissions' page search result showing Azure Smart Spaces Service in the Azure portal.":::
+
+Next, you'll select which permissions to grant for these APIs. Expand the **Read (1)** permission and check the box that says **Read.Write** to grant this app registration reader and writer permissions.
++
+Select **Add permissions** when finished.
+
+#### Verify API permissions
+
+On the **API permissions** page, verify that there's now an entry for Azure Digital Twins reflecting **Read.Write** permissions:
++
+You can also verify the connection to Azure Digital Twins within the app registration's *manifest.json*, which was automatically updated with the Azure Digital Twins information when you added the API permissions.
+
+To do so, select **Manifest** from the menu to view the app registration's manifest code. Scroll to the bottom of the code window and look for the following fields and values under `requiredResourceAccess`:
+* `"resourceAppId": "0b07f429-9f4b-4714-9392-cc5e8e80c8b0"`
+* `"resourceAccess"` > `"id": "4589bd03-58cb-4e6c-b17f-b580e39652f8"`
+
+These values are shown in the screenshot below:
++
+If these values are missing, retry the steps in the [section for adding the API permission](#provide-api-permissions).
+
+# [CLI](#tab/cli)
+
+If you're using the CLI, the API permissions were set up earlier as part of the [Create the registration](#create-the-registration) step.
+
+You can verify them now using the Azure portal (switch to the [Portal instruction tab](?tabs=portal#verify-api-permissions)).
+++
+## Other possible steps for your organization
+
+It's possible that your organization requires more actions from subscription owners or administrators to finish setting up the app registration. The steps required may vary depending on your organization's specific settings. Choose a tab below to see this information tailored to your preferred interface.
+
+# [Portal](#tab/portal)
+
+Here are some common potential activities that an owner or administrator on the subscription may need to do. These and other operations can be performed from the [Azure AD App registrations](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RegisteredApps) page in the Azure portal.
+* Grant admin consent for the app registration. Your organization may have **Admin Consent Required** globally turned on in Azure AD for all app registrations within your subscription. If so, the owner/administrator will need to select this button for your company on the app registration's **API permissions** page for the app registration to be valid:
+
+ :::image type="content" source="media/how-to-create-app-registration/grant-admin-consent.png" alt-text="Screenshot of the Azure portal showing the 'Grant admin consent' button under API permissions.":::
+ - If consent was granted successfully, the entry for Azure Digital Twins should then show a **Status** value of **Granted for (your company)**
+
+ :::image type="content" source="media/how-to-create-app-registration/granted-admin-consent-done.png" alt-text="Screenshot of the Azure portal showing the admin consent granted for the company under API permissions.":::
+* Activate public client access
+* Set specific reply URLs for web and desktop access
+* Allow for implicit OAuth2 authentication flows
+
+# [CLI](#tab/cli)
+
+Here are some common potential activities that an owner or administrator on the subscription may need to do.
+* Grant admin consent for the app registration. Your organization may have **Admin Consent Required** globally turned on in Azure AD for all app registrations within your subscription. If so, the owner/administrator may need to grant additional delegated or application permissions.
+* Activate public client access by appending `--set publicClient=true` to a create or update command for the registration.
+* Set specific reply URLs for web and desktop access using the `--reply-urls` parameter. For more information on using this parameter with `az ad` commands, see the [az ad app documentation](/cli/azure/ad/app).
+* Allow for implicit OAuth2 authentication flows using the `--oauth2-allow-implicit-flow` parameter. For more information on using this parameter with `az ad` commands, see the [az ad app documentation](/cli/azure/ad/app).
+++
+For more information about app registration and its different setup options, see [Register an application with the Microsoft identity platform](/graph/auth-register-app-v2).
+
+## Next steps
+
+In this article, you set up an Azure AD app registration that can be used to authenticate client applications with the Azure Digital Twins APIs.
+
+Next, read about authentication mechanisms, including one that uses app registrations and others that don't:
+* [Write app authentication code](how-to-authenticate-client.md)
digital-twins How To Move Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-move-regions.md
The exact resources you need to edit depends on your scenario, but here are some
* Azure Maps. * IoT Hub Device Provisioning Service. * Personal or company apps outside of Azure, such as the client app created in [Code a client app](tutorial-code.md), that connect to the instance and call Azure Digital Twins APIs.
-* Azure AD app registrations don't need to be recreated. If you're using an [app registration](./how-to-create-app-registration-portal.md) to connect to the Azure Digital Twins APIs, you can reuse the same app registration with your new instance.
+* Azure AD app registrations don't need to be recreated. If you're using an [app registration](./how-to-create-app-registration.md) to connect to the Azure Digital Twins APIs, you can reuse the same app registration with your new instance.
After you finish this step, your new instance in the target region should be a copy of the original instance.
digital-twins Troubleshoot Error 403 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/troubleshoot-error-403.md
Most often, this error indicates that your Azure role-based access control (Azur
### Cause #2
-If you're using a client app to communicate with Azure Digital Twins that's authenticating with an [app registration](./how-to-create-app-registration-portal.md), this error may happen because your app registration doesn't have permissions set up for the Azure Digital Twins service.
+If you're using a client app to communicate with Azure Digital Twins that's authenticating with an [app registration](./how-to-create-app-registration.md), this error may happen because your app registration doesn't have permissions set up for the Azure Digital Twins service.
The app registration must have access permissions configured for the Azure Digital Twins APIs. Then, when your client app authenticates against the app registration, it will be granted the permissions that the app registration has configured.
Next, select **API permissions** from the menu bar to verify that this app regis
#### Fix issues
-If any of this appears differently than described, follow the instructions on how to set up an app registration in [Create an app registration with Azure Digital Twins access](./how-to-create-app-registration-portal.md).
+If any of this appears differently than described, follow the instructions on how to set up an app registration in [Create an app registration with Azure Digital Twins access](./how-to-create-app-registration.md).
## Next steps
expressroute Designing For Disaster Recovery With Expressroute Privatepeering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/designing-for-disaster-recovery-with-expressroute-privatepeering.md
Previously updated : 05/09/2022 Last updated : 05/25/2022
However, taking Murphy's popular adage--*if anything can go wrong, it will*--int
## Need for redundant connectivity solution
-There are possibilities and instances where an entire regional service (be it that of Microsoft, network service providers, customer, or other cloud service providers) gets degraded. The root cause for such regional wide service impact include natural calamity. That's why, for business continuity and mission critical applications it's important to plan for disaster recovery.
+There are possibilities and instances where an ExpressRoute peering location or an entire regional service (be it that of Microsoft, network service providers, customer, or other cloud service providers) gets degraded. The root causes for such region-wide service impact include natural calamities. That's why, for business continuity and mission-critical applications, it's important to plan for disaster recovery.
No matter what, whether you run your mission critical applications in an Azure region or on-premises or anywhere else, you can use another Azure region as your failover site. The following articles address disaster recovery from applications and frontend access perspectives:
- [Enterprise-scale disaster recovery][Enterprise DR]
- [SMB disaster recovery with Azure Site Recovery][SMB DR]
-If you rely on ExpressRoute connectivity between your on-premises network and Microsoft for mission critical operations, your disaster recovery plan should also include geo-redundant network connectivity.
+If you rely on ExpressRoute connectivity between your on-premises network and Microsoft for mission critical operations, you need to consider the following to plan for disaster recovery over ExpressRoute:
+
+- using geo-redundant ExpressRoute circuits
+- using diverse service provider networks for the different ExpressRoute circuits
+- designing each ExpressRoute circuit for [high availability][HA]
+- terminating the different ExpressRoute circuits in different locations on the customer network
+- using [Availability zone aware ExpressRoute Virtual Network Gateways](../vpn-gateway/about-zone-redundant-vnet-gateways.md)
## Challenges of using multiple ExpressRoute circuits
When you interconnect the same set of networks using more than one connection, y
However, if you load balance traffic across geo-redundant parallel paths, regardless of whether you have stateful entities or not, you would experience inconsistent network performance. These geo-redundant parallel paths can be through the same metro or different metro found on the [providers by location](expressroute-locations-providers.md#partners) page.
-### Same metro
+### Redundancy with ExpressRoute circuits in same metro
-[Many metros](expressroute-locations-providers.md#global-commercial-azure) have two ExpressRoute locations. An example would be *Amsterdam* and *Amsterdam2*. When designing redundancy, you could build two parallel paths to Azure with both locations in the same metro. The advantage of this design is when application failover happens, end-to-end latency between your on-premises applications and Microsoft stays approximately the same. However, if there is a natural disaster such as an earthquake, connectivity for both paths may no longer be available.
+[Many metros](expressroute-locations-providers.md#global-commercial-azure) have two ExpressRoute locations. An example would be *Amsterdam* and *Amsterdam2*. When designing redundancy, you could build two parallel paths to Azure with both locations in the same metro. You could do this with the same provider, or you could work with a different service provider to improve resiliency. Another advantage of this design is that when application failover happens, end-to-end latency between your on-premises applications and Microsoft stays approximately the same. However, if there's a natural disaster such as an earthquake, connectivity for both paths may no longer be available.
-### Different metros
+### Redundancy with ExpressRoute circuits in different metros
When using different metros for redundancy, you should select the secondary location in the same [geo-political region](expressroute-locations-providers.md#locations). To choose a location outside of the geo-political region, you'll need to use Premium SKU for both circuits in the parallel paths. The advantage of this configuration is the chances of a natural disaster causing an outage to both links are much lower but at the cost of increased latency end-to-end.
+>[!NOTE]
+>Enabling BFD on the ExpressRoute circuits will help with faster link failure detection between Microsoft Enterprise Edge (MSEE) devices and the Customer/Partner Edge routers. However, the overall failover and convergence to the redundant site may take up to 180 seconds under some failure conditions, and you may experience increased latency or performance degradation during this time.
+
In this article, let's discuss how to address challenges you may face when configuring geo-redundant paths.
## Small to medium on-premises network considerations
Let's consider the example network illustrated in the following diagram. In the
:::image type="content" source="./media/designing-for-disaster-recovery-with-expressroute-pvt/one-region.png" alt-text="Diagram of small to medium size on-premises network considerations.":::
-When you are designing ExpressRoute connectivity for disaster recovery, you need to consider:
-- using geo-redundant ExpressRoute circuits
-- using diverse service provider network(s) for different ExpressRoute circuit
-- designing each of the ExpressRoute circuit for [high availability][HA]
-- terminating the different ExpressRoute circuit in different location on the customer network
-
By default, if you advertise routes identically over all the ExpressRoute paths, Azure will load-balance on-premises bound traffic across all the ExpressRoute paths using Equal-cost multi-path (ECMP) routing. However, with the geo-redundant ExpressRoute circuits we need to take into consideration different network performances with different network paths (particularly for network latency). To get more consistent network performance during normal operation, you may want to prefer the ExpressRoute circuit that offers the minimal latency.
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations-providers.md
Previously updated : 01/24/2022 Last updated : 05/24/2022
The following table shows connectivity locations and the service providers for e
| **Rio de Janeiro** | [Equinix-RJ2](https://www.equinix.com/locations/americas-colocation/brazil-colocation/rio-de-janeiro-data-centers/rj2/) | 3 | Brazil Southeast | Supported | Equinix | | **San Antonio** | [CyrusOne SA1](https://cyrusone.com/locations/texas/san-antonio-texas/) | 1 | South Central US | Supported | CenturyLink Cloud Connect, Megaport, Zayo | | **Sao Paulo** | [Equinix SP2](https://www.equinix.com/locations/americas-colocation/brazil-colocation/sao-paulo-data-centers/sp2/) | 3 | Brazil South | Supported | Aryaka Networks, Ascenty Data Centers, British Telecom, Equinix, InterCloud, Level 3 Communications, Neutrona Networks, Orange, Tata Communications, Telefonica, UOLDIVEO |
-| **Sao Paulo2** | [TIVIT TSM](https://www.tivit.com/en/tivit/) | 3 | Brazil South | Supported | Ascenty Data Centers |
+| **Sao Paulo2** | [TIVIT TSM](https://www.tivit.com/en/tivit/) | 3 | Brazil South | Supported | Ascenty Data Centers, Tivit |
| **Seattle** | [Equinix SE2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/seattle-data-centers/se2/) | 1 | West US 2 | Supported | Aryaka Networks, CenturyLink Cloud Connect, Equinix, Level 3 Communications, Megaport, Telus, Zayo | | **Seoul** | [KINX Gasan IDC](https://www.kinx.net/?lang=en) | 2 | Korea Central | Supported | KINX, KT, LG CNS, LGUplus, Equinix, Sejong Telecom, SK Telecom | | **Seoul2** | [KT IDC](https://www.kt-idc.com/eng/introduce/sub1_4_10.jsp#tab) | 2 | Korea Central | n/a | KT |
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
Previously updated : 01/31/2022 Last updated : 05/24/2022
The following table shows locations by service provider. If you want to view ava
| **[Transtelco](https://transtelco.net/enterprise-services/)** |Supported |Supported | Dallas, Queretaro(Mexico)| | **[T-Mobile/Sprint](https://www.t-mobile.com/business/solutions/networking/cloud-networking )** |Supported |Supported | Chicago, Silicon Valley, Washington DC | | **[T-Systems](https://geschaeftskunden.telekom.de/vernetzung-digitalisierung/produkt/intraselect)** |Supported |Supported | Frankfurt |
+| **[Tivit](https://www.tivit.com/cloud-solutions/public-cloud/public-cloud-azure/)** |Supported |Supported | Sao Paulo2 |
| **[UOLDIVEO](https://www.uoldiveo.com.br/)** |Supported |Supported | Sao Paulo | | **[UIH](https://www.uih.co.th/en/network-solutions/global-network/cloud-direct-for-microsoft-azure-expressroute)** | Supported | Supported | Bangkok | | **[Verizon](https://enterprise.verizon.com/products/network/application-enablement/secure-cloud-interconnect/)** |Supported |Supported | Amsterdam, Chicago, Dallas, Hong Kong SAR, London, Mumbai, Silicon Valley, Singapore, Sydney, Tokyo, Toronto, Washington DC |
firewall Firewall Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-preview.md
Previously updated : 03/04/2022 Last updated : 05/25/2022
Unregister-AzProviderFeature -FeatureName AFWEnableNetworkRuleNameLogging -Provi
As more applications move to the cloud, the performance of the network elements can become a bottleneck. As the central piece of any network design, the firewall needs to support all the workloads. The Azure Firewall Premium performance boost feature allows more scalability for these deployments.
-This feature significantly increases the throughput of Azure Firewall Premium. For more details, see [Azure Firewall performance](firewall-performance.md).
+This feature significantly increases the throughput of Azure Firewall Premium. For more information, see [Azure Firewall performance](firewall-performance.md).
To enable the Azure Firewall Premium Performance boost feature, run the following commands in Azure PowerShell. Stop and start the firewall for the feature to take effect immediately. Otherwise, the firewall/s is updated with the feature within several days.
Run the following Azure PowerShell command to turn off this feature:
Unregister-AzProviderFeature -FeatureName AFWEnableAccelnet -ProviderNamespace Microsoft.Network ```
+### IDPS Private IP ranges (preview)
+
+In Azure Firewall Premium IDPS, private IP address ranges are used to identify if traffic is inbound, outbound, or internal (East-West). Each signature is applied on a specific traffic direction, as indicated in the signature rules table. By default, only ranges defined by IANA RFC 1918 are considered private IP addresses. So traffic sent from a private IP address range to a private IP address range is considered internal. To modify your private IP addresses, you can now easily edit, remove, or add ranges as needed.
++
+### Structured firewall logs (preview)
+
+Today, the following diagnostic log categories are available for Azure Firewall:
+- Application rule log
+- Network rule log
+- DNS proxy log
+
+These log categories use [Azure diagnostics mode](../azure-monitor/essentials/resource-logs.md#azure-diagnostics-mode). In this mode, all data from any diagnostic setting will be collected in the [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) table.
+
+With this new feature, you can choose to use [Resource Specific Tables](../azure-monitor/essentials/resource-logs.md#resource-specific) instead of the existing [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) table. If both sets of logs are required, at least two diagnostic settings need to be created per firewall.
+
+In **Resource specific** mode, individual tables in the selected workspace are created for each category selected in the diagnostic setting. This method is recommended since it:
+- makes it much easier to work with the data in log queries
+- makes it easier to discover schemas and their structure
+- improves performance across both ingestion latency and query times
+- allows you to grant Azure RBAC rights on a specific table
+
+New resource-specific tables are now available in diagnostic settings, allowing you to use the following newly added categories:
+
+- [Network rule log](/azure/azure-monitor/reference/tables/azfwnetworkrule) - Contains all Network Rule log data. Each match between data plane and network rule creates a log entry with the data plane packet and the matched rule's attributes.
+- [NAT rule log](/azure/azure-monitor/reference/tables/azfwnatrule) - Contains all DNAT (Destination Network Address Translation) events log data. Each match between data plane and DNAT rule creates a log entry with the data plane packet and the matched rule's attributes.
+- [Application rule log](/azure/azure-monitor/reference/tables/azfwapplicationrule) - Contains all Application rule log data. Each match between data plane and Application rule creates a log entry with the data plane packet and the matched rule's attributes.
+- [Threat Intelligence log](/azure/azure-monitor/reference/tables/azfwthreatintel) - Contains all Threat Intelligence events.
+- [IDPS log](/azure/azure-monitor/reference/tables/azfwidpssignature) - Contains all data plane packets that were matched with one or more IDPS signatures.
+- [DNS proxy log](/azure/azure-monitor/reference/tables/azfwdnsquery) - Contains all DNS Proxy events log data.
+- [Internal FQDN resolve failure log](/azure/azure-monitor/reference/tables/azfwinternalfqdnresolutionfailure) - Contains all internal Firewall FQDN resolution requests that resulted in failure.
+- [Application rule aggregation log](/azure/azure-monitor/reference/tables/azfwapplicationruleaggregation) - Contains aggregated Application rule log data for Policy Analytics.
+- [Network rule aggregation log](/azure/azure-monitor/reference/tables/azfwnetworkruleaggregation) - Contains aggregated Network rule log data for Policy Analytics.
+- [NAT rule aggregation log](/azure/azure-monitor/reference/tables/azfwnatruleaggregation) - Contains aggregated NAT rule log data for Policy Analytics.
+
+By default, the new resource specific tables are disabled. Open a support ticket to enable the functionality in your environment.
+
+In addition, when setting up your Log Analytics workspace, you must select whether you want to work with the AzureDiagnostics table (default) or with resource-specific tables.
+
+Additional KQL log queries were added to query structured firewall logs.
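As a rough sketch of how the resource-specific tables can be queried programmatically, the following example runs a KQL query against the network rule table by using the `azure-monitor-query` Python client library. The workspace ID, the table and column names in the query, and the time range are placeholders and assumptions rather than values taken from this article; adapt them to your environment once the resource-specific tables are enabled.

```python
# Minimal sketch (not from this article): query a resource-specific firewall log
# table in a Log Analytics workspace with the azure-monitor-query client library.
# The workspace ID, table name casing, and column names below are assumptions.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

workspace_id = "<LOG_ANALYTICS_WORKSPACE_ID>"  # placeholder

client = LogsQueryClient(DefaultAzureCredential())

# Example KQL against the network rule table referenced above.
query = """
AZFWNetworkRule
| where Action == "Deny"
| summarize DeniedCount = count() by SourceIp, DestinationIp, DestinationPort
| top 10 by DeniedCount
"""

response = client.query_workspace(
    workspace_id=workspace_id,
    query=query,
    timespan=timedelta(days=1),
)

# Print the raw rows; each table in the response corresponds to a result set.
for table in response.tables:
    for row in table.rows:
        print(row)
```

The same pattern applies to the other tables listed above; only the KQL text changes.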
+> [!NOTE]
+> Existing Workbooks and any Sentinel integration will be adjusted to support the new structured logs when **Resource Specific** mode is selected.
+ ## Next steps To learn more about Azure Firewall, see [What is Azure Firewall?](overview.md).
firewall Premium Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-features.md
Previously updated : 03/30/2022 Last updated : 05/25/2022
To learn more about Azure Firewall Premium Intermediate CA certificate requireme
A network intrusion detection and prevention system (IDPS) allows you to monitor your network for malicious activity, log information about this activity, report it, and optionally attempt to block it.
-Azure Firewall Premium provides signature-based IDPS to allow rapid detection of attacks by looking for specific patterns, such as byte sequences in network traffic, or known malicious instruction sequences used by malware. The IDPS signatures are applicable for both application and network level traffic (Layers 4-7), they're fully managed, and continuously updated. IDPS can be applied to inbound, spoke-to-spoke (East-West), and outbound traffic. Spoke-to-spoke (East-West) includes traffic that goes from/to an on-premises network.
+Azure Firewall Premium provides signature-based IDPS to allow rapid detection of attacks by looking for specific patterns, such as byte sequences in network traffic, or known malicious instruction sequences used by malware. The IDPS signatures are applicable for both application and network level traffic (Layers 4-7); they're fully managed and continuously updated. IDPS can be applied to inbound, spoke-to-spoke (East-West), and outbound traffic. Spoke-to-spoke (East-West) includes traffic that goes from/to an on-premises network. You can configure your IDPS private IP address ranges using the **Private IP ranges** preview feature. For more information, see [Azure Firewall preview features](firewall-preview.md#idps-private-ip-ranges-preview).
The Azure Firewall signatures/rulesets include: - An emphasis on fingerprinting actual malware, Command and Control, exploit kits, and in the wild malicious activity missed by traditional prevention methods.
frontdoor Concept Endpoint Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/concept-endpoint-manager.md
Last updated 02/18/2021
-# What is Azure Front Door Standard/Premium (Preview) Endpoint Manager?
+# What is Azure Front Door Standard/Premium Endpoint Manager?
> [!NOTE]
-> * This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [Azure Front Door Docs](../front-door-overview.md).
+> * This documentation is for Azure Front Door Standard/Premium. Looking for information on Azure Front Door? View [Azure Front Door Docs](../front-door-overview.md).
Endpoint Manager provides an overview of endpoints you've configured for your Azure Front Door. An endpoint is a logical grouping of domains and their associated configurations. Endpoint Manager helps you manage your collection of endpoints for CRUD (create, read, update, and delete) operations. You can manage the following elements for your endpoints through Endpoint
Endpoint Manager provides an overview of endpoints you've configured for your Az
Endpoint Manager lists how many instances of each element are created within an endpoint. The association status for each element will also be displayed. For example, you may create multiple domains and origin groups, and assign the association between them with different routes.
-> [!IMPORTANT]
-> * Azure Front Door Standard/Premium (Preview) is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [**Supplemental Terms of Use for Microsoft Azure Previews**](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- ## Linked view With the linked view within Endpoint Manager, you could easily identify the association between your Azure Front Door elements, such as:
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/overview.md
Title: Organize your resources with management groups - Azure Governance description: Learn about the management groups, how their permissions work, and how to use them. Previously updated : 05/12/2022 Last updated : 05/25/2022
management group.
When looking to query on management groups outside the Azure portal, the target scope for management groups looks like **"/providers/Microsoft.Management/managementGroups/{_management-group-id_}"**.
+> [!NOTE]
+> Using the Azure Resource Manager REST API, you can enable diagnostic settings on a management group to send related Azure Activity log entries to a Log Analytics workspace, Azure Storage, or Azure Event Hub. For more information, see [Management Group Diagnostic Settings - Create Or Update](https://docs.microsoft.com/rest/api/monitor/management-group-diagnostic-settings/create-or-update).
+ ## Next steps To learn more about management groups, see:
hdinsight Tutorial Cli Rest Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/tutorial-cli-rest-proxy.md
If you don't have an Azure subscription, create a [free account](https://azure.m
|storageAccount|Replace STORAGEACCOUNTNAME with a name for your new storage account.| |httpPassword|Replace PASSWORD with a password for the cluster login, **admin**.| |sshPassword|Replace PASSWORD with a password for the secure shell username, **sshuser**.|
- |securityGroupName|Replace SECURITYGROUPNAME with the client AAD security group name for Kafka Rest Proxy. The variable will be passed to the `--kafka-client-group-name` parameter for `az-hdinsight-create`.|
- |securityGroupID|Replace SECURITYGROUPID with the client AAD security group ID for Kafka Rest Proxy. The variable will be passed to the `--kafka-client-group-id` parameter for `az-hdinsight-create`.|
+ |securityGroupName|Replace SECURITYGROUPNAME with the client AAD security group name for Kafka REST Proxy. The variable will be passed to the `--kafka-client-group-name` parameter for `az-hdinsight-create`.|
+ |securityGroupID|Replace SECURITYGROUPID with the client AAD security group ID for Kafka REST Proxy. The variable will be passed to the `--kafka-client-group-id` parameter for `az-hdinsight-create`.|
|storageContainer|Storage container the cluster will use, leave as-is for this tutorial. This variable will be set with the name of the cluster.| |workernodeCount|Number of worker nodes in the cluster, leave as-is for this tutorial. To guarantee high availability, Kafka requires a minimum of 3 worker nodes| |clusterType|Type of HDInsight cluster, leave as-is for this tutorial.|
- |clusterVersion|HDInsight cluster version, leave as-is for this tutorial. Kafka Rest Proxy requires a minimum cluster version of 4.0.|
- |componentVersion|Kafka version, leave as-is for this tutorial. Kafka Rest Proxy requires a minimum component version of 2.1.|
+ |clusterVersion|HDInsight cluster version, leave as-is for this tutorial. Kafka REST Proxy requires a minimum cluster version of 4.0.|
+ |componentVersion|Kafka version, leave as-is for this tutorial. Kafka REST Proxy requires a minimum component version of 2.1.|
Update the variables with desired values. Then enter the CLI commands to set the environment variables.
If you don't have an Azure subscription, create a [free account](https://azure.m
|Parameter | Description| ||| |--kafka-management-node-size|The size of the node. This tutorial uses the value **Standard_D4_v2**.|
- |--kafka-client-group-id|The client AAD security group ID for Kafka Rest Proxy. The value is passed from the variable **$securityGroupID**.|
- |--kafka-client-group-name|The client AAD security group name for Kafka Rest Proxy. The value is passed from the variable **$securityGroupName**.|
+ |--kafka-client-group-id|The client AAD security group ID for Kafka REST Proxy. The value is passed from the variable **$securityGroupID**.|
+ |--kafka-client-group-name|The client AAD security group name for Kafka REST Proxy. The value is passed from the variable **$securityGroupName**.|
|--version|The HDInsight cluster version must be at least 4.0. The value is passed from the variable **$clusterVersion**.| |--component-version|The Kafka version must be at least 2.1. The value is passed from the variable **$componentVersion**.|
hdinsight Apache Spark Zeppelin Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-zeppelin-notebook.md
Privileged domain users can use the `Shiro.ini` file to control access to the In
/api/interpreter/** = authc, roles[adminGroupName] ```
+### Example shiro.ini for multiple domain groups
+
+ ```
+ [main]
+ anyofrolesuser = org.apache.zeppelin.utils.AnyOfRolesUserAuthorizationFilter
+
+ [roles]
+ group1 = *
+ group2 = *
+ group3 = *
+
+ [urls]
+ /api/interpreter/** = authc, anyofrolesuser[group1, group2, group3]
+ ```
+
## Livy session management The first code paragraph in your Zeppelin notebook creates a new Livy session in your cluster. This session is shared across all Zeppelin notebooks that you later create. If the Livy session is killed for any reason, jobs won't run from the Zeppelin notebook.
healthcare-apis Davinci Drug Formulary Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/davinci-drug-formulary-tutorial.md
--++ Last updated 02/15/2022
import-export Storage Import Export Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-requirements.md
Previously updated : 03/14/2022 Last updated : 05/19/2022 # Azure Import/Export system requirements
The following list of disks is supported for use with the Import/Export service.
<sup>*</sup>An HDD must have 512-byte sectors; 4096-byte (4K) sectors are not supported.
+If you use an Advanced Format (512e) drive and connect it to the system using a USB-to-SATA hard drive adapter, make sure that the system sees the logical sector size as 512. If the drive reports a 4096-byte logical sector size, choose one of these options:
+
+- Use a direct SATA connection to prepare the drive.
+- Use a USB-to-SATA hard drive adapter that reports a logical sector size of 512 for Advanced Format drives.
+
+
+To check the logical sector size that the disk reports, run the following commands:
+1. Run PowerShell as Administrator.
+1. To identify the disk drive number, run the `Get-Disk` cmdlet. Make a note of the number of the USB-connected drive.
+1. To see the logical sector size on this disk, run the following command:
+
+ `Get-Disk -number <Enter Disk Number> | fl *`
+
+### Unsupported disks
+ The following disk types are not supported: - USBs. - External HDD with built-in USB adaptor. - Disks that are inside the casing of an external HDD.
+### Using multiple disks
+ A single import/export job can have: - A maximum of 10 HDD/SSDs.
iot-edge Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/support.md
The following table lists the components included in each release starting with
The following table lists the components included in each release up to the 1.1 LTS release. The components listed in this table can be installed or updated individually, and are backwards compatible with older versions.
-IoT Edge 1.1 is the first long-term support (LTS) release channel. This version introduced no new features, but will receive security updates and fixes to regressions. IoT Edge 1.1 LTS uses .NET Core 3.1, and will be supported until December 3, 2022 to match the [.NET Core and .NET 5 release lifecycle](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
+IoT Edge 1.1 is the first long-term support (LTS) release channel. This version introduced no new features, but will receive security updates and fixes to regressions. IoT Edge 1.1 LTS uses .NET Core 3.1, and will be supported until December 13, 2022 to match the [.NET Core and .NET 5 release lifecycle](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
| Release | iotedge | edgeHub<br>edgeAgent | libiothsm | moby | |--|--|--|--|--|
IoT Edge 1.1 is the first long-term support (LTS) release channel. This version
| **1.0.6** | 1.0.6<br>1.0.6.1 | 1.0.6<br>1.0.6.1 | 1.0.6<br>1.0.6.1 | | | **1.0.5** | 1.0.5 | 1.0.5 | 1.0.5 | 3.0.2 |
->[!IMPORTANT]
->With the release of a long-term support channel, we recommend that all current customers running 1.0.x upgrade their devices to 1.1.x to receive ongoing support.
+> [!IMPORTANT]
+> * Every Microsoft product has a lifecycle. The lifecycle begins when a product is released and ends when it's no longer supported. Knowing key dates in this lifecycle helps you make informed decisions about when to upgrade or make other changes to your software. IoT Edge is governed by Microsoft's [Modern Lifecycle Policy](/lifecycle/policies/modern).
+> * With the release of a long-term support channel, we recommend that all current customers running 1.0.x upgrade their devices to 1.1.x to receive ongoing support.
IoT Edge uses the Microsoft.Azure.Devices.Client SDK. For more information, see the [Azure IoT C# SDK GitHub repo](https://github.com/Azure/azure-iot-sdk-csharp) or the [Azure SDK for .NET reference content](/dotnet/api/overview/azure/iot/client). The following list shows the version of the client SDK that each release is tested against:
iot-edge Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/version-history.md
Azure IoT Edge is a product built from the open-source IoT Edge project hosted on GitHub. All new releases are made available in the [Azure IoT Edge project](https://github.com/Azure/azure-iotedge). Contributions and bug reports can be made on the [open-source IoT Edge project](https://github.com/Azure/iotedge).
+Azure IoT Edge is governed by Microsoft's [Modern Lifecycle Policy](/lifecycle/policies/modern).
+ ## Documented versions The IoT Edge documentation on this site is available for two different versions of the product, so that you can choose the content that applies to your IoT Edge environment. Currently, the two supported versions are:
key-vault Tutorial Net Create Vault Azure Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/tutorial-net-create-vault-azure-web-app.md
dotnet add package Azure.Security.KeyVault.Secrets
#### Update the code
-Find and open the Startup.cs file in your akvwebapp project.
+Find and open the Startup.cs file (for .NET 5.0 or earlier) or the Program.cs file (for .NET 6.0) in your akvwebapp project.
Add these lines to the header:
using Azure.Security.KeyVault.Secrets;
using Azure.Core; ```
-Add the following lines before the `app.UseEndpoints` call, updating the URI to reflect the `vaultUri` of your key vault. This code uses [DefaultAzureCredential()](/dotnet/api/azure.identity.defaultazurecredential) to authenticate to Key Vault, which uses a token from managed identity to authenticate. For more information about authenticating to Key Vault, see the [Developer's Guide](./developers-guide.md#authenticate-to-key-vault-in-code). The code also uses exponential backoff for retries in case Key Vault is being throttled. For more information about Key Vault transaction limits, see [Azure Key Vault throttling guidance](./overview-throttling.md).
+Add the following lines before the `app.UseEndpoints` call (.NET 5.0 or earlier) or the `app.MapGet` call (.NET 6.0), updating the URI to reflect the `vaultUri` of your key vault. This code uses [DefaultAzureCredential()](/dotnet/api/azure.identity.defaultazurecredential) to authenticate to Key Vault, which uses a token from managed identity to authenticate. For more information about authenticating to Key Vault, see the [Developer's Guide](./developers-guide.md#authenticate-to-key-vault-in-code). The code also uses exponential backoff for retries in case Key Vault is being throttled. For more information about Key Vault transaction limits, see [Azure Key Vault throttling guidance](./overview-throttling.md).
```csharp SecretClientOptions options = new SecretClientOptions()
KeyVaultSecret secret = client.GetSecret("<mySecret>");
string secretValue = secret.Value; ```
+##### .NET 5.0 or earlier
+ Update the line `await context.Response.WriteAsync("Hello World!");` to look like this line: ```csharp await context.Response.WriteAsync(secretValue); ```
+##### .NET 6.0
+
+Update the line `app.MapGet("/", () => "Hello World!");` to look like this line:
+
+```csharp
+app.MapGet("/", () => secretValue);
+```
++ Be sure to save your changes before continuing to the next step. #### Redeploy your web app
machine-learning How To Deploy Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-local.md
model = Model.register(model_path="sklearn_regression_model.pkl",
You can then find your newly registered model on the Azure Machine Learning **Model** tab: For more information on uploading and updating models and environments, see [Register model and deploy locally with advanced usages](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/deploy-to-local/register-model-deploy-local-advanced.ipynb).
machine-learning How To Deploy Managed Online Endpoint Sdk V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-managed-online-endpoint-sdk-v2.md
+
+ Title: Deploy machine learning models to managed online endpoint using Python SDK v2 (preview).
+
+description: Learn to deploy your machine learning model to Azure using Python SDK v2 (preview).
++++++ Last updated : 05/25/2022++++
+# Deploy and score a machine learning model with managed online endpoint using Python SDK v2 (preview)
++
+> [!IMPORTANT]
+> SDK v2 is currently in public preview.
+> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+In this article, you learn how to deploy your machine learning model to a managed online endpoint and get predictions. You'll begin by deploying a model on your local machine to debug any errors, and then you'll deploy and test it in Azure.
+
+## Prerequisites
+
+* If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
+* The [Azure Machine Learning SDK v2 for Python](/python/api/overview/azure/ml/installv2).
+* You must have an Azure resource group, and you (or the service principal you use) must have Contributor access to it.
+* You must have an Azure Machine Learning workspace.
+* To deploy locally, you must install [Docker Engine](https://docs.docker.com/engine/) on your local computer. We highly recommend this option, so it's easier to debug issues.
+
+### Clone examples repository
+
+To run the examples, first clone the examples repository and change into the `sdk` directory:
+
+```bash
+git clone --depth 1 https://github.com/Azure/azureml-examples
+cd azureml-examples/sdk
+```
+
+> [!TIP]
+> Use `--depth 1` to clone only the latest commit to the repository, which reduces time to complete the operation.
+
+## Connect to Azure Machine Learning workspace
+
+The [workspace](concept-workspace.md) is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section, we'll connect to the workspace in which you'll perform deployment tasks.
+
+1. Import the required libraries:
+
+ ```python
+ # import required libraries
+ from azure.ai.ml import MLClient
+ from azure.ai.ml.entities import (
+ ManagedOnlineEndpoint,
+ ManagedOnlineDeployment,
+ Model,
+ Environment,
+ CodeConfiguration,
+ )
+ from azure.identity import DefaultAzureCredential
+ ```
+
+1. Configure workspace details and get a handle to the workspace:
+
+ To connect to a workspace, we need identifier parameters - a subscription, resource group and workspace name. We'll use these details in the `MLClient` from `azure.ai.ml` to get a handle to the required Azure Machine Learning workspace. This example uses the [default Azure authentication](/python/api/azure-identity/azure.identity.defaultazurecredential).
+
+ ```python
+ # enter details of your AML workspace
+ subscription_id = "<SUBSCRIPTION_ID>"
+ resource_group = "<RESOURCE_GROUP>"
+ workspace = "<AML_WORKSPACE_NAME>"
+ ```
+
+ ```python
+ # get a handle to the workspace
+ ml_client = MLClient(
+ DefaultAzureCredential(), subscription_id, resource_group, workspace
+ )
+ ```
+
+## Create local endpoint and deployment
+
+> [!NOTE]
+> To deploy locally, [Docker Engine](https://docs.docker.com/engine/install/) must be installed.
+> Docker Engine must be running. Docker Engine typically starts when the computer starts. If it doesn't, you can [troubleshoot Docker Engine](https://docs.docker.com/config/daemon/#start-the-daemon-manually).
+
+1. Create local endpoint:
+
+ The goal of a local endpoint deployment is to validate and debug your code and configuration before you deploy to Azure. Local deployment has the following limitations:
+
+ * Local endpoints don't support traffic rules, authentication, or probe settings.
+ * Local endpoints support only one deployment per endpoint.
+
+ ```python
+ # Creating a local endpoint
+ import datetime
+
+ local_endpoint_name = "local-" + datetime.datetime.now().strftime("%m%d%H%M%f")
+
+ # create an online endpoint
+ endpoint = ManagedOnlineEndpoint(
+ name=local_endpoint_name, description="this is a sample local endpoint"
+ )
+ ```
+
+ ```python
+ ml_client.online_endpoints.begin_create_or_update(endpoint, local=True)
+ ```
+
+1. Create local deployment:
+
+ The example contains all the files needed to deploy a model on an online endpoint. To deploy a model, you must have:
+
+ * Model files (or the name and version of a model that's already registered in your workspace). In the example, we have a scikit-learn model that does regression.
+ * The code that's required to score the model. In this case, we have a score.py file.
+ * An environment in which your model runs. As you'll see, the environment might be a Docker image with Conda dependencies, or it might be a Dockerfile.
+ * Settings to specify the instance type and scaling capacity.
+
+ **Key aspects of deployment**
+ * `name` - Name of the deployment.
+ * `endpoint_name` - Name of the endpoint to create the deployment under.
+ * `model` - The model to use for the deployment. This value can be either a reference to an existing versioned model in the workspace or an inline model specification.
+ * `environment` - The environment to use for the deployment. This value can be either a reference to an existing versioned environment in the workspace or an inline environment specification.
+ * `code_configuration` - the configuration for the source code and scoring script
+ * `path`- Path to the source code directory for scoring the model
+ * `scoring_script` - Relative path to the scoring file in the source code directory
+ * `instance_type` - The VM size to use for the deployment. For the list of supported sizes, see [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md).
+ * `instance_count` - The number of instances to use for the deployment
+
+ ```python
+ model = Model(path="../model-1/model/sklearn_regression_model.pkl")
+ env = Environment(
+ conda_file="../model-1/environment/conda.yml",
+ image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1",
+ )
+
+ blue_deployment = ManagedOnlineDeployment(
+ name="blue",
+ endpoint_name=local_endpoint_name,
+ model=model,
+ environment=env,
+ code_configuration=CodeConfiguration(
+ code="../model-1/onlinescoring", scoring_script="score.py"
+ ),
+ instance_type="Standard_F2s_v2",
+ instance_count=1,
+ )
+ ```
+
+ ```python
+ ml_client.online_deployments.begin_create_or_update(
+ deployment=blue_deployment, local=True
+ )
+ ```
+
+## Verify the local deployment succeeded
+
+1. Check the status to see whether the model was deployed without error:
+
+ ```python
+ ml_client.online_endpoints.get(name=local_endpoint_name, local=True)
+ ```
+
+1. Get logs:
+
+ ```python
+ ml_client.online_deployments.get_logs(
+ name="blue", endpoint_name=local_endpoint_name, local=True, lines=50
+ )
+ ```
+
+## Invoke the local endpoint
+
+Invoke the endpoint to score the model by using the `invoke` convenience command, passing query parameters that are stored in a JSON file.
+
+```python
+ml_client.online_endpoints.invoke(
+ endpoint_name=local_endpoint_name,
+ request_file="../model-1/sample-request.json",
+ local=True,
+)
+```
+
+## Deploy your online endpoint to Azure
+
+Next, deploy your online endpoint to Azure.
+
+1. Configure online endpoint:
+
+ > [!TIP]
+ > * `endpoint_name`: The name of the endpoint. It must be unique in the Azure region. For more information on the naming rules, see [managed online endpoint limits](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints).
+ > * `auth_mode` : Use `key` for key-based authentication. Use `aml_token` for Azure Machine Learning token-based authentication. A `key` doesn't expire, but `aml_token` does expire. For more information on authenticating, see [Authenticate to an online endpoint](how-to-authenticate-online-endpoint.md).
+ > * Optionally, you can add description, tags to your endpoint.
+
+ ```python
+ # Creating a unique endpoint name with current datetime to avoid conflicts
+ import datetime
+
+ online_endpoint_name = "endpoint-" + datetime.datetime.now().strftime("%m%d%H%M%f")
+
+ # create an online endpoint
+ endpoint = ManagedOnlineEndpoint(
+ name=online_endpoint_name,
+ description="this is a sample online endpoint",
+ auth_mode="key",
+ tags={"foo": "bar"},
+ )
+ ```
+
+1. Create the endpoint:
+
+ Using the `MLClient` created earlier, we'll now create the Endpoint in the workspace. This command will start the endpoint creation and return a confirmation response while the endpoint creation continues.
+
+ ```python
+ ml_client.begin_create_or_update(endpoint)
+ ```
+
+1. Configure online deployment:
+
+ A deployment is a set of resources required for hosting the model that does the actual inferencing. We'll create a deployment for our endpoint using the `ManagedOnlineDeployment` class.
+
+ ```python
+ model = Model(path="../model-1/model/sklearn_regression_model.pkl")
+ env = Environment(
+ conda_file="../model-1/environment/conda.yml",
+ image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1",
+ )
+
+ blue_deployment = ManagedOnlineDeployment(
+ name="blue",
+ endpoint_name=online_endpoint_name,
+ model=model,
+ environment=env,
+ code_configuration=CodeConfiguration(
+ code="../model-1/onlinescoring", scoring_script="score.py"
+ ),
+ instance_type="Standard_F2s_v2",
+ instance_count=1,
+ )
+ ```
+
+1. Create the deployment:
+
+ Using the `MLClient` created earlier, we'll now create the deployment in the workspace. This command will start the deployment creation and return a confirmation response while the deployment creation continues.
+
+ ```python
+ ml_client.begin_create_or_update(blue_deployment)
+ ```
+
+ ```python
+ # blue deployment takes 100 traffic
+ endpoint.traffic = {"blue": 100}
+ ml_client.begin_create_or_update(endpoint)
+ ```
+
+## Test the endpoint with sample data
+
+Using the `MLClient` created earlier, we'll get a handle to the endpoint. The endpoint can be invoked using the `invoke` command with the following parameters:
+
+* `endpoint_name` - Name of the endpoint
+* `request_file` - File with request data
+* `deployment_name` - Name of the specific deployment to test in an endpoint
+
+We'll send a sample request using a [json](https://github.com/Azure/azureml-examples/blob/main/sdk/endpoints/online/model-1/sample-request.json) file.
+
+```python
+# test the blue deployment with some sample data
+ml_client.online_endpoints.invoke(
+ endpoint_name=online_endpoint_name,
+ deployment_name="blue",
+ request_file="../model-1/sample-request.json",
+)
+```
+
+## Managing endpoints and deployments
+
+1. Get details of the endpoint:
+
+ ```python
+ # Get the details for online endpoint
+ endpoint = ml_client.online_endpoints.get(name=online_endpoint_name)
+
+ # existing traffic details
+ print(endpoint.traffic)
+
+ # Get the scoring URI
+ print(endpoint.scoring_uri)
+ ```
+
+1. Get the logs for the new deployment:
+
+    Get the logs for the deployment and verify as needed:
+
+ ```python
+ ml_client.online_deployments.get_logs(
+ name="blue", endpoint_name=online_endpoint_name, lines=50
+ )
+ ```
+
+## Delete the endpoint
+
+```python
+ml_client.online_endpoints.begin_delete(name=online_endpoint_name)
+```
+
+## Next steps
+
+Try these next steps to learn how to use the Azure Machine Learning SDK (v2) for Python:
+* [Managed online endpoint safe rollout](how-to-safely-rollout-managed-endpoints-sdk-v2.md)
+* Explore online endpoint samples - [https://github.com/Azure/azureml-examples/tree/main/sdk/endpoints](https://github.com/Azure/azureml-examples/tree/main/sdk/endpoints)
machine-learning How To Log Pipelines Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-log-pipelines-application-insights.md
Some of the queries below use 'customDimensions.Level'. These severity levels co
## Next Steps
-Once you have logs in your Application Insights instance, they can be used to set [Azure Monitor alerts](../azure-monitor/alerts/alerts-overview.md#what-you-can-alert-on) based on query results.
+Once you have logs in your Application Insights instance, they can be used to set [Azure Monitor alerts](../azure-monitor/alerts/alerts-overview.md) based on query results.
You can also add results from queries to an [Azure Dashboard](../azure-monitor/app/tutorial-app-dashboards.md#add-logs-query) for additional insights.
machine-learning How To Safely Rollout Managed Endpoints Sdk V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-safely-rollout-managed-endpoints-sdk-v2.md
+
+ Title: Safe rollout for managed online endpoints using Python SDK v2 (preview).
+
+description: Safe rollout for online endpoints using Python SDK v2 (preview).
++++++ Last updated : 05/25/2022++++
+# Safe rollout for managed online endpoints using Python SDK v2 (preview)
++
+> [!IMPORTANT]
+> SDK v2 is currently in public preview.
+> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+In this article, you learn how to deploy a new version of the model without causing any disruption. You'll use a blue-green deployment, also known as a safe rollout: an approach in which a new version of a web service is introduced to production by rolling out the change to a small subset of users/requests before rolling it out completely. This article assumes you're using online endpoints; for more information, see [Azure Machine Learning endpoints](concept-endpoints.md).
+
+In this article, you'll learn to:
+
+* Deploy a new online endpoint called "blue" that serves version 1 of the model.
+* Scale this deployment so that it can handle more requests.
+* Deploy version 2 of the model to a deployment called "green" that accepts no live traffic.
+* Test the green deployment in isolation.
+* Send 10% of live traffic to the green deployment.
+* Fully cut-over all live traffic to the green deployment.
+* Delete the now-unused v1 blue deployment.
+
+## Prerequisites
+
+* If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
+* The [Azure Machine Learning SDK v2 for Python](/python/api/overview/azure/ml/installv2).
+* You must have an Azure resource group, and you (or the service principal you use) must have Contributor access to it.
+* You must have an Azure Machine Learning workspace.
+* To deploy locally, you must install [Docker Engine](https://docs.docker.com/engine/) on your local computer. We highly recommend this option, so it's easier to debug issues.
+
+### Clone examples repository
+
+To run the examples, first clone the examples repository and change into the `sdk` directory:
+
+```bash
+git clone --depth 1 https://github.com/Azure/azureml-examples
+cd azureml-examples/sdk
+```
+
+> [!TIP]
+> Use `--depth 1` to clone only the latest commit to the repository, which reduces time to complete the operation.
+
+## Connect to Azure Machine Learning workspace
+
+The [workspace](concept-workspace.md) is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section, we'll connect to the workspace in which you'll perform deployment tasks.
+
+1. Import the required libraries:
+
+ ```python
+ # import required libraries
+ from azure.ai.ml import MLClient
+ from azure.ai.ml.entities import (
+ ManagedOnlineEndpoint,
+ ManagedOnlineDeployment,
+ Model,
+ Environment,
+ CodeConfiguration,
+ )
+ from azure.identity import DefaultAzureCredential
+ ```
+
+1. Configure workspace details and get a handle to the workspace:
+
+ To connect to a workspace, we need identifier parameters - a subscription, resource group and workspace name. We'll use these details in the `MLClient` from `azure.ai.ml` to get a handle to the required Azure Machine Learning workspace. This example uses the [default Azure authentication](/python/api/azure-identity/azure.identity.defaultazurecredential).
+
+ ```python
+ # enter details of your AML workspace
+ subscription_id = "<SUBSCRIPTION_ID>"
+ resource_group = "<RESOURCE_GROUP>"
+ workspace = "<AML_WORKSPACE_NAME>"
+ ```
+
+ ```python
+ # get a handle to the workspace
+ ml_client = MLClient(
+ DefaultAzureCredential(), subscription_id, resource_group, workspace
+ )
+ ```
+
+## Create online endpoint
+
+Online endpoints are endpoints that are used for online (real-time) inferencing. Online endpoints contain deployments that are ready to receive data from clients and can send responses back in real time.
+
+To create an online endpoint, we'll use `ManagedOnlineEndpoint`. This class allows the user to configure the following key aspects:
+
+* `name` - Name of the endpoint. Needs to be unique at the Azure region level
+* `auth_mode` - The authentication method for the endpoint. Key-based authentication and Azure ML token-based authentication are supported. Key-based authentication doesn't expire but Azure ML token-based authentication does. Possible values are `key` or `aml_token`.
+* `identity`- The managed identity configuration for accessing Azure resources for endpoint provisioning and inference.
+ * `type`- The type of managed identity. Azure Machine Learning supports `system_assigned` or `user_assigned` identity.
+ * `user_assigned_identities` - List (array) of fully qualified resource IDs of the user-assigned identities. This property is required if `identity.type` is user_assigned.
+* `description`- Description of the endpoint.
+
+1. Configure the endpoint:
+
+ ```python
+ # Creating a unique endpoint name with current datetime to avoid conflicts
+ import datetime
+
+ online_endpoint_name = "endpoint-" + datetime.datetime.now().strftime("%m%d%H%M%f")
+
+ # create an online endpoint
+ endpoint = ManagedOnlineEndpoint(
+ name=online_endpoint_name,
+ description="this is a sample online endpoint",
+ auth_mode="key",
+ tags={"foo": "bar"},
+ )
+ ```
+
+1. Create the endpoint:
+
+ Using the `MLClient` created earlier, we'll now create the Endpoint in the workspace. This command will start the endpoint creation and return a confirmation response while the endpoint creation continues.
+
+ ```python
+ ml_client.begin_create_or_update(endpoint)
+ ```
+
+## Create the 'blue' deployment
+
+A deployment is a set of resources required for hosting the model that does the actual inferencing. We'll create a deployment for our endpoint using the `ManagedOnlineDeployment` class. This class allows the user to configure the following key aspects.
+
+**Key aspects of deployment**
+* `name` - Name of the deployment.
+* `endpoint_name` - Name of the endpoint to create the deployment under.
+* `model` - The model to use for the deployment. This value can be either a reference to an existing versioned model in the workspace or an inline model specification.
+* `environment` - The environment to use for the deployment. This value can be either a reference to an existing versioned environment in the workspace or an inline environment specification.
+* `code_configuration` - the configuration for the source code and scoring script
+ * `path`- Path to the source code directory for scoring the model
+ * `scoring_script` - Relative path to the scoring file in the source code directory
+* `instance_type` - The VM size to use for the deployment. For the list of supported sizes, see [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md).
+* `instance_count` - The number of instances to use for the deployment
+
+1. Configure blue deployment:
+
+ ```python
+ # create blue deployment
+ model = Model(path="../model-1/model/sklearn_regression_model.pkl")
+ env = Environment(
+ conda_file="../model-1/environment/conda.yml",
+ image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1",
+ )
+
+ blue_deployment = ManagedOnlineDeployment(
+ name="blue",
+ endpoint_name=online_endpoint_name,
+ model=model,
+ environment=env,
+ code_configuration=CodeConfiguration(
+ code="../model-1/onlinescoring", scoring_script="score.py"
+ ),
+ instance_type="Standard_F2s_v2",
+ instance_count=1,
+ )
+ ```
+
+1. Create the deployment:
+
+ Using the `MLClient` created earlier, we'll now create the deployment in the workspace. This command will start the deployment creation and return a confirmation response while the deployment creation continues.
+
+ ```python
+ ml_client.begin_create_or_update(blue_deployment)
+ ```
+
+ ```python
+ # blue deployment takes 100 traffic
+ endpoint.traffic = {"blue": 100}
+ ml_client.begin_create_or_update(endpoint)
+ ```
+
+## Test the endpoint with sample data
+
+Using the `MLClient` created earlier, we'll get a handle to the endpoint. The endpoint can be invoked using the `invoke` command with the following parameters:
+
+* `endpoint_name` - Name of the endpoint
+* `request_file` - File with request data
+* `deployment_name` - Name of the specific deployment to test in an endpoint
+
+We'll send a sample request using a [json](https://github.com/Azure/azureml-examples/blob/main/sdk/endpoints/online/model-1/sample-request.json) file.
+
+```python
+# test the blue deployment with some sample data
+ml_client.online_endpoints.invoke(
+ endpoint_name=online_endpoint_name,
+ deployment_name="blue",
+ request_file="../model-1/sample-request.json",
+)
+```
+
+## Scale the deployment
+
+Using the `MLClient` created earlier, we'll get a handle to the deployment. The deployment can be scaled by increasing or decreasing the `instance_count`.
+
+```python
+# scale the deployment
+blue_deployment = ml_client.online_deployments.get(
+ name="blue", endpoint_name=online_endpoint_name
+)
+blue_deployment.instance_count = 2
+ml_client.online_deployments.begin_create_or_update(blue_deployment)
+```
+
+## Get endpoint details
+
+```python
+# Get the details for online endpoint
+endpoint = ml_client.online_endpoints.get(name=online_endpoint_name)
+
+# existing traffic details
+print(endpoint.traffic)
+
+# Get the scoring URI
+print(endpoint.scoring_uri)
+```
+
+## Deploy a new model, but send no traffic yet
+
+Create a new deployment named green:
+
+```python
+# create green deployment
+model2 = Model(path="../model-2/model/sklearn_regression_model.pkl")
+env2 = Environment(
+ conda_file="../model-2/environment/conda.yml",
+ image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1",
+)
+
+green_deployment = ManagedOnlineDeployment(
+ name="green",
+ endpoint_name=online_endpoint_name,
+ model=model2,
+ environment=env2,
+ code_configuration=CodeConfiguration(
+ code="../model-2/onlinescoring", scoring_script="score.py"
+ ),
+ instance_type="Standard_F2s_v2",
+ instance_count=1,
+)
+```
+
+```python
+# use MLClient to create green deployment
+ml_client.begin_create_or_update(green_deployment)
+```
+
+## Test the 'green' deployment
+
+Though green has 0% of traffic allocated, you can still invoke the endpoint and deployment with a [json](https://github.com/Azure/azureml-examples/blob/main/sdk/endpoints/online/model-2/sample-request.json) file.
+
+```python
+ml_client.online_endpoints.invoke(
+ endpoint_name=online_endpoint_name,
+ deployment_name="green",
+ request_file="../model-2/sample-request.json",
+)
+```
+
+1. Test the new deployment with a small percentage of live traffic:
+
+ Once you've tested your green deployment, allocate a small percentage of traffic to it:
+
+ ```python
+ endpoint.traffic = {"blue": 90, "green": 10}
+ ml_client.begin_create_or_update(endpoint)
+ ```
+
+ Now, your green deployment will receive 10% of requests.
+
+1. Send all traffic to your new deployment:
+
+    Once you're satisfied that your green deployment is working well, switch all traffic to it.
+
+ ```python
+ endpoint.traffic = {"blue": 0, "green": 100}
+ ml_client.begin_create_or_update(endpoint)
+ ```
+
+1. Remove the old deployment:
+
+ ```python
+ ml_client.online_deployments.delete(name="blue", endpoint_name=online_endpoint_name)
+ ```
+
+## Delete endpoint
+
+```python
+ml_client.online_endpoints.begin_delete(name=online_endpoint_name)
+```
+
+## Next steps
+
+* Explore online endpoint samples - [https://github.com/Azure/azureml-examples/tree/main/sdk/endpoints](https://github.com/Azure/azureml-examples/tree/main/sdk/endpoints)
machine-learning How To Use Batch Endpoint Sdk V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-endpoint-sdk-v2.md
+
+ Title: 'Use batch endpoints for batch scoring using Python SDK v2 (preview)'
+
+description: In this article, learn how to create a batch endpoint to continuously batch score large data using Python SDK v2 (preview).
+++++++ Last updated : 05/25/2022+
+#Customer intent: As an ML engineer or data scientist, I want to create an endpoint to host my models for batch scoring, so that I can use the same endpoint continuously for different large datasets on-demand or on-schedule.
++
+# Use batch endpoints for batch scoring using Python SDK v2 (preview)
++
+> [!IMPORTANT]
+> SDK v2 is currently in public preview.
+> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Learn how to use batch endpoints to do batch scoring using Python SDK v2. Batch endpoints simplify the process of hosting your models for batch scoring, so you can focus on machine learning, not infrastructure. For more information, see [What are Azure Machine Learning endpoints?](concept-endpoints.md).
+
+In this article, you'll learn to:
+
+* Connect to your Azure machine learning workspace from the Python SDK v2.
+* Create a batch endpoint from Python SDK v2.
+* Create deployments on that endpoint from Python SDK v2.
+* Test a deployment with a sample request.
+
+## Prerequisites
+
+* A basic understanding of Machine Learning.
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).
+* An Azure ML workspace with a compute cluster to run your batch scoring job.
+* The [Azure Machine Learning SDK v2 for Python](/python/api/overview/azure/ml/installv2).
++
+## 1. Connect to Azure Machine Learning workspace
+
+The [workspace](concept-workspace.md) is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section, we'll connect to the workspace in which the job will be run.
+
+1. Import the required libraries:
+
+ ```python
+ # import required libraries
+ from azure.ai.ml import MLClient, Input
+ from azure.ai.ml.entities import (
+ BatchEndpoint,
+ BatchDeployment,
+ Model,
+ Environment,
+ BatchRetrySettings,
+ )
+ from azure.ai.ml.entities._assets import Dataset
+ from azure.identity import DefaultAzureCredential
+ from azure.ai.ml.constants import BatchDeploymentOutputAction
+ ```
+
+1. Configure workspace details and get a handle to the workspace:
+
+ To connect to a workspace, we need identifier parameters - a subscription, resource group and workspace name. We'll use these details in the `MLClient` from `azure.ai.ml` to get a handle to the required Azure Machine Learning workspace. This example uses the [default Azure authentication](/python/api/azure-identity/azure.identity.defaultazurecredential).
+
+ ```python
+ # enter details of your AML workspace
+ subscription_id = "<SUBSCRIPTION_ID>"
+ resource_group = "<RESOURCE_GROUP>"
+ workspace = "<AML_WORKSPACE_NAME>"
+ ```
+
+ ```python
+ # get a handle to the workspace
+ ml_client = MLClient(
+ DefaultAzureCredential(), subscription_id, resource_group, workspace
+ )
+ ```
+
+## Create batch endpoint
+
+Batch endpoints are endpoints that are used for batch inferencing on large volumes of data over a period of time. Batch endpoints receive pointers to data and run jobs asynchronously to process the data in parallel on compute clusters. Batch endpoints store outputs to a data store for further analysis.
+
+To create a batch endpoint, we'll use `BatchEndpoint`. This class allows the user to configure the following key aspects:
+
+* `name` - Name of the endpoint. Needs to be unique at the Azure region level
+* `auth_mode` - The authentication method for the endpoint. Currently only Azure Active Directory (Azure AD) token-based (`aad_token`) authentication is supported.
+* `identity`- The managed identity configuration for accessing Azure resources for endpoint provisioning and inference.
+* `defaults` - Default settings for the endpoint.
+ * `deployment_name` - Name of the deployment that will serve as the default deployment for the endpoint.
+* `description`- Description of the endpoint.
+
+1. Configure the endpoint:
+
+ ```python
+ # Creating a unique endpoint name with current datetime to avoid conflicts
+ import datetime
+
+ batch_endpoint_name = "my-batch-endpoint-" + datetime.datetime.now().strftime(
+ "%Y%m%d%H%M"
+ )
+
+ # create a batch endpoint
+ endpoint = BatchEndpoint(
+ name=batch_endpoint_name,
+ description="this is a sample batch endpoint",
+ tags={"foo": "bar"},
+ )
+ ```
+
+1. Create the endpoint:
+
+ Using the `MLClient` created earlier, we'll now create the Endpoint in the workspace. This command will start the endpoint creation and return a confirmation response while the endpoint creation continues.
+
+ ```python
+ ml_client.begin_create_or_update(endpoint)
+ ```
+
+## Create a deployment
+
+A deployment is a set of resources required for hosting the model that does the actual inferencing. We'll create a deployment for our endpoint using the `BatchDeployment` class. This class allows the user to configure the following key aspects.
+
+* `name` - Name of the deployment.
+* `endpoint_name` - Name of the endpoint to create the deployment under.
+* `model` - The model to use for the deployment. This value can be either a reference to an existing versioned model in the workspace or an inline model specification.
+* `environment` - The environment to use for the deployment. This value can be either a reference to an existing versioned environment in the workspace or an inline environment specification.
+* `code_path`- Path to the source code directory for scoring the model
+* `scoring_script` - Relative path to the scoring file in the source code directory
+* `compute` - Name of the compute target to execute the batch scoring jobs on
+* `instance_count`- The number of nodes to use for each batch scoring job.
+* `max_concurrency_per_instance`- The maximum number of parallel scoring_script runs per instance.
+* `mini_batch_size` - The number of files the `code_configuration.scoring_script` can process in one `run()` call.
+* `retry_settings`- Retry settings for scoring each mini batch.
+ * `max_retries`- The maximum number of retries for a failed or timed-out mini batch (default is 3)
+ * `timeout`- The timeout in seconds for scoring a mini batch (default is 30)
+* `output_action`- Indicates how the output should be organized in the output file. Allowed values are `append_row` or `summary_only`. Default is `append_row`
+* `output_file_name`- Name of the batch scoring output file. Default is `predictions.csv`
+* `environment_variables`- Dictionary of environment variable name-value pairs to set for each batch scoring job.
+* `logging_level`- The log verbosity level. Allowed values are `warning`, `info`, `debug`. Default is `info`.
+
+1. Configure the deployment:
+
+ ```python
+ # create a batch deployment
+ model = Model(path="./mnist/model/")
+ env = Environment(
+ conda_file="./mnist/environment/conda.yml",
+ image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest",
+ )
+ deployment = BatchDeployment(
+ name="non-mlflow-deployment",
+ description="this is a sample non-mlflow deployment",
+ endpoint_name=batch_endpoint_name,
+ model=model,
+ code_path="./mnist/code/",
+ scoring_script="digit_identification.py",
+ environment=env,
+ compute="cpu-cluster",
+ instance_count=2,
+ max_concurrency_per_instance=2,
+ mini_batch_size=10,
+ output_action=BatchDeploymentOutputAction.APPEND_ROW,
+ output_file_name="predictions.csv",
+ retry_settings=BatchRetrySettings(max_retries=3, timeout=30),
+ logging_level="info",
+ )
+ ```
+
+1. Create the deployment:
+
+ Using the `MLClient` created earlier, we'll now create the deployment in the workspace. This command will start the deployment creation and return a confirmation response while the deployment creation continues.
+
+ ```python
+ ml_client.begin_create_or_update(deployment)
+ ```
+
+## Test the endpoint with sample data
+
+Using the `MLClient` created earlier, we'll get a handle to the endpoint. The endpoint can be invoked using the `invoke` command with the following parameters:
+
+* `name` - Name of the endpoint
+* `input_path` - Path where input data is present
+* `deployment_name` - Name of the specific deployment to test in an endpoint
+
+1. Invoke the endpoint:
+
+ ```python
+    # create an input from the folder path
+ input = Input(path="https://pipelinedata.blob.core.windows.net/sampledata/mnist")
+
+ # invoke the endpoint for batch scoring job
+ job = ml_client.batch_endpoints.invoke(
+ endpoint_name=batch_endpoint_name,
+ input_data=input,
+ deployment_name="non-mlflow-deployment", # name is required as default deployment is not set
+ params_override=[{"mini_batch_size": "20"}, {"compute.instance_count": "4"}],
+ )
+ ```
+
+1. Get the details of the invoked job:
+
+ Let us get details and logs of the invoked job
+
+ ```python
+ # get the details of the job
+ job_name = job.name
+ batch_job = ml_client.jobs.get(name=job_name)
+ print(batch_job.status)
+ # stream the job logs
+ ml_client.jobs.stream(name=job_name)
+ ```
+
+## Clean up resources
+
+Delete endpoint
+
+```python
+ml_client.batch_endpoints.begin_delete(name=batch_endpoint_name)
+```
+
+## Next steps
+
+If you encounter problems using batch endpoints, see [Troubleshooting batch endpoints](how-to-troubleshoot-batch-endpoints.md).
marketplace Plan Azure App Managed App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/plan-azure-app-managed-app.md
Previously updated : 11/02/2021 Last updated : 05/25/2022 # Plan an Azure managed application for an Azure application offer
Use an Azure Application: Managed application plan when the following conditions
| An Azure subscription | Managed applications must be deployed to a customer's subscription, but they can be managed by a third party. | | Billing and metering | The resources are provided in a customer's Azure subscription. VMs that use the pay-as-you-go payment model are transacted with the customer via Microsoft and billed via the customer's Azure subscription. <br><br> For bring-your-own-license VMs, Microsoft bills any infrastructure costs that are incurred in the customer subscription, but you transact software licensing fees with the customer directly. | | Azure-compatible virtual hard disk (VHD) | VMs must be built on Windows or Linux. For more information, see:<br> * [Create an Azure VM technical asset](./azure-vm-certification-faq.yml#address-a-vulnerability-or-an-exploit-in-a-vm-offer) (for Windows VHDs).<br> * [Linux distributions endorsed on Azure](../virtual-machines/linux/endorsed-distros.md) (for Linux VHDs). |
-| Customer usage attribution | All new Azure application offers must also include an [Azure partner customer usage attribution](azure-partner-customer-usage-attribution.md) GUID. For more information about customer usage attribution and how to enable it, see [Azure partner customer usage attribution](azure-partner-customer-usage-attribution.md). |
+| Customer usage attribution | For more information about customer usage attribution and how to enable it, see [Azure partner customer usage attribution](azure-partner-customer-usage-attribution.md). |
| Deployment package | You'll need a deployment package that will let customers deploy your plan. If you create multiple plans that require the same technical configuration, you can use the same package. For details, see the next section: Deployment package. | > [!NOTE]
Maximum file sizes supported are:
- Up to 1 GB in total compressed .zip archive size - Up to 1 GB for any individual uncompressed file within the .zip archive
-All new Azure application offers must also include an [Azure partner customer usage attribution](azure-partner-customer-usage-attribution.md) GUID.
- ## Azure regions You can publish your plan to the Azure public region, Azure Government region, or both. Before publishing to [Azure Government](../azure-government/documentation-government-manage-marketplace-partners.md), test and validate your plan in the environment as certain endpoints may differ. To set up and test your plan, request a trial account from [Microsoft Azure Government trial](https://azure.microsoft.com/global-infrastructure/government/request/).
marketplace Plan Azure App Solution Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/plan-azure-app-solution-template.md
Previously updated : 11/11/2021 Last updated : 05/25/2022 # Plan a solution template for an Azure application offer
Maximum file sizes supported are:
- Up to 1 GB in total compressed .zip archive size - Up to 1 GB for any individual uncompressed file within the .zip archive
-All new Azure application offers must also include an [Azure partner customer usage attribution](azure-partner-customer-usage-attribution.md) GUID.
- ## Azure regions You can publish your plan to the Azure public region, Azure Government region, or both. Before publishing to [Azure Government](../azure-government/documentation-government-manage-marketplace-partners.md), test and validate your plan in the environment as certain endpoints may differ. To set up and test your plan, request a trial account from [Microsoft Azure Government trial](https://azure.microsoft.com/global-infrastructure/government/request/).
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md
Last updated 05/24/2022
This article summarizes new releases and features in Azure Database for MySQL - Flexible Server beginning in January 2021. Listings appear in reverse chronological order, with the most recent updates first.
+## May 2022
+
+- **Announcing Azure Database for MySQL - Flexible Server for business-critical workloads**
+  Azure Database for MySQL - Flexible Server Business Critical service tier is now generally available. The Business Critical service tier is ideal for Tier 1 production workloads that require low latency, high concurrency, fast failover, and high scalability, such as gaming, e-commerce, and Internet-scale applications. To learn more, see [Business Critical service tier](https://techcommunity.microsoft.com/t5/azure-database-for-mysql-blog/announcing-azure-database-for-mysql-flexible-server-for-business/ba-p/3361718).
+
+- **Announcing the addition of new Burstable compute instances for Azure Database for MySQL - Flexible Server**
+  We're announcing the addition of new Burstable compute instances to support customers' auto-scaling compute requirements from 1 vCore up to 20 vCores. To learn more, see [Compute options for Azure Database for MySQL - Flexible Server](https://docs.microsoft.com/azure/mysql/flexible-server/concepts-compute-storage).
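As a minimal, hedged sketch of provisioning a Burstable instance with the Azure CLI (the resource group, server name, and SKU below are placeholders; available SKUs vary by region):

```azurecli-interactive
# Sketch: create a flexible server on the Burstable tier (names and SKU are illustrative).
az mysql flexible-server create \
    --resource-group myResourceGroup \
    --name myburstableserver \
    --tier Burstable \
    --sku-name Standard_B1ms
```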
+ ## April 2022 - **Minor version upgrade for Azure Database for MySQL - Flexible server to 8.0.28**
openshift Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/quickstart-portal.md
Azure Red Hat OpenShift is a managed OpenShift service that lets you quickly dep
## Prerequisites Sign in to the [Azure portal](https://portal.azure.com).
-On [Use the portal to create an Azure AD application and service principal that can access resources](/active-directory/develop/howto-create-service-principal-portal) create a service principal. Be sure to save the client ID and the appID.
+Create a service principal, as explained in [Use the portal to create an Azure AD application and service principal that can access resources](/azure/active-directory/develop/howto-create-service-principal-portal). **Be sure to save the application (client) ID and the client secret.**
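As an alternative to the portal, a minimal Azure CLI sketch follows (the service principal name is a placeholder; in the output, `appId` is the client ID and `password` is the client secret):

```azurecli-interactive
# Sketch: create a service principal and record the appId (client ID) and password (client secret).
az ad sp create-for-rbac --name aro-quickstart-sp
```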
## Create an Azure Red Hat OpenShift cluster 1. On the Azure portal menu or from the **Home** page, select **All Services** under the three horizontal bars at the top left of the page.
On [Use the portal to create an Azure AD application and service principal that
![**Tags** tab on Azure portal](./media/Tags.png)
-7. Click **Review + create** and then **Create** when validation completes.
+7. Select **Review + create** and then **Create** when validation completes.
![**Review + create** tab on Azure portal](./media/Review+Create.png)
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
Flexible servers are best suited for
## High availability
-The flexible server deployment model is designed to support high availability within single availability zone and across multiple availability zones. The architecture separates compute and storage. The database engine runs on a container inside a Linux virtual machine, while data files reside on Azure storage. The storage maintains three locally redundant synchronous copies of the database files ensuring data durability.
+The flexible server deployment model is designed to support high availability within a single availability zone and across multiple availability zones. The architecture separates compute and storage. The database engine runs on a container inside a Linux virtual machine, while data files reside on Azure storage. The storage maintains three locally redundant synchronous copies of the database files ensuring data durability.
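As a minimal sketch of requesting zone-redundant high availability at provisioning time with the Azure CLI (the resource names and zone numbers are placeholders, and the accepted values can vary by CLI version and region):

```azurecli-interactive
# Sketch: create a flexible server with zone-redundant high availability (names and zones are illustrative).
az postgres flexible-server create \
    --resource-group myResourceGroup \
    --name mydemoserver \
    --high-availability ZoneRedundant \
    --zone 1 \
    --standby-zone 2
```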
During planned or unplanned failover events, if the server goes down, the service maintains high availability of the servers using the following automated procedure:
private-link Create Private Endpoint Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-endpoint-cli.md
Previously updated : 11/07/2020 Last updated : 05/24/2022 #Customer intent: As someone who has a basic network background but is new to Azure, I want to create a private endpoint by using the Azure CLI.
Get started with Azure Private Link by using a private endpoint to connect secur
In this quickstart, you'll create a private endpoint for an Azure web app and then create and deploy a virtual machine (VM) to test the private connection.
-You can create private endpoints for a variety of Azure services, such as Azure SQL and Azure Storage.
+You can create private endpoints for various Azure services, such as Azure SQL and Azure Storage.
## Prerequisites
You can create private endpoints for a variety of Azure services, such as Azure
* An Azure web app with a *PremiumV2-tier* or higher app service plan, deployed in your Azure subscription.
- For more information and an example, see [Quickstart: Create an ASP.NET Core web app in Azure](../app-service/quickstart-dotnetcore.md).
+ - For more information and an example, see [Quickstart: Create an ASP.NET Core web app in Azure](../app-service/quickstart-dotnetcore.md).
- For a detailed tutorial on creating a web app and an endpoint, see [Tutorial: Connect to a web app by using a private endpoint](tutorial-private-endpoint-webapp-portal.md).
+ - The example webapp in this article is named **myWebApp1979**. Replace the example with your webapp name.
* The latest version of the Azure CLI, installed.
You can create private endpoints for a variety of Azure services, such as Azure
An Azure resource group is a logical container where Azure resources are deployed and managed.
-First, create a resource group by using [az group create](/cli/azure/group#az-group-create):
+First, create a resource group by using **[az group create](/cli/azure/group#az-group-create)**:
```azurecli-interactive az group create \
az group create \
## Create a virtual network and bastion host
-Next, create a virtual network, subnet, and bastion host. You'll use the bastion host to connect securely to the VM for testing the private endpoint.
-
-1. Create a virtual network by using [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create):
-
- * Name: **myVNet**
- * Address prefix: **10.0.0.0/16**
- * Subnet name: **myBackendSubnet**
- * Subnet prefix: **10.0.0.0/24**
- * Resource group: **CreatePrivateEndpointQS-rg**
- * Location: **eastus**
-
- ```azurecli-interactive
- az network vnet create \
- --resource-group CreatePrivateEndpointQS-rg\
- --location eastus \
- --name myVNet \
- --address-prefixes 10.0.0.0/16 \
- --subnet-name myBackendSubnet \
- --subnet-prefixes 10.0.0.0/24
- ```
-
-1. Update the subnet to disable private-endpoint network policies for the private endpoint by using [az network vnet subnet update](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-update):
+A virtual network and subnet are required to host the private IP address for the private endpoint. You'll create a bastion host to connect securely to the virtual machine to test the private endpoint. You'll create the virtual machine in a later section.
- ```azurecli-interactive
- az network vnet subnet update \
- --name myBackendSubnet \
- --resource-group CreatePrivateEndpointQS-rg \
- --vnet-name myVNet \
- --disable-private-endpoint-network-policies true
- ```
+Create a virtual network with **[az network vnet create](/cli/azure/network/vnet#az-network-vnet-create)**.
-1. Create a public IP address for the bastion host by using [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create):
+```azurecli-interactive
+az network vnet create \
+ --resource-group CreatePrivateEndpointQS-rg \
+ --location eastus \
+ --name myVNet \
+ --address-prefixes 10.0.0.0/16 \
+ --subnet-name myBackendSubnet \
+ --subnet-prefixes 10.0.0.0/24
+```
- * Standard zone-redundant public IP address name: **myBastionIP**
- * Resource group: **CreatePrivateEndpointQS-rg**
+Create a bastion subnet with **[az network vnet subnet create](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-create)**.
- ```azurecli-interactive
- az network public-ip create \
- --resource-group CreatePrivateEndpointQS-rg \
- --name myBastionIP \
- --sku Standard
- ```
+```azurecli-interactive
+az network vnet subnet create \
+ --resource-group CreatePrivateEndpointQS-rg \
+ --name AzureBastionSubnet \
+ --vnet-name myVNet \
+ --address-prefixes 10.0.1.0/27
+```
-1. Create a bastion subnet by using [az network vnet subnet create](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-create):
+Create a public IP address for the bastion host with **[az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create)**.
- * Name: **AzureBastionSubnet**
- * Address prefix: **10.0.1.0/24**
- * Virtual network: **myVNet**
- * Resource group: **CreatePrivateEndpointQS-rg**
+```azurecli-interactive
+az network public-ip create \
+ --resource-group CreatePrivateEndpointQS-rg \
+ --name myBastionIP \
+ --sku Standard \
+ --zone 1 2 3
+```
- ```azurecli-interactive
- az network vnet subnet create \
- --resource-group CreatePrivateEndpointQS-rg \
- --name AzureBastionSubnet \
- --vnet-name myVNet \
- --address-prefixes 10.0.1.0/24
- ```
+Create the bastion host with **[az network bastion create](/cli/azure/network/bastion#az-network-bastion-create)**.
-1. Create a bastion host by using [az network bastion create](/cli/azure/network/bastion#az-network-bastion-create):
-
- * Name: **myBastionHost**
- * Resource group: **CreatePrivateEndpointQS-rg**
- * Public IP address: **myBastionIP**
- * Virtual network: **myVNet**
- * Location: **eastus**
-
- ```azurecli-interactive
- az network bastion create \
- --resource-group CreatePrivateEndpointQS-rg \
- --name myBastionHost \
- --public-ip-address myBastionIP \
- --vnet-name myVNet \
- --location eastus
- ```
+```azurecli-interactive
+az network bastion create \
+ --resource-group CreatePrivateEndpointQS-rg \
+ --name myBastionHost \
+ --public-ip-address myBastionIP \
+ --vnet-name myVNet \
+ --location eastus
+```
It can take a few minutes for the Azure Bastion host to deploy.
-## Create a test virtual machine
-
-Next, create a VM that you can use to test the private endpoint.
-
-1. Create the VM by using [az vm create](/cli/azure/vm#az-vm-create).
+## Create a private endpoint
-1. At the prompt, provide a password to be used as the credentials for the VM:
+An Azure service that supports private endpoints is required to set up the private endpoint and connection to the virtual network. For the examples in this article, you'll use the Azure WebApp from the prerequisites. For more information on the Azure services that support a private endpoint, see [Azure Private Link availability](availability.md).
- * Name: **myVM**
- * Resource group: **CreatePrivateEndpointQS-rg**
- * Virtual network: **myVNet**
- * Subnet: **myBackendSubnet**
- * Server image: **Win2019Datacenter**
+A private endpoint can have a static or dynamically assigned IP address.
+> [!IMPORTANT]
+> You must have a previously deployed Azure WebApp to proceed with the steps in this article. For more information, see [Prerequisites](#prerequisites).
- ```azurecli-interactive
- az vm create \
- --resource-group CreatePrivateEndpointQS-rg \
- --name myVM \
- --image Win2019Datacenter \
- --public-ip-address "" \
- --vnet-name myVNet \
- --subnet myBackendSubnet \
- --admin-username azureuser
- ```
+Place the resource ID of the web app that you created earlier into a shell variable with **[az webapp list](/cli/azure/webapp#az-webapp-list)**. Create the private endpoint with **[az network private-endpoint create](/cli/azure/network/private-endpoint#az-network-private-endpoint-create)**.
+# [**Dynamic IP**](#tab/dynamic-ip)
-## Create a private endpoint
-
-Next, create the private endpoint.
+```azurecli-interactive
+id=$(az webapp list \
+ --resource-group CreatePrivateEndpointQS-rg \
+ --query '[].[id]' \
+ --output tsv)
-1. Place the resource ID of the web app that you created earlier into a shell variable by using [az webapp list](/cli/azure/webapp#az-webapp-list).
+az network private-endpoint create \
+    --connection-name myConnection \
+ --name myPrivateEndpoint \
+ --private-connection-resource-id $id \
+ --resource-group CreatePrivateEndpointQS-rg \
+ --subnet myBackendSubnet \
+ --group-id sites \
+ --vnet-name myVNet
+```
-1. Create the endpoint and connection by using [az network private-endpoint create](/cli/azure/network/private-endpoint#az-network-private-endpoint-create):
+# [**Static IP**](#tab/static-ip)
- * Name: **myPrivateEndpoint**
- * Resource group: **CreatePrivateEndpointQS-rg**
- * Virtual network: **myVNet**
- * Subnet: **myBackendSubnet**
- * Connection name: **myConnection**
- * Web app: **\<webapp-resource-group-name>**
+```azurecli-interactive
+id=$(az webapp list \
+ --resource-group CreatePrivateEndpointQS-rg \
+ --query '[].[id]' \
+ --output tsv)
- ```azurecli-interactive
- id=$(az webapp list \
- --resource-group <webapp-resource-group-name> \
- --query '[].[id]' \
- --output tsv)
+az network private-endpoint create \
+ --connection-name myConnection \
+ --name myPrivateEndpoint \
+ --private-connection-resource-id $id \
+ --resource-group CreatePrivateEndpointQS-rg \
+ --subnet myBackendSubnet \
+ --group-id sites \
+ --ip-config name=myIPconfig group-id=sites member-name=sites private-ip-address=10.0.0.10 \
+ --vnet-name myVNet
+```
- az network private-endpoint create \
- --name myPrivateEndpoint \
- --resource-group CreatePrivateEndpointQS-rg \
- --vnet-name myVNet --subnet myBackendSubnet \
- --private-connection-resource-id $id \
- --group-id sites \
- --connection-name myConnection
- ```
+ ## Configure the private DNS zone
-Next, create and configure the private DNS zone by using [az network private-dns zone create](/cli/azure/network/private-dns/zone#az-network-private-dns-zone-create).
+A private DNS zone is used to resolve the DNS name of the private endpoint in the virtual network. For this example, we're using the DNS information for an Azure WebApp. For more information on the DNS configuration of private endpoints, see [Azure Private Endpoint DNS configuration](private-endpoint-dns.md).
-1. Create the virtual network link to the DNS zone by using [az network private-dns link vnet create](/cli/azure/network/private-dns/link/vnet#az-network-private-dns-link-vnet-create).
+Create a new private Azure DNS zone with **[az network private-dns zone create](/cli/azure/network/private-dns/zone#az-network-private-dns-zone-create)**.
-1. Create a DNS zone group by using [az network private-endpoint dns-zone-group create](/cli/azure/network/private-endpoint/dns-zone-group#az-network-private-endpoint-dns-zone-group-create).
+```azurecli-interactive
+az network private-dns zone create \
+ --resource-group CreatePrivateEndpointQS-rg \
+ --name "privatelink.azurewebsites.net"
+```
- * Zone name: **privatelink.azurewebsites.net**
- * Virtual network: **myVNet**
- * Resource group: **CreatePrivateEndpointQS-rg**
- * DNS link name: **myDNSLink**
- * Endpoint name: **myPrivateEndpoint**
- * Zone group name: **MyZoneGroup**
+Link the DNS zone to the virtual network you created previously with **[az network private-dns link vnet create](/cli/azure/network/private-dns/link/vnet#az-network-private-dns-link-vnet-create)**.
- ```azurecli-interactive
- az network private-dns zone create \
- --resource-group CreatePrivateEndpointQS-rg \
- --name "privatelink.azurewebsites.net"
+```azurecli-interactive
+az network private-dns link vnet create \
+ --resource-group CreatePrivateEndpointQS-rg \
+ --zone-name "privatelink.azurewebsites.net" \
+ --name MyDNSLink \
+ --virtual-network myVNet \
+ --registration-enabled false
+```
- az network private-dns link vnet create \
- --resource-group CreatePrivateEndpointQS-rg \
- --zone-name "privatelink.azurewebsites.net" \
- --name MyDNSLink \
- --virtual-network myVNet \
- --registration-enabled false
+Create a DNS zone group with **[az network private-endpoint dns-zone-group create](/cli/azure/network/private-endpoint/dns-zone-group#az-network-private-endpoint-dns-zone-group-create)**.
- az network private-endpoint dns-zone-group create \
+```azurecli-interactive
+az network private-endpoint dns-zone-group create \
--resource-group CreatePrivateEndpointQS-rg \ --endpoint-name myPrivateEndpoint \ --name MyZoneGroup \ --private-dns-zone "privatelink.azurewebsites.net" \ --zone-name webapp
- ```
+```
-## Test connectivity to the private endpoint
+## Create a test virtual machine
-Finally, use the VM that you created earlier to connect to the SQL Server instance across the private endpoint.
+To verify the IP address and functionality of the private endpoint, a test virtual machine connected to your virtual network is required.
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. On the left pane, select **Resource groups**.
+Create the virtual machine with **[az vm create](/cli/azure/vm#az-vm-create)**.
-1. Select **CreatePrivateEndpointQS-rg**.
+```azurecli-interactive
+az vm create \
+ --resource-group CreatePrivateEndpointQS-rg \
+ --name myVM \
+ --image Win2019Datacenter \
+ --public-ip-address "" \
+ --vnet-name myVNet \
+ --subnet myBackendSubnet \
+ --admin-username azureuser
+```
-1. Select **myVM**.
+
+## Test connectivity with the private endpoint
+
+Use the VM you created in the previous step to connect to the webapp across the private endpoint.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines**.
-1. On the overview page for **myVM**, select **Connect**, and then select **Bastion**.
+3. Select **myVM**.
-1. Select the blue **Use Bastion** button.
+4. On the overview page for **myVM**, select **Connect**, and then select **Bastion**.
-1. Enter the username and password that you used when you created the VM.
+5. Enter the username and password that you used when you created the VM. Select **Connect**.
-1. After you've connected, open PowerShell on the server.
+6. After you've connected, open PowerShell on the server.
-1. Enter `nslookup <your-webapp-name>.azurewebsites.net`, replacing *\<your-webapp-name>* with the name of the web app that you created earlier. You'll receive a message that's similar to the following:
+7. Enter `nslookup mywebapp1979.azurewebsites.net`. Replace **mywebapp1979** with the name of the web app that you created earlier. You'll receive a message that's similar to the following example:
```powershell Server: UnKnown Address: 168.63.129.16 Non-authoritative answer:
- Name: mywebapp8675.privatelink.azurewebsites.net
- Address: 10.0.0.5
- Aliases: mywebapp8675.azurewebsites.net
+ Name: mywebapp1979.privatelink.azurewebsites.net
+ Address: 10.0.0.10
+ Aliases: mywebapp1979.azurewebsites.net
```
- A private IP address of *10.0.0.5* is returned for the web app name. This address is in the subnet of the virtual network that you created earlier.
+8. In the bastion connection to **myVM**, open the web browser.
-1. In the bastion connection to *myVM**, open your web browser.
-
-1. Enter the URL of your web app, *https://\<your-webapp-name>.azurewebsites.net*.
+9. Enter the URL of your web app, **https://mywebapp1979.azurewebsites.net**.
If your web app hasn't been deployed, you'll get the following default web app page:
- :::image type="content" source="./media/create-private-endpoint-portal/web-app-default-page.png" alt-text="Screenshot of the default web app page on a browser." border="true":::
-
-1. Close the connection to *myVM*.
-
-## Clean up resources
-
-When you're done using the private endpoint and the VM, use [az group delete](/cli/azure/group#az-group-delete) to remove the resource group and all the resources within it:
-
-```azurecli-interactive
-az group delete \
- --name CreatePrivateEndpointQS-rg
-```
-
-## What you've learned
-
-In this quickstart, you created:
-
-* A virtual network and bastion host
-* A virtual machine
-* A private endpoint for an Azure web app
+ :::image type="content" source="./media/create-private-endpoint-portal/web-app-default-page.png" alt-text="Screenshot of the default web app page on a browser." border="true":::
-You used the VM to securely test connectivity to the web app across the private endpoint.
+10. Close the connection to **myVM**.
## Next steps
private-link Create Private Endpoint Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-endpoint-powershell.md
Previously updated : 04/22/2022 Last updated : 05/24/2022 #Customer intent: As someone who has a basic network background but is new to Azure, I want to create a private endpoint by using Azure PowerShell.
Get started with Azure Private Link by using a private endpoint to connect secur
In this quickstart, you'll create a private endpoint for an Azure web app and then create and deploy a virtual machine (VM) to test the private connection.
-You can create private endpoints for a variety of Azure services, such as Azure SQL and Azure Storage.
+You can create private endpoints for various Azure services, such as Azure SQL and Azure Storage.
## Prerequisites
-* An Azure account with an active subscription. If you don't already have an Azure account, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure account with an active subscription. If you don't already have an Azure account, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An Azure web app with a *PremiumV2-tier* or higher app service plan, deployed in your Azure subscription.
+- An Azure web app with a *PremiumV2-tier* or higher app service plan, deployed in your Azure subscription.
- For more information and an example, see [Quickstart: Create an ASP.NET Core web app in Azure](../app-service/quickstart-dotnetcore.md).
+ - For more information and an example, see [Quickstart: Create an ASP.NET Core web app in Azure](../app-service/quickstart-dotnetcore.md).
- For a detailed tutorial on creating a web app and an endpoint, see [Tutorial: Connect to a web app by using a private endpoint](tutorial-private-endpoint-webapp-portal.md).
+ - The example webapp in this article is named **myWebApp1979**. Replace the example with your webapp name.
If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. To find the installed version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install the Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
New-AzResourceGroup -Name 'CreatePrivateEndpointQS-rg' -Location 'eastus'
## Create a virtual network and bastion host
-First, you'll create a virtual network, subnet, and bastion host.
+A virtual network and subnet are required to host the private IP address for the private endpoint. You'll create a bastion host to connect securely to the virtual machine to test the private endpoint. You'll create the virtual machine in a later section.
-You'll use the bastion host to connect securely to the VM for testing the private endpoint.
+In this section, you'll:
-1. Create a virtual network and bastion host with:
+- Create a virtual network with [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork)
- * [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork)
- * [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress)
- * [New-AzBastion](/powershell/module/az.network/new-azbastion)
+- Create subnet configurations for the backend subnet and the bastion subnet with [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig)
-1. Configure the back-end subnet.
+- Create a public IP address for the bastion host with [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress)
- ```azurepowershell-interactive
- $subnetConfig = New-AzVirtualNetworkSubnetConfig -Name myBackendSubnet -AddressPrefix 10.0.0.0/24
- ```
+- Create the bastion host with [New-AzBastion](/powershell/module/az.network/new-azbastion)
-1. Create the Azure Bastion subnet:
-
- ```azurepowershell-interactive
- $bastsubnetConfig = New-AzVirtualNetworkSubnetConfig -Name AzureBastionSubnet -AddressPrefix 10.0.1.0/24
- ```
-
-1. Create the virtual network:
-
- ```azurepowershell-interactive
- $parameters1 = @{
- Name = 'MyVNet'
- ResourceGroupName = 'CreatePrivateEndpointQS-rg'
- Location = 'eastus'
- AddressPrefix = '10.0.0.0/16'
- Subnet = $subnetConfig, $bastsubnetConfig
- }
- $vnet = New-AzVirtualNetwork @parameters1
- ```
-
-1. Create the public IP address for the bastion host:
-
- ```azurepowershell-interactive
- $parameters2 = @{
- Name = 'myBastionIP'
- ResourceGroupName = 'CreatePrivateEndpointQS-rg'
- Location = 'eastus'
- Sku = 'Standard'
- AllocationMethod = 'Static'
- }
- $publicip = New-AzPublicIpAddress @parameters2
- ```
-
-1. Create the bastion host:
-
- ```azurepowershell-interactive
- $parameters3 = @{
- ResourceGroupName = 'CreatePrivateEndpointQS-rg'
- Name = 'myBastion'
- PublicIpAddress = $publicip
- VirtualNetwork = $vnet
- }
- New-AzBastion @parameters3
- ```
-
-It can take a few minutes for the Azure Bastion host to deploy.
-
-## Create a test virtual machine
-
-Next, create a VM that you can use to test the private endpoint.
-
-1. Create the VM by using:
-
- * [Get-Credential](/powershell/module/microsoft.powershell.security/get-credential)
- * [New-AzNetworkInterface](/powershell/module/az.network/new-aznetworkinterface)
- * [New-AzVM](/powershell/module/az.compute/new-azvm)
- * [New-AzVMConfig](/powershell/module/az.compute/new-azvmconfig)
- * [Set-AzVMOperatingSystem](/powershell/module/az.compute/set-azvmoperatingsystem)
- * [Set-AzVMSourceImage](/powershell/module/az.compute/set-azvmsourceimage)
- * [Add-AzVMNetworkInterface](/powershell/module/az.compute/add-azvmnetworkinterface)
+```azurepowershell-interactive
+## Configure the back-end subnet. ##
+$subnetConfig = New-AzVirtualNetworkSubnetConfig -Name myBackendSubnet -AddressPrefix 10.0.0.0/24
+
+## Create the Azure Bastion subnet. ##
+$bastsubnetConfig = New-AzVirtualNetworkSubnetConfig -Name AzureBastionSubnet -AddressPrefix 10.0.1.0/24
+
+## Create the virtual network. ##
+$net = @{
+ Name = 'MyVNet'
+ ResourceGroupName = 'CreatePrivateEndpointQS-rg'
+ Location = 'eastus'
+ AddressPrefix = '10.0.0.0/16'
+ Subnet = $subnetConfig, $bastsubnetConfig
+}
+$vnet = New-AzVirtualNetwork @net
+
+## Create the public IP address for the bastion host. ##
+$ip = @{
+ Name = 'myBastionIP'
+ ResourceGroupName = 'CreatePrivateEndpointQS-rg'
+ Location = 'eastus'
+ Sku = 'Standard'
+ AllocationMethod = 'Static'
+ Zone = 1,2,3
+}
+$publicip = New-AzPublicIpAddress @ip
+
+## Create the bastion host. ##
+$bastion = @{
+ ResourceGroupName = 'CreatePrivateEndpointQS-rg'
+ Name = 'myBastion'
+ PublicIpAddress = $publicip
+ VirtualNetwork = $vnet
+}
+New-AzBastion @bastion -AsJob
+```
+## Create a private endpoint
-1. Get the server admin credentials and password:
+An Azure service that supports private endpoints is required to set up the private endpoint and connection to the virtual network. For the examples in this article, we're using an Azure WebApp from the prerequisites. For more information on the Azure services that support a private endpoint, see [Azure Private Link availability](availability.md).
- ```azurepowershell-interactive
- $cred = Get-Credential
- ```
+A private endpoint can have a static or dynamically assigned IP address.
-1. Get the virtual network configuration:
+> [!IMPORTANT]
+> You must have a previously deployed Azure WebApp to proceed with the steps in this article. For more information, see [Prerequisites](#prerequisites).
- ```azurepowershell-interactive
- $vnet = Get-AzVirtualNetwork -Name myVNet -ResourceGroupName CreatePrivateEndpointQS-rg
- ```
+In this section, you'll:
-1. Create a network interface for the VM:
+- Create a private link service connection with [New-AzPrivateLinkServiceConnection](/powershell/module/az.network/new-azprivatelinkserviceconnection).
- ```azurepowershell-interactive
- $parameters1 = @{
- Name = 'myNicVM'
- ResourceGroupName = 'CreatePrivateEndpointQS-rg'
- Location = 'eastus'
- Subnet = $vnet.Subnets[0]
- }
- $nicVM = New-AzNetworkInterface @parameters1
- ```
+- Create the private endpoint with [New-AzPrivateEndpoint](/powershell/module/az.network/new-azprivateendpoint).
-1. Configure the VM:
-
- ```azurepowershell-interactive
- $parameters2 = @{
- VMName = 'myVM'
- VMSize = 'Standard_DS1_v2'
- }
- $parameters3 = @{
- ComputerName = 'myVM'
- Credential = $cred
- }
- $parameters4 = @{
- PublisherName = 'MicrosoftWindowsServer'
- Offer = 'WindowsServer'
- Skus = '2019-Datacenter'
- Version = 'latest'
- }
- $vmConfig =
- New-AzVMConfig @parameters2 | Set-AzVMOperatingSystem -Windows @parameters3 | Set-AzVMSourceImage @parameters4 | Add-AzVMNetworkInterface -Id $nicVM.Id
- ```
+- Optionally create the private endpoint static IP configuration with [New-AzPrivateEndpointIpConfiguration](/powershell/module/az.network/new-azprivateendpointipconfiguration).
-1. Create the VM:
+# [**Dynamic IP**](#tab/dynamic-ip)
- ```azurepowershell-interactive
- New-AzVM -ResourceGroupName 'CreatePrivateEndpointQS-rg' -Location 'eastus' -VM $vmConfig
- ```
-
+```azurepowershell-interactive
+## Place the previously created webapp into a variable. ##
+$webapp = Get-AzWebApp -ResourceGroupName CreatePrivateEndpointQS-rg -Name myWebApp1979
+
+## Create the private endpoint connection. ##
+$pec = @{
+ Name = 'myConnection'
+ PrivateLinkServiceId = $webapp.ID
+ GroupID = 'sites'
+}
+$privateEndpointConnection = New-AzPrivateLinkServiceConnection @pec
+
+## Place the virtual network you created previously into a variable. ##
+$vnet = Get-AzVirtualNetwork -ResourceGroupName 'CreatePrivateEndpointQS-rg' -Name 'myVNet'
+
+## Create the private endpoint. ##
+$pe = @{
+ ResourceGroupName = 'CreatePrivateEndpointQS-rg'
+ Name = 'myPrivateEndpoint'
+ Location = 'eastus'
+ Subnet = $vnet.Subnets[0]
+ PrivateLinkServiceConnection = $privateEndpointConnection
+}
+New-AzPrivateEndpoint @pe
-## Create a private endpoint
+```
-1. Create a private endpoint and connection by using:
+# [**Static IP**](#tab/static-ip)
- * [New-AzPrivateLinkServiceConnection](/powershell/module/az.network/New-AzPrivateLinkServiceConnection)
- * [New-AzPrivateEndpoint](/powershell/module/az.network/new-azprivateendpoint)
+```azurepowershell-interactive
+## Place the previously created webapp into a variable. ##
+$webapp = Get-AzWebApp -ResourceGroupName CreatePrivateEndpointQS-rg -Name myWebApp1979
+
+## Create the private endpoint connection. ##
+$pec = @{
+ Name = 'myConnection'
+ PrivateLinkServiceId = $webapp.ID
+ GroupID = 'sites'
+}
+$privateEndpointConnection = New-AzPrivateLinkServiceConnection @pec
+
+## Place the virtual network you created previously into a variable. ##
+$vnet = Get-AzVirtualNetwork -ResourceGroupName 'CreatePrivateEndpointQS-rg' -Name 'myVNet'
+
+## Create the static IP configuration. ##
+$ip = @{
+ Name = 'myIPconfig'
+ GroupId = 'sites'
+ MemberName = 'sites'
+ PrivateIPAddress = '10.0.0.10'
+}
+$ipconfig = New-AzPrivateEndpointIpConfiguration @ip
+
+## Create the private endpoint. ##
+$pe = @{
+ ResourceGroupName = 'CreatePrivateEndpointQS-rg'
+ Name = 'myPrivateEndpoint'
+ Location = 'eastus'
+ Subnet = $vnet.Subnets[0]
+ PrivateLinkServiceConnection = $privateEndpointConnection
+ IpConfiguration = $ipconfig
+}
+New-AzPrivateEndpoint @pe
-1. Place the web app into a variable. Replace \<webapp-resource-group-name> with the resource group name of your web app, and replace \<your-webapp-name> with your web app name.
+```
- ```azurepowershell-interactive
- $webapp = Get-AzWebApp -ResourceGroupName <webapp-resource-group-name> -Name <your-webapp-name>
- ```
+
-1. Create the private endpoint connection:
+## Configure the private DNS zone
- ```azurepowershell-interactive
- $parameters1 = @{
- Name = 'myConnection'
- PrivateLinkServiceId = $webapp.ID
- GroupID = 'sites'
- }
- $privateEndpointConnection = New-AzPrivateLinkServiceConnection @parameters1
- ```
+A private DNS zone is used to resolve the DNS name of the private endpoint in the virtual network. For this example, we're using the DNS information for an Azure WebApp. For more information on the DNS configuration of private endpoints, see [Azure Private Endpoint DNS configuration](private-endpoint-dns.md).
-1. Place the virtual network into a variable:
+In this section, you'll:
- ```azurepowershell-interactive
- $vnet = Get-AzVirtualNetwork -ResourceGroupName 'CreatePrivateEndpointQS-rg' -Name 'myVNet'
- ```
+- Create a new private Azure DNS zone with [New-AzPrivateDnsZone](/powershell/module/az.privatedns/new-azprivatednszone)
-1. Disable the private endpoint network policy:
+- Link the DNS zone to the virtual network you created previously with [New-AzPrivateDnsVirtualNetworkLink](/powershell/module/az.privatedns/new-azprivatednsvirtualnetworklink)
- ```azurepowershell-interactive
- $vnet.Subnets[0].PrivateEndpointNetworkPolicies = "Disabled"
- $vnet | Set-AzVirtualNetwork
- ```
+- Create a DNS zone configuration with [New-AzPrivateDnsZoneConfig](/powershell/module/az.network/new-azprivatednszoneconfig)
-1. Create the private endpoint:
-
- ```azurepowershell-interactive
- $parameters2 = @{
- ResourceGroupName = 'CreatePrivateEndpointQS-rg'
- Name = 'myPrivateEndpoint'
- Location = 'eastus'
- Subnet = $vnet.Subnets[0]
- PrivateLinkServiceConnection = $privateEndpointConnection
- }
- New-AzPrivateEndpoint @parameters2
- ```
-## Configure the private DNS zone
+- Create a DNS zone group with [New-AzPrivateDnsZoneGroup](/powershell/module/az.network/new-azprivatednszonegroup)
-1. Create and configure the private DNS zone by using:
+```azurepowershell-interactive
+## Place the virtual network into a variable. ##
+$vnet = Get-AzVirtualNetwork -ResourceGroupName 'CreatePrivateEndpointQS-rg' -Name 'myVNet'
+
+## Create the private DNS zone. ##
+$zn = @{
+ ResourceGroupName = 'CreatePrivateEndpointQS-rg'
+ Name = 'privatelink.azurewebsites.net'
+}
+$zone = New-AzPrivateDnsZone @zn
+
+## Create a DNS network link. ##
+$lk = @{
+ ResourceGroupName = 'CreatePrivateEndpointQS-rg'
+ ZoneName = 'privatelink.azurewebsites.net'
+ Name = 'myLink'
+ VirtualNetworkId = $vnet.Id
+}
+$link = New-AzPrivateDnsVirtualNetworkLink @lk
+
+## Configure the DNS zone. ##
+$cg = @{
+ Name = 'privatelink.azurewebsites.net'
+ PrivateDnsZoneId = $zone.ResourceId
+}
+$config = New-AzPrivateDnsZoneConfig @cg
+
+## Create the DNS zone group. ##
+$zg = @{
+ ResourceGroupName = 'CreatePrivateEndpointQS-rg'
+ PrivateEndpointName = 'myPrivateEndpoint'
+ Name = 'myZoneGroup'
+ PrivateDnsZoneConfig = $config
+}
+New-AzPrivateDnsZoneGroup @zg
- * [New-AzPrivateDnsZone](/powershell/module/az.privatedns/new-azprivatednszone)
- * [New-AzPrivateDnsVirtualNetworkLink](/powershell/module/az.privatedns/new-azprivatednsvirtualnetworklink)
- * [New-AzPrivateDnsZoneConfig](/powershell/module/az.network/new-azprivatednszoneconfig)
- * [New-AzPrivateDnsZoneGroup](/powershell/module/az.network/new-azprivatednszonegroup)
+```
-1. Place the virtual network into a variable:
+## Create a test virtual machine
- ```azurepowershell-interactive
- $vnet = Get-AzVirtualNetwork -ResourceGroupName 'CreatePrivateEndpointQS-rg' -Name 'myVNet'
- ```
+To verify the IP address and functionality of the private endpoint, a test virtual machine connected to your virtual network is required.
-1. Create the private DNS zone:
+In this section, you'll:
- ```azurepowershell-interactive
- $parameters1 = @{
- ResourceGroupName = 'CreatePrivateEndpointQS-rg'
- Name = 'privatelink.azurewebsites.net'
- }
- $zone = New-AzPrivateDnsZone @parameters1
- ```
+- Create a sign-in credential for the virtual machine with [Get-Credential](/powershell/module/microsoft.powershell.security/get-credential)
-1. Create a DNS network link:
+- Create a network interface for the virtual machine with [New-AzNetworkInterface](/powershell/module/az.network/new-aznetworkinterface)
- ```azurepowershell-interactive
- $parameters2 = @{
- ResourceGroupName = 'CreatePrivateEndpointQS-rg'
- ZoneName = 'privatelink.azurewebsites.net'
- Name = 'myLink'
- VirtualNetworkId = $vnet.Id
- }
- $link = New-AzPrivateDnsVirtualNetworkLink @parameters2
- ```
+- Create a virtual machine configuration with [New-AzVMConfig](/powershell/module/az.compute/new-azvmconfig), [Set-AzVMOperatingSystem](/powershell/module/az.compute/set-azvmoperatingsystem), [Set-AzVMSourceImage](/powershell/module/az.compute/set-azvmsourceimage), and [Add-AzVMNetworkInterface](/powershell/module/az.compute/add-azvmnetworkinterface)
-1. Configure the DNS zone:
+- Create the virtual machine with [New-AzVM](/powershell/module/az.compute/new-azvm)
- ```azurepowershell-interactive
- $parameters3 = @{
- Name = 'privatelink.azurewebsites.net'
- PrivateDnsZoneId = $zone.ResourceId
- }
- $config = New-AzPrivateDnsZoneConfig @parameters3
- ```
+```azurepowershell-interactive
+## Create the credential for the virtual machine. Enter a username and password at the prompt. ##
+$cred = Get-Credential
+
+## Place the virtual network into a variable. ##
+$vnet = Get-AzVirtualNetwork -Name myVNet -ResourceGroupName CreatePrivateEndpointQS-rg
+
+## Create a network interface for the virtual machine. ##
+$nic = @{
+ Name = 'myNicVM'
+ ResourceGroupName = 'CreatePrivateEndpointQS-rg'
+ Location = 'eastus'
+ Subnet = $vnet.Subnets[0]
+}
+$nicVM = New-AzNetworkInterface @nic
+
+## Create the configuration for the virtual machine. ##
+$vm1 = @{
+ VMName = 'myVM'
+ VMSize = 'Standard_DS1_v2'
+}
+$vm2 = @{
+ ComputerName = 'myVM'
+ Credential = $cred
+}
+$vm3 = @{
+ PublisherName = 'MicrosoftWindowsServer'
+ Offer = 'WindowsServer'
+ Skus = '2019-Datacenter'
+ Version = 'latest'
+}
+$vmConfig =
+New-AzVMConfig @vm1 | Set-AzVMOperatingSystem -Windows @vm2 | Set-AzVMSourceImage @vm3 | Add-AzVMNetworkInterface -Id $nicVM.Id
+
+## Create the virtual machine. ##
+New-AzVM -ResourceGroupName 'CreatePrivateEndpointQS-rg' -Location 'eastus' -VM $vmConfig
-1. Create the DNS zone group:
+```
- ```azurepowershell-interactive
- $parameters4 = @{
- ResourceGroupName = 'CreatePrivateEndpointQS-rg'
- PrivateEndpointName = 'myPrivateEndpoint'
- Name = 'myZoneGroup'
- PrivateDnsZoneConfig = $config
- }
- New-AzPrivateDnsZoneGroup @parameters4
- ```
## Test connectivity with the private endpoint
-Finally, use the VM you created in the previous step to connect to the SQL server across the private endpoint.
+Use the VM you created in the previous step to connect to the webapp across the private endpoint.
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. On the left pane, select **Resource groups**.
+2. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines**.
-1. Select **CreatePrivateEndpointQS-rg**.
+3. Select **myVM**.
-1. Select **myVM**.
+4. On the overview page for **myVM**, select **Connect**, and then select **Bastion**.
-1. On the overview page for **myVM**, select **Connect**, and then select **Bastion**.
+5. Enter the username and password that you used when you created the VM. Select **Connect**.
-1. Select the blue **Use Bastion** button.
+6. After you've connected, open PowerShell on the server.
-1. Enter the username and password that you used when you created the VM.
-
-1. After you've connected, open PowerShell on the server.
-
-1. Enter `nslookup <your-webapp-name>.azurewebsites.net`. Replace **\<your-webapp-name>** with the name of the web app that you created earlier. You'll receive a message that's similar to the following:
+7. Enter `nslookup mywebapp1979.azurewebsites.net`. Replace **mywebapp1979** with the name of the web app that you created earlier. You'll receive a message that's similar to the following example:
```powershell Server: UnKnown Address: 168.63.129.16 Non-authoritative answer:
- Name: mywebapp8675.privatelink.azurewebsites.net
- Address: 10.0.0.5
- Aliases: mywebapp8675.azurewebsites.net
+ Name: mywebapp1979.privatelink.azurewebsites.net
+ Address: 10.0.0.10
+ Aliases: mywebapp1979.azurewebsites.net
```
- A private IP address of *10.0.0.5* is returned for the web app name. This address is in the subnet of the virtual network that you created earlier.
+8. In the bastion connection to **myVM**, open the web browser.
-1. In the bastion connection to **myVM**, open your web browser.
-
-1. Enter the URL of your web app, **https://\<your-webapp-name>.azurewebsites.net**.
+9. Enter the URL of your web app, **https://mywebapp1979.azurewebsites.net**.
If your web app hasn't been deployed, you'll get the following default web app page:
- :::image type="content" source="./media/create-private-endpoint-portal/web-app-default-page.png" alt-text="Screenshot of the default web app page on a browser." border="true":::
-
-1. Close the connection to **myVM**.
-
-## Clean up resources
-When you're done using the private endpoint and the VM, use [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) to remove the resource group and all the resources within it:
-
-```azurepowershell-interactive
-Remove-AzResourceGroup -Name CreatePrivateEndpointQS-rg -Force
-```
-
-## What you've learned
-
-In this quickstart, you created:
-
-* A virtual network and bastion host
-* A virtual machine
-* A private endpoint for an Azure web app
+ :::image type="content" source="./media/create-private-endpoint-portal/web-app-default-page.png" alt-text="Screenshot of the default web app page on a browser." border="true":::
-You used the VM to securely test connectivity to the web app across the private endpoint.
+10. Close the connection to **myVM**.
## Next steps
private-link Private Endpoint Static Ip Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-static-ip-powershell.md
- Title: Create a private endpoint with a static IP address - PowerShell-
-description: Learn how to create a private endpoint for an Azure service with a static private IP address.
---- Previously updated : 05/13/2022---
-# Create a private endpoint with a static IP address using PowerShell
-
- A private endpoint IP address is allocated by DHCP in your virtual network by default. In this article, you'll create a private endpoint with a static IP address.
-
-## Prerequisites
--- An Azure account with an active subscription. If you don't already have an Azure account, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).--- An Azure web app with a **PremiumV2-tier** or higher app service plan, deployed in your Azure subscription. -
- - For more information and an example, see [Quickstart: Create an ASP.NET Core web app in Azure](../app-service/quickstart-dotnetcore.md).
-
- - The example webapp in this article is named **myWebApp1979**. Replace the example with your webapp name.
-
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. To find the installed version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install the Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
-
-## Create a resource group
-
-An Azure resource group is a logical container where Azure resources are deployed and managed.
-
-Create a resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup):
-
-```azurepowershell-interactive
-New-AzResourceGroup -Name 'myResourceGroup' -Location 'eastus'
-```
-
-## Create a virtual network and bastion host
-
-A virtual network and subnet is required for to host the private IP address for the private endpoint. You'll create a bastion host to connect securely to the virtual machine to test the private endpoint. You'll create the virtual machine in a later section.
-
-In this section, you'll:
--- Create a virtual network with [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork)--- Create subnet configurations for the backend subnet and the bastion subnet with [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig)--- Create a public IP address for the bastion host with [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress)--- Create the bastion host with [New-AzBastion](/powershell/module/az.network/new-azbastion)-
-```azurepowershell-interactive
-## Configure the back-end subnet. ##
-$subnetConfig = New-AzVirtualNetworkSubnetConfig -Name myBackendSubnet -AddressPrefix 10.0.0.0/24
-
-## Create the Azure Bastion subnet. ##
-$bastsubnetConfig = New-AzVirtualNetworkSubnetConfig -Name AzureBastionSubnet -AddressPrefix 10.0.1.0/24
-
-## Create the virtual network. ##
-$net = @{
- Name = 'MyVNet'
- ResourceGroupName = 'myResourceGroup'
- Location = 'eastus'
- AddressPrefix = '10.0.0.0/16'
- Subnet = $subnetConfig, $bastsubnetConfig
-}
-$vnet = New-AzVirtualNetwork @net
-
-## Create the public IP address for the bastion host. ##
-$ip = @{
- Name = 'myBastionIP'
- ResourceGroupName = 'myResourceGroup'
- Location = 'eastus'
- Sku = 'Standard'
- AllocationMethod = 'Static'
- Zone = 1,2,3
-}
-$publicip = New-AzPublicIpAddress @ip
-
-## Create the bastion host. ##
-$bastion = @{
- ResourceGroupName = 'myResourceGroup'
- Name = 'myBastion'
- PublicIpAddress = $publicip
- VirtualNetwork = $vnet
-}
-New-AzBastion @bastion -AsJob
-```
-
-## Create a private endpoint
-
-An Azure service that supports private endpoints is required to setup the private endpoint and connection to the virtual network. For the examples in this article, we are using an Azure WebApp from the prerequisites. For more information on the Azure services that support a private endpoint, see [Azure Private Link availability](availability.md).
-
-> [!IMPORTANT]
-> You must have a previously deployed Azure WebApp to proceed with the steps in this article. See [Prerequisites](#prerequisites) for more information.
-
-In this section, you'll:
--- Create a private link service connection with [New-AzPrivateLinkServiceConnection](/powershell/module/az.network/new-azprivatelinkserviceconnection).--- Create the private endpoint static IP configuration with [New-AzPrivateEndpointIpConfiguration](/powershell/module/az.network/new-azprivateendpointipconfiguration).--- Create the private endpoint with [New-AzPrivateEndpoint](/powershell/module/az.network/new-azprivateendpoint).-
-```azurepowershell-interactive
-## Place the previously created webapp into a variable. ##
-$webapp = Get-AzWebApp -ResourceGroupName myResourceGroup -Name myWebApp1979
-
-## Create the private endpoint connection. ##
-$pec = @{
- Name = 'myConnection'
- PrivateLinkServiceId = $webapp.ID
- GroupID = 'sites'
-}
-$privateEndpointConnection = New-AzPrivateLinkServiceConnection @pec
-
-## Place the virtual network you created previously into a variable. ##
-$vnet = Get-AzVirtualNetwork -ResourceGroupName 'myResourceGroup' -Name 'myVNet'
-
-## Disable the private endpoint network policy. ##
-$vnet.Subnets[0].PrivateEndpointNetworkPolicies = "Disabled"
-$vnet | Set-AzVirtualNetwork
-
-## Create the static IP configuration. ##
-$ip = @{
- Name = 'myIPconfig'
- GroupId = 'sites'
- MemberName = 'sites'
- PrivateIPAddress = '10.0.0.10'
-}
-$ipconfig = New-AzPrivateEndpointIpConfiguration @ip
-
-## Create the private endpoint. ##
-$pe = @{
- ResourceGroupName = 'myResourceGroup'
- Name = 'myPrivateEndpoint'
- Location = 'eastus'
- Subnet = $vnet.Subnets[0]
- PrivateLinkServiceConnection = $privateEndpointConnection
- IpConfiguration = $ipconfig
-}
-New-AzPrivateEndpoint @pe
-
-```
-
-## Configure the private DNS zone
-
-A private DNS zone is used to resolve the DNS name of the private endpoint in the virtual network. For this example, we are using the DNS information for an Azure WebApp, for more information on the DNS configuration of private endpoints, see [Azure Private Endpoint DNS configuration](private-endpoint-dns.md)].
-
-In this section, you'll:
--- Create a new private Azure DNS zone with [New-AzPrivateDnsZone](/powershell/module/az.privatedns/new-azprivatednszone)--- Link the DNS zone to the virtual network you created previously with [New-AzPrivateDnsVirtualNetworkLink](/powershell/module/az.privatedns/new-azprivatednsvirtualnetworklink)--- Create a DNS zone configuration with [New-AzPrivateDnsZoneConfig](/powershell/module/az.network/new-azprivatednszoneconfig)--- Create a DNS zone group with [New-AzPrivateDnsZoneGroup](/powershell/module/az.network/new-azprivatednszonegroup)-
-```azurepowershell-interactive
-## Place the virtual network into a variable. ##
-$vnet = Get-AzVirtualNetwork -ResourceGroupName 'myResourceGroup' -Name 'myVNet'
-
-## Create the private DNS zone. ##
-$zn = @{
- ResourceGroupName = 'myResourceGroup'
- Name = 'privatelink.azurewebsites.net'
-}
-$zone = New-AzPrivateDnsZone @zn
-
-## Create a DNS network link. ##
-$lk = @{
- ResourceGroupName = 'myResourceGroup'
- ZoneName = 'privatelink.azurewebsites.net'
- Name = 'myLink'
- VirtualNetworkId = $vnet.Id
-}
-$link = New-AzPrivateDnsVirtualNetworkLink @lk
-
-## Configure the DNS zone. ##
-$cg = @{
- Name = 'privatelink.azurewebsites.net'
- PrivateDnsZoneId = $zone.ResourceId
-}
-$config = New-AzPrivateDnsZoneConfig @cg
-
-## Create the DNS zone group. ##
-$zg = @{
- ResourceGroupName = 'myResourceGroup'
- PrivateEndpointName = 'myPrivateEndpoint'
- Name = 'myZoneGroup'
- PrivateDnsZoneConfig = $config
-}
-New-AzPrivateDnsZoneGroup @zg
-
-```
-
-## Create a test virtual machine
-
-To verify the static IP address and the functionality of the private endpoint, a test virtual machine connected to your virtual network is required.
-
-In this section, you'll:
--- Create a login credential for the virtual machine with [Get-Credential](/powershell/module/microsoft.powershell.security/get-credential)--- Create a network interface for the virtual machine with [New-AzNetworkInterface](/powershell/module/az.network/new-aznetworkinterface)--- Create a virtual machine configuration with [New-AzVMConfig](/powershell/module/az.compute/new-azvmconfig), [Set-AzVMOperatingSystem](/powershell/module/az.compute/set-azvmoperatingsystem), [Set-AzVMSourceImage](/powershell/module/az.compute/set-azvmsourceimage), and [Add-AzVMNetworkInterface](/powershell/module/az.compute/add-azvmnetworkinterface)--- Create the virtual machine with [New-AzVM](/powershell/module/az.compute/new-azvm)-
-```azurepowershell-interactive
-## Create the credential for the virtual machine. Enter a username and password at the prompt. ##
-$cred = Get-Credential
-
-## Place the virtual network into a variable. ##
-$vnet = Get-AzVirtualNetwork -Name myVNet -ResourceGroupName myResourceGroup
-
-## Create a network interface for the virtual machine. ##
-$nic = @{
- Name = 'myNicVM'
- ResourceGroupName = 'myResourceGroup'
- Location = 'eastus'
- Subnet = $vnet.Subnets[0]
-}
-$nicVM = New-AzNetworkInterface @nic
-
-## Create the configuration for the virtual machine. ##
-$vm1 = @{
- VMName = 'myVM'
- VMSize = 'Standard_DS1_v2'
-}
-$vm2 = @{
- ComputerName = 'myVM'
- Credential = $cred
-}
-$vm3 = @{
- PublisherName = 'MicrosoftWindowsServer'
- Offer = 'WindowsServer'
- Skus = '2019-Datacenter'
- Version = 'latest'
-}
-$vmConfig =
-New-AzVMConfig @vm1 | Set-AzVMOperatingSystem -Windows @vm2 | Set-AzVMSourceImage @vm3 | Add-AzVMNetworkInterface -Id $nicVM.Id
-
-## Create the virtual machine. ##
-New-AzVM -ResourceGroupName 'myResourceGroup' -Location 'eastus' -VM $vmConfig
-
-```
--
-## Test connectivity with the private endpoint
-
-Use the VM you created in the previous step to connect to the webapp across the private endpoint.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-2. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines**.
-
-3. Select **myVM**.
-
-4. On the overview page for **myVM**, select **Connect**, and then select **Bastion**.
-
-5. Enter the username and password that you used when you created the VM. Select **Connect**.
-
-6. After you've connected, open PowerShell on the server.
-
-7. Enter `nslookup mywebapp1979.azurewebsites.net`. Replace **mywebapp1979** with the name of the web app that you created earlier. You'll receive a message that's similar to the following:
-
- ```powershell
- Server: UnKnown
- Address: 168.63.129.16
-
- Non-authoritative answer:
- Name: mywebapp1979.privatelink.azurewebsites.net
- Address: 10.0.0.10
- Aliases: mywebapp1979.azurewebsites.net
- ```
-
- A static private IP address of *10.0.0.10* is returned for the web app name.
-
-8. In the bastion connection to **myVM**, open the web browser.
-
-9. Enter the URL of your web app, **https://mywebapp1979.azurewebsites.net**.
-
- If your web app hasn't been deployed, you'll get the following default web app page:
-
- :::image type="content" source="./media/private-endpoint-static-ip-powershell/web-app-default-page.png" alt-text="Screenshot of the default web app page on a browser." border="true":::
-
-10. Close the connection to **myVM**.
-
-## Next steps
-
-To learn more about Private Link and Private endpoints, see
--- [What is Azure Private Link](private-link-overview.md)--- [Private endpoint overview](private-endpoint-overview.md)---
route-server Vmware Solution Default Route https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/vmware-solution-default-route.md
There are two main scenarios for this pattern:
- ExpressRoute Global Reach might not be available on a particular region to interconnect the ExpressRoute circuits of AVS and the on-premises network. - Some organizations might have the requirement to send traffic between AVS and the on-premises network through an NVA (typically a firewall).
-If both ExpressRoute circuits (to AVS and to on-premises) are terminated in the same ExpressRoute gateway, you could think that the gateway is going to route packets across them. However, an ExpressRoute gateway isn't designed to do that. Instead, you need to hairpin the traffic to a Network Virtual Appliance that is able to route the traffic. To that purpose, two actions are required:
+> [!IMPORTANT]
+> Global Reach is still the preferred option for connecting AVS and on-premises environments. The patterns described in this document add considerable complexity to the environment.
+
+If both ExpressRoute circuits (to AVS and to on-premises) are terminated in the same ExpressRoute gateway, you could assume that the gateway is going to route packets across them. However, an ExpressRoute gateway isn't designed to do that. Instead, you need to hairpin the traffic to a Network Virtual Appliance that is able to route the traffic. To that purpose, two actions are required:
-- The NVA should advertise a supernet for the AVS and on-premises prefixes, as the diagram below shows. You could use a supernet that includes both AVS and on-premises prefixes, or individual prefixes for AVS and on-premises (always less specific that the actual prefixes advertised over ExpressRoute).
+- The NVA should advertise a supernet for the AVS and on-premises prefixes, as the diagram below shows. You could use a supernet that includes both AVS and on-premises prefixes, or individual prefixes for AVS and on-premises (always less specific than the actual prefixes advertised over ExpressRoute). Keep in mind, though, that all supernet prefixes advertised to Route Server are going to be propagated both to AVS and on-premises.
- UDRs in the GatewaySubnet that exactly match the prefixes advertised from AVS and on-premises will hairpin traffic from the GatewaySubnet to the Network Virtual Appliance. :::image type="content" source="./media/scenarios/vmware-solution-to-on-premises-hairpin.png" alt-text="Diagram of AVS to on-premises communication with Route Server in a single region."::: As the diagram above shows, the NVA needs to advertise more generic (less specific) prefixes that include the networks from on-premises and AVS. You need to be careful with this approach, since the NVA might attract traffic that it shouldn't (it is advertising wider ranges, in the example above the whole `10.0.0.0/8` network).
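A minimal PowerShell sketch of the second action (the UDRs), assuming hypothetical names and values: resource group `my-rg`, hub VNet `hub-vnet`, NVA IP `10.0.1.4`, and example AVS and on-premises prefixes. None of these come from this article; adjust them for your environment.

```azurepowershell-interactive
## Sketch only: resource names, address prefixes, and the NVA IP are placeholders. ##
## Create routes that exactly match the AVS and on-premises prefixes, pointing at the NVA. ##
$rt = New-AzRouteTable -Name 'rt-gatewaysubnet' -ResourceGroupName 'my-rg' -Location 'eastus'
Add-AzRouteConfig -RouteTable $rt -Name 'to-avs' -AddressPrefix '10.1.0.0/22' -NextHopType VirtualAppliance -NextHopIpAddress '10.0.1.4' | Out-Null
Add-AzRouteConfig -RouteTable $rt -Name 'to-onprem' -AddressPrefix '10.2.0.0/16' -NextHopType VirtualAppliance -NextHopIpAddress '10.0.1.4' | Out-Null
Set-AzRouteTable -RouteTable $rt

## Associate the route table with the GatewaySubnet so that traffic hairpins through the NVA. ##
$vnet = Get-AzVirtualNetwork -Name 'hub-vnet' -ResourceGroupName 'my-rg'
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'GatewaySubnet' -AddressPrefix '10.0.0.0/27' -RouteTable $rt | Out-Null
Set-AzVirtualNetwork -VirtualNetwork $vnet
```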
-If advertising less specific prefixes isn't possible, you could instead implement an alternative design using two separate VNets. In this design, instead of propagating less specific routes to attract traffic to the ExpressRoute gateway, two different NVAs in separate VNets exchange routes between each other, and propagate them to their respective ExpressRoute circuits via BGP and Azure Route Server, as the following diagram shows:
+If advertising less specific prefixes isn't possible as described above, or if the UDRs required in the GatewaySubnet aren't desired or supported (for example, because they exceed the maximum number of routes per route table; see [Azure subscription and service limits](../azure-resource-manager/management/azure-subscription-service-limits.md#networking-limits) for more details), you could instead implement an alternative design using two separate VNets. In this topology, instead of propagating less specific routes to attract traffic to the ExpressRoute gateway, two different NVAs in separate VNets exchange routes between each other, and propagate them to their respective ExpressRoute circuits via BGP and Azure Route Server, as the following diagram shows. Each NVA has full control over which prefixes are propagated to each ExpressRoute circuit. For example, the diagram below shows how a single `0.0.0.0/0` route is advertised to AVS, and the individual AVS prefixes are propagated to the on-premises network:
:::image type="content" source="./media/scenarios/vmware-solution-to-on-premises.png" alt-text="Diagram of AVS to on-premises communication with Route Server in two regions.":::
-Note that some sort of encapsulation protocol such as VXLAN or IPsec is required between the NVAs. The reason why encapsulation is needed is because the NVA NICs would learn the routes from ExpressRoute and from the Route Server, so they would send packets that need to be routed to the other NVA in the wrong direction. This would create a routing loop returning the packets to the local NVA.
+Note that some sort of encapsulation protocol such as VXLAN or IPsec is required between the NVAs. Encapsulation is needed because the NVA NICs would learn the routes from Azure Route Server with the NVA as next hop, which would create a routing loop.
-The main difference between this dual VNet design and the previously described single VNet design is that with two VNets you're actually interconnecting from a routing perspective both ExpressRoute circuits (AVS and on-premises), meaning that whatever is learned from one will be advertised to the other. In the single-VNet design described earlier in this document a common set of supernets or less specific prefixes are sent down both circuits to attract traffic to the VNet.
+The main difference between this dual VNet design and the previously described single VNet design is that with two VNets you have full control over what is advertised to each ExpressRoute circuit, which allows for a more dynamic and granular configuration. In comparison, in the single-VNet design described earlier in this document a common set of supernets or less specific prefixes is sent down both circuits to attract traffic to the VNet. Additionally, in the single-VNet design there is a static configuration component in the UDRs that are required in the Gateway Subnet. Hence, although less cost-effective (two ExpressRoute gateways and two sets of NVAs are required), the double-VNet design might be a better alternative for very dynamic routing environments.
## Next steps
search Cognitive Search Skill Entity Linking V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-entity-linking-v3.md
description: Extract different linked entities from text in an enrichment pipeline in Azure Cognitive Search. --++ Last updated 12/09/2021
search Cognitive Search Skill Entity Recognition V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-entity-recognition-v3.md
description: Extract different types of entities using the machine learning models of Azure Cognitive Services for Language in an AI enrichment pipeline in Azure Cognitive Search. --++ Last updated 12/09/2021
sentinel Playbook Triggers Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/playbook-triggers-actions.md
Basic playbook to send incident details over mail:
The **Entities** dynamic field is an array of JSON objects, each of which represents an entity. Each entity type has its own schema, depending on its unique properties.
-The **"Entities - Get \<entity name>"** action allows you to do the following:
+The **"Entities - Get \<entity type>"** action allows you to do the following:
- Filter the array of entities by the requested type. - Parse the specific fields of this type, so they can be used as dynamic fields in further actions.
service-connector How To Integrate Signalr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-signalr.md
Title: Integrate Azure SignalR Service with Service Connector
-description: Integrate Azure SignalR Service into your application with Service Connector
+description: Integrate Azure SignalR Service into your application with Service Connector. Learn about authentication types and client types of Azure SignalR Service.
Previously updated : 10/29/2021- Last updated : 5/25/2022+
+- ignite-fall-2021
+- kr2b-contr-experiment
+- event-tier1-build-2022
# Integrate Azure SignalR Service with Service Connector
-This page shows the supported authentication types and client types of Azure SignalR Service using Service Connector. You might still be able to connect to Azure SignalR Service in other programming languages without using Service Connector. This page also shows default environment variable name and value (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
+This article shows the supported authentication types and client types of Azure SignalR Service using Service Connector. This article also shows the default environment variable names and values (or Spring Boot configuration) that you get when you create the service connection. For more information, see [Service Connector environment variable naming convention](concept-service-connector-internals.md).
## Supported compute service - Azure App Service
-## Supported Authentication types and client types
+## Supported authentication types and client types
| Client Type | System-assigned Managed Identity | User-assigned Managed Identity | Secret/ConnectionString | Service Principal | | | | | | |
This page shows the supported authentication types and client types of Azure Sig
### .NET
-**Secret/ConnectionString**
+- Secret/ConnectionString
-| Default environment variable name | Description | Example value |
-| | | |
-| AZURE_SIGNALR_CONNECTIONSTRING | SignalR Service connection string | `Endpoint=https://{signalrName}.service.signalr.net;AccessKey={};Version=1.0;` |
+ | Default environment variable name | Description | Example value |
+ | | | |
+ | AZURE_SIGNALR_CONNECTIONSTRING | SignalR Service connection string | `Endpoint=https://{signalrName}.service.signalr.net;AccessKey={};Version=1.0;` |
-**System-assigned Managed Identity**
+- System-assigned Managed Identity
-| Default environment variable name | Description | Example value |
-| | | |
-| AZURE_SIGNALR_CONNECTIONSTRING | SignalR Service connection string with Managed Identity | `Endpoint=https://{signalrName}.service.signalr.net;AuthType=aad;ClientId={};Version=1.0;` |
+ | Default environment variable name | Description | Example value |
+ | | | |
+ | AZURE_SIGNALR_CONNECTIONSTRING | SignalR Service connection string with Managed Identity | `Endpoint=https://{signalrName}.service.signalr.net;AuthType=aad;ClientId={};Version=1.0;` |
-**User-assigned Managed Identity**
+- User-assigned Managed Identity
-| Default environment variable name | Description | Example value |
-| | | |
-| AZURE_SIGNALR_CONNECTIONSTRING | SignalR Service connection string with Managed Identity | `Endpoint=https://{signalrName}.service.signalr.net;AuthType=aad;ClientId={};Version=1.0;` |
+ | Default environment variable name | Description | Example value |
+ | | | |
+ | AZURE_SIGNALR_CONNECTIONSTRING | SignalR Service connection string with Managed Identity | `Endpoint=https://{signalrName}.service.signalr.net;AuthType=aad;ClientId={};Version=1.0;` |
-**Service Principal**
+- Service Principal
-| Default environment variable name | Description | Example value |
-| | | |
-| AZURE_SIGNALR_CONNECTIONSTRING | SignalR Service connection string with Service Principal | `Endpoint=https://{signalrName}.service.signalr.net;AuthType=aad;ClientId={};ClientSecret={};TenantId={};Version=1.0;` |
+ | Default environment variable name | Description | Example value |
+ | | | |
+ | AZURE_SIGNALR_CONNECTIONSTRING | SignalR Service connection string with Service Principal | `Endpoint=https://{signalrName}.service.signalr.net;AuthType=aad;ClientId={};ClientSecret={};TenantId={};Version=1.0;` |
## Next steps
-Follow the tutorials listed below to learn more about Service Connector.
- > [!div class="nextstepaction"] > [Learn about Service Connector concepts](./concept-service-connector-internals.md)
service-connector How To Troubleshoot Front End Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-troubleshoot-front-end-error.md
Title: Service Connector Troubleshooting Guidance
-description: Error list and suggested actions of Service Connector
+ Title: Service Connector troubleshooting guidance
+description: This article lists error messages and suggested actions of Service Connector to use for troubleshooting issues.
-- Previously updated : 05/03/2022+ Last updated : 5/25/2022+
+- ignite-fall-2021
+- kr2b-contr-experiment
+- event-tier1-build-2022
# How to troubleshoot with Service Connector
-If you come across an issue, you can refer to the error message to find suggested actions or fixes. This how-to guide shows you several options to troubleshoot Service Connector.
+This article lists error messages and suggested actions to help you troubleshoot Service Connector.
-## Troubleshooting from the Azure portal
+## Error message and suggested actions from the Azure portal
| Error message | Suggested Action | | | |
-| Unknown resource type | <ul><li>Check source and target resource to verify whether the service types are supported by Service Connector.</li><li>Check whether the specified source-target connection combination is supported by Service Connector.</li></ul> |
-| Unknown resource type | <ul><li>Check whether the target resource exists.</li><li>Check the correctness of the target resource ID.</li></ul> |
+| Unknown resource type | <ul><li>Check source and target resource to verify whether the service types are supported by Service Connector.</li><li>Check whether the specified source-target connection combination is supported by Service Connector.</li><li>Check whether the target resource exists.</li><li>Check the correctness of the target resource ID.</li></ul> |
| Unsupported resource | <ul><li>Check whether the authentication type is supported by the specified source-target connection combination.</li></ul> |
-### Troubleshooting using the Azure CLI
-
-#### InvalidArgumentValueError
+## Error type, error message, and suggested actions using Azure CLI
+### InvalidArgumentValueError
| Error message | Suggested Action | | | |
If you come across an issue, you can refer to the error message to find suggeste
| Error message | Suggested Action | | | | | `{Argument}` shouldn't be blank | User should provide argument value for interactive input |
-| Required keys missing for parameter `{Parameter}`. All possible keys are: `{Keys}` | Provide value for the auth info parameter, usually in the form of `--param key1=val1 key2=val2`. |
+| Required keys missing for parameter `{Parameter}`. All possible keys are: `{Keys}` | Provide value for the authentication information parameter, usually in the form of `--param key1=val1 key2=val2`. |
| Required argument is missing, please provide the arguments: `{Arguments}` | Provide the required argument. | #### ValidationError | Error message | Suggested Action | | | |
-| Only one auth info is needed | User can only provide one auth info parameter. Check whether auth info is missing or multiple auth info parameters are provided. |
-| Auth info argument should be provided when updating the connection: `{ConnectionName}` | When you update a secret type connection, auth info parameter should be provided. This error occurs because user's secret cannot be accessed through the ARM API.
-| Either client type or auth info should be specified to update | When you update a connection, either client type or auth info should be provided. |
-| Usage error: {} [KEY=VALUE ...] | Check the available keys and provide values for the auth info parameter, usually in the form of `--param key1=val1 key2=val2`. |
-| Unsupported Key `{Key}` is provided for parameter `{Parameter}`. All possible keys are: `{Keys}` | Check the available keys and provide values for the auth info parameter, usually in the form of `--param key1=val1 key2=val2`. |
+| Only one auth info is needed | User can only provide one authentication information parameter. Check whether the parameter is missing or whether multiple parameters were provided. |
+| Auth info argument should be provided when updating the connection: `{ConnectionName}` | The authentication information should be provided when updating a secret type connection. This error occurs because a user's secret can't be accessed through the Azure Resource Manager API. |
+| Either client type or auth info should be specified to update | Either client type or authentication information should be provided when updating a connection. |
+| Usage error: `{} [KEY=VALUE ...]` | Check the available keys and provide values for the authentication information parameter, usually in the form of `--param key1=val1 key2=val2`. |
+| Unsupported Key `{Key}` is provided for parameter `{Parameter}`. All possible keys are: `{Keys}` | Check the available keys and provide values for the authentication information parameter, usually in the form of `--param key1=val1 key2=val2`. |
| Provision failed, please create the target resource manually and then create the connection. Error details: `{ErrorTrace}` | <ul><li>Retry.</li><li>Create the target resource manually and then create the connection.</li></ul> | ## Next steps
service-connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/overview.md
Previously updated : 05/03/2022 Last updated : 05/24/2022 # What is Service Connector?
-The Service Connector service helps you connect Azure compute services to other backing services. This service configures the network settings and connection information (for example, generating environment variables) between compute services and target backing services in management plane. Developers use their preferred SDK or library that consumes the connection information to do data plane operations against the target backing service.
+Service Connector helps you connect Azure compute services to other backing services. This service configures the network settings and connection information (for example, generating environment variables) between compute services and target backing services in the management plane. Developers use their preferred SDK or library that consumes the connection information to do data plane operations against the target backing service.
This article provides an overview of Service Connector.
Any application that runs on Azure compute services and requires a backing servi
See [what services are supported in Service Connector](#what-services-are-supported-in-service-connector) to see more supported services and application patterns.
-## What are the benefits using Service Connector?
+## What are the benefits to using Service Connector?
-**Connect to target backing service with just a single command or a few clicks:**
+**Connect to a target backing service with just a single command or a few clicks:**
Service Connector is designed for your ease of use. To create a connection, you'll need three required parameters: a target service instance, an authentication type between the compute service and the target service, and your application client type. Developers can use the Azure CLI or the guided Azure portal experience to create connections.
Once a service connection is created, developers can validate and check the heal
* Azure App Service * Azure Spring Cloud
+* Azure Container Apps
**Target
Once a service connection is created, developers can validate and check the heal
There are two major ways to use Service Connector for your Azure application: * **Azure CLI:** Create, list, validate and delete service-to-service connections with connection commands in the Azure CLI.
-* **Azure Portal:** Use the guided portal experience to create service-to-service connections and manage connections with a hierarchy list.
+* **Azure portal:** Use the guided portal experience to create service-to-service connections and manage connections with a hierarchy list.
## Next steps
service-connector Quickstart Cli Container Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-cli-container-apps.md
+
+ Title: Quickstart - Create a service connection in Container Apps using the Azure CLI
+description: Quickstart showing how to create a service connection in Azure Container Apps using the Azure CLI
++++ Last updated : 05/24/2022
+ms.devlang: azurecli
++
+# Quickstart: Create a service connection in Container Apps with the Azure CLI
+
+This quickstart shows you how to create a service connection in Container Apps with the Azure CLI. The [Azure CLI](/cli/azure) is a set of commands used to create and manage Azure resources. The Azure CLI is available across Azure services and is designed to get you working quickly with Azure, with an emphasis on automation.
+++
+- Version 2.37.0 or higher of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).
+
+- An application deployed to Container Apps in a [region supported by Service Connector](./concept-region-support.md). If you don't have one yet, [create and deploy a container to Container Apps](../container-apps/quickstart-portal.md).
+
+> [!IMPORTANT]
+> Service Connector in Container Apps is currently in preview.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## View supported target services
+
+Use the following Azure CLI commands to register the Service Connector resource provider and list the target services that Container Apps can connect to.
+
+```azurecli-interactive
+az provider register -n Microsoft.ServiceLinker
+az containerapp connection list-support-types --output table
+```
+
+## Create a service connection
+
+### [Using an access key](#tab/using-access-key)
+
+1. Use the following Azure CLI command to create a service connection from Container Apps to a Blob Storage with an access key.
+
+ ```azurecli-interactive
+ az containerapp connection create storage-blob --secret
+ ```
+
+1. Provide the following information at the Azure CLI's request:
+
+ - **The resource group which contains the container app**: the name of the resource group with the container app.
+ - **Name of the container app**: the name of your container app.
+ - **The container where the connection information will be saved:** the name of the container, in your container app, that connects to the target service.
+ - **The resource group which contains the storage account:** the name of the resource group that contains the storage account. In this guide, we're using a Blob Storage.
+ - **Name of the storage account:** the name of the storage account that contains your blob.
+
+> [!NOTE]
+> If you don't have a Blob Storage, you can run `az containerapp connection create storage-blob --new --secret` to provision a new Blob Storage and directly get connected to your container app.
+
+### [Using a managed identity](#tab/using-managed-identity)
+
+> [!IMPORTANT]
+> Using a managed identity requires that you have permission to perform [Azure AD role assignments](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md). Without this permission, your connection creation will fail. Ask your subscription owner to grant you this permission, or use an access key instead to create the connection.
+
+1. Use the following Azure CLI command to create a service connection from Container Apps to a Blob Storage with a system-assigned managed identity.
+
+ ```azurecli-interactive
+ az containerapp connection create storage-blob --system-identity
+ ```
+
+1. Provide the following information at the Azure CLI's request:
+
+ - **The resource group which contains the container app**: the name of the resource group with the container app.
+ - **Name of the container app**: the name of your container app.
+ - **The container where the connection information will be saved:** the name of the container, in your container app, that connects to the target service.
+ - **The resource group which contains the storage account:** the name of the resource group that contains the storage account. In this guide, we're using a Blob Storage.
+ - **Name of the storage account:** the name of the storage account that contains your blob.
+
+> [!NOTE]
+> If you don't have a Blob Storage, you can run `az containerapp connection create storage-blob --new --system-identity` to provision a new Blob Storage and directly get connected to your container app.
+++
+## View connections
+
+ Use the Azure CLI command `az containerapp connection list` to list all your container app's provisioned connections. Provide the following information:
+
+- **Source compute service resource group name:** the resource group name of the container app.
+- **Container app name:** the name of your container app.
+
+```azurecli-interactive
+az containerapp connection list -g "<your-container-app-resource-group>" --name "<your-container-app-name>" --output table
+```
+
+The output also displays the provisioning state of your connections: failed or succeeded.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Service Connector internals](./concept-service-connector-internals.md)
service-connector Quickstart Cli Spring Cloud Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-cli-spring-cloud-connection.md
Previously updated : 05/03/2022 Last updated : 03/24/2022 ms.devlang: azurecli
The [Azure CLI](/cli/azure) is a set of commands used to create and manage Azure
- At least one Spring Cloud application running on Azure. If you don't have a Spring Cloud application, [create one](../spring-cloud/quickstart.md). - ## View supported target service types
-Use the Azure CLI [[az spring-cloud connection](quickstart-cli-spring-cloud-connection.md)] command to create and manage service connections to your Spring Cloud application.
+Use the Azure CLI [az spring-cloud connection](quickstart-cli-spring-cloud-connection.md) command to create and manage service connections to your Spring Cloud application.
```azurecli-interactive az provider register -n Microsoft.ServiceLinker
az spring-cloud connection list-support-types --output table
## Create a service connection
-#### [Using an access key](#tab/Using-access-key)
+### [Using an access key](#tab/Using-access-key)
-Use the Azure CLI [az spring-cloud connection]() command to create a service connection to an Azure Blob Storage with an access key, providing the following information:
+Use the Azure CLI command `az spring-cloud connection` to create a service connection to an Azure Blob Storage with an access key, providing the following information:
- **Spring Cloud resource group name:** the resource group name of the Spring Cloud. - **Spring Cloud name:** the name of your Spring Cloud.
az spring-cloud connection create storage-blob --secret
> [!NOTE] > If you don't have a Blob Storage, you can run `az spring-cloud connection create storage-blob --new --secret` to provision a new one and directly get connected to your Spring Cloud app.
-#### [Using Managed Identity](#tab/Using-Managed-Identity)
+### [Using Managed Identity](#tab/Using-Managed-Identity)
> [!IMPORTANT] > To use Managed Identity, you must have permission to manage [role assignments in Azure Active Directory](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md). If you don't have this permission, your connection creation will fail. You can ask your subscription owner to grant you a role assignment permission or use an access key to create the connection.
-Use the Azure CLI [az spring-cloud connection](quickstart-cli-spring-cloud-connection.md) command to create a service connection to a Blob Storage with System-assigned Managed Identity, providing the following information:
+Use the Azure CLI command `az spring-cloud connection` to create a service connection to a Blob Storage with System-assigned Managed Identity, providing the following information:
- **Spring Cloud resource group name:** the resource group name of the Spring Cloud. - **Spring Cloud name:** the name of your Spring Cloud.
az spring-cloud connection list -g <your-spring-cloud-resource-group> --spring-c
Follow the tutorials listed below to start building your own application with Service Connector. > [!div class="nextstepaction"]
-> - [Tutorial: Spring Cloud + MySQL](./tutorial-java-spring-mysql.md)
-> - [Tutorial: Spring Cloud + Apache Kafka on Confluent Cloud](./tutorial-java-spring-confluent-kafka.md)
+> [Tutorial: Spring Cloud + MySQL](./tutorial-java-spring-mysql.md)
+> [Tutorial: Spring Cloud + Apache Kafka on Confluent Cloud](./tutorial-java-spring-confluent-kafka.md)
service-connector Quickstart Portal Spring Cloud Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-portal-spring-cloud-connection.md
Title: Quickstart - Create a service connection in Spring Cloud from the Azure portal
-description: Quickstart showing how to create a service connection in Spring Cloud from Azure portal
+ Title: Create a service connection in Spring Cloud from Azure portal
+description: This quickstart shows you how to create a service connection in Spring Cloud from the Azure portal.
-- Previously updated : 05/03/2022+ Last updated : 5/25/2022+
+- ignite-fall-2021
+- kr2b-contr-experiment
+- event-tier1-build-2022
# Quickstart: Create a service connection in Spring Cloud from the Azure portal
Sign in to the Azure portal at [https://portal.azure.com/](https://portal.azure.
| **Storage account** | Your storage account | The target storage account you want to connect to. If you choose a different service type, select the corresponding target service instance. | 1. Select **Next: Authentication** to select the authentication type. Then select **Connection string** to use access key to connect your Blob storage account.
-1. Then select **Next: Review + Create** to review the provided information. Then select **Create** to create the service connection. It might take 1 minute to complete the operation.
+1. Then select **Next: Review + Create** to review the provided information. Then select **Create** to create the service connection. It might take one minute to complete the operation.
## View service connections in Spring Cloud
Sign in to the Azure portal at [https://portal.azure.com/](https://portal.azure.
1. Select **>** to expand the list and access the properties required by your Spring boot application.
-1. Select the ellipsis **...** and **Validate**. You can see the connection validation details in the pop-up blade from the right.
+1. Select the ellipsis **...** and **Validate**. You can see the connection validation details in the context pane that opens on the right.
## Next steps
service-fabric How To Deploy Service Fabric Application System Assigned Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-deploy-service-fabric-application-system-assigned-managed-identity.md
Title: Deploy a Service Fabric app with system-assigned MI
description: This article shows you how to assign a system-assigned managed identity to an Azure Service Fabric application Previously updated : 07/25/2019 Last updated : 05/25/2022 # Deploy Service Fabric application with system-assigned managed identity
+> [!NOTE]
+> Enabling identity for an existing app that was initially deployed using Azure cmdlets isn't supported.
+ In order to access the managed identity feature for Azure Service Fabric applications, you must first enable the Managed Identity Token Service on the cluster. This service is responsible for the authentication of Service Fabric applications using their managed identities, and for obtaining access tokens on their behalf. Once the service is enabled, you can see it in Service Fabric Explorer under the **System** section in the left pane, running under the name **fabric:/System/ManagedIdentityTokenService** next to other system services. > [!NOTE]
service-fabric Service Fabric Application Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-application-lifecycle.md
Title: Application lifecycle in Service Fabric description: Describes developing, deploying, testing, upgrading, maintaining, and removing Service Fabric applications.-
+service: service-fabric
+++ Previously updated : 1/19/2018 Last updated : 05/25/2022 # Service Fabric application lifecycle As with other platforms, an application on Azure Service Fabric usually goes through the following phases: design, development, testing, deployment, upgrading, maintenance, and removal. Service Fabric provides first-class support for the full application lifecycle of cloud applications, from development through deployment, daily management, and maintenance to eventual decommissioning. The service model enables several different roles to participate independently in the application lifecycle. This article provides an overview of the APIs and how they are used by the different roles throughout the phases of the Service Fabric application lifecycle.
See the [Application upgrade tutorial](service-fabric-application-upgrade-tutori
See [Deploy an application](service-fabric-deploy-remove-applications.md) for examples.
+## Preserving disk space in cluster image store
+
+The ImageStoreService keeps copied and provisioned packages, which can lead to accumulation of files. File accumulation can cause the ImageStoreService (fabric:/System/ImageStoreService) to fill up the disk and can increase the build time for ImageStoreService replicas.
+
+To avoid file accumulation, use the following provisioning sequence:
+
+1. Copy package to ImageStore, and use the compress option
+
+1. Provision the package
+
+1. Remove the package in the image store
+
+1. Upgrade the application/cluster
+
+1. Unprovision the old version
+
+Steps 3 and 5 in the procedure above prevent the accumulation of files in the image store.
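As an illustration of the sequence above, here's a hedged PowerShell sketch. The cluster endpoint, package path, image store path, application name, and type name/versions are placeholders introduced for this example, not values from this article.

```powershell
# Sketch of steps 1-5 above; all names, paths, and versions are placeholders.
Connect-ServiceFabricCluster -ConnectionEndpoint 'mycluster.contoso.com:19000'

# 1. Copy the package to the image store, using the compress option.
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath '.\MyAppPkg' `
    -ApplicationPackagePathInImageStore 'MyAppPkgV2' -ImageStoreConnectionString 'fabric:ImageStore' -CompressPackage

# 2. Provision (register) the application type.
Register-ServiceFabricApplicationType -ApplicationPathInImageStore 'MyAppPkgV2'

# 3. Remove the package from the image store.
Remove-ServiceFabricApplicationPackage -ApplicationPackagePathInImageStore 'MyAppPkgV2' -ImageStoreConnectionString 'fabric:ImageStore'

# 4. Upgrade the application to the newly provisioned version.
Start-ServiceFabricApplicationUpgrade -ApplicationName 'fabric:/MyApp' -ApplicationTypeVersion '2.0.0' -Monitored -FailureAction Rollback

# 5. Unprovision the old version once the upgrade completes.
Unregister-ServiceFabricApplicationType -ApplicationTypeName 'MyAppType' -ApplicationTypeVersion '1.0.0'
```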
+
+### Configuration for automatic cleanup
+
+You can automate step 3 above using PowerShell or XML. This will cause the application package to be automatically deleted after the successful registration of the application type.
+
+[PowerShell](/powershell/module/servicefabric/register-servicefabricapplicationtype?view=azureservicefabricps&preserve-view=true):
+
+```powershell
+Register-ServiceFabricApplicationType -ApplicationPathInImageStore <AppPackagePathInImageStore> -ApplicationPackageCleanupPolicy Automatic
+```
+
+XML:
+
+```xml
+<Section Name="Management">
+ <Parameter Name="CleanupApplicationPackageOnProvisionSuccess" Value="True" />
+</Section>
+```
+
+You can automate step 5 above using XML. This will cause unused application types to be automatically unregistered.
+
+```xml
+<Section Name="Management">
+ <Parameter Name="CleanupUnusedApplicationTypes" Value="true" />
+ <Parameter Name="PeriodicCleanupUnusedApplicationTypes" Value="true" />
+ <Parameter Name="TriggerAppTypeCleanupOnProvisionSuccess" Value="true" />
+ <Parameter Name="MaxUnusedAppTypeVersionsToKeep" Value="3" />
+</Section>
+```
+ ## Cleaning up files and data on nodes The replication of application files will eventually distribute the files to all nodes, depending on balancing actions. This can create disk pressure depending on the number of applications and their file size.
service-fabric Service Fabric Cluster Resource Manager Movement Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-resource-manager-movement-cost.md
await fabricClient.ServiceManager.UpdateServiceAsync(new Uri("fabric:/AppName/Se
## Dynamically specifying move cost on a per-replica basis
-The preceding snippets are all for specifying MoveCost for a whole service at once from outside the service itself. However, move cost is most useful when the move cost of a specific service object changes over its lifespan. Since the services themselves probably have the best idea of how costly they are to move a given time, there's an API for services to report their own individual move cost during runtime.
+The preceding snippets are all for specifying MoveCost for a whole service at once from outside the service itself. However, move cost is most useful when the move cost of a specific service object changes over its lifespan. Since the services themselves probably have the best idea of how costly they are to move at a given time, there's an API for services to report their own individual move cost during runtime.
C#:
C#:
this.Partition.ReportMoveCost(MoveCost.Medium); ```
+> [!NOTE]
+> You can only set the movement cost for secondary replicas through code.
+ ## Reporting move cost for a partition The previous section describes how service replicas or instances report MoveCost themselves. We provide a Service Fabric API for reporting MoveCost values on behalf of other partitions. Sometimes a service replica or instance can't determine the best MoveCost value by itself, and must rely on other services' logic. Reporting MoveCost on behalf of other partitions, alongside [reporting load on behalf of other partitions](service-fabric-cluster-resource-manager-metrics.md#reporting-load-for-a-partition), allows you to completely manage partitions from outside. These APIs eliminate the need for [the Sidecar pattern](/azure/architecture/patterns/sidecar), from the perspective of the Cluster Resource Manager.
service-health Azure Status Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/azure-status-overview.md
Title: Azure status overview | Microsoft Docs description: A global view into the health of Azure services Previously updated : 06/11/2019 Last updated : 05/26/2022 # Azure status overview
The Azure status page gets updated in real time as the health of Azure services
![Azure status refresh](./media/azure-status-overview/update.PNG)
-## Azure status history
-
-While the Azure status page always shows the latest health information, you can view older events using the [Azure status history page](https://status.azure.com/status/history/). The history page contains all RCAs for incidents that occurred on November 20th, 2019 or later and will - from that date forward - provide a 5-year RCA history. RCAs prior to November 20th, 2019 are not available.
- ## RSS Feed Azure status also provides [an RSS feed](https://status.azure.com/status/feed/) of changes to the health of Azure services that you can subscribe to.
+## When does Azure publish communications to the Status page?
+
+Most of our service issue communications are provided as targeted notifications sent directly to impacted customers & partners. These are delivered through [Azure Service Health](https://azure.microsoft.com/features/service-health/) in the Azure portal and trigger any [Azure Service Health alerts](/azure/service-health/alerts-activity-log-service-notifications-portal?toc=%2Fazure%2Fservice-health%2Ftoc.json) that have been configured. The public Status page is only used to communicate about service issues under three specific scenarios:
+
+- **Scenario 1 - Broad impact involving multiple regions, zones, or services** - A service issue has broad/significant customer impact across multiple services for a full region or multiple regions. We notify you in this case because customer-configured resilience like high availability and/or disaster recovery may not be sufficient to avoid impact.
+- **Scenario 2 - Azure portal / Service Health not accessible** - A service issue prevents you from accessing the Azure portal or Azure Service Health, and thus impacts our standard outage communications path described earlier.
+- **Scenario 3 - Service Impact, but not sure who exactly is affected yet** - The service issue has broad/significant customer impact but we aren't yet able to confirm which customers, regions, or services are affected. In this case, we aren't able to send targeted communications, so we provide public updates.
+
+## When does Azure publish RCAs to the Status History page?
+
+While the [Azure status page](https://status.azure.com/status) always shows the latest health information, you can view older events using the [Azure status history page](https://status.azure.com/status/history/). The history page contains all RCAs (Root Cause Analysis) for incidents that occurred on November 20, 2019 or later and will - from that date forward - provide a 5-year RCA history. RCAs prior to November 20, 2019 aren't available.
+
+After June 1, 2022, the [Azure status history page](https://status.azure.com/status/history/) will only be used to provide RCAs for scenario 1 above. We're committed to publishing RCAs publicly for service issues that had the broadest impact, such as those with both a multi-service and multi-region impact. We publish to ensure that all customers and the industry at large can learn from our retrospectives on these issues, and understand what steps we're taking to make such issues less likely and/or less impactful in the future.
+
+For scenarios 2 and 3 above, we may communicate publicly on the Status page during impact, as a workaround when our standard, targeted communications aren't able to reach all impacted customers. After the issue is mitigated, we'll conduct a thorough impact analysis to determine exactly which customer subscriptions were impacted. In such scenarios, we'll provide the relevant PIR only to affected customers via [Azure Service Health](https://azure.microsoft.com/features/service-health/) in the Azure portal.
++ ## Next Steps * Learn how you can get a more personalized view into Azure health with [Service Health](./service-health-overview.md).
spring-cloud Breaking Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/breaking-changes.md
+
+ Title: Azure Spring Apps API breaking changes
+description: Describes the breaking changes introduced by the latest Azure Spring Apps stable API version.
++++ Last updated : 05/25/2022+++
+# Azure Spring Apps API breaking changes
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+
+**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+
+This article describes breaking changes introduced into the Azure Spring Apps API.
+
+The Azure Spring Apps service releases the new stable API version 2022-04-01. The new API version introduces breaking changes based on the previous stable API version 2020-07-01. We suggest that you update your API calls to the new API version.
+
+## Previous API deprecation date
+
+The previous API version 2020-07-01 will not be supported starting in April 2025.
+
+## API breaking changes from 2020-07-01 to 2022-04-01
+
+### Deprecate number value CPU and MemoryInGB in Deployments
+
+Deprecate field `properties.deploymentSettings.cpu` and `properties.deploymentSettings.memoryInGB` in the `Spring/Apps/Deployments` resource. Use `properties.deploymentSettings.resourceRequests.cpu` and `properties.deploymentSettings.resourceRequests.memory` instead.
+
+### RBAC role change for blue-green deployment
+
+Deprecate field `properties.activeDeploymentName` in the `Spring/Apps` resource. Use `POST/SUBSCRIPTIONS/RESOURCEGROUPS/PROVIDERS/MICROSOFT.APPPLATFORM/SPRING/APPS/SETACTIVEDEPLOYMENTS` for blue-green deployment. Performing this action requires the separate RBAC permission `spring/apps/setActiveDeployments/action`.
+
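For illustration only, here's a PowerShell sketch of calling the new action with `Invoke-AzRestMethod`. The subscription, resource group, service, and app names are placeholders, and the request body shape (`activeDeploymentNames`) is an assumption that you should verify against the 2022-04-01 REST API reference.

```azurepowershell
# Sketch: placeholders throughout; the payload property name is an assumption, not taken from this article.
Invoke-AzRestMethod -Method POST `
    -Path '/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.AppPlatform/Spring/<service-name>/apps/<app-name>/setActiveDeployments?api-version=2022-04-01' `
    -Payload '{"activeDeploymentNames":["green"]}'
```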
+### Move options from different property bags for the Spring/Apps/Deployments resource
+
+- Deprecate `properties.createdTime`. Use `systemData.createdAt`.
+- Deprecate `properties.deploymentSettings.jvmOptions`. Use `properties.source.jvmOptions`.
+- Deprecate `properties.deploymentSettings.runtimeVersion`. Use `properties.source.runtimeVersion`.
+- Deprecate `properties.deploymentSettings.netCoreMainEntryPath`. Use `properties.source.netCoreMainEntryPath`.
+- Deprecate `properties.appName`, which you can extract from `id`.
+
+## Updates in the Azure CLI extension
+
+### Add new RBAC role for blue-green deployment
+
+You need to add the RBAC permission `spring/apps/setActiveDeployments/action` to your role to perform the following Azure CLI commands:
+
+```azurecli
+az spring app set-deployment \
+ --resource-group <resource-group-name> \
+ --service <service-instance-name> \
+ --name <app-name> \
+ --deployment <deployment-name>
+az spring app unset-deployment \
+ --resource-group <resource-group-name> \
+ --service <service-instance-name> \
+ --name <app-name>
+```
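One way to grant that permission is through a custom role. The following Azure PowerShell sketch (not an official procedure) clones the built-in Reader role and adds the action. The role name, description, and subscription ID are placeholders, and the fully qualified action string assumes the `Microsoft.AppPlatform` resource provider namespace.

```azurepowershell
# Sketch: clone a built-in role and add the setActiveDeployments action; names and scope are placeholders.
$role = Get-AzRoleDefinition -Name 'Reader'
$role.Id = $null
$role.Name = 'Spring Apps Blue-Green Operator'
$role.Description = 'Can switch active deployments on Azure Spring Apps applications.'
$role.Actions.Add('Microsoft.AppPlatform/Spring/apps/setActiveDeployments/action')
$role.AssignableScopes.Clear()
$role.AssignableScopes.Add('/subscriptions/<subscription-id>')
New-AzRoleDefinition -Role $role
```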
+
+### Output updates
+
+If you're using the Azure CLI `spring-cloud` extension with a version lower than 3.0.0, and you want to upgrade the extension version or migrate to the `spring` extension, be aware of the following output updates.
+
+- `az spring app` command output: Remove `properties.activeDeploymentName`. Use `properties.activeDeployment.name` instead.
+- `az spring app` command output: Remove `properties.createdTime`. Use `systemData.createdAt` instead.
+- `az spring app` command output: Remove `properties.activeDeployment.properties.deploymentSettings.cpu`. Use `properties.activeDeployment.properties.deploymentSettings.resourceRequests.cpu` instead.
+- `az spring app` command output: Remove `properties.activeDeployment.properties.deploymentSettings.memoryInGB`. Use `properties.activeDeployment.properties.deploymentSettings.resourceRequests.memory` instead.
+- `az spring app` command output: Remove `properties.activeDeployment.properties.deploymentSettings.jvmOptions`. Use `properties.activeDeployment.properties.source.jvmOptions` instead.
+- `az spring app` command output: Remove `properties.activeDeployment.properties.deploymentSettings.runtimeVersion`. Use `properties.activeDeployment.properties.source.runtimeVersion` instead.
+- `az spring app` command output: Remove `properties.activeDeployment.properties.deploymentSettings.netCoreMainEntryPath`. Use `properties.activeDeployment.properties.source.netCoreMainEntryPath` instead.
static-web-apps Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/quotas.md
The following quotas exist for Azure Static Web Apps.
| Plan size | 500 MB max app size for a single deployment, and 0.50 GB max for all staging and production environments | 500 MB max app size for a single deployment, and 2.00 GB max combined across all staging and production environments | | Pre-production environments | 3 | 10 | | Custom domains | 2 per app | 5 per app |
+| Allowed IP ranges | Unavailable | 25 |
| Authorization (built-in roles) | Unlimited end-users that may authenticate with built-in `authenticated` role | Unlimited end-users that may authenticate with built-in `authenticated` role | | Authorization (custom roles) | Maximum of 25 end-users that may belong to custom roles via [invitations](authentication-authorization.md?tabs=invitations#role-management) | Maximum of 25 end-users that may belong to custom roles via [invitations](authentication-authorization.md?tabs=invitations#role-management), or unlimited end-users that may be assigned custom roles via [serverless function](authentication-authorization.md?tabs=function#role-management) |
storage Access Tiers Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/access-tiers-overview.md
Title: Hot, Cool, and Archive access tiers for blob data
-description: Azure storage offers different access tiers so that you can store your blob data in the most cost-effective manner based on how it is being used. Learn about the Hot, Cool, and Archive access tiers for Blob Storage.
+description: Azure storage offers different access tiers so that you can store your blob data in the most cost-effective manner based on how it's being used. Learn about the Hot, Cool, and Archive access tiers for Blob Storage.
Previously updated : 02/28/2022 Last updated : 05/18/2022
# Hot, Cool, and Archive access tiers for blob data
-Data stored in the cloud grows at an exponential pace. To manage costs for your expanding storage needs, it can be helpful to organize your data based on how frequently it will be accessed and how long it will be retained. Azure storage offers different access tiers so that you can store your blob data in the most cost-effective manner based on how it is being used. Azure Storage access tiers include:
+Data stored in the cloud grows at an exponential pace. To manage costs for your expanding storage needs, it can be helpful to organize your data based on how frequently it will be accessed and how long it will be retained. Azure storage offers different access tiers so that you can store your blob data in the most cost-effective manner based on how it's being used. Azure Storage access tiers include:
- **Hot tier** - An online tier optimized for storing data that is accessed or modified frequently. The Hot tier has the highest storage costs, but the lowest access costs. - **Cool tier** - An online tier optimized for storing data that is infrequently accessed or modified. Data in the Cool tier should be stored for a minimum of 30 days. The Cool tier has lower storage costs and higher access costs compared to the Hot tier.
Example usage scenarios for the Hot tier include:
Usage scenarios for the Cool access tier include: - Short-term data backup and disaster recovery.-- Older data sets that are not used frequently, but are expected to be available for immediate access.
+- Older data sets that aren't used frequently, but are expected to be available for immediate access.
- Large data sets that need to be stored in a cost-effective way while additional data is being gathered for processing. To learn how to move a blob to the Hot or Cool tier, see [Set a blob's access tier](access-tiers-online-manage.md). Data in the Cool tier has slightly lower availability, but offers the same high durability, retrieval latency, and throughput characteristics as the Hot tier. For data in the Cool tier, slightly lower availability and higher access costs may be acceptable trade-offs for lower overall storage costs, as compared to the Hot tier. For more information, see [SLA for storage](https://azure.microsoft.com/support/legal/sla/storage/v1_5/).
-A blob in the Cool tier in a general-purpose v2 accounts is subject to an early deletion penalty if it is deleted or moved to a different tier before 30 days has elapsed. This charge is prorated. For example, if a blob is moved to the Cool tier and then deleted after 21 days, you'll be charged an early deletion fee equivalent to 9 (30 minus 21) days of storing that blob in the Cool tier.
+A blob in the Cool tier in a general-purpose v2 account is subject to an early deletion penalty if it's deleted or moved to a different tier before 30 days has elapsed. This charge is prorated. For example, if a blob is moved to the Cool tier and then deleted after 21 days, you'll be charged an early deletion fee equivalent to 9 (30 minus 21) days of storing that blob in the Cool tier.
The Hot and Cool tiers support all redundancy configurations. For more information about data redundancy options in Azure Storage, see [Azure Storage redundancy](../common/storage-redundancy.md).
Data must remain in the Archive tier for at least 180 days or be subject to an e
While a blob is in the Archive tier, it can't be read or modified. To read or download a blob in the Archive tier, you must first rehydrate it to an online tier, either Hot or Cool. Data in the Archive tier can take up to 15 hours to rehydrate, depending on the priority you specify for the rehydration operation. For more information about blob rehydration, see [Overview of blob rehydration from the Archive tier](archive-rehydrate-overview.md).
-An archived blob's metadata remains available for read access, so that you can list the blob and its properties, metadata, and index tags. Metadata for a blob in the Archive tier is read-only, while blob index tags can be read or written. Snapshots are not supported for archived blobs.
+An archived blob's metadata remains available for read access, so that you can list the blob and its properties, metadata, and index tags. Metadata for a blob in the Archive tier is read-only, while blob index tags can be read or written. Snapshots aren't supported for archived blobs.
The following operations are supported for blobs in the Archive tier:
The following operations are supported for blobs in the Archive tier:
- [Set Blob Tags](/rest/api/storageservices/set-blob-tags) - [Set Blob Tier](/rest/api/storageservices/set-blob-tier)
-Only storage accounts that are configured for LRS, GRS, or RA-GRS support moving blobs to the Archive tier. The Archive tier is not supported for ZRS, GZRS, or RA-GZRS accounts. For more information about redundancy configurations for Azure Storage, see [Azure Storage redundancy](../common/storage-redundancy.md).
+Only storage accounts that are configured for LRS, GRS, or RA-GRS support moving blobs to the Archive tier. The Archive tier isn't supported for ZRS, GZRS, or RA-GZRS accounts. For more information about redundancy configurations for Azure Storage, see [Azure Storage redundancy](../common/storage-redundancy.md).
To change the redundancy configuration for a storage account that contains blobs in the Archive tier, you must first rehydrate all archived blobs to the Hot or Cool tier. Microsoft recommends that you avoid changing the redundancy configuration for a storage account that contains archived blobs if at all possible, because rehydration operations can be costly and time-consuming.
Migrating a storage account from LRS to GRS is supported as long as no blobs wer
Storage accounts have a default access tier setting that indicates the online tier in which a new blob is created. The default access tier setting can be set to either Hot or Cool. Users can override the default setting for an individual blob when uploading the blob or changing its tier.
-The default access tier for a new general-purpose v2 storage account is set to the Hot tier by default. You can change the default access tier setting when you create a storage account or after it is created. If you do not change this setting on the storage account or explicitly set the tier when uploading a blob, then a new blob is uploaded to the Hot tier by default.
+The default access tier for a new general-purpose v2 storage account is set to the Hot tier by default. You can change the default access tier setting when you create a storage account or after it's created. If you don't change this setting on the storage account or explicitly set the tier when uploading a blob, then a new blob is uploaded to the Hot tier by default.
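As a hedged PowerShell sketch of both options (the account name, resource group, location, and SKU below are placeholders, not values from this article):

```azurepowershell-interactive
## Sketch: names are placeholders. Set the default access tier when creating a general-purpose v2 account... ##
New-AzStorageAccount -ResourceGroupName 'my-rg' -Name 'mystorageacct' -Location 'eastus' `
    -SkuName 'Standard_LRS' -Kind 'StorageV2' -AccessTier 'Cool'

## ...or change the default access tier on an existing account. ##
Set-AzStorageAccount -ResourceGroupName 'my-rg' -Name 'mystorageacct' -AccessTier 'Hot' -Force
```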
A blob that doesn't have an explicitly assigned tier infers its tier from the default account access tier setting. If a blob's access tier is inferred from the default account access tier setting, then the Azure portal displays the access tier as **Hot (inferred)** or **Cool (inferred)**.
-Changing the default access tier setting for a storage account applies to all blobs in the account for which an access tier has not been explicitly set. If you toggle the default access tier setting from Hot to Cool in a general-purpose v2 account, then you are charged for write operations (per 10,000) for all blobs for which the access tier is inferred. You are charged for both read operations (per 10,000) and data retrieval (per GB) if you toggle from Cool to Hot in a general-purpose v2 account.
+Changing the default access tier setting for a storage account applies to all blobs in the account for which an access tier hasn't been explicitly set. If you toggle the default access tier setting from Hot to Cool in a general-purpose v2 account, then you're charged for write operations (per 10,000) for all blobs for which the access tier is inferred. You're charged for both read operations (per 10,000) and data retrieval (per GB) if you toggle from Cool to Hot in a general-purpose v2 account.
-When you create a legacy Blob Storage account, you must specify the default access tier setting as Hot or Cool at create time. There's no charge for changing the default account access tier setting from Hot to Cool in a legacy Blob Storage account. You are charged for both read operations (per 10,000) and data retrieval (per GB) if you toggle from Cool to Hot in a Blob Storage account. Microsoft recommends using general-purpose v2 storage accounts rather than Blob Storage accounts when possible.
+When you create a legacy Blob Storage account, you must specify the default access tier setting as Hot or Cool at create time. There's no charge for changing the default account access tier setting from Hot to Cool in a legacy Blob Storage account. You're charged for both read operations (per 10,000) and data retrieval (per GB) if you toggle from Cool to Hot in a Blob Storage account. Microsoft recommends using general-purpose v2 storage accounts rather than Blob Storage accounts when possible.
> [!NOTE] > The Archive tier is not supported as the default access tier for a storage account.
To explicitly set a blob's tier when you create it, specify the tier when you up
After a blob is created, you can change its tier in either of the following ways: -- By calling the [Set Blob Tier](/rest/api/storageservices/set-blob-tier) operation, either directly or via a [lifecycle management](#blob-lifecycle-management) policy. Calling [Set Blob Tier](/rest/api/storageservices/set-blob-tier) is typically the best option when you are changing a blob's tier from a hotter tier to a cooler one.-- By calling the [Copy Blob](/rest/api/storageservices/copy-blob) operation to copy a blob from one tier to another. Calling [Copy Blob](/rest/api/storageservices/copy-blob) is recommended for most scenarios where you are rehydrating a blob from the Archive tier to an online tier, or moving a blob from Cool to Hot. By copying a blob, you can avoid the early deletion penalty, if the required storage interval for the source blob has not yet elapsed. However, copying a blob results in capacity charges for two blobs, the source blob and the destination blob.
+- By calling the [Set Blob Tier](/rest/api/storageservices/set-blob-tier) operation, either directly or via a [lifecycle management](#blob-lifecycle-management) policy. Calling [Set Blob Tier](/rest/api/storageservices/set-blob-tier) is typically the best option when you're changing a blob's tier from a hotter tier to a cooler one.
+- By calling the [Copy Blob](/rest/api/storageservices/copy-blob) operation to copy a blob from one tier to another. Calling [Copy Blob](/rest/api/storageservices/copy-blob) is recommended for most scenarios where you're rehydrating a blob from the Archive tier to an online tier, or moving a blob from Cool to Hot. By copying a blob, you can avoid the early deletion penalty, if the required storage interval for the source blob hasn't yet elapsed. However, copying a blob results in capacity charges for two blobs, the source blob and the destination blob.
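For example, here's a hedged Azure CLI sketch of both approaches; the account, container, and blob names are placeholders, authentication options are omitted, and availability of the `--tier` and `--rehydrate-priority` parameters on the copy operation depends on your CLI version:

```azurecli
# Move a blob to a cooler tier in place with Set Blob Tier.
az storage blob set-tier \
    --account-name mystorageaccount \
    --container-name mycontainer \
    --name myblob.txt \
    --tier Cool

# Rehydrate an archived blob by copying it to a new Hot-tier blob,
# which avoids the early deletion penalty on the source blob.
az storage blob copy start \
    --account-name mystorageaccount \
    --destination-container mycontainer \
    --destination-blob myblob-rehydrated.txt \
    --source-container mycontainer \
    --source-blob myblob.txt \
    --tier Hot \
    --rehydrate-priority Standard
```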
Changing a blob's tier from Hot to Cool or Archive is instantaneous, as is changing from Cool to Hot. Rehydrating a blob from the Archive tier to either the Hot or Cool tier can take up to 15 hours. Keep in mind the following points when moving a blob between the Cool and Archive tiers: -- If a blob's tier is inferred as Cool based on the storage account's default access tier and the blob is moved to the Archive tier, there is no early deletion charge.
+- If a blob's tier is inferred as Cool based on the storage account's default access tier and the blob is moved to the Archive tier, there's no early deletion charge.
- If a blob is explicitly moved to the Cool tier and then moved to the Archive tier, the early deletion charge applies. The following table summarizes the approaches you can take to move blobs between various tiers.
The following table summarizes the features of the Hot, Cool, and Archive access
| **Latency** <br> **(Time to first byte)** | Milliseconds | Milliseconds | Hours<sup>2</sup> | | **Supported redundancy configurations** | All | All | LRS, GRS, and RA-GRS<sup>3</sup> only |
-<sup>1</sup> Objects in the Cool tier on general-purpose v2 accounts have a minimum retention duration of 30 days. For Blob Storage accounts, there is no minimum retention duration for the Cool tier.
+<sup>1</sup> Objects in the Cool tier on general-purpose v2 accounts have a minimum retention duration of 30 days. For Blob Storage accounts, there's no minimum retention duration for the Cool tier.
<sup>2</sup> When rehydrating a blob from the Archive tier, you can choose either a standard or high rehydration priority option. Each offers different retrieval latencies and costs. For more information, see [Overview of blob rehydration from the Archive tier](archive-rehydrate-overview.md).
Changing the account access tier results in tier change charges for all blobs th
Keep in mind the following billing impacts when changing a blob's tier: -- When a blob is uploaded or moved between tiers, it is charged at the corresponding rate immediately upon upload or tier change.
+- When a blob is uploaded or moved between tiers, it's charged at the corresponding rate immediately upon upload or tier change.
- When a blob is moved to a cooler tier (Hot to Cool, Hot to Archive, or Cool to Archive), the operation is billed as a write operation to the destination tier, where the write operation (per 10,000) and data write (per GB) charges of the destination tier apply. - When a blob is moved to a warmer tier (Archive to Cool, Archive to Hot, or Cool to Hot), the operation is billed as a read from the source tier, where the read operation (per 10,000) and data retrieval (per GB) charges of the source tier apply. Early deletion charges for any blob moved out of the Cool or Archive tier may apply as well. - While a blob is being rehydrated from the Archive tier, that blob's data is billed as archived data until the data is restored and the blob's tier changes to Hot or Cool.
This table shows how this feature is supported in your account and the impact on
| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> | |--|--|--|--|--|
-| [Standard general-purpose v2](https://docs.microsoft.com/azure/storage/common/storage-account-overview?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json#types-of-storage-accounts) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| [Premium block blobs](https://docs.microsoft.com/azure/storage/common/storage-account-overview?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json#types-of-storage-accounts) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
+| [Standard general-purpose v2](../common/storage-account-overview.md#types-of-storage-accounts) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| [Premium block blobs](../common/storage-account-overview.md#types-of-storage-accounts) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
<sup>1</sup> Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP) support all require a storage account with a hierarchical namespace enabled.
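As a hedged illustration, you might create such an account with Azure CLI; the names and region below are placeholders:

```azurecli
# Create a standard general-purpose v2 account with a hierarchical namespace,
# which Data Lake Storage Gen2, NFS 3.0, and SFTP support all require.
az storage account create \
    --name mystorageaccount \
    --resource-group myresourcegroup \
    --location eastus \
    --kind StorageV2 \
    --sku Standard_LRS \
    --enable-hierarchical-namespace true
```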
storage Data Lake Storage Acl Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-acl-powershell.md
To see an example that sets ACLs recursively in batches by specifying a batch si
When you *update* an ACL, you modify the ACL instead of replacing the ACL. For example, you can add a new security principal to the ACL without affecting other security principals listed in the ACL. To replace the ACL instead of update it, see the [Set ACLs](#set-acls) section of this article.
-To update an ACL, create a new ACL object with the ACL entry that you want to update, and then use that object in update ACL operation. Do not get the existing ACL, just provide ACL entries to be updated.
- This section shows you how to: - Update an ACL
storage Data Lake Storage Tutorial Extract Transform Load Hive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-tutorial-extract-transform-load-hive.md
If you don't have an Azure subscription, [create a free account](https://azure.m
- **A Secure Shell (SSH) client**: For more information, see [Connect to HDInsight (Hadoop) by using SSH](../../hdinsight/hdinsight-hadoop-linux-use-ssh-unix.md).
-## Download the flight data
-1. Browse to [Research and Innovative Technology Administration, Bureau of Transportation Statistics](https://www.transtats.bts.gov/DL_SelectFields.asp?Table_ID=236&DB_Short_Name=On-Time).
+## Download, extract, and then upload the data
-2. On the page, select the following values:
+In this section, you'll download sample flight data. Then, you'll upload that data to your HDInsight cluster and then copy that data to your Data Lake Storage Gen2 account.
- | Name | Value |
- | | |
- | Filter Year |2013 |
- | Filter Period |January |
- | Fields |Year, FlightDate, Reporting_Airline, IATA_CODE_Reporting_Airline, Flight_Number_Reporting_Airline, OriginAirportID, Origin, OriginCityName, OriginState, DestAirportID, Dest, DestCityName, DestState, DepDelayMinutes, ArrDelay, ArrDelayMinutes, CarrierDelay, WeatherDelay, NASDelay, SecurityDelay, LateAircraftDelay. |
+1. Download the [On_Time_Reporting_Carrier_On_Time_Performance_1987_present_2016_1.zip](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/tutorials/On_Time_Reporting_Carrier_On_Time_Performance_1987_present_2016_1.zip) file. This file contains the flight data.
- Clear all other fields.
-
-3. Select **Download**. You get a .zip file with the data fields you selected.
-
-## Extract and upload the data
-
-In this section, you'll upload data to your HDInsight cluster and then copy that data to your Data Lake Storage Gen2 account.
-
-1. Open a command prompt and use the following Secure Copy (Scp) command to upload the .zip file to the HDInsight cluster head node:
+2. Open a command prompt and use the following Secure Copy (Scp) command to upload the .zip file to the HDInsight cluster head node:
```bash scp <file-name>.zip <ssh-user-name>@<cluster-name>-ssh.azurehdinsight.net:<file-name.zip>
In this section, you'll upload data to your HDInsight cluster and then copy that
If you use a public key, you might need to use the `-i` parameter and specify the path to the matching private key. For example, `scp -i ~/.ssh/id_rsa <file_name>.zip <user-name>@<cluster-name>-ssh.azurehdinsight.net:`.
-2. After the upload has finished, connect to the cluster by using SSH. On the command prompt, enter the following command:
+3. After the upload has finished, connect to the cluster by using SSH. On the command prompt, enter the following command:
```bash ssh <ssh-user-name>@<cluster-name>-ssh.azurehdinsight.net ```
-3. Use the following command to unzip the .zip file:
+4. Use the following command to unzip the .zip file:
```bash unzip <file-name>.zip
In this section, you'll upload data to your HDInsight cluster and then copy that
The command extracts a **.csv** file.
-4. Use the following command to create the Data Lake Storage Gen2 container.
+5. Use the following command to create the Data Lake Storage Gen2 container.
```bash hadoop fs -D "fs.azure.createRemoteFileSystemDuringInitialization=true" -ls abfs://<container-name>@<storage-account-name>.dfs.core.windows.net/
In this section, you'll upload data to your HDInsight cluster and then copy that
Replace the `<storage-account-name>` placeholder with the name of your storage account.
-5. Use the following command to create a directory.
+6. Use the following command to create a directory.
```bash hdfs dfs -mkdir -p abfs://<container-name>@<storage-account-name>.dfs.core.windows.net/tutorials/flightdelays/data ```
-6. Use the following command to copy the *.csv* file to the directory:
+7. Use the following command to copy the *.csv* file to the directory:
```bash hdfs dfs -put "<file-name>.csv" abfs://<container-name>@<storage-account-name>.dfs.core.windows.net/tutorials/flightdelays/data/
storage Data Lake Storage Use Databricks Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-use-databricks-spark.md
If you don't have an Azure subscription, create a [free account](https://azure.m
This tutorial uses flight data from the Bureau of Transportation Statistics to demonstrate how to perform an ETL operation. You must download this data to complete the tutorial.
-1. Go to [Research and Innovative Technology Administration, Bureau of Transportation Statistics](https://www.transtats.bts.gov/DL_SelectFields.asp?gnoyr_VQ=FGJ).
+1. Download the [On_Time_Reporting_Carrier_On_Time_Performance_1987_present_2016_1.zip](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/tutorials/On_Time_Reporting_Carrier_On_Time_Performance_1987_present_2016_1.zip) file. This file contains the flight data.
-2. Select the **Prezipped File** check box to select all data fields.
-
-3. Select the **Download** button and save the results to your computer.
-
-4. Unzip the contents of the zipped file and make a note of the file name and the path of the file. You need this information in a later step.
+2. Unzip the contents of the zipped file and make a note of the file name and the path of the file. You need this information in a later step.
## Create an Azure Databricks service
storage Immutable Policy Configure Version Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/immutable-policy-configure-version-scope.md
Previously updated : 12/01/2021 Last updated : 05/17/2022 # Configure immutability policies for blob versions
-Immutable storage for Azure Blob Storage enables users to store business-critical data in a WORM (Write Once, Read Many) state. While in a WORM state, data cannot be modified or deleted for a user-specified interval. By configuring immutability policies for blob data, you can protect your data from overwrites and deletes. Immutability policies include time-based retention policies and legal holds. For more information about immutability policies for Blob Storage, see [Store business-critical blob data with immutable storage](immutable-storage-overview.md).
+Immutable storage for Azure Blob Storage enables users to store business-critical data in a WORM (Write Once, Read Many) state. While in a WORM state, data can't be modified or deleted for a user-specified interval. By configuring immutability policies for blob data, you can protect your data from overwrites and deletes. Immutability policies include time-based retention policies and legal holds. For more information about immutability policies for Blob Storage, see [Store business-critical blob data with immutable storage](immutable-storage-overview.md).
An immutability policy may be scoped either to an individual blob version or to a container. This article describes how to configure a version-level immutability policy. To learn how to configure container-level immutability policies, see [Configure immutability policies for containers](immutable-policy-configure-container-scope.md).
To enable support for version-level immutability when you create a storage accou
:::image type="content" source="media/immutable-policy-configure-version-scope/create-account-version-level-immutability.png" alt-text="Screenshot showing how to create a storage account with version-level immutability support":::
-After the storage account is created, you can configure a default version-level policy for the account. For more details, see [Configure a default time-based retention policy](#configure-a-default-time-based-retention-policy).
+After the storage account is created, you can configure a default version-level policy for the account. For more information, see [Configure a default time-based retention policy](#configure-a-default-time-based-retention-policy).
##### [PowerShell](#tab/azure-powershell)
If version-level immutability support is enabled for the storage account and the
Both new and existing containers can be configured to support version-level immutability. However, an existing container must undergo a migration process in order to enable support.
-Keep in mind that enabling version-level immutability support for a container does not make data in that container immutable. You must also either configure a default immutability policy for the container, or an immutability policy on a specific blob version. If you enabled version-level immutability for the storage account when it was created, you can also configure a default immutability policy for the account.
+Keep in mind that enabling version-level immutability support for a container doesn't make data in that container immutable. You must also either configure a default immutability policy for the container, or an immutability policy on a specific blob version. If you enabled version-level immutability for the storage account when it was created, you can also configure a default immutability policy for the account.
#### Enable version-level immutability for a new container
If version-level immutability support is enabled for a container and the contain
#### Migrate an existing container to support version-level immutability
-To configure version-level immutability policies for an existing container, you must migrate the container to support version-level immutable storage. Container migration may take some time and cannot be reversed. You can migrate ten containers at a time per storage account.
+To configure version-level immutability policies for an existing container, you must migrate the container to support version-level immutable storage. Container migration may take some time and can't be reversed. You can migrate 10 containers at a time per storage account.
To migrate an existing container to support version-level immutability policies, the container must have a container-level time-based retention policy configured. The migration fails unless the container has an existing policy. The retention interval for the container-level policy is maintained as the retention interval for the default version-level policy on the container.
-If the container has an existing container-level legal hold, then it cannot be migrated until the legal hold is removed.
+If the container has an existing container-level legal hold, then it can't be migrated until the legal hold is removed.
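If a container-level legal hold is blocking the migration, a hedged Azure CLI sketch of clearing it looks like the following; the account, container, and tag names are placeholders:

```azurecli
# Clear the legal hold tag so the container can be migrated
# to support version-level immutability policies.
az storage container legal-hold clear \
    --account-name mystorageaccount \
    --container-name mycontainer \
    --tags audit
```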
##### [Portal](#tab/azure-portal)
-To migrate a container to support version-level immutable storage in the Azure portal, follow these steps:
+To migrate a container to support version-level immutability policies in the Azure portal, follow these steps:
1. Navigate to the desired container. 1. Select the **More** button on the right, then select **Access policy**.
To migrate a container to support version-level immutable storage in the Azure p
:::image type="content" source="media/immutable-policy-configure-version-scope/migrate-existing-container.png" alt-text="Screenshot showing how to migrate an existing container to support version-level immutability":::
-While the migration operation is underway, the scope of the policy on the container shows as *Container*.
+While the migration operation is underway, the scope of the policy on the container shows as *Container*. Any operations related to managing version-level immutability policies aren't permitted while the container migration is in progress. Other operations on blob data will proceed normally during migration.
:::image type="content" source="media/immutable-policy-configure-version-scope/container-migration-in-process.png" alt-text="Screenshot showing container migration in process":::
To check the status of the long-running operation, read the operation's **JobSta
$migrationOperation.JobStateInfo.State ```
-If the container does not have an existing time-based retention policy when you attempt to migrate to version-level immutability, then the operation fails. The following example checks the value of the **JobStateInfo.State** property and displays the error message if the operation failed because the container-level policy does not exist.
+If the container doesn't have an existing time-based retention policy when you attempt to migrate to version-level immutability, then the operation fails. The following example checks the value of the **JobStateInfo.State** property and displays the error message if the operation failed because the container-level policy doesn't exist.
```azurepowershell if ($migrationOperation.JobStateInfo.State -eq "Failed") {
az storage container-rm show \
After you have enabled version-level immutability support for a storage account or for an individual container, you can specify a default version-level time-based retention policy for the account or container. When you specify a default policy for an account or container, that policy applies by default to all new blob versions that are created in the account or container. You can override the default policy for any individual blob version in the account or container.
-The default policy is not automatically applied to blob versions that existed before the default policy was configured.
+The default policy isn't automatically applied to blob versions that existed before the default policy was configured.
If you migrated an existing container to support version-level immutability, then the container-level policy that was in effect before the migration is migrated to a default version-level policy for the container.
Time-based retention policies maintain blob data in a WORM state for a specified
You have three options for configuring a time-based retention policy for a blob version: -- Option 1: You can configure a default policy on the storage account or container that applies to all objects in the account or container. Objects in the account or container will inherit the default policy unless you explicitly override it by configuring a policy on an individual blob version. For more details, see [Configure a default time-based retention policy](#configure-a-default-time-based-retention-policy).-- Option 2: You can configure a policy on the current version of the blob. This policy can override a default policy configured on the storage account or container, if a default policy exists and it is unlocked. By default, any previous versions that are created after the policy is configured will inherit the policy on the current version of the blob. For more details, see [Configure a retention policy on the current version of a blob](#configure-a-retention-policy-on-the-current-version-of-a-blob).-- Option 3: You can configure a policy on a previous version of a blob. This policy can override a default policy configured on the current version, if one exists and it is unlocked. For more details, see [Configure a retention policy on a previous version of a blob](#configure-a-retention-policy-on-a-previous-version-of-a-blob).
+- Option 1: You can configure a default policy on the storage account or container that applies to all objects in the account or container. Objects in the account or container will inherit the default policy unless you explicitly override it by configuring a policy on an individual blob version. For more information, see [Configure a default time-based retention policy](#configure-a-default-time-based-retention-policy).
+- Option 2: You can configure a policy on the current version of the blob. This policy can override a default policy configured on the storage account or container, if a default policy exists and it's unlocked. By default, any previous versions that are created after the policy is configured will inherit the policy on the current version of the blob. For more information, see [Configure a retention policy on the current version of a blob](#configure-a-retention-policy-on-the-current-version-of-a-blob).
+- Option 3: You can configure a policy on a previous version of a blob. This policy can override a default policy configured on the current version, if one exists and it's unlocked. For more information, see [Configure a retention policy on a previous version of a blob](#configure-a-retention-policy-on-a-previous-version-of-a-blob).
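For example, option 2 might look like the following hedged Azure CLI sketch, which sets an unlocked time-based retention policy on the current version of a blob; the names and expiry date are placeholders:

```azurecli
# Set an unlocked time-based retention policy on the current version of a blob.
# An unlocked policy can still be modified or deleted before it's locked.
az storage blob immutability-policy set \
    --account-name mystorageaccount \
    --container-name mycontainer \
    --name myblob.txt \
    --expiry-time 2023-01-01T00:00:00Z \
    --policy-mode Unlocked
```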
For more information on blob versioning, see [Blob versioning](versioning-overview.md).
You can view the properties for a blob to see whether a policy is enabled on the
### Configure a retention policy on a previous version of a blob
-You can also configure a time-based retention policy on a previous version of a blob. A previous version is always immutable in that it cannot be modified. However, a previous version can be deleted. A time-based retention policy protects against deletion while it is in effect.
+You can also configure a time-based retention policy on a previous version of a blob. A previous version is always immutable in that it can't be modified. However, a previous version can be deleted. A time-based retention policy protects against deletion while it is in effect.
To configure a time-based retention policy on a previous version of a blob, follow these steps:
az storage blob immutability-policy set \
When you use the Azure portal to upload a blob to a container that supports version-level immutability, you have several options for configuring a time-based retention policy for the new blob: -- Option 1: If a default retention policy is configured for the container, you can upload the blob with the container's policy. This option is selected by default when there is a retention policy on the container.
+- Option 1: If a default retention policy is configured for the container, you can upload the blob with the container's policy. This option is selected by default when there's a retention policy on the container.
- Option 2: If a default retention policy is configured for the container, you can choose to override the default policy, either by defining a custom retention policy for the new blob, or by uploading the blob with no policy. - Option 3: If no default policy is configured for the container, then you can upload the blob with a custom policy, or with no policy.
To configure a time-based retention policy when you upload a blob, follow these
1. Navigate to the desired container, and select **Upload**. 1. In the **Upload** blob dialog, expand the **Advanced** section.
-1. Configure the time-based retention policy for the new blob in the **Retention policy** field. If there is a default policy configured for the container, that policy is selected by default. You can also specify a custom policy for the blob.
+1. Configure the time-based retention policy for the new blob in the **Retention policy** field. If there's a default policy configured for the container, that policy is selected by default. You can also specify a custom policy for the blob.
:::image type="content" source="media/immutable-policy-configure-version-scope/configure-retention-policy-blob-upload.png" alt-text="Screenshot showing options for configuring retention policy on blob upload in Azure portal"::: ## Modify or delete an unlocked retention policy
-You can modify an unlocked time-based retention policy to shorten or lengthen the retention interval. You can also delete an unlocked policy. Editing or deleting an unlocked time-based retention policy for a blob version does not affect policies in effect for any other versions. If there is a default time-based retention policy in effect for the container, then the blob version with the modified or deleted policy will no longer inherit from the container.
+You can modify an unlocked time-based retention policy to shorten or lengthen the retention interval. You can also delete an unlocked policy. Editing or deleting an unlocked time-based retention policy for a blob version doesn't affect policies in effect for any other versions. If there's a default time-based retention policy in effect for the container, then the blob version with the modified or deleted policy will no longer inherit from the container.
### [Portal](#tab/azure-portal)
az storage blob immutability-policy delete \
## Lock a time-based retention policy
-When you have finished testing a time-based retention policy, you can lock the policy. A locked policy is compliant with SEC 17a-4(f) and other regulatory compliance. You can lengthen the retention interval for a locked policy up to five times, but you cannot shorten it.
+When you have finished testing a time-based retention policy, you can lock the policy. A locked policy is compliant with SEC 17a-4(f) and other regulatory compliance. You can lengthen the retention interval for a locked policy up to five times, but you can't shorten it.
-After a policy is locked, you cannot delete it. However, you can delete the blob after the retention interval has expired.
+After a policy is locked, you can't delete it. However, you can delete the blob after the retention interval has expired.
### [Portal](#tab/azure-portal)
storage Storage Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-redundancy.md
Previously updated : 05/12/2022 Last updated : 05/24/2022
Premium block blobs are available in a subset of Azure regions:
- (North America) East US - (North America) East US 2 - (North America) West US 2
+- (North America) South Central US
- (South America) Brazil South #### Premium file share accounts
Only standard general-purpose v2 storage accounts support GZRS. GZRS is supporte
## Read access to data in the secondary region
-Geo-redundant storage (with GRS or GZRS) replicates your data to another physical location in the secondary region to protect against regional outages. However, that data is available to be read only if the customer or Microsoft initiates a failover from the primary to secondary region. When you enable read access to the secondary region, your data is always available to be read, including in a situation where the primary region becomes unavailable. For read access to the secondary region, enable read-access geo-redundant storage (RA-GRS) or read-access geo-zone-redundant storage (RA-GZRS).
+Geo-redundant storage (with GRS or GZRS) replicates your data to another physical location in the secondary region to protect against regional outages. With an account configured for GRS or GZRS, data in the secondary region is not directly accessible to users or applications, unless a failover occurs. The failover process updates the DNS entry provided by Azure Storage so that the secondary endpoint becomes the new primary endpoint for your storage account. During the failover process, your data is inaccessible. After the failover is complete, you can read and write data to the new primary region. For more information about failover and disaster recovery, see [How an account failover works](storage-disaster-recovery-guidance.md#how-an-account-failover-works).
+
+If your applications require high availability, then you can configure your storage account for read access to the secondary region. When you enable read access to the secondary region, then your data is always available to be read from the secondary, including in a situation where the primary region becomes unavailable. Read-access geo-redundant storage (RA-GRS) or read-access geo-zone-redundant storage (RA-GZRS) configurations permit read access to the secondary region.
+
+> [!CAUTION]
+> Because data is replicated asynchronously from the primary to the secondary region, the secondary region is typically behind the primary region in terms of write operations. If a disaster were to strike the primary region, it's likely that some data would be lost. For more information about how to plan for potential data loss, see [Anticipate data loss](storage-disaster-recovery-guidance.md#anticipate-data-loss).
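To gauge how far the secondary region lags behind the primary, you can check the account's last sync time. A hedged Azure CLI sketch, with placeholder names:

```azurecli
# Retrieve the Last Sync Time for a geo-redundant storage account.
# Writes made after this time may be lost if a failover occurs.
az storage account show \
    --name mystorageaccount \
    --resource-group myresourcegroup \
    --expand geoReplicationStats \
    --query "geoReplicationStats.lastSyncTime" \
    --output tsv
```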
> [!NOTE] > Azure Files does not support read-access geo-redundant storage (RA-GRS) or read-access geo-zone-redundant storage (RA-GZRS).
storage File Sync Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-deployment-guide.md
description: Learn how to deploy Azure File Sync, from start to finish, using th
Previously updated : 04/12/2022 Last updated : 05/24/2022
Currently, the pre-seeding approach has a few limitations -
## Self-service restore through Previous Versions and VSS (Volume Shadow Copy Service)
-> [!IMPORTANT]
-> The following information can only be used with version 9 (or above) of the storage sync agent. Versions lower than 9 will not have the StorageSyncSelfService cmdlets.
- Previous Versions is a Windows feature that allows you to utilize server-side VSS snapshots of a volume to present restorable versions of a file to an SMB client. This enables a powerful scenario, commonly referred to as self-service restore, directly for information workers instead of depending on the restore from an IT admin.
storage File Sync Disaster Recovery Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-disaster-recovery-best-practices.md
description: Learn about best practices for disaster recovery with Azure File Sy
Previously updated : 08/18/2021 Last updated : 05/24/2022
If you enable cloud tiering, don't implement an on-premises backup solution. Wit
If you decide to use an on-premises backup solution, backups should be performed on a server in the sync group with cloud tiering disabled. When performing a restore, use the volume-level or file-level restore options. Files restored using the file-level restore option will sync to all endpoints in the sync group and existing files will be replaced with the version restored from backup. Volume-level restores won't replace newer file versions in the cloud endpoint or other server endpoints.
-In Azure File Sync agent version 9 and above, [Volume Shadow Copy Service (VSS) snapshots](file-sync-deployment-guide.md#self-service-restore-through-previous-versions-and-vss-volume-shadow-copy-service) (including the **Previous Versions** tab) are supported on volumes with cloud tiering enabled. This allows you to perform self-service restores instead of relying on an admin to perform restores for you. However, you must enable previous version compatibility through PowerShell, which will increase your snapshot storage costs. VSS snapshots don't protect against disasters on the server endpoint itself, so they should only be used alongside cloud-side backups. For details, see [Self Service restore through Previous Versions and VSS](file-sync-deployment-guide.md#self-service-restore-through-previous-versions-and-vss-volume-shadow-copy-service).
+[Volume Shadow Copy Service (VSS) snapshots](file-sync-deployment-guide.md#self-service-restore-through-previous-versions-and-vss-volume-shadow-copy-service) (including the **Previous Versions** tab) are supported on volumes with cloud tiering enabled. This allows you to perform self-service restores instead of relying on an admin to perform restores for you. However, you must enable previous version compatibility through PowerShell, which will increase your snapshot storage costs. VSS snapshots don't protect against disasters on the server endpoint itself, so they should only be used alongside cloud-side backups. For details, see [Self Service restore through Previous Versions and VSS](file-sync-deployment-guide.md#self-service-restore-through-previous-versions-and-vss-volume-shadow-copy-service).
## Data redundancy
storage Files Nfs Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-nfs-protocol.md
description: Learn about file shares hosted in Azure Files using the Network Fil
Previously updated : 04/19/2022 Last updated : 05/25/2022
Azure Files offers two industry-standard file system protocols for mounting Azur
This article covers NFS Azure file shares. For information about SMB Azure file shares, see [SMB file shares in Azure Files](files-smb-protocol.md). > [!IMPORTANT]
-> Before using NFS file shares for production, see the [Troubleshoot Azure NFS file shares](storage-troubleshooting-files-nfs.md) article for a list of known issues.
+> NFS Azure file shares are not supported for Windows clients. Before using NFS Azure file shares for production, see the [Troubleshoot NFS Azure file shares](storage-troubleshooting-files-nfs.md) article for a list of known issues.
## Common scenarios NFS file shares are often used in the following scenarios:
NFS Azure file shares are only offered on premium file shares, which store data
## Workloads > [!IMPORTANT]
-> Before using NFS file shares for production, see the [Troubleshoot Azure NFS file shares](storage-troubleshooting-files-nfs.md) article for a list of known issues.
+> Before using NFS Azure file shares for production, see [Troubleshoot NFS Azure file shares](storage-troubleshooting-files-nfs.md) for a list of known issues.
NFS has been validated to work well with workloads such as SAP application layer, database backups, database replication, messaging queues, home directories for general purpose file servers, and content repositories for application workloads.
-The following workloads have known issues. See the [Troubleshoot Azure NFS file shares](storage-troubleshooting-files-nfs.md) article for list of known issues:
+The following workloads have known issues:
- Oracle Database will experience incompatibility with its dNFS feature.
storage Storage Files Quick Create Use Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-quick-create-use-linux.md
Now that you've created an NFS share, to use it you have to mount it on your Lin
1. You should see **Connect to this NFS share from Linux** along with sample commands to use NFS on your Linux distribution and a provided mounting script. > [!IMPORTANT]
- > The provided mounting script will mount the NFS share only until the Linux machine is rebooted. To automatically mount the share every time the machine reboots, use a [static mount with /etc/fstab](storage-how-to-use-files-linux.md#static-mount-with-etcfstab).
+ > The provided mounting script will mount the NFS share only until the Linux machine is rebooted. To automatically mount the share every time the machine reboots, [add an entry in /etc/fstab](storage-how-to-use-files-linux.md#static-mount-with-etcfstab). For more information, enter the command `man fstab` from the Linux command line.
:::image type="content" source="media/storage-files-quick-create-use-linux/mount-nfs-share.png" alt-text="Screenshot showing how to connect to an N F S file share from Linux using a provided mounting script." lightbox="media/storage-files-quick-create-use-linux/mount-nfs-share.png" border="true":::
storage Storage Troubleshooting Files Nfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-troubleshooting-files-nfs.md
Title: Troubleshoot Azure NFS file share problems - Azure Files
-description: Troubleshoot Azure NFS file share problems.
+ Title: Troubleshoot NFS file share problems - Azure Files
+description: Troubleshoot NFS Azure file share problems.
Previously updated : 09/15/2020 Last updated : 05/25/2022
-# Troubleshoot Azure NFS file share problems
+# Troubleshoot NFS Azure file share problems
-This article lists some common problems and known issues related to Azure NFS file shares. It provides potential causes and workarounds when these problems are encountered.
+This article lists some common problems and known issues related to NFS Azure file shares. It provides potential causes and workarounds when these problems are encountered.
+
+> [!IMPORTANT]
+> NFS Azure file shares are not supported for Windows clients.
## Applies to | File share type | SMB | NFS |
NFS is only available on storage accounts with the following configuration:
Follow the instructions in our article: [How to create an NFS share](storage-files-how-to-create-nfs-shares.md).
-## Cannot connect to or mount an Azure NFS file share
+## Cannot connect to or mount an NFS Azure file share
### Cause 1: Request originates from a client in an untrusted network/untrusted IP
stream-analytics Capture Event Hub Data Parquet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/capture-event-hub-data-parquet.md
Use the following steps to configure a Stream Analytics job to capture data in A
1. For streaming blobs, the directory path pattern is expected to be a dynamic value. It's required for the date to be a part of the file path for the blob – referenced as `{date}`. To learn about custom path patterns, see [Azure Stream Analytics custom blob output partitioning](stream-analytics-custom-path-patterns-blob-storage-output.md). :::image type="content" source="./media/capture-event-hub-data-parquet/blob-configuration.png" alt-text="First screenshot showing the Blob window where you edit a blob's connection configuration." lightbox="./media/capture-event-hub-data-parquet/blob-configuration.png" ::: 1. Select **Connect**
-1. When the connection is established, you will see fields that are present in the output data.
+1. When the connection is established, you'll see fields that are present in the output data.
1. Select **Save** on the command bar to save your configuration. 1. Select **Start** on the command bar to start the streaming flow to capture data. Then in the Start Stream Analytics job window: 1. Choose the output start time.
Use the following steps to configure a Stream Analytics job to capture data in A
1. In the **Choose Output data error handling** list, select the behavior you want when the output of the job fails due to data error. Select **Retry** to have the job retry until it writes successfully or select another option. :::image type="content" source="./media/capture-event-hub-data-parquet/start-job.png" alt-text="Screenshot showing the Start Stream Analytics job window where you set the output start time, streaming units, and error handling." lightbox="./media/capture-event-hub-data-parquet/start-job.png" :::
+## Verify output
+Verify that the Parquet files are generated in the Azure Data Lake Storage container.
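If you prefer the command line to the portal for this check, a hedged Azure CLI sketch of listing the captured files follows; the account, container, and path are placeholders:

```azurecli
# List the Parquet files written by the Stream Analytics job
# to the Data Lake Storage Gen2 container.
az storage fs file list \
    --account-name mystorageaccount \
    --file-system mycontainer \
    --path output-folder \
    --auth-mode login \
    --output table
```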
+++ The new job is shown on the **Stream Analytics jobs** tab. Select **Open metrics** to monitor it. :::image type="content" source="./media/capture-event-hub-data-parquet/open-metrics-link.png" alt-text="Screenshot showing Open Metrics link selected." lightbox="./media/capture-event-hub-data-parquet/open-metrics-link.png" :::
+Here's an example screenshot of metrics showing input and output events.
++ ## Next steps Now you know how to use the Stream Analytics no code editor to create a job that captures Event Hubs data to Azure Data Lake Storage Gen2 in Parquet format. Next, you can learn more about Azure Stream Analytics and how to monitor the job that you created.
stream-analytics Event Hubs Parquet Capture Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/event-hubs-parquet-capture-tutorial.md
Previously updated : 05/23/2022 Last updated : 05/25/2022
Before you start, make sure you've completed the following steps:
### Query using Azure Synapse Serverless SQL 1. In the **Develop** hub, create a new **SQL script**.
-2. Paste the following script and **Run** it using the **Built-in** serverless SQL endpoint. Replace *container* and *adlsname* with the name of the container and ADLS Gen2 account used in the previous step.
- ``SQL
+2. Paste the following script and **Run** it using the **Built-in** serverless SQL endpoint. Replace *container* and *adlsname* with the name of the container and ADLS Gen2 account used in the previous step.
+ ```SQL
SELECT TOP 100 * FROM
stream-analytics Filter Ingest Data Lake Storage Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/filter-ingest-data-lake-storage-gen2.md
Previously updated : 05/08/2022 Last updated : 05/24/2022 # Filter and ingest to Azure Data Lake Storage Gen2 using the Stream Analytics no code editor
This article describes how you can use the no code editor to easily create a Str
:::image type="content" source="./media/filter-ingest-data-lake-storage-gen2/add-field.png" alt-text="Screenshot showing where you can add a field or remove, rename, or change a field type." lightbox="./media/filter-ingest-data-lake-storage-gen2/add-field.png" ::: 1. A live sample of incoming data in **Data preview** table under the diagram view. It automatically refreshes periodically. You can select **Pause streaming preview** to see a static view of sample input data. :::image type="content" source="./media/filter-ingest-data-lake-storage-gen2/sample-input.png" alt-text="Screenshot showing sample data on the Data preview tab." lightbox="./media/filter-ingest-data-lake-storage-gen2/sample-input.png" :::
-1. In the **Filter** area, select a field to filter the incoming data with a condition.
+1. Select the **Filter** tile. In the **Filter** area, select a field to filter the incoming data with a condition.
:::image type="content" source="./media/filter-ingest-data-lake-storage-gen2/filter-data.png" alt-text="Screenshot showing the Filter area where you can add a conditional filter." lightbox="./media/filter-ingest-data-lake-storage-gen2/filter-data.png" :::
-1. Select the Azure Data Lake Gen2 table to send your filtered data:
+1. Select the **Azure Data Lake Storage Gen2** tile. Select the **Azure Data Lake Gen2** account to send your filtered data:
1. Select the **subscription**, **storage account name**, and **container** from the drop-down menu. 1. After the **subscription** is selected, the **authentication method** and **storage account key** should be automatically filled in. Select **Connect**. For more information about the fields and to see examples of path pattern, see [Blob storage and Azure Data Lake Gen2 output from Azure Stream Analytics](blob-storage-azure-data-lake-gen2-output.md).
This article describes how you can use the no code editor to easily create a Str
1. After you select **Start**, the job starts running within two minutes. :::image type="content" source="./media/filter-ingest-data-lake-storage-gen2/no-code-start-job.png" alt-text="Screenshot showing the Start Stream Analytics job window." lightbox="./media/filter-ingest-data-lake-storage-gen2/no-code-start-job.png" :::
-You can see the job under the Process Data section in the **Stream Analytics jobs** tab. Select **Open metrics** to monitor it or stop and restart it, as needed.
+You can see the job under the Process Data section in the **Stream Analytics jobs** tab. Select **Refresh** until you see the job status as **Running**. Select **Open metrics** to monitor it or stop and restart it, as needed.
:::image type="content" source="./media/filter-ingest-data-lake-storage-gen2/no-code-list-jobs.png" alt-text="Screenshot showing the Stream Analytics jobs tab." lightbox="./media/filter-ingest-data-lake-storage-gen2/no-code-list-jobs.png" :::
+Here's a sample **Metrics** page:
+++
+## Verify data in Data Lake Storage
+
+1. You should see files created in the container you specified.
+
+ :::image type="content" source="./media/filter-ingest-data-lake-storage-gen2/filtered-data-file.png" alt-text="Screenshot showing the generated file with filtered data in the Azure Data Lake Storage." lightbox="./media/filter-ingest-data-lake-storage-gen2/filtered-data-file.png" :::
+1. Download and open the file to confirm that you see only the filtered data. In the following example, you see data with **SwitchNum** set to **US**.
+
+ ```json
+ {"RecordType":"MO","SystemIdentity":"d0","FileNum":"548","SwitchNum":"US","CallingNum":"345697969","CallingIMSI":"466921402416657","CalledNum":"012332886","CalledIMSI":"466923101048691","DateS":"20220524","TimeType":0,"CallPeriod":0,"ServiceType":"S","Transfer":0,"OutgoingTrunk":"419","MSRN":"1416960750071","callrecTime":"2022-05-25T02:07:10Z","EventProcessedUtcTime":"2022-05-25T02:07:50.5478116Z","PartitionId":0,"EventEnqueuedUtcTime":"2022-05-25T02:07:09.5140000Z", "TimeS":null,"CallingCellID":null,"CalledCellID":null,"IncomingTrunk":null,"CalledNum2":null,"FCIFlag":null}
+ {"RecordType":"MO","SystemIdentity":"d0","FileNum":"552","SwitchNum":"US","CallingNum":"012351287","CallingIMSI":"262021390056324","CalledNum":"012301973","CalledIMSI":"466922202613463","DateS":"20220524","TimeType":3,"CallPeriod":0,"ServiceType":"V","Transfer":0,"OutgoingTrunk":"442","MSRN":"886932428242","callrecTime":"2022-05-25T02:07:13Z","EventProcessedUtcTime":"2022-05-25T02:07:50.5478116Z","PartitionId":0,"EventEnqueuedUtcTime":"2022-05-25T02:07:12.7350000Z", "TimeS":null,"CallingCellID":null,"CalledCellID":null,"IncomingTrunk":null,"CalledNum2":null,"FCIFlag":null}
+ {"RecordType":"MO","SystemIdentity":"d0","FileNum":"559","SwitchNum":"US","CallingNum":"456757102","CallingIMSI":"466920401237309","CalledNum":"345617823","CalledIMSI":"466923000886460","DateS":"20220524","TimeType":1,"CallPeriod":696,"ServiceType":"V","Transfer":1,"OutgoingTrunk":"419","MSRN":"886932429155","callrecTime":"2022-05-25T02:07:22Z","EventProcessedUtcTime":"2022-05-25T02:07:50.5478116Z","PartitionId":0,"EventEnqueuedUtcTime":"2022-05-25T02:07:21.9190000Z", "TimeS":null,"CallingCellID":null,"CalledCellID":null,"IncomingTrunk":null,"CalledNum2":null,"FCIFlag":null}
+ ```
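Alternatively, a hedged Azure CLI sketch of downloading one of the generated files for inspection; the account, container, and file path shown are placeholders:

```azurecli
# Download a generated output file from the Data Lake Storage Gen2 container
# to verify the filtered records locally.
az storage fs file download \
    --account-name mystorageaccount \
    --file-system mycontainer \
    --path output-folder/part-0001.json \
    --destination ./filtered-output.json \
    --auth-mode login
```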
++ ## Next steps Learn more about Azure Stream Analytics and how to monitor the job you've created.
synapse-analytics Sqlpool Create Restore Point https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/backuprestore/sqlpool-create-restore-point.md
Title: Create a user defined restore point for a dedicated SQL pool description: Learn how to use the Azure portal to create a user-defined restore point for dedicated SQL pool in Azure Synapse Analytics. -+ Last updated 10/29/2020 -+ # User-defined restore points
synapse-analytics How To Discover Connect Analyze Azure Purview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/catalog-and-governance/how-to-discover-connect-analyze-azure-purview.md
Last updated 12/16/2020 -+ # Discover, connect, and explore data in Synapse using Microsoft Purview
synapse-analytics Migrate To Synapse Analytics Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/migrate-to-synapse-analytics-guide.md
ms.devlang:
- Previously updated : 03/10/2021+ Last updated : 05/24/2022 # Migrate a data warehouse to a dedicated SQL pool in Azure Synapse Analytics
Performing a successful migration requires you to migrate your table schemas, co
The Customer Advisory Team has some great Azure Synapse Analytics (formerly Azure SQL Data Warehouse) guidance published as blog posts. For more information on migration, see [Migrating data to Azure SQL Data Warehouse in practice](/archive/blogs/sqlcat/migrating-data-to-azure-sql-data-warehouse-in-practice).
+For more information specifically about migrations from Netezza or Teradata to Azure Synapse Analytics, start at the first step of a seven-article sequence on migrations:
+
+- [Netezza to Azure Synapse Analytics migrations](netezz)
+- [Teradata to Azure Synapse Analytics migrations](teradat)
+ ## Migration assets from real-world engagements For more assistance with completing this migration scenario, see the following resources. They were developed in support of a real-world migration project engagement.
For more assistance with completing this migration scenario, see the following r
| [Handling data encoding issues while loading data to Azure Synapse Analytics](https://azure.microsoft.com/blog/handling-data-encoding-issues-while-loading-data-to-sql-data-warehouse/) | This blog post provides insight on some of the data encoding issues you might encounter while using PolyBase to load data to SQL Data Warehouse. This article also provides some options that you can use to overcome such issues and load the data successfully. | | [Getting table sizes in Azure Synapse Analytics dedicated SQL pool](https://github.com/Microsoft/DataMigrationTeam/blob/master/Whitepapers/Getting%20table%20sizes%20in%20SQL%20DW.pdf) | One of the key tasks that an architect must perform is to get metrics about a new environment post-migration. Examples include collecting load times from on-premises to the cloud and collecting PolyBase load times. One of the most important tasks is to determine the storage size in SQL Data Warehouse compared to the customer's current platform. | + The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform. ## Videos Watch how [Walgreens migrated its retail inventory system](https://www.youtube.com/watch?v=86dhd8N1lH4) with about 100 TB of data from Netezza to Azure Synapse Analytics in record time.+
+> [!TIP]
+> For more information on Synapse migrations, see [Azure Synapse Analytics migration guides](index.yml).
synapse-analytics 1 Design Performance Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/1-design-performance-migration.md
This paper looks at schema migration with a goal of equivalent or better perform
When migrating from a Netezza environment, there are some specific topics to consider in addition to the more general subjects described in this article.
-#### Choosing the workload for the initial migration
+#### Choose the workload for the initial migration
Legacy Netezza environments have typically evolved over time to encompass multiple subject areas and mixed workloads. When deciding where to start on an initial migration project, choose an area that can:
There are third-party vendors who offer tools and services to automate migration
#### SQL DML syntax differences
-There are a few differences in SQL Data Manipulation Language (DML) syntax between Netezza SQL and Azure Synapse (T-SQL) that you should be aware during migration:
+There are a few differences in SQL Data Manipulation Language (DML) syntax between Netezza SQL and Azure Synapse (T-SQL) that you should be aware of during migration:
- `STRPOS`: In Netezza, the `STRPOS` function returns the position of a substring within a string. The equivalent function in Azure Synapse is `CHARINDEX`, with the order of the arguments reversed. For example, `SELECT STRPOS('abcdef','def')...` in Netezza is equivalent to `SELECT CHARINDEX('def','abcdef')...` in Azure Synapse.
In Netezza, a sequence is a named database object created via `CREATE SEQUENCE`
Within Azure Synapse, there's no `CREATE SEQUENCE`. Sequences are handled via use of [IDENTITY](/sql/t-sql/statements/create-table-transact-sql-identity-property?msclkid=8ab663accfd311ec87a587f5923eaa7b) columns or using SQL code to create the next sequence number in a series.
-### Extracting metadata and data from a Netezza environment
+### Extract metadata and data from a Netezza environment
#### Data Definition Language (DDL) generation
Ensure that statistics on data tables are up to date by building in a [statistic
PolyBase is the most efficient method for loading large amounts of data into the warehouse since it can leverage parallel loading streams. For more information, see [PolyBase data loading strategy](/azure/synapse-analytics/sql/load-data-overview).
-#### Use Workload management
+#### Use workload management
-Use [Workload management](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-workload-management?context=/azure/synapse-analytics/context/context) instead of resource classes. ETL would be in its own workgroup and should be configured to have more resources per query (less concurrency by more resources). For more information, see [What is dedicated SQL pool in Azure Synapse Analytics](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is).
+Use [workload management](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-workload-management?context=/azure/synapse-analytics/context/context) instead of resource classes. ETL would be in its own workgroup and should be configured to have more resources per query (less concurrency by more resources). For more information, see [What is dedicated SQL pool in Azure Synapse Analytics](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is).
## Next steps
synapse-analytics 2 Etl Load Migration Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/2-etl-load-migration-considerations.md
Even if a data model change is an intended part of the overall migration, it's g
When migrating from Netezza, often the existing data model is already suitable for as-is migration to Azure Synapse.
-#### Migrating data marts&mdash;stay physical or go virtual?
+#### Migrate data marts - stay physical or go virtual?
> [!TIP] > Virtualizing data marts can save on storage and processing resources.
If a third-party ETL tool is already in use, and especially if there's a large i
If you decide to retain an existing third-party ETL tool, there may be benefits to running that tool within the Azure environment (rather than on an existing on-premises ETL server) and having Azure Data Factory handle the overall orchestration of the existing workflows. One particular benefit is that less data needs to be downloaded from Azure, processed, and then uploaded back into Azure. So, decision 4 is whether to leave the existing tool running as-is or to move it into the Azure environment to achieve cost, performance, and scalability benefits.
-### Re-engineering existing Netezza-specific scripts
+### Re-engineer existing Netezza-specific scripts
If some or all the existing Netezza warehouse ETL/ELT processing is handled by custom scripts that utilize Netezza-specific utilities, such as nzsql or nzload, then these scripts need to be recoded for the new Azure Synapse environment. Similarly, if ETL processes were implemented using stored procedures in Netezza, then these will also have to be recoded.
One way of testing Netezza SQL for compatibility with Azure Synapse is to captur
[Microsoft partners](/azure/sql-data-warehouse/sql-data-warehouse-partner-data-integration) offer tools and services to migrate Netezza SQL and stored procedures to Azure Synapse.
-### Using existing third-party ETL tools
+### Use third-party ETL tools
As described in the previous section, in many cases the existing legacy data warehouse system will already be populated and maintained by third-party ETL products. For a list of Microsoft data integration partners for Azure Synapse, see [Data integration partners](/azure/sql-data-warehouse/sql-data-warehouse-partner-data-integration).
synapse-analytics 3 Security Access Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/3-security-access-operations.md
See [Resource classes for workload management](/azure/sql-data-warehouse/resourc
This information can also be used for capacity planning, to determine the resources required for additional users or application workloads. It also applies to planning scale-ups and scale-downs of compute resources for cost-effective support of 'peaky' workloads.
-### Scaling compute resources
+### Scale compute resources
> [!TIP] > A major benefit of Azure is the ability to independently scale up and down compute resources on demand to handle peaky workloads cost-effectively.
synapse-analytics 4 Visualization Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/4-visualization-reporting.md
Last updated 05/24/2022
This article is part four of a seven part series that provides guidance on how to migrate from Netezza to Azure Synapse Analytics. This article provides best practices for visualization and reporting.
-## Accessing Azure Synapse Analytics using Microsoft and third-party BI tools
+## Access Azure Synapse Analytics using Microsoft and third-party BI tools
Almost every organization accesses data warehouses and data marts using a range of BI tools and applications, such as:
There's a lot to think about here, so let's look at all this in more detail.
> [!TIP] > A lift and shift data warehouse migration is likely to minimize any disruption to reports, dashboards, and other visualizations.
-## Minimizing the impact of data warehouse migration on BI tools and reports using data virtualization
+## Minimize the impact of data warehouse migration on BI tools and reports using data virtualization
> [!TIP] > Data virtualization allows you to shield business users from structural changes during migration so that they remain unaware of changes.
This breaks the dependency between business users utilizing self-service BI tool
By introducing data virtualization, any schema alterations made during data warehouse and data mart migration to Azure Synapse (to optimize performance, for example) can be hidden from business users because they only access virtual tables in the data virtualization layer. If structural changes are needed, only the mappings between the data warehouse or data marts and the virtual tables would need to change, so users remain unaware of those changes and unaware of the migration. [Microsoft partners](/azure/synapse-analytics/partner/data-integration) provide useful data virtualization software.
-## Identifying high priority reports to migrate first
+## Identify high priority reports to migrate first
A key question when migrating your existing reports and dashboards to Azure Synapse is which ones to migrate first. Several factors can drive the decision. For example:
A key question when migrating your existing reports and dashboards to Azure Syna
These factors are discussed in more detail later in this article.
-Whatever the decision is, it must involve the business, since they produce the reports and dashboards, and consume the insights these artifacts provide in support of the decisions that are made around your business. That said, if most reports and dashboards can be migrated seamlessly, with minimal effort, and offer up like- for-like results, simply by pointing your BI tool(s) at Azure Synapse, instead of your legacy data warehouse system, then everyone benefits. Therefore, if it's that straight forward and there's no reliance on legacy system proprietary SQL extensions, then there's no doubt that the above ease of migration option breeds confidence.
+Whatever the decision is, it must involve the business, since they produce the reports and dashboards and consume the insights these artifacts provide in support of business decisions. That said, if most reports and dashboards can be migrated seamlessly, with minimal effort, and offer like-for-like results simply by pointing your BI tool(s) at Azure Synapse instead of your legacy data warehouse system, then everyone benefits. Therefore, if migration is that straightforward and there's no reliance on proprietary SQL extensions in the legacy system, that ease of migration breeds confidence.
-### Migrating reports based on usage
+### Migrate reports based on usage
-Usage is interesting, since it's an indicator of business value. Reports and dashboards that are never used clearly aren't contributing to supporting any decisions and$1 [don't currently offer any value. So, do you've any mechanism for finding out which reports, and dashboards are currently not used? Several BI tools provide statistics on usage, which would be an obvious place to start.
+Usage is interesting, since it's an indicator of business value. Reports and dashboards that are never used clearly aren't contributing to supporting any decisions and don't currently offer any value. So, do you have any mechanism for finding out which reports and dashboards are currently not used? Several BI tools provide statistics on usage, which would be an obvious place to start.
If your legacy data warehouse has been up and running for many years, there's a high chance you could have hundreds, if not thousands, of reports in existence. In these situations, usage is an important indicator of the business value of a specific report or dashboard. In that sense, it's worth compiling an inventory of the reports and dashboards you have and defining their business purpose and usage statistics. For those that aren't used at all, it's an appropriate time to seek a business decision on whether to de-commission those reports to optimize your migration efforts. A key question worth asking when deciding to de-commission unused reports is: are they unused because people don't know they exist, do they offer no business value, or have they been superseded by others?
-### Migrating reports based on business value
+### Migrate reports based on business value
Usage on its own isn't a clear indicator of business value. There needs to be a deeper business context to determine the value to the business. In an ideal world, we would like to know the contribution of the insights produced in a report to the bottom line of the business. That's exceedingly difficult to determine, since every decision made, and its dependency on the insights in a specific report, would need to be recorded along with the contribution that each decision makes to the bottom line of the business. You would also need to do this over time.
It's also worthwhile to classify reports and dashboards as operational, tactical
While this may seem too time-consuming, you need a mechanism to understand the contribution of reports and dashboards to business value, whether you're migrating or not. Catalogs like Azure Data Catalog are becoming very important because they give you the ability to catalog reports and dashboards, automatically capture the metadata associated with them, and let business users tag and rate them to help you understand business value.
-### Migrating reports based on data migration strategy
+### Migrate reports based on data migration strategy
> [!TIP] > Data migration strategy could also dictate which reports and visualizations get migrated first.
Additionally, any report, dashboard, or other visualization in an application or
- Issues SQL queries, which include proprietary SQL functions peculiar to the SQL dialect of your legacy data warehouse DBMS, that have no equivalent in Azure Synapse.
-### Gauging the impact of SQL incompatibilities on your reporting portfolio
+### Gauge the impact of SQL incompatibilities on your reporting portfolio
You can't rely on documentation associated with reports, dashboards, and other visualizations to gauge how big an impact SQL incompatibility may have on the portfolio of embedded query services, reports, dashboards, and other visualizations you intend to migrate to Azure Synapse. There must be a more precise way of doing that.
-#### Using EXPLAIN statements to find SQL incompatibilities
+#### Use EXPLAIN statements to find SQL incompatibilities
> [!TIP] > Gauge the impact of SQL incompatibilities by harvesting your DBMS log files and running `EXPLAIN` statements.
One way is to get a hold of the SQL log files of your legacy data warehouse. Use
Metadata from your legacy data warehouse DBMS will also help you when it comes to views. Again, you can capture and view SQL statements, and `EXPLAIN` them as described previously to identify incompatible SQL in views.
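As a minimal illustration of that technique (the statement below is hypothetical, standing in for a query harvested from your log files), prefix each captured statement with `EXPLAIN` and run it against the like-for-like schema in Azure Synapse; incompatible syntax fails with an error that flags the recoding work:

```sql
-- Any Netezza-specific syntax that Azure Synapse doesn't support will raise an error here,
-- identifying the statement (or view definition) as one that needs recoding.
EXPLAIN
SELECT region, SUM(sales_amount) AS total_sales
FROM dbo.fact_sales
GROUP BY region;
```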
-## Testing report and dashboard migration to Azure Synapse Analytics
+## Test report and dashboard migration to Azure Synapse Analytics
> [!TIP] > Test performance and tune to minimize compute costs.
In terms of security, the best way to do this is to create roles, assign access
It's also important to communicate the cut-over to all users, so they know what's changing and what to expect.
-## Analyzing lineage to understand dependencies between reports, dashboards, and data
+## Analyze lineage to understand dependencies between reports, dashboards, and data
> [!TIP] > Having access to metadata and data lineage from reports all the way back to data source is critical for verifying that migrated reports are working correctly.
This substantially simplifies the data migration process, because the business w
Several ETL tools provide end-to-end lineage capability, and you may be able to make use of this via your existing ETL tool if you're continuing to use it with Azure Synapse. Microsoft [Synapse pipelines](/azure/synapse-analytics/get-started-pipelines?msclkid=8f3e7e96cfed11eca432022bc07c18de) or [Azure Data Factory](/azure/data-factory/introduction?msclkid=2ccc66eccfde11ecaa58877e9d228779) lets you view lineage in mapping flows. Also, [Microsoft partners](/azure/synapse-analytics/partner/data-integration) provide automated metadata discovery, data lineage, and lineage comparison tools.
-## Migrating BI tool semantic layers to Azure Synapse Analytics
+## Migrate BI tool semantic layers to Azure Synapse Analytics
> [!TIP] > Some BI tools have semantic layers that simplify business user access to physical data structures in your data warehouse or data mart, like SAP Business Objects and IBM Cognos.
synapse-analytics 5 Minimize Sql Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/5-minimize-sql-issues.md
Title: "Minimizing SQL issues for Netezza migrations"
+ Title: "Minimize SQL issues for Netezza migrations"
description: Learn how to minimize the risk of SQL issues when migrating from Netezza to Azure Synapse.
Last updated 05/24/2022
-# Minimizing SQL issues for Netezza migrations
+# Minimize SQL issues for Netezza migrations
This article is part five of a seven part series that provides guidance on how to migrate from Netezza to Azure Synapse Analytics. This article provides best practices for minimizing SQL issues.
synapse-analytics 6 Microsoft Third Party Migration Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/6-microsoft-third-party-migration-tools.md
This article is part six of a seven part series that provides guidance on how to
## Data warehouse migration tools
-Migrating your existing data warehouse to Azure Synapse enables you to utilize:
+By migrating your existing data warehouse to Azure Synapse, you benefit from:
-- A globally secure, scalable, low-cost, cloud-native, pay-as-you-use analytical database
+- A globally secure, scalable, low-cost, cloud-native, pay-as-you-use analytical database.
-- The rich Microsoft analytical ecosystem that exists on Azure consists of technologies to help modernize your data warehouse once it's migrated and extend your analytical capabilities to drive new value
+- The rich Microsoft analytical ecosystem that exists on Azure. This ecosystem consists of technologies to help modernize your data warehouse once it's migrated, and extends your analytical capabilities to drive new value.
-Several tools from Microsoft and third-party partner vendors can help you migrate your existing data warehouse to Azure Synapse.
+Several tools from Microsoft and third-party partner vendors can help you migrate your existing data warehouse to Azure Synapse. These tools include:
-They include:
+- Microsoft data and database migration tools.
-- Microsoft data and database migration tools
+- Third-party data warehouse automation tools to automate and document the migration to Azure Synapse.
-- Third-party data warehouse automation tools to automate and document the migration to Azure Synapse
+- Third-party data warehouse migration tools to migrate schema and data to Azure Synapse.
-- Third-party data warehouse migration tools to migrate schema and data to Azure Synapse
+- Third-party tools to minimize the impact of SQL differences between your existing data warehouse DBMS and Azure Synapse.
-- Third-party tools to minimize the impact on SQL differences between your existing data warehouse DBMS and Azure Synapse-
-Let's look at these in more detail.
+The following sections discuss these tools in more detail.
## Microsoft data migration tools > [!TIP] > Data Factory includes tools to help migrate your data and your entire data warehouse to Azure.
-Microsoft offers several tools to help you migrate your existing data warehouse to Azure Synapse. They are:
+Microsoft offers several tools to help you migrate your existing data warehouse to Azure Synapse, such as:
-- Microsoft Azure Data Factory
+- Microsoft Azure Data Factory.
-- Microsoft services for physical data transfer
+- Microsoft services for physical data transfer.
-- Microsoft services for data ingestion
+- Microsoft services for data ingestion.
### Microsoft Azure Data Factory Microsoft Azure Data Factory is a fully managed, pay-as-you-use, hybrid data integration service for highly scalable ETL and ELT processing. It uses Spark to process and analyze data in parallel and in memory to maximize throughput. > [!TIP]
-> Data Factory allows you to build scalable data integration pipelines code free.
+> Data Factory allows you to build scalable data integration pipelines code-free.
-[Azure Data Factory connectors](/azure/data-factory/connector-overview?msclkid=00086e4acff211ec9263dee5c7eb6e69) connect to external data sources and databases and have templates for common data integration tasks. A visual front-end, browser-based, GUI enables non-programmers to create and run process pipelines to ingest, transform, and load data, while more experienced programmers have the option to incorporate custom code if necessary, such as Python programs.
+[Azure Data Factory connectors](/azure/data-factory/connector-overview?msclkid=00086e4acff211ec9263dee5c7eb6e69) connect to external data sources and databases and have templates for common data integration tasks. A visual front-end, browser-based UI enables non-programmers to create and run process pipelines to ingest, transform, and load data. More experienced programmers have the option to incorporate custom code, such as Python programs.
> [!TIP] > Data Factory enables collaborative development between business and IT professionals.
The next screenshot shows a Data Factory wrangling data flow.
:::image type="content" source="../media/6-microsoft-3rd-party-migration-tools/azure-data-factory-wrangling-dataflows.png" border="true" alt-text="Screenshot showing an example of Azure Data Factory wrangling dataflows.":::
-Development of simple or comprehensive ETL and ELT processes without coding or maintenance, with a few clicks. These processes ingest, move, prepare, transform, and process your data. Design and manage scheduling and triggers in Azure Data Factory to build an automated data integration and loading environment. Define, manage, and schedule PolyBase bulk data load processes in Data Factory.
+You can develop simple or comprehensive ETL and ELT processes without coding or maintenance with a few clicks. These processes ingest, move, prepare, transform, and process your data. You can design and manage scheduling and triggers in Azure Data Factory to build an automated data integration and loading environment. In Data Factory, you can define, manage, and schedule PolyBase bulk data load processes.
> [!TIP] > Data Factory includes tools to help migrate your data and your entire data warehouse to Azure.
-Use Data Factory to implement and manage a hybrid environment that includes on-premises, cloud, streaming and SaaS data&mdash;for example, from applications like Salesforce&mdash;in a secure and consistent way.
+You can use Data Factory to implement and manage a hybrid environment that includes on-premises, cloud, streaming and SaaS data&mdash;for example, from applications like Salesforce&mdash;in a secure and consistent way.
-A new capability in Data Factory is wrangling data flows. This opens up Data Factory to business users to make use of platform and allows them to visually discover, explore, and prepare data at scale without writing code. This easy-to-use Data Factory capability is like Microsoft Excel Power Query or Microsoft Power BI Dataflows, where self-service data preparation business users use a spreadsheet style user interface, with drop-down transforms, to prepare and integrate data.
+A new capability in Data Factory is wrangling data flows. This opens up Data Factory to business users who want to visually discover, explore, and prepare data at scale without writing code. This capability, similar to Microsoft Excel Power Query or Microsoft Power BI Dataflows, offers self-service data preparation. Business users can prepare and integrate data through a spreadsheet-style user interface with drop-down transform options.
Azure Data Factory is the recommended approach for implementing data integration and ETL/ELT processes for an Azure Synapse environment, especially if existing legacy processes need to be refactored.
Azure Data Factory is the recommended approach for implementing data integration
#### Azure ExpressRoute
-Azure ExpressRoute creates private connections between Azure data centers and infrastructure on your premises or in a collocation environment. ExpressRoute connections don't go over the public Internet, and they offer more reliability, faster speeds, and lower latencies than typical Internet connections. In some cases, using ExpressRoute connections to transfer data between on-premises systems and Azure can give you significant cost benefits.
+Azure ExpressRoute creates private connections between Azure data centers and infrastructure on your premises or in a collocation environment. ExpressRoute connections don't go over the public Internet, and they offer more reliability, faster speeds, and lower latencies than typical internet connections. In some cases, by using ExpressRoute connections to transfer data between on-premises systems and Azure, you gain significant cost benefits.
#### AzCopy
-[AzCopy](/azure/storage/common/storage-use-azcopy-v10) is a command line utility that copies files to Azure Blob Storage via a standard internet connection. You can use it to upload extracted, compressed, delimited text files before loading via PolyBase or native parquet reader (if the exported files are parquet) in a warehouse migration project. Individual files, file selections, and file directories can be uploaded.
+[AzCopy](/azure/storage/common/storage-use-azcopy-v10) is a command line utility that copies files to Azure Blob Storage via a standard internet connection. In a warehouse migration project, you can use AzCopy to upload extracted, compressed, and delimited text files before loading through PolyBase, or a native Parquet reader if the exported files are Parquet format. AzCopy can upload individual files, file selections, or file directories.
#### Azure Data Box
-Microsoft offers a service called Azure Data Box. This service writes data to be migrated to a physical storage device. This device is then shipped to an Azure data center and loaded into cloud storage. This service can be cost-effective for large volumes of data&mdash;for example, tens or hundreds of terabytes&mdash;or where network bandwidth isn't readily available and is typically used for the one-off historical data load when migrating a large amount of data to Azure Synapse.
+Microsoft offers a service called Azure Data Box. This service writes data to be migrated to a physical storage device. This device is then shipped to an Azure data center and loaded into cloud storage. The service can be cost-effective for large volumes of data&mdash;for example, tens or hundreds of terabytes&mdash;or where network bandwidth isn't readily available. Azure Data Box is typically used for one-off historical data load when migrating a large amount of data to Azure Synapse.
-Another service available is Data Box Gateway, a virtualized cloud storage gateway device that resides on your premises and sends your images, media, and other data to Azure. Use Data Box Gateway for one-off migration tasks or ongoing incremental data uploads.
+Another service is Data Box Gateway, a virtualized cloud storage gateway device that resides on your premises and sends your images, media, and other data to Azure. Use Data Box Gateway for one-off migration tasks or ongoing incremental data uploads.
### Microsoft services for data ingestion
PolyBase can also directly read from files compressed with gzip&mdash;this reduc
> [!TIP] > Invoke PolyBase from Azure Data Factory as part of a migration pipeline.
-PolyBase is tightly integrated with Azure Data Factory (see next section) to enable data load ETL/ELT processes to be rapidly developed and scheduled via a visual GUI, leading to higher productivity and fewer errors than hand-written code.
+PolyBase is tightly integrated with Azure Data Factory to enable data load ETL/ELT processes to be rapidly developed and scheduled through a visual GUI, leading to higher productivity and fewer errors than hand-written code.
PolyBase is the recommended data load method for Azure Synapse, especially for high-volume data. PolyBase loads data using the `CREATE TABLE AS` or `INSERT...SELECT` statements&mdash;CTAS achieves the highest possible throughput as it minimizes the amount of logging required. Compressed delimited text files are the most efficient input format. For maximum throughput, split very large input files into multiple smaller files and load these in parallel. For fastest loading to a staging table, define the target table as type `HEAP` and use round-robin distribution.
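As a hedged sketch of that loading pattern (the `ext.sales_export` external table and the staging table name are hypothetical), a CTAS statement loads the exported files exposed through PolyBase into a round-robin heap staging table:

```sql
-- CTAS into a HEAP, round-robin staging table for the fastest initial load;
-- ext.sales_export is assumed to be an external table over the exported, compressed delimited files.
CREATE TABLE dbo.stage_sales
WITH
(
    DISTRIBUTION = ROUND_ROBIN,
    HEAP
)
AS
SELECT *
FROM ext.sales_export;
```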
-There are some limitations in PolyBase. Rows to be loaded must be less than 1 MB in length. Fixed-width format or nested data such as JSON and XML aren't directly readable.
+However, PolyBase has some limitations. Rows to be loaded must be less than 1 MB in length. Fixed-width format or nested data, such as JSON and XML, aren't directly readable.
-## Microsoft partners to help you migrate your data warehouse to Azure Synapse Analytics
+## Microsoft partners can help you migrate your data warehouse to Azure Synapse Analytics
In addition to tools that can help you with various aspects of data warehouse migration, there are several practiced [Microsoft partners](/azure/synapse-analytics/partner/data-integration) that can bring their expertise to help you move your legacy on-premises data warehouse platform to Azure Synapse.
synapse-analytics 7 Beyond Data Warehouse Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/7-beyond-data-warehouse-migration.md
In addition, there's an opportunity to integrate Azure Synapse with Microsoft pa
Let's look at these in more detail to understand how you can take advantage of the technologies in Microsoft's analytical ecosystem to modernize your data warehouse once you've migrated to Azure Synapse.
-## Offloading data staging and ETL processing to Azure Data Lake and Azure Data Factory
+## Offload data staging and ETL processing to Azure Data Lake and Azure Data Factory
Enterprises today have a key problem resulting from digital transformation. So much new data is being generated and captured for analysis, and much of this data is finding its way into data warehouses. A good example is transaction data created by opening online transaction processing (OLTP) systems to self-service access from mobile devices. These OLTP systems are the main sources of data to a data warehouse, and with customers now driving the transaction rate rather than employees, data in data warehouse staging tables has been growing rapidly in volume. The rapid influx of data into the enterprise, along with new sources of data like Internet of Things (IoT), means that companies need to find a way to deal with unprecedented data growth and scale data integration ETL processing beyond current levels. One way to do this is to offload ingestion, data cleansing, transformation and integration to a data lake and process it at scale there, as part of a data warehouse modernization program.
-> [!TIP]
-> Offload ELT processing to Azure Data Lake and still run at scale as your data volumes grow.
- Once you've migrated your data warehouse to Azure Synapse, Microsoft provides the ability to modernize your ETL processing by ingesting data into, and staging data in, Azure Data Lake Storage. You can then clean, transform and integrate your data at scale using Data Factory before loading it into Azure Synapse in parallel using PolyBase.
+For ELT strategies, consider offloading ELT processing to Azure Data Lake to easily scale as your data volume or frequency grows.
+ ### Microsoft Azure Data Factory > [!TIP]
Data Factory can support multiple use cases, including:
#### Data sources
-Data Factory lets you use [connectors](/azure/data-factory/connector-overview) from both cloud and on-premises data sources. Agent software, known as a *self-hosted integration runtime*, securely accesses on-premises data sources and supports secure, scalable data transfer.
+Azure Data Factory lets you use [connectors](/azure/data-factory/connector-overview) from both cloud and on-premises data sources. Agent software, known as a *self-hosted integration runtime*, securely accesses on-premises data sources and supports secure, scalable data transfer.
-#### Transforming data using Data Factory
+#### Transform data using Azure Data Factory
> [!TIP]
-> Professional ETL developers can use Data Factory mapping data flows to clean, transform and integrate data without the need to write code.
+> Professional ETL developers can use Azure Data Factory mapping data flows to clean, transform and integrate data without the need to write code.
Within a Data Factory pipeline, ingest, clean, transform, integrate, and, if necessary, analyze any type of data from these sources. This includes structured, semi-structured&mdash;such as JSON or Avro&mdash;and unstructured data.
Extend Data Factory transformational and analytical functionality by adding a li
Store integrated data and any results from analytics included in a Data Factory pipeline in one or more data stores such as Azure Data Lake storage, Azure Synapse, or Azure HDInsight (Hive Tables). Invoke other activities to act on insights produced by a Data Factory analytical pipeline.
-#### Utilizing Spark to scale data integration
+#### Utilize Spark to scale data integration
Under the covers, Data Factory utilizes Azure Synapse Spark Pools&mdash;Microsoft's Spark-as-a-service offering&mdash;at run time to clean and integrate data on the Microsoft Azure cloud. This enables it to clean, integrate, and analyze high-volume and very high-velocity data (such as click stream data) at scale. Microsoft intends to execute Data Factory pipelines on other Spark distributions. In addition to executing ETL jobs on Spark, Data Factory can also invoke Pig scripts and Hive queries to access and transform data stored in Azure HDInsight.
-#### Linking self-service data prep and Data Factory ETL processing using wrangling data flows
+#### Link self-service data prep and Data Factory ETL processing using wrangling data flows
> [!TIP] > Data Factory support for wrangling data flows in addition to mapping data flows means that business and IT can work together on a common platform to integrate data.
Another new capability in Data Factory is wrangling data flows. This lets busine
This differs from Excel and Power BI, as Data Factory wrangling data flows uses Power Query Online to generate M code and translate it into a massively parallel in-memory Spark job for cloud scale execution. The combination of mapping data flows and wrangling data flows in Data Factory lets IT professional ETL developers and business users collaborate to prepare, integrate, and analyze data for a common business purpose. The preceding Data Factory mapping data flow diagram shows how both Data Factory and Azure Synapse Spark Pool Notebooks can be combined in the same Data Factory pipeline. This allows IT and business to be aware of what each has created. Mapping data flows and wrangling data flows can then be available for reuse to maximize productivity and consistency and minimize reinvention.
-#### Linking Data and Analytics in Analytical Pipelines
+#### Link data and analytics in analytical pipelines
-In addition to cleaning and transforming data, Data Factory can combine data integration and analytics in the same pipeline. Use Data Factory to create both data integration and analytical pipelines&mdash;the latter being an extension of the former. Drop an analytical model into a pipeline so that clean, integrated data can be stored to provide predictions or recommendations. Act on this information immediately or store it in your data warehouse to provide you with new insights and recommendations that can be viewed in BI tools.
+In addition to cleaning and transforming data, Azure Data Factory can combine data integration and analytics in the same pipeline. Use Data Factory to create both data integration and analytical pipelines&mdash;the latter being an extension of the former. Drop an analytical model into a pipeline so that clean, integrated data can be stored to provide predictions or recommendations. Act on this information immediately or store it in your data warehouse to provide you with new insights and recommendations that can be viewed in BI tools.
Models developed code-free with Azure ML Studio or with the Azure Machine Learning Service SDK using Azure Synapse Spark Pool Notebooks or using R in RStudio can be invoked as a service from within a Data Factory pipeline to batch score your data. Analysis happens at scale by executing Spark machine learning pipelines on Azure Synapse Spark Pool Notebooks.
ML.NET is an open-source and cross-platform machine learning framework (Windows,
#### Visual Studio .NET for Apache Spark
-Visual Studio .NET for Apache® Spark™ aims to make Spark accessible to .NET developers across all Spark APIs. It takes Spark support beyond R, Scala, Python, and Java to .NET. While initially only available on Apache Spark on HDInsight, Microsoft intends to make this available on Azure Synapse Spark Pool Notebook.
+Visual Studio .NET for Apache&reg; Spark&trade; aims to make Spark accessible to .NET developers across all Spark APIs. It takes Spark support beyond R, Scala, Python, and Java to .NET. While initially only available on Apache Spark on HDInsight, Microsoft intends to make this available on Azure Synapse Spark Pool Notebook.
-### Utilizing Azure Analytics with your data warehouse
+### Utilize Azure Analytics with your data warehouse
> [!TIP] > Train, test, evaluate, and execute machine learning models at scale on Azure Synapse Spark Pool Notebook using data in your Azure Synapse.
-Combine machine learning models built using the tools with Azure Synapse by.
+Combine machine learning models built using the tools with Azure Synapse by:
- Using machine learning models in batch mode or in real time to produce new insights, and add them to what you already know in Azure Synapse.
In terms of machine learning model development, data scientists can use RStudio,
In addition, you can ingest big data&mdash;such as social network data or review website data&mdash;into Azure Data Lake, then prepare and analyze it at scale on Azure Synapse Spark Pool Notebook, using natural language processing to score sentiment about your products or your brand. Add these scores to your data warehouse to understand the impact of&mdash;for example&mdash;negative sentiment on product sales, and to leverage big data analytics to add to what you already know in your data warehouse.
-## Integrating live streaming data into Azure Synapse Analytics
+## Integrate live streaming data into Azure Synapse Analytics
When analyzing data in a modern data warehouse, you must be able to analyze streaming data in real time and join it with historical data in your data warehouse. An example of this would be combining IoT data with product or asset data.
To do this, ingest streaming data via Microsoft Event Hubs or other technologies
:::image type="content" source="../media/7-beyond-data-warehouse-migration/azure-datalake-streaming-data.png" border="true" alt-text="Screenshot of Azure Synapse Analytics with streaming data in an Azure Data Lake.":::
-## Creating a logical data warehouse using PolyBase
+## Create a logical data warehouse using PolyBase
> [!TIP] > PolyBase simplifies access to multiple underlying analytical data stores on Azure to simplify access for business users.
These additional analytical platforms have emerged because of the explosion of n
- Other external data, such as open government data and weather data.
-This data is over and above the structured transaction data and master data sources that typically feed data warehouses. These new data sources include semi-structured data (like JSON, XLM, or Avro) or unstructured data (like text, voice, image, or video) which is more complex to process and analyze. This data could be very high volume, high velocity, or both.
+This data is over and above the structured transaction data and master data sources that typically feed data warehouses. These new data sources include semi-structured data (like JSON, XML, or Avro) or unstructured data (like text, voice, image, or video) which is more complex to process and analyze. This data could be very high volume, high velocity, or both.
As a result, the need for new kinds of more complex analysis has emerged, such as natural language processing, graph analysis, deep learning, streaming analytics, or complex analysis of large volumes of structured data. All of this is typically not happening in a data warehouse, so it's not surprising to see different analytical platforms for different types of analytical workloads, as shown in this diagram.
synapse-analytics 1 Design Performance Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/1-design-performance-migration.md
Microsoft Azure is a globally available, highly secure, scalable cloud environme
:::image type="content" source="../media/1-design-performance-migration/azure-synapse-ecosystem.png" border="true" alt-text="Chart showing the Azure Synapse ecosystem of supporting tools and capabilities."::: > [!TIP]
-> Azure Synapse gives best-of-breed performance and price-performance in independent benchmark.
+> Azure Synapse gives best-of-breed performance and price-performance in independent benchmarks.
Azure Synapse provides best-of-breed relational database performance by using techniques such as massively parallel processing (MPP) and multiple levels of automated caching for frequently used data. See the results of this approach in independent benchmarks such as the one run recently by [GigaOm](https://research.gigaom.com/report/data-warehouse-cloud-benchmark/), which compares Azure Synapse to other popular cloud data warehouse offerings. Customers who have migrated to this environment have seen many benefits including:
This paper looks at schema migration with a goal of equivalent or better perform
When migrating from a Teradata environment, there are some specific topics to consider in addition to the more general subjects described in this article.
-#### Choosing the workload for the initial migration
+#### Choose the workload for the initial migration
Legacy Teradata environments have typically evolved over time to encompass multiple subject areas and mixed workloads. When deciding where to start on an initial migration project, choose an area that can:
This is a good fit for existing Teradata environments where a single data mart i
##### Phased approach incorporating modifications
-In cases where a legacy warehouse has evolved over a long time, you may need to reengineer to maintain the required performance levels or to support new data like IoT steams. Migrate to Azure Synapse to get the benefits of a scalable cloud environment as part of the re-engineering process. Migration could include a change in the underlying data model, such as a move from an Inmon model to a data vault.
+In cases where a legacy warehouse has evolved over a long time, you may need to re-engineer to maintain the required performance levels or to support new data like IoT streams. Migrate to Azure Synapse to get the benefits of a scalable cloud environment as part of the re-engineering process. Migration could include a change in the underlying data model, such as a move from an Inmon model to a data vault.
Microsoft recommends moving the existing data model as-is to Azure (optionally using a VM Teradata instance in Azure) and using the performance and flexibility of the Azure environment to apply the re-engineering changes, leveraging Azure's capabilities to make the changes without impacting the existing source system.
-#### Using a VM Teradata instance as part of a migration
+#### Use an Azure VM Teradata instance as part of a migration
> [!TIP]
-> Use Azure's VM capability to create a temporary Teradata instance to speed up migration and minimize impact on the source system.
+> Use Azure VMs to create a temporary Teradata instance to speed up migration and minimize impact on the source system.
When migrating from an on-premises Teradata environment, you can leverage the Azure environment. Azure provides cheap cloud storage and elastic scalability to create a Teradata instance within a VM in Azure, collocating with the target Azure Synapse environment.
Teradata supports special table types for time series and temporal data. The syn
Teradata implements the temporal query functionality via query rewriting to add additional filters within a temporal query to limit the applicable date range. If this functionality is currently used in the source Teradata environment and is to be migrated, add this additional filtering into the relevant temporal queries.
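A minimal sketch of that explicit filtering, assuming a hypothetical `dbo.policy` table with `valid_from` and `valid_to` period columns, might look like:

```sql
-- Reproduce a point-in-time ("as of") temporal query by filtering on the validity period columns.
DECLARE @as_of DATE = '2022-01-01';

SELECT policy_id, premium
FROM dbo.policy
WHERE valid_from <= @as_of
  AND (valid_to > @as_of OR valid_to IS NULL);
```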
-The Azure environment also includes specific features for complex analytics on time- series data at a scale called [time series insights](https://azure.microsoft.com/services/time-series-insights/). This is aimed at IoT data analysis applications and may be more appropriate for this use case.
+The Azure environment also includes specific features for complex analytics on time-series data at scale, called [Azure Time Series Insights](https://azure.microsoft.com/services/time-series-insights/). This service is aimed at IoT data analysis applications and may be more appropriate for that use case.
#### SQL DML syntax differences
-There are a few differences in SQL Data Manipulation Language (DML) syntax between Teradata SQL and Azure Synapse (T-SQL) that you should be aware during migration:
+There are a few differences in SQL Data Manipulation Language (DML) syntax between Teradata SQL and Azure Synapse (T-SQL) that you should be aware of during migration:
- `QUALIFY`&mdash;Teradata supports the `QUALIFY` operator. For example:
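The sketch below uses hypothetical table and column names; because Azure Synapse has no `QUALIFY` clause, the window-function filter moves into a derived table:

```sql
-- Teradata: QUALIFY filters directly on a window function result.
-- SELECT store_id, sales_date, daily_total
-- FROM daily_sales
-- QUALIFY ROW_NUMBER() OVER (PARTITION BY store_id ORDER BY daily_total DESC) = 1;

-- Azure Synapse (T-SQL): apply the same filter from a derived table.
SELECT store_id, sales_date, daily_total
FROM (
    SELECT store_id, sales_date, daily_total,
           ROW_NUMBER() OVER (PARTITION BY store_id ORDER BY daily_total DESC) AS rn
    FROM dbo.daily_sales
) AS ranked
WHERE rn = 1;
```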
Azure Synapse doesn't support trigger creation, but trigger creation can be impl
With Azure Synapse, sequences are handled in a similar way to Teradata. Use [IDENTITY](/sql/t-sql/statements/create-table-transact-sql-identity-property?msclkid=8ab663accfd311ec87a587f5923eaa7b) columns or SQL code to create the next sequence number in a series.
-### Extracting metadata and data from a Teradata environment
+### Extract metadata and data from a Teradata environment
#### Data Definition Language (DDL) generation
Recommended data formats for the extracted data include delimited text files (al
For more detailed information on the process of migrating data and ETL from a Teradata environment, see Section 2.1. Data Migration ETL and Load from Teradata.
-## Performance Recommendations for Teradata Migrations
+## Performance recommendations for Teradata migrations
This article provides general information and guidelines about use of performance optimization techniques for Azure Synapse and adds specific recommendations for use when migrating from a Teradata environment.
For large table-large table joins, hash distributing one or, ideally, both table
Another way to achieve local joins for small table-large table joins&mdash;typically dimension table to fact table in a star schema model&mdash;is to replicate the smaller dimension table across all nodes. This ensures that any value of the join key of the larger table will have a matching dimension row locally available. The overhead of replicating the dimension tables is relatively low, provided the tables aren't very large (see [Design guidance for replicated tables](/azure/synapse-analytics/sql-data-warehouse/design-guidance-for-replicated-tables))&mdash;in which case, the hash distribution approach as described above is more appropriate. For more information, see [Distributed tables design](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-distribute).
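To make this concrete, here's a minimal sketch with hypothetical star-schema tables: the large fact table is hash-distributed on the join key, and the small dimension table is replicated so joins resolve locally on each distribution.

```sql
-- Hash-distribute the large fact table on the join key.
CREATE TABLE dbo.fact_sales
(
    sale_id      BIGINT        NOT NULL,
    customer_key INT           NOT NULL,
    sales_amount DECIMAL(18,2) NOT NULL
)
WITH
(
    DISTRIBUTION = HASH(customer_key),
    CLUSTERED COLUMNSTORE INDEX
);

-- Replicate the small dimension table so a matching row is available on every node.
CREATE TABLE dbo.dim_customer
(
    customer_key  INT           NOT NULL,
    customer_name NVARCHAR(200) NOT NULL
)
WITH
(
    DISTRIBUTION = REPLICATE,
    CLUSTERED COLUMNSTORE INDEX
);
```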
-#### Data Indexing
+#### Data indexing
Azure Synapse provides several indexing options, but these are different from the indexing options implemented in Teradata. More details of the different indexing options are described in [table indexes](/azure/sql-data-warehouse/sql-data-warehouse-tables-index).
Ensure that statistics on data tables are up to date by building in a [statistic
PolyBase is the most efficient method for loading large amounts of data into the warehouse since it can leverage parallel loading streams. For more information, see [PolyBase data loading strategy](/azure/synapse-analytics/sql/load-data-overview).
-#### Use Workload management
+#### Use workload management
-Use [Workload management](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-workload-management?context=/azure/synapse-analytics/context/context) instead of resource classes. ETL would be in its own workgroup and should be configured to have more resources per query (less concurrency by more resources). For more information, see [What is dedicated SQL pool in Azure Synapse Analytics](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is).
+Use [workload management](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-workload-management?context=/azure/synapse-analytics/context/context) instead of resource classes. ETL would be in its own workgroup and should be configured to have more resources per query (less concurrency by more resources). For more information, see [What is dedicated SQL pool in Azure Synapse Analytics](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is).
## Next steps
synapse-analytics 2 Etl Load Migration Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/2-etl-load-migration-considerations.md
Even if a data model change is an intended part of the overall migration, it's g
When migrating from Teradata, consider creating a Teradata environment in a VM within Azure as a stepping stone in the migration process.
-#### Using a VM Teradata instance as part of a migration
+#### Use a VM Teradata instance as part of a migration
One optional approach for migrating from an on-premises Teradata environment is to leverage the Azure environment to create a Teradata instance in a VM within Azure, co-located with the target Azure Synapse environment. This is possible because Azure provides cheap cloud storage and elastic scalability.
With this approach, standard Teradata utilities, such as Teradata Parallel Data
- The migration process is orchestrated and controlled entirely within the Azure environment.
-#### Migrating data marts - stay physical or go virtual?
+#### Migrate data marts - stay physical or go virtual?
> [!TIP] > Virtualizing data marts can save on storage and processing resources.
If a third-party ETL tool is already in use, and especially if there's a large i
If you decide to retain an existing third-party ETL tool, there may be benefits to running that tool within the Azure environment (rather than on an existing on-premises ETL server) and having Azure Data Factory handle the overall orchestration of the existing workflows. One particular benefit is that less data needs to be downloaded from Azure, processed, and then uploaded back into Azure. So, decision 4 is whether to leave the existing tool running as-is or move it into the Azure environment to achieve cost, performance, and scalability benefits.
-### Re-engineering existing Teradata-specific scripts
+### Re-engineer existing Teradata-specific scripts
If some or all the existing Teradata warehouse ETL/ELT processing is handled by custom scripts that utilize Teradata-specific utilities, such as BTEQ, MLOAD, or TPT, these scripts need to be recoded for the new Azure Synapse environment. Similarly, if ETL processes were implemented using stored procedures in Teradata, then these will also have to be recoded. > [!TIP] > The inventory of ETL tasks to be migrated should include scripts and stored procedures.
-Some elements of the ETL process are easy to migrate&mdash;for example, by simple bulk data load into a staging table from an external file. It may even be possible to automate those parts of the process, for example, by using PolyBase instead of fast load or MLOAD. If the exported files are Parquet, you can use a native Parquet reader, which is a faster option than PolyBase. Other parts of the process that contain arbitrary complex SQL and/or stored procedures will take more time to reengineer.
+Some elements of the ETL process are easy to migrate&mdash;for example, by simple bulk data load into a staging table from an external file. It may even be possible to automate those parts of the process, for example, by using PolyBase instead of fast load or MLOAD. If the exported files are Parquet, you can use a native Parquet reader, which is a faster option than PolyBase. Other parts of the process that contain arbitrary complex SQL and/or stored procedures will take more time to re-engineer.
One way of testing Teradata SQL for compatibility with Azure Synapse is to capture some representative SQL statements from Teradata logs, then prefix those queries with `EXPLAIN`, and then&mdash;assuming a like-for-like migrated data model in Azure Synapse&mdash;run those `EXPLAIN` statements in Azure Synapse. Any incompatible SQL will generate an error, and the error information can determine the scale of the recoding task. [Microsoft partners](/azure/sql-data-warehouse/sql-data-warehouse-partner-data-integration) offer tools and services to migrate Teradata SQL and stored procedures to Azure Synapse.
-### Using existing third party ETL tools
+### Use third party ETL tools
As described in the previous section, in many cases the existing legacy data warehouse system will already be populated and maintained by third-party ETL products. For a list of Microsoft data integration partners for Azure Synapse, see [Data Integration partners](/azure/sql-data-warehouse/sql-data-warehouse-partner-data-integration).
synapse-analytics 3 Security Access Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/3-security-access-operations.md
See [Resource classes for workload management](/azure/sql-data-warehouse/resourc
This information can also be used for capacity planning, to determine the resources required for additional users or application workloads. It also applies to planning scale-ups and scale-downs of compute resources for cost-effective support of 'peaky' workloads.
-### Scaling compute resources
+### Scale compute resources
> [!TIP] > A major benefit of Azure is the ability to independently scale up and down compute resources on demand to handle peaky workloads cost-effectively.
synapse-analytics 4 Visualization Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/4-visualization-reporting.md
Last updated 05/24/2022
This article is part four of a seven part series that provides guidance on how to migrate from Teradata to Azure Synapse Analytics. This article provides best practices for visualization and reporting.
-## Accessing Azure Synapse Analytics using Microsoft and third-party BI tools
+## Access Azure Synapse Analytics using Microsoft and third-party BI tools
Almost every organization accesses data warehouses and data marts by using a range of BI tools and applications, such as:
There's a lot to think about here, so let's look at all this in more detail.
> [!TIP] > A lift and shift data warehouse migration is likely to minimize any disruption to reports, dashboards, and other visualizations.
-## Minimizing the impact of data warehouse migration on BI tools and reports using data virtualization
+## Minimize the impact of data warehouse migration on BI tools and reports using data virtualization
> [!TIP] > Data virtualization allows you to shield business users from structural changes during migration so that they remain unaware of changes.
This breaks the dependency between business users utilizing self-service BI tool
By introducing data virtualization, any schema alterations made during data warehouse and data mart migration to Azure Synapse (to optimize performance, for example) can be hidden from business users because they only access virtual tables in the data virtualization layer. If structural changes are needed, only the mappings between the data warehouse or data marts and the virtual tables would need to change, so users remain unaware of those changes and unaware of the migration. [Microsoft partners](/azure/synapse-analytics/partner/data-integration) provide useful data virtualization software.
-## Identifying high priority reports to migrate first
+## Identify high priority reports to migrate first
A key question when migrating your existing reports and dashboards to Azure Synapse is which ones to migrate first. Several factors can drive the decision. For example:
A key question when migrating your existing reports and dashboards to Azure Syna
These factors are discussed in more detail later in this article.
-Whatever the decision is, it must involve the business, since they produce the reports and dashboards, and consume the insights these artifacts provide in support of the decisions that are made around your business. That said, if most reports and dashboards can be migrated seamlessly, with minimal effort, and offer up like- for-like results, simply by pointing your BI tool(s) at Azure Synapse, instead of your legacy data warehouse system, then everyone benefits. Therefore, if it's that straight forward and there's no reliance on legacy system proprietary SQL extensions, then there's no doubt that the above ease of migration option breeds confidence.
+Whatever the decision is, it must involve the business, since they produce the reports and dashboards and consume the insights these artifacts provide in support of business decisions. That said, if most reports and dashboards can be migrated seamlessly, with minimal effort, and offer like-for-like results simply by pointing your BI tool(s) at Azure Synapse instead of your legacy data warehouse system, then everyone benefits. Therefore, if migration is that straightforward and there's no reliance on proprietary SQL extensions in the legacy system, that ease of migration breeds confidence.
-### Migrating reports based on usage
+### Migrate reports based on usage
-Usage is interesting, since it's an indicator of business value. Reports and dashboards that are never used clearly aren't contributing to supporting any decisions and$1 [don't currently offer any value. So, do you've any mechanism for finding out which reports, and dashboards are currently not used? Several BI tools provide statistics on usage, which would be an obvious place to start.
+Usage is interesting, since it's an indicator of business value. Reports and dashboards that are never used clearly aren't contributing to supporting any decisions and don't currently offer any value. So, do you have any mechanism for finding out which reports and dashboards are currently not used? Several BI tools provide statistics on usage, which would be an obvious place to start.
If your legacy data warehouse has been up and running for many years, there's a high chance you could have hundreds, if not thousands, of reports in existence. In these situations, usage is an important indicator of the business value of a specific report or dashboard. In that sense, it's worth compiling an inventory of the reports and dashboards you have and defining their business purpose and usage statistics. For those that aren't used at all, it's an appropriate time to seek a business decision on whether to de-commission those reports to optimize your migration efforts. A key question worth asking when deciding to de-commission unused reports is: are they unused because people don't know they exist, do they offer no business value, or have they been superseded by others?
-### Migrating reports based on business value
+### Migrate reports based on business value
Usage on its own isn't a clear indicator of business value. There needs to be a deeper business context to determine the value to the business. In an ideal world, we would like to know the contribution of the insights produced in a report to the bottom line of the business. That's exceedingly difficult to determine, since every decision made, and its dependency on the insights in a specific report, would need to be recorded along with the contribution that each decision makes to the bottom line of the business. You would also need to do this over time.
It's also worthwhile to classify reports and dashboards as operational, tactical
While this may seem too time-consuming, you need a mechanism to understand the contribution of reports and dashboards to business value, whether you're migrating or not. Catalogs like Azure Data Catalog are becoming very important because they give you the ability to catalog reports and dashboards, automatically capture the metadata associated with them, and let business users tag and rate them to help you understand business value.
-### Migrating reports based on data migration strategy
+### Migrate reports based on data migration strategy
> [!TIP] > Data migration strategy could also dictate which reports and visualizations get migrated first.
Additionally, any report, dashboard, or other visualization in an application or
- Issues SQL queries that include proprietary SQL functions, peculiar to the SQL dialect of your legacy data warehouse DBMS, which have no equivalent in Azure Synapse.
-### Gauging the impact of SQL incompatibilities on your reporting portfolio
+### Gauge the impact of SQL incompatibilities on your reporting portfolio
You can't rely on documentation associated with reports, dashboards, and other visualizations to gauge how big of an impact SQL incompatibility may have on the portfolio of embedded query services, reports, dashboards, and other visualizations you're intending to migrate to Azure Synapse. There must be a more precise way of doing that.
-#### Using EXPLAIN statements to find SQL incompatibilities
+#### Use EXPLAIN statements to find SQL incompatibilities
> [!TIP] > Gauge the impact of SQL incompatibilities by harvesting your DBMS log files and running `EXPLAIN` statements.
One way is to get a hold of the SQL log files of your legacy data warehouse. Use
Metadata from your legacy data warehouse DBMS will also help you when it comes to views. Again, you can capture and view SQL statements, and `EXPLAIN` them as described previously to identify incompatible SQL in views.
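As a minimal, hypothetical sketch of this technique (the query, table, and column names below are illustrative, not taken from any real log), a statement harvested from the legacy system's log files can be prefixed with `EXPLAIN` on Teradata, and its text reviewed for constructs that Azure Synapse doesn't support:

```sql
-- Hypothetical query captured from the legacy Teradata log files.
-- Prefixing it with EXPLAIN on the Teradata system confirms it still parses
-- there, while a review of its text flags Teradata-specific syntax, such as
-- the QUALIFY clause below, which has no direct equivalent in Azure Synapse
-- and would need to be rewritten (for example, as a subquery that filters
-- on ROW_NUMBER()).
EXPLAIN
SELECT customer_id,
       order_date,
       order_total
FROM   sales.orders
QUALIFY ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_date DESC) = 1;
```

Running this kind of check in bulk over the harvested statements gives a rough count of how many queries and views rely on incompatible SQL.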
-## Testing report and dashboard migration to Azure Synapse Analytics
+## Test report and dashboard migration to Azure Synapse Analytics
> [!TIP] > Test performance and tune to minimize compute costs.
In terms of security, the best way to do this is to create roles, assign access
It's also important to communicate the cut-over to all users, so they know what's changing and what to expect.
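As a rough sketch of the role-based approach described above (the schema, role, and user names are hypothetical), roles in the migrated Azure Synapse dedicated SQL pool might be set up like this:

```sql
-- Hypothetical names; a minimal sketch only.
-- Create an Azure AD user, create a role for report consumers, grant the role
-- read access to the reporting schema, then add the user to the role.
CREATE USER [analyst1@contoso.com] FROM EXTERNAL PROVIDER;
CREATE ROLE report_readers;
GRANT SELECT ON SCHEMA::reporting TO report_readers;
ALTER ROLE report_readers ADD MEMBER [analyst1@contoso.com];
```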
-## Analyzing lineage to understand dependencies between reports, dashboards, and data
+## Analyze lineage to understand dependencies between reports, dashboards, and data
> [!TIP] > Having access to metadata and data lineage from reports all the way back to data source is critical for verifying that migrated reports are working correctly.
This substantially simplifies the data migration process, because the business w
Several ETL tools provide end-to-end lineage capability, and you may be able to make use of this via your existing ETL tool if you're continuing to use it with Azure Synapse. Microsoft [Synapse pipelines](/azure/synapse-analytics/get-started-pipelines?msclkid=8f3e7e96cfed11eca432022bc07c18de) or [Azure Data Factory](/azure/data-factory/introduction?msclkid=2ccc66eccfde11ecaa58877e9d228779) lets you view lineage in mapping flows. Also, [Microsoft partners](/azure/synapse-analytics/partner/data-integration) provide automated metadata discovery, data lineage, and lineage comparison tools.
-## Migrating BI tool semantic layers to Azure Synapse Analytics
+## Migrate BI tool semantic layers to Azure Synapse Analytics
> [!TIP] > Some BI tools have semantic layers that simplify business user access to physical data structures in your data warehouse or data mart, like SAP Business Objects and IBM Cognos.
synapse-analytics 5 Minimize Sql Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/5-minimize-sql-issues.md
Title: "Minimizing SQL issues for Teradata migrations"
+ Title: "Minimize SQL issues for Teradata migrations"
description: Learn how to minimize the risk of SQL issues when migrating from Teradata to Azure Synapse.
Last updated 05/24/2022
-# Minimizing SQL issues for Teradata migrations
+# Minimize SQL issues for Teradata migrations
This article is part five of a seven-part series that provides guidance on how to migrate from Teradata to Azure Synapse Analytics. This article provides best practices for minimizing SQL issues.
This combination of SQL and dimensional data models simplifies migration to Azur
While the SQL language has been standardized, individual vendors have in some cases implemented proprietary extensions. This document highlights potential SQL differences you may encounter while migrating from a legacy Teradata environment, and provides workarounds.
-### Using a VM Teradata instance as part of a migration
+### Use an Azure VM Teradata instance as part of a migration
> [!TIP]
-> Use the VM capability in Azure to create a temporary Teradata instance to speed up migration and minimize impact on the source system.
+> Use an Azure VM to create a temporary Teradata instance to speed up migration and minimize impact on the source system.
Leverage the Azure environment when running a migration from an on-premises Teradata environment. Azure provides affordable cloud storage and elastic scalability to create a Teradata instance within a VM in Azure, collocated with the target Azure Synapse environment.
synapse-analytics 6 Microsoft Third Party Migration Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/6-microsoft-third-party-migration-tools.md
This article is part six of a seven part series that provides guidance on how to
## Data warehouse migration tools
-Migrating your existing data warehouse to Azure Synapse enables you to utilize:
+By migrating your existing data warehouse to Azure Synapse, you benefit from:
-- A globally secure, scalable, low-cost, cloud-native, pay-as-you-use analytical database
+- A globally secure, scalable, low-cost, cloud-native, pay-as-you-use analytical database.
-- The rich Microsoft analytical ecosystem that exists on Azure consists of technologies to help modernize your data warehouse once it's migrated and extend your analytical capabilities to drive new value
+- The rich Microsoft analytical ecosystem that exists on Azure. This ecosystem consists of technologies to help modernize your data warehouse once it's migrated, and extends your analytical capabilities to drive new value.
-Several tools from Microsoft and third-party partner vendors can help you migrate your existing data warehouse to Azure Synapse.
+Several tools from Microsoft and third-party partner vendors can help you migrate your existing data warehouse to Azure Synapse. These tools include:
-They include:
+- Microsoft data and database migration tools.
-- Microsoft data and database migration tools
+- Third-party data warehouse automation tools to automate and document the migration to Azure Synapse.
-- Third-party data warehouse automation tools to automate and document the migration to Azure Synapse
+- Third-party data warehouse migration tools to migrate schema and data to Azure Synapse.
-- Third-party data warehouse migration tools to migrate schema and data to Azure Synapse
+- Third-party tools to minimize the impact of SQL differences between your existing data warehouse DBMS and Azure Synapse.
-- Third-party tools to minimize the impact on SQL differences between your existing data warehouse DBMS and Azure Synapse-
-Let's look at these in more detail.
+The following sections discuss these tools in more detail.
## Microsoft data migration tools > [!TIP] > Data Factory includes tools to help migrate your data and your entire data warehouse to Azure.
-Microsoft offers several tools to help you migrate your existing data warehouse to Azure Synapse. They are:
+Microsoft offers several tools to help you migrate your existing data warehouse to Azure Synapse, such as:
-- Microsoft Azure Data Factory
+- Microsoft Azure Data Factory.
-- Microsoft services for physical data transfer
+- Microsoft services for physical data transfer.
-- Microsoft services for data ingestion
+- Microsoft services for data ingestion.
### Microsoft Azure Data Factory Microsoft Azure Data Factory is a fully managed, pay-as-you-use, hybrid data integration service for highly scalable ETL and ELT processing. It uses Spark to process and analyze data in parallel and in memory to maximize throughput. > [!TIP]
-> Data Factory allows you to build scalable data integration pipelines code free.
+> Data Factory allows you to build scalable data integration pipelines code-free.
-[Azure Data Factory connectors](/azure/data-factory/connector-overview?msclkid=00086e4acff211ec9263dee5c7eb6e69) connect to external data sources and databases and have templates for common data integration tasks. A visual front-end, browser-based, GUI enables non-programmers to create and run process pipelines to ingest, transform, and load data, while more experienced programmers have the option to incorporate custom code if necessary, such as Python programs.
+[Azure Data Factory connectors](/azure/data-factory/connector-overview?msclkid=00086e4acff211ec9263dee5c7eb6e69) connect to external data sources and databases and have templates for common data integration tasks. A visual front-end, browser-based UI enables non-programmers to create and run process pipelines to ingest, transform, and load data. More experienced programmers have the option to incorporate custom code, such as Python programs.
> [!TIP] > Data Factory enables collaborative development between business and IT professionals.
The next screenshot shows a Data Factory wrangling data flow.
:::image type="content" source="../media/6-microsoft-3rd-party-migration-tools/azure-data-factory-wrangling-dataflows.png" border="true" alt-text="Screenshot showing an example of Azure Data Factory wrangling dataflows.":::
-Development of simple or comprehensive ETL and ELT processes without coding or maintenance, with a few clicks. These processes ingest, move, prepare, transform, and process your data. Design and manage scheduling and triggers in Azure Data Factory to build an automated data integration and loading environment. Define, manage, and schedule PolyBase bulk data load processes in Data Factory.
+You can develop simple or comprehensive ETL and ELT processes without coding or maintenance with a few clicks. These processes ingest, move, prepare, transform, and process your data. You can design and manage scheduling and triggers in Azure Data Factory to build an automated data integration and loading environment. In Data Factory, you can define, manage, and schedule PolyBase bulk data load processes.
> [!TIP] > Data Factory includes tools to help migrate your data and your entire data warehouse to Azure.
-Use Data Factory to implement and manage a hybrid environment that includes on-premises, cloud, streaming and SaaS data&mdash;for example, from applications like Salesforce&mdash;in a secure and consistent way.
+You can use Data Factory to implement and manage a hybrid environment that includes on-premises, cloud, streaming and SaaS data&mdash;for example, from applications like Salesforce&mdash;in a secure and consistent way.
-A new capability in Data Factory is wrangling data flows. This opens up Data Factory to business users to make use of platform and allows them to visually discover, explore and prepare data at scale without writing code. This easy-to-use Data Factory capability is like Microsoft Excel Power Query or Microsoft Power BI Dataflows, where self-service data preparation business users use a spreadsheet style user interface, with drop-down transforms, to prepare and integrate data.
+A new capability in Data Factory is wrangling data flows. This opens up Data Factory to business users who want to visually discover, explore, and prepare data at scale without writing code. This capability, similar to Microsoft Excel Power Query or Microsoft Power BI Dataflows, offers self-service data preparation. Business users can prepare and integrate data through a spreadsheet-style user interface with drop-down transform options.
Azure Data Factory is the recommended approach for implementing data integration and ETL/ELT processes for an Azure Synapse environment, especially if existing legacy processes need to be refactored.
Azure Data Factory is the recommended approach for implementing data integration
#### Azure ExpressRoute
-Azure ExpressRoute creates private connections between Azure data centers and infrastructure on your premises or in a colocation environment. ExpressRoute connections don't go over the public Internet, and they offer more reliability, faster speeds, and lower latencies than typical Internet connections. In some cases, using ExpressRoute connections to transfer data between on-premises systems and Azure can give you significant cost benefits.
+Azure ExpressRoute creates private connections between Azure data centers and infrastructure on your premises or in a collocation environment. ExpressRoute connections don't go over the public internet, and they offer more reliability, faster speeds, and lower latencies than typical internet connections. In some cases, by using ExpressRoute connections to transfer data between on-premises systems and Azure, you gain significant cost benefits.
#### AzCopy
-[AzCopy](/azure/storage/common/storage-use-azcopy-v10) is a command line utility that copies files to Azure Blob Storage via a standard internet connection. You can use it to upload extracted, compressed, delimited text files before loading via PolyBase or native parquet reader (if the exported files are parquet) in a warehouse migration project. Individual files, file selections, and file directories can be uploaded.
+[AzCopy](/azure/storage/common/storage-use-azcopy-v10) is a command line utility that copies files to Azure Blob Storage via a standard internet connection. In a warehouse migration project, you can use AzCopy to upload extracted, compressed, and delimited text files before loading through PolyBase, or a native Parquet reader if the exported files are Parquet format. AzCopy can upload individual files, file selections, or file directories.
#### Azure Data Box
-Microsoft offers a service called Azure Data Box. This service writes data to be migrated to a physical storage device. This device is then shipped to an Azure data center and loaded into cloud storage. This service can be cost-effective for large volumes of data&mdash;for example, tens or hundreds of terabytes&mdash;or where network bandwidth isn't readily available and is typically used for the one-off historical data load when migrating a large amount of data to Azure Synapse.
+Microsoft offers a service called Azure Data Box. This service writes data to be migrated to a physical storage device. This device is then shipped to an Azure data center and loaded into cloud storage. The service can be cost-effective for large volumes of data&mdash;for example, tens or hundreds of terabytes&mdash;or where network bandwidth isn't readily available. Azure Data Box is typically used for one-off historical data load when migrating a large amount of data to Azure Synapse.
-Another service available is Data Box Gateway, a virtualized cloud storage gateway device that resides on your premises and sends your images, media, and other data to Azure. Use Data Box Gateway for one-off migration tasks or ongoing incremental data uploads.
+Another service is Data Box Gateway, a virtualized cloud storage gateway device that resides on your premises and sends your images, media, and other data to Azure. Use Data Box Gateway for one-off migration tasks or ongoing incremental data uploads.
### Microsoft services for data ingestion
PolyBase can also directly read from files compressed with gzip&mdash;this reduc
> [!TIP] > Invoke PolyBase from Azure Data Factory as part of a migration pipeline.
-PolyBase is tightly integrated with Azure Data Factory (see next section) to enable data load ETL/ELT processes to be rapidly developed and scheduled via a visual GUI, leading to higher productivity and fewer errors than hand-written code.
+PolyBase is tightly integrated with Azure Data Factory to enable data load ETL/ELT processes to be rapidly developed and scheduled through a visual GUI, leading to higher productivity and fewer errors than hand-written code.
PolyBase is the recommended data load method for Azure Synapse, especially for high-volume data. PolyBase loads data using the `CREATE TABLE AS` or `INSERT...SELECT` statements&mdash;CTAS achieves the highest possible throughput as it minimizes the amount of logging required. Compressed delimited text files are the most efficient input format. For maximum throughput, split very large input files into multiple smaller files and load these in parallel. For fastest loading to a staging table, define the target table as type `HEAP` and use round-robin distribution.
-There are some limitations in PolyBase. Rows to be loaded must be less than 1 MB in length. Fixed-width format or nested data such as JSON and XML aren't directly readable.
+However, PolyBase has some limitations. Rows to be loaded must be less than 1 MB in length. Fixed-width format or nested data, such as JSON and XML, aren't directly readable.
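As a minimal sketch of the load pattern just described (the external data source, file format, object names, and paths are hypothetical, not taken from this article), a PolyBase CTAS load into a round-robin `HEAP` staging table looks roughly like this:

```sql
-- Hypothetical object names and paths; assumes an external data source and
-- file format pointing at the compressed delimited export files already exist.
CREATE EXTERNAL TABLE ext.DimCustomer
(
    CustomerKey  INT,
    CustomerName NVARCHAR(100)
)
WITH
(
    LOCATION    = '/export/dim_customer/',
    DATA_SOURCE = MyAzureStorage,       -- assumed to exist
    FILE_FORMAT = CompressedDelimited   -- assumed to exist
);

-- CTAS into a round-robin HEAP staging table for the fastest possible load.
CREATE TABLE stage.DimCustomer
WITH
(
    DISTRIBUTION = ROUND_ROBIN,
    HEAP
)
AS
SELECT CustomerKey, CustomerName
FROM   ext.DimCustomer;
```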
-## Microsoft partners to help you migrate your data warehouse to Azure Synapse Analytics
+## Microsoft partners can help you migrate your data warehouse to Azure Synapse Analytics
In addition to tools that can help you with various aspects of data warehouse migration, there are several practiced [Microsoft partners](/azure/synapse-analytics/partner/data-integration) that can bring their expertise to help you move your legacy on-premises data warehouse platform to Azure Synapse.
synapse-analytics 7 Beyond Data Warehouse Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/7-beyond-data-warehouse-migration.md
In addition, there's an opportunity to integrate Azure Synapse with Microsoft pa
Let's look at these in more detail to understand how you can take advantage of the technologies in Microsoft's analytical ecosystem to modernize your data warehouse once you've migrated to Azure Synapse.
-## Offloading data staging and ETL processing to Azure Data Lake and Azure Data Factory
+## Offload data staging and ETL processing to Azure Data Lake and Azure Data Factory
Enterprises today have a key problem resulting from digital transformation. So much new data is being generated and captured for analysis, and much of this data is finding its way into data warehouses. A good example is transaction data created by opening online transaction processing (OLTP) systems to self-service access from mobile devices. These OLTP systems are the main sources of data for a data warehouse, and with customers now driving the transaction rate rather than employees, data in data warehouse staging tables has been growing rapidly in volume. This, along with other new data coming into the enterprise&mdash;like Internet of Things (IoT) data&mdash;means that companies need to find a way to deal with unprecedented data growth and scale data integration ETL processing beyond current levels. One way to do this is to offload ingestion, data cleansing, transformation, and integration to a data lake and process it at scale there, as part of a data warehouse modernization program.
-> [!TIP]
-> Offload ELT processing to Azure Data Lake and still run at scale as your data volumes grow.
- Once you've migrated your data warehouse to Azure Synapse, Microsoft provides the ability to modernize your ETL processing by ingesting data into, and staging data in, Azure Data Lake Storage. You can then clean, transform and integrate your data at scale using Data Factory before loading it into Azure Synapse in parallel using PolyBase.
+For ELT strategies, consider offloading ELT processing to Azure Data Lake to easily scale as your data volume or frequency grows.
+ ### Microsoft Azure Data Factory > [!TIP] > Data Factory allows you to build scalable data integration pipelines code free.
-[Microsoft Azure Data Factory](https://azure.microsoft.com/services/data-factory/)] is a pay-as-you-use, hybrid data integration service for highly scalable ETL and ELT processing. Data Factory provides a simple web-based user interface to build data integration pipelines, in a code-free manner that can:
+[Microsoft Azure Data Factory](https://azure.microsoft.com/services/data-factory/) is a pay-as-you-use, hybrid data integration service for highly scalable ETL and ELT processing. Data Factory provides a simple web-based user interface to build data integration pipelines, in a code-free manner that can:
- Data Factory allows you to build scalable data integration pipelines code free. Easily acquire data at scale. Pay only for what you use and connect to on premises, cloud, and SaaS based data sources.
Data Factory can support multiple use cases, including:
Data Factory lets you connect with [connectors](/azure/data-factory/connector-overview) from both cloud and on-premises data sources. Agent software, known as a Self-Hosted Integration Runtime, securely accesses on-premises data sources and supports secure, scalable data transfer.
-#### Transforming data using Data Factory
+#### Transform data using Azure Data Factory
> [!TIP]
-> Professional ETL developers can use Data Factory mapping data flows to clean, transform and integrate data without the need to write code.
+> Professional ETL developers can use Azure Data Factory mapping data flows to clean, transform and integrate data without the need to write code.
Within a Data Factory pipeline, ingest, clean, transform, integrate, and, if necessary, analyze any type of data from these sources. This includes structured, semi-structured&mdash;such as JSON or Avro&mdash;and unstructured data.
Extend Data Factory transformational and analytical functionality by adding a li
Store integrated data and any results from analytics included in a Data Factory pipeline in one or more data stores such as Azure Data Lake storage, Azure Synapse, or Azure HDInsight (Hive Tables). Invoke other activities to act on insights produced by a Data Factory analytical pipeline.
-#### Utilizing Spark to scale data integration
+#### Utilize Spark to scale data integration
Under the covers, Data Factory utilizes Azure Synapse Spark Pools&mdash;Microsoft's Spark-as-a-service offering&mdash;at run time to clean and integrate data on the Microsoft Azure cloud. This enables it to clean, integrate, and analyze high-volume and very high-velocity data (such as click stream data) at scale. Microsoft intends to execute Data Factory pipelines on other Spark distributions. In addition to executing ETL jobs on Spark, Data Factory can also invoke Pig scripts and Hive queries to access and transform data stored in Azure HDInsight.
-#### Linking self-service data prep and Data Factory ETL processing using wrangling data flows
+#### Link self-service data prep and Data Factory ETL processing using wrangling data flows
> [!TIP] > Data Factory support for wrangling data flows in addition to mapping data flows means that business and IT can work together on a common platform to integrate data.
Another new capability in Data Factory is wrangling data flows. This lets busine
This differs from Excel and Power BI, as Data Factory wrangling data flows use Power Query Online to generate M code and translate it into a massively parallel in-memory Spark job for cloud-scale execution. The combination of mapping data flows and wrangling data flows in Data Factory lets professional ETL developers in IT and business users collaborate to prepare, integrate, and analyze data for a common business purpose. The preceding Data Factory mapping data flow diagram shows how both Data Factory and Azure Synapse Spark Pool Notebooks can be combined in the same Data Factory pipeline. This allows IT and business to be aware of what each has created. Mapping data flows and wrangling data flows can then be made available for reuse to maximize productivity and consistency, and minimize reinvention.
-#### Linking Data and Analytics in Analytical Pipelines
+#### Link data and analytics in analytical pipelines
In addition to cleaning and transforming data, Data Factory can combine data integration and analytics in the same pipeline. Use Data Factory to create both data integration and analytical pipelines&mdash;the latter being an extension of the former. Drop an analytical model into a pipeline so that clean, integrated data can be stored to provide predictions or recommendations. Act on this information immediately or store it in your data warehouse to provide you with new insights and recommendations that can be viewed in BI tools.
ML.NET is an open-source and cross-platform machine learning framework (Windows,
#### Visual Studio .NET for Apache Spark
-Visual Studio .NET for Apache® Spark™ aims to make Spark accessible to .NET developers across all Spark APIs. It takes Spark support beyond R, Scala, Python, and Java to .NET. While initially only available on Apache Spark on HDInsight, Microsoft intends to make this available on Azure Synapse Spark Pool Notebook.
+Visual Studio .NET for Apache&reg; Spark&trade; aims to make Spark accessible to .NET developers across all Spark APIs. It takes Spark support beyond R, Scala, Python, and Java to .NET. While initially only available on Apache Spark on HDInsight, Microsoft intends to make this available on Azure Synapse Spark Pool Notebook.
-### Utilizing Azure Analytics with your data warehouse
+### Utilize Azure Analytics with your data warehouse
> [!TIP] > Train, test, evaluate, and execute machine learning models at scale on Azure Synapse Spark Pool Notebook using data in your Azure Synapse.
-Combine machine learning models built using the tools with Azure Synapse by.
+Combine machine learning models built using the tools with Azure Synapse by:
- Using machine learning models in batch mode or in real time to produce new insights, and adding them to what you already know in Azure Synapse.
In terms of machine learning model development, data scientists can use RStudio,
In addition, you can ingest big data&mdash;such as social network data or review website data&mdash;into Azure Data Lake, then prepare and analyze it at scale on Azure Synapse Spark Pool Notebook, using natural language processing to score sentiment about your products or your brand. Add these scores to your data warehouse to understand the impact of&mdash;for example&mdash;negative sentiment on product sales, and to leverage big data analytics to add to what you already know in your data warehouse.
-## Integrating live streaming data into Azure Synapse Analytics
+## Integrate live streaming data into Azure Synapse Analytics
When analyzing data in a modern data warehouse, you must be able to analyze streaming data in real time and join it with historical data in your data warehouse. An example of this would be combining IoT data with product or asset data.
To do this, ingest streaming data via Microsoft Event Hubs or other technologies
:::image type="content" source="../media/7-beyond-data-warehouse-migration/azure-datalake-streaming-data.png" border="true" alt-text="Screenshot of Azure Synapse Analytics with streaming data in an Azure Data Lake.":::
-## Creating a logical data warehouse using PolyBase
+## Create a logical data warehouse using PolyBase
> [!TIP] > PolyBase simplifies access to multiple underlying analytical data stores on Azure to simplify access for business users.
These additional analytical platforms have emerged because of the explosion of n
- Other external data, such as open government data and weather data.
-This data is over and above the structured transaction data and master data sources that typically feed data warehouses. These new data sources include semi-structured data (like JSON, XLM, or Avro) or unstructured data (like text, voice, image, or video) which is more complex to process and analyze. This data could be very high volume, high velocity, or both.
+This data is over and above the structured transaction data and master data sources that typically feed data warehouses. These new data sources include semi-structured data (like JSON, XML, or Avro) or unstructured data (like text, voice, image, or video) which is more complex to process and analyze. This data could be very high volume, high velocity, or both.
As a result, the need for new kinds of more complex analysis has emerged, such as natural language processing, graph analysis, deep learning, streaming analytics, or complex analysis of large volumes of structured data. All of this is typically not happening in a data warehouse, so it's not surprising to see different analytical platforms for different types of analytical workloads, as shown in this diagram.
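As an illustrative sketch of this logical data warehouse idea (all object names, the storage location, and the folder path are hypothetical), PolyBase external tables in Azure Synapse can expose files in Azure Data Lake alongside warehouse tables:

```sql
-- Hypothetical names; assumes access to the storage account has already been
-- configured (for example, through a database scoped credential or a managed
-- identity).
CREATE EXTERNAL DATA SOURCE DataLake
WITH
(
    TYPE     = HADOOP,
    LOCATION = 'abfss://analytics@mydatalake.dfs.core.windows.net'
);

CREATE EXTERNAL FILE FORMAT ParquetFormat
WITH (FORMAT_TYPE = PARQUET);

CREATE EXTERNAL TABLE lake.WebClickstream
(
    SessionId BIGINT,
    PageUrl   NVARCHAR(400),
    EventTime DATETIME2
)
WITH
(
    LOCATION    = '/clickstream/2022/',
    DATA_SOURCE = DataLake,
    FILE_FORMAT = ParquetFormat
);

-- The external table can now be queried, and joined to warehouse tables,
-- as if the data lake files were local relational data.
SELECT TOP 10 * FROM lake.WebClickstream;
```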
synapse-analytics Business Intelligence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/partner/business-intelligence.md
Last updated 07/09/2021 -+
synapse-analytics Data Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/partner/data-integration.md
Last updated 03/27/2019 -+
synapse-analytics Data Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/partner/data-management.md
Last updated 04/17/2018 -+
synapse-analytics Cheat Sheet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/cheat-sheet.md
Last updated 11/04/2019 -+ # Cheat sheet for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytic
synapse-analytics Column Level Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/column-level-security.md
Last updated 04/19/2020 -+ tags: azure-synapse
synapse-analytics Create Data Warehouse Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/create-data-warehouse-portal.md
description: Create and query a dedicated SQL pool (formerly SQL DW) using the A
-+ Last updated 05/28/2019
synapse-analytics Create Data Warehouse Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/create-data-warehouse-powershell.md
description: Quickly create a dedicated SQL pool (formerly SQL DW) with a server
-+ Last updated 4/11/2019
synapse-analytics Design Elt Data Loading https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/design-elt-data-loading.md
Last updated 11/20/2020 -+
synapse-analytics Disable Geo Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/disable-geo-backup.md
Title: Disable geo-backups description: How-to guide for disabling geo-backups for a dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics -+ Last updated 01/06/2021 -+
synapse-analytics Fivetran Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/fivetran-quickstart.md
Last updated 10/12/2018 -+
synapse-analytics Load Data From Azure Blob Storage Using Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/load-data-from-azure-blob-storage-using-copy.md
Last updated 11/23/2020 -+
synapse-analytics Load Data Wideworldimportersdw https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/load-data-wideworldimportersdw.md
Last updated 01/12/2021 -+
synapse-analytics Manage Compute With Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/manage-compute-with-azure-functions.md
Last updated 04/27/2018 -+
synapse-analytics Massively Parallel Processing Mpp Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/massively-parallel-processing-mpp-architecture.md
Last updated 11/04/2019 -+ # Dedicated SQL pool (formerly SQL DW) architecture in Azure Synapse Analytics
synapse-analytics Pause And Resume Compute Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/pause-and-resume-compute-portal.md
description: Use the Azure portal to pause compute for dedicated SQL pool to sav
-+ Last updated 11/23/2020
synapse-analytics Pause And Resume Compute Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/pause-and-resume-compute-powershell.md
description: You can use Azure PowerShell to pause and resume dedicated SQL pool
-+ Last updated 03/20/2019
synapse-analytics Performance Tuning Materialized Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/performance-tuning-materialized-views.md
Title: Performance tune with materialized views description: Learn about recommendations and considerations you should know as you use materialized views to improve your query performance. -- Last updated 08/17/2021+ -+ # Performance tune with materialized views
synapse-analytics Performance Tuning Ordered Cci https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/performance-tuning-ordered-cci.md
Last updated 04/13/2021 -+
synapse-analytics Quickstart Scale Compute Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-scale-compute-powershell.md
description: You can scale compute for dedicated SQL pool (formerly SQL DW) usin
-+ Last updated 03/09/2022
synapse-analytics Quickstart Scale Compute Tsql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-scale-compute-tsql.md
Last updated 03/09/2022-+
synapse-analytics Single Region Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/single-region-residency.md
Title: Single region residency description: How-to guide for configuring single region residency for a dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics -+ Last updated 05/15/2021 -+
synapse-analytics Sql Data Warehouse Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-authentication.md
Last updated 04/02/2019 -+ tag: azure-synapse
synapse-analytics Sql Data Warehouse Concept Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-concept-recommendations.md
Last updated 06/26/2020 -+
synapse-analytics Sql Data Warehouse Connect Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-connect-overview.md
Last updated 04/17/2018 -+
synapse-analytics Sql Data Warehouse Connection Strings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-connection-strings.md
Last updated 04/17/2018 -+
synapse-analytics Sql Data Warehouse Continuous Integration And Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-continuous-integration-and-deployment.md
Last updated 02/04/2020 -+ # Continuous integration and deployment for dedicated SQL pool in Azure Synapse Analytics
synapse-analytics Sql Data Warehouse Develop Best Practices Transactions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-develop-best-practices-transactions.md
Title: Optimizing transactions
-description: Learn how to optimize the performance of your transactional code in dedicated SQL pool while minimizing risk for long rollbacks.
--
+description: Learn how to optimize the performance of your transactional code in an Azure Synapse Analytics dedicated SQL pool while minimizing risk for long rollbacks.
Last updated 04/19/2018--+++
synapse-analytics Sql Data Warehouse Develop Ctas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-develop-ctas.md
Last updated 03/26/2019 -+
synapse-analytics Sql Data Warehouse Develop Dynamic Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-develop-dynamic-sql.md
Last updated 04/17/2018 -+
synapse-analytics Sql Data Warehouse Develop Group By Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-develop-group-by-options.md
Last updated 04/17/2018 -+
synapse-analytics Sql Data Warehouse Develop Label https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-develop-label.md
Last updated 04/17/2018 -+
synapse-analytics Sql Data Warehouse Develop Loops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-develop-loops.md
Last updated 04/17/2018 -+
synapse-analytics Sql Data Warehouse Develop Stored Procedures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-develop-stored-procedures.md
Last updated 04/02/2019 -+
synapse-analytics Sql Data Warehouse Develop Transactions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-develop-transactions.md
Title: Use transactions in Azure Synapse Analytics SQL pool description: This article includes tips for implementing transactions and developing solutions in Synapse SQL pool.-- Last updated 03/22/2019-++ -+ # Use transactions in a SQL pool in Azure Synapse
synapse-analytics Sql Data Warehouse Develop User Defined Schemas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-develop-user-defined-schemas.md
Last updated 04/17/2018 -+
synapse-analytics Sql Data Warehouse Develop Variable Assignment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-develop-variable-assignment.md
Last updated 04/17/2018 -+
synapse-analytics Sql Data Warehouse Get Started Analyze With Azure Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-get-started-analyze-with-azure-machine-learning.md
Last updated 07/15/2020 -+ tag: azure-Synapse
synapse-analytics Sql Data Warehouse Get Started Connect Sqlcmd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-get-started-connect-sqlcmd.md
Last updated 04/17/2018 -+
synapse-analytics Sql Data Warehouse Get Started Create Support Ticket https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-get-started-create-support-ticket.md
Last updated 03/10/2020 -+
synapse-analytics Sql Data Warehouse How To Convert Resource Classes Workload Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-how-to-convert-resource-classes-workload-groups.md
Last updated 08/13/2020 -+
synapse-analytics Sql Data Warehouse How To Monitor Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-how-to-monitor-cache.md
Last updated 11/20/2020 -+
synapse-analytics Sql Data Warehouse Install Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-install-visual-studio.md
Last updated 05/11/2020 -+ # Getting started with Visual Studio 2019
synapse-analytics Sql Data Warehouse Integrate Azure Stream Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-integrate-azure-stream-analytics.md
Last updated 9/25/2020 -+
synapse-analytics Sql Data Warehouse Load From Azure Blob Storage With Polybase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-load-from-azure-blob-storage-with-polybase.md
Last updated 11/20/2020 -+
synapse-analytics Sql Data Warehouse Load From Azure Data Lake Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-load-from-azure-data-lake-store.md
Last updated 11/20/2020 -+
synapse-analytics Sql Data Warehouse Manage Compute Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-manage-compute-overview.md
Last updated 11/12/2019 -+
synapse-analytics Sql Data Warehouse Manage Compute Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-manage-compute-rest-api.md
Last updated 03/09/2022-+
synapse-analytics Sql Data Warehouse Manage Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-manage-monitor.md
Last updated 11/15/2021 -+
synapse-analytics Sql Data Warehouse Overview Develop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-develop.md
Last updated 08/29/2018 -+ # Design decisions and coding techniques for a dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics
synapse-analytics Sql Data Warehouse Overview Integrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-integrate.md
Last updated 04/17/2018 -+
synapse-analytics Sql Data Warehouse Overview Manage Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-manage-security.md
Last updated 04/17/2018 -+ tags: azure-synapse
synapse-analytics Sql Data Warehouse Overview Manageability Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-manageability-monitoring.md
Last updated 08/27/2018 -+
synapse-analytics Sql Data Warehouse Query Ssms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-query-ssms.md
Title: Connect to dedicated SQL pool (formerly SQL DW) with SSMS description: Use SQL Server Management Studio (SSMS) to connect to and query a dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics. -- Last updated 04/17/2018--+++
synapse-analytics Sql Data Warehouse Query Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-query-visual-studio.md
Last updated 08/15/2019 -+
synapse-analytics Sql Data Warehouse Reference Collation Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-reference-collation-types.md
Title: Data warehouse collation types description: Collation types supported for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics. -+
synapse-analytics Sql Data Warehouse Reference Powershell Cmdlets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-reference-powershell-cmdlets.md
Last updated 04/17/2018 -+
synapse-analytics Sql Data Warehouse Reference Tsql Language Elements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-reference-tsql-language-elements.md
Last updated 06/13/2018 -+
synapse-analytics Sql Data Warehouse Reference Tsql Statements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-reference-tsql-statements.md
Last updated 05/01/2019 -+
synapse-analytics Sql Data Warehouse Reference Tsql System Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-reference-tsql-system-views.md
Last updated 01/06/2020 -+
synapse-analytics Sql Data Warehouse Restore Deleted Dw https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-restore-deleted-dw.md
Title: Restore a deleted dedicated SQL pool (formerly SQL DW) description: How to guide for restoring a deleted dedicated SQL pool in Azure Synapse Analytics.-+ Last updated 08/29/2018 -+
synapse-analytics Sql Data Warehouse Restore From Geo Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-restore-from-geo-backup.md
Title: Restore a dedicated SQL pool from a geo-backup description: How-to guide for geo-restoring a dedicated SQL pool in Azure Synapse Analytics-+ Last updated 11/13/2020 -+
synapse-analytics Sql Data Warehouse Restore Points https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-restore-points.md
Last updated 07/03/2019 -+
synapse-analytics Sql Data Warehouse Service Capacity Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-service-capacity-limits.md
Last updated 2/19/2020 -+
synapse-analytics Sql Data Warehouse Source Control Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-source-control-integration.md
Last updated 08/23/2019 -+ # Source Control Integration for dedicated SQL pool in Azure Synapse Analytics
synapse-analytics Sql Data Warehouse Table Constraints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-table-constraints.md
Last updated 09/05/2019 -+
synapse-analytics Sql Data Warehouse Tables Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-identity.md
Last updated 07/20/2020 -+
synapse-analytics Sql Data Warehouse Tables Statistics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-statistics.md
Title: Create and update statistics on tables description: Recommendations and examples for creating and updating query-optimization statistics on tables in dedicated SQL pool.-- Last updated 05/09/2018--+++
actualRowCounts.logical_table_name,
statsRowCounts.stats_row_count, actualRowCounts.actual_row_count, row_count_difference = CASE
- WHEN actualRowCounts.actual_row_count >= statsRowCounts.stats_row_count THEN actualRowCounts.actual_row_count - statsRowCounts.stats_row_count
- ELSE statsRowCounts.stats_row_count - actualRowCounts.actual_row_count
+ WHEN actualRowCounts.actual_row_count >= statsRowCounts.stats_row_count THEN actualRowCounts.actual_row_count - statsRowCounts.stats_row_count
+ ELSE statsRowCounts.stats_row_count - actualRowCounts.actual_row_count
END, percent_deviation_from_actual = CASE
- WHEN actualRowCounts.actual_row_count = 0 THEN statsRowCounts.stats_row_count
- WHEN statsRowCounts.stats_row_count = 0 THEN actualRowCounts.actual_row_count
- WHEN actualRowCounts.actual_row_count >= statsRowCounts.stats_row_count THEN CONVERT(NUMERIC(18, 0), CONVERT(NUMERIC(18, 2), (actualRowCounts.actual_row_count - statsRowCounts.stats_row_count)) / CONVERT(NUMERIC(18, 2), actualRowCounts.actual_row_count) * 100)
- ELSE CONVERT(NUMERIC(18, 0), CONVERT(NUMERIC(18, 2), (statsRowCounts.stats_row_count - actualRowCounts.actual_row_count)) / CONVERT(NUMERIC(18, 2), actualRowCounts.actual_row_count) * 100)
+ WHEN actualRowCounts.actual_row_count = 0 THEN statsRowCounts.stats_row_count
+ WHEN statsRowCounts.stats_row_count = 0 THEN actualRowCounts.actual_row_count
+ WHEN actualRowCounts.actual_row_count >= statsRowCounts.stats_row_count THEN CONVERT(NUMERIC(18, 0), CONVERT(NUMERIC(18, 2), (actualRowCounts.actual_row_count - statsRowCounts.stats_row_count)) / CONVERT(NUMERIC(18, 2), actualRowCounts.actual_row_count) * 100)
+ ELSE CONVERT(NUMERIC(18, 0), CONVERT(NUMERIC(18, 2), (statsRowCounts.stats_row_count - actualRowCounts.actual_row_count)) / CONVERT(NUMERIC(18, 2), actualRowCounts.actual_row_count) * 100)
END from (
- select distinct object_id from sys.stats where stats_id > 1
+ select distinct object_id from sys.stats where stats_id > 1
) objIdsWithStats left join (
- select object_id, sum(rows) as stats_row_count from sys.partitions group by object_id
+ select object_id, sum(rows) as stats_row_count from sys.partitions group by object_id
) statsRowCounts on objIdsWithStats.object_id = statsRowCounts.object_id left join (
- SELECT sm.name [schema] ,
- tb.name logical_table_name ,
- tb.object_id object_id ,
- SUM(rg.row_count) actual_row_count
- FROM sys.schemas sm
- INNER JOIN sys.tables tb ON sm.schema_id = tb.schema_id
- INNER JOIN sys.pdw_table_mappings mp ON tb.object_id = mp.object_id
- INNER JOIN sys.pdw_nodes_tables nt ON nt.name = mp.physical_name
- INNER JOIN sys.dm_pdw_nodes_db_partition_stats rg ON rg.object_id = nt.object_id
- AND rg.pdw_node_id = nt.pdw_node_id
- AND rg.distribution_id = nt.distribution_id
- WHERE rg.index_id = 1
- GROUP BY sm.name, tb.name, tb.object_id
+ SELECT sm.name [schema] ,
+ tb.name logical_table_name ,
+ tb.object_id object_id ,
+ SUM(rg.row_count) actual_row_count
+ FROM sys.schemas sm
+ INNER JOIN sys.tables tb ON sm.schema_id = tb.schema_id
+ INNER JOIN sys.pdw_table_mappings mp ON tb.object_id = mp.object_id
+ INNER JOIN sys.pdw_nodes_tables nt ON nt.name = mp.physical_name
+ INNER JOIN sys.dm_pdw_nodes_db_partition_stats rg ON rg.object_id = nt.object_id
+ AND rg.pdw_node_id = nt.pdw_node_id
+ AND rg.distribution_id = nt.distribution_id
+ WHERE rg.index_id = 1
+ GROUP BY sm.name, tb.name, tb.object_id
) actualRowCounts on objIdsWithStats.object_id = actualRowCounts.object_id
synapse-analytics Sql Data Warehouse Troubleshoot Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-troubleshoot-connectivity.md
Last updated 03/27/2019 -+
synapse-analytics Sql Data Warehouse Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-videos.md
Last updated 02/15/2019 -+
synapse-analytics Striim Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/striim-quickstart.md
Last updated 10/12/2018 -+
synapse-analytics What Is A Data Warehouse Unit Dwu Cdwu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/what-is-a-data-warehouse-unit-dwu-cdwu.md
Last updated 11/22/2019 -+
synapse-analytics Data Loading Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/data-loading-best-practices.md
Last updated 08/26/2021 -+
synapse-analytics Develop Materialized View Performance Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-materialized-view-performance-tuning.md
Last updated 04/15/2020 -+ # Performance tuning with materialized views using dedicated SQL pool in Azure Synapse Analytics
synapse-analytics Develop Stored Procedures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-stored-procedures.md
Last updated 11/03/2020 -+ # Stored procedures using Synapse SQL in Azure Synapse Analytics
synapse-analytics Develop Transaction Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-transaction-best-practices.md
Last updated 04/15/2020 -+ # Optimize transactions with dedicated SQL pool in Azure Synapse Analytics
synapse-analytics Develop Transactions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-transactions.md
Title: Use transactions
+ Title: Use transactions with dedicated SQL pool in Azure Synapse Analytics
description: Tips for implementing transactions with dedicated SQL pool in Azure Synapse Analytics for developing solutions.-- Last updated 04/15/2020--+++ # Use transactions with dedicated SQL pool in Azure Synapse Analytics
synapse-analytics Load Data Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/load-data-overview.md
Last updated 04/15/2020 -+ # Design a PolyBase data loading strategy for dedicated SQL pool in Azure Synapse Analytics
synapse-analytics Overview Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/overview-architecture.md
Last updated 04/15/2020 -+ # Azure Synapse SQL architecture
virtual-desktop Configure Vm Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-vm-gpu.md
Title: Configure GPU for Azure Virtual Desktop - Azure description: How to enable GPU-accelerated rendering and encoding in Azure Virtual Desktop.-+ Last updated 05/06/2019-+ # Configure graphics processing unit (GPU) acceleration for Azure Virtual Desktop
virtual-desktop Network Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/network-connectivity.md
Title: Understanding Azure Virtual Desktop network connectivity description: Learn about Azure Virtual Desktop network connectivity-+ Last updated 11/16/2020-+ # Understanding Azure Virtual Desktop network connectivity
virtual-desktop Rdp Bandwidth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/rdp-bandwidth.md
Title: Remote Desktop Protocol bandwidth requirements Azure Virtual Desktop - Azure description: Understanding RDP bandwidth requirements for Azure Virtual Desktop.-+ Last updated 11/16/2020-+ # Remote Desktop Protocol (RDP) bandwidth requirements
virtual-desktop Rdp Quality Of Service Qos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/rdp-quality-of-service-qos.md
Title: Implement Quality of Service (QoS) for Azure Virtual Desktop description: How to set up QoS for Azure Virtual Desktop.-+ Last updated 10/18/2021-+ # Implement Quality of Service (QoS) for Azure Virtual Desktop
virtual-desktop Screen Capture Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/screen-capture-protection.md
Title: Azure Virtual Desktop screen capture protection description: How to set up screen capture protection for Azure Virtual Desktop.-+ Last updated 08/30/2021-+
virtual-desktop Shortpath Public https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/shortpath-public.md
Title: Azure Virtual Desktop RDP Shortpath for public networks (preview) - Azure description: How to set up RDP Shortpath for public networks for Azure Virtual Desktop (preview).-+ Last updated 04/13/2022-+ # Azure Virtual Desktop RDP Shortpath for public networks (preview)
virtual-desktop Shortpath https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/shortpath.md
Title: Azure Virtual Desktop RDP Shortpath for managed networks description: How to set up RDP Shortpath for managed networks for Azure Virtual Desktop.-+ Last updated 03/08/2022-+ # Azure Virtual Desktop RDP Shortpath for managed networks
virtual-desktop Configure Vm Gpu 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/configure-vm-gpu-2019.md
Title: Configure GPU for Azure Virtual Desktop (classic) - Azure description: How to enable GPU-accelerated rendering and encoding in Azure Virtual Desktop (classic).-+ Last updated 03/30/2020-+ # Configure graphics processing unit (GPU) acceleration for Azure Virtual Desktop (classic)
virtual-machines Eav4 Easv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/eav4-easv4-series.md
Eav4-series sizes are based on the 2.35Ghz AMD EPYC<sup>TM</sup> 7452 processor
[VM Generation Support](generation-2.md): Generations 1 and 2<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br> [Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
-[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br>
<br> Easv4-series sizes are based on the 2.35 GHz AMD EPYC<sup>TM</sup> 7452 processor that can achieve a boosted maximum frequency of 3.35 GHz and use premium SSD. The Easv4-series sizes are ideal for memory-intensive enterprise applications.
virtual-machines Create Upload Generic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/create-upload-generic.md
The [Azure Linux Agent](../extensions/agent-linux.md) `waagent` provisions a Lin
If you want to mount, format, and create swap, you can either:
1. Pass this in as a cloud-init config every time you create a VM through `customdata`. This is the recommended method (a small encoding sketch follows the code block below).
2. Use a cloud-init directive baked into the image that will do this every time the VM is created.
- ```
+
+ ```
echo 'DefaultEnvironment="CLOUD_CFG=/etc/cloud/cloud.cfg.d/00-azure-swap.cfg"' >> /etc/systemd/system.conf
cat > /etc/cloud/cloud.cfg.d/00-azure-swap.cfg << EOF
#cloud-config
The [Azure Linux Agent](../extensions/agent-linux.md) `waagent` provisions a Lin
- ["ephemeral0.1", "/mnt"]
- ["ephemeral0.2", "none", "swap", "sw,nofail,x-systemd.requires=cloud-init.service,x-systemd.device-timeout=2", "0", "0"]
EOF
- ```
+
+ ```
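For the `customdata` option in step 1, the cloud-config is usually supplied to the platform Base64-encoded (for example, the ARM `osProfile.customData` property expects a Base64 string; some tooling does the encoding for you). A minimal encoding sketch; the local filename is an illustrative assumption:

```python
# Sketch: Base64-encode a local cloud-config file so it can be supplied as the
# VM's custom data (for example, via the ARM osProfile.customData property).
import base64
from pathlib import Path

def encode_custom_data(path: str) -> str:
    return base64.b64encode(Path(path).read_bytes()).decode("ascii")

if __name__ == "__main__":
    # "swap-cloud-config.yaml" is a hypothetical filename for the cloud-config above
    print(encode_custom_data("swap-cloud-config.yaml")[:80] + "...")
```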
1. Deprovision.

> [!CAUTION]
virtual-machines Maintenance And Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-and-updates.md
Most nonzero-impact maintenance pauses the VM for less than 10 seconds. In certa
Memory-preserving maintenance works for more than 90 percent of Azure VMs. It doesn't work for G, L, M, N, and H series. Azure increasingly uses live-migration technologies and improves memory-preserving maintenance mechanisms to reduce the pause durations.
-These maintenance operations that don't require a reboot are applied one fault domain at a time. They stop if they receive any warning health signals from platform monitoring tools.
+These maintenance operations that don't require a reboot are applied one fault domain at a time. They stop if they receive any warning health signals from platform monitoring tools. Maintenance operations that don't require a reboot may occur simultaneously in paired regions or Availability Zones. For a given change, the deployments are mostly sequenced across Availability Zones and across region pairs, but there can be overlap at the tail.
These types of updates can affect some applications. When the VM is live-migrated to a different host, some sensitive workloads might show a slight performance degradation in the few minutes leading up to the VM pause. To prepare for VM maintenance and reduce impact during Azure maintenance, try [using Scheduled Events for Windows](./windows/scheduled-events.md) or [Linux](./linux/scheduled-events.md) for such applications.
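Applications that need advance notice of maintenance can poll the Instance Metadata Service Scheduled Events endpoint described in the linked articles. A minimal sketch that runs inside the VM; it assumes the standard IMDS address and only prints a couple of the documented fields:

```python
# Sketch: poll the Instance Metadata Service (IMDS) Scheduled Events endpoint
# from inside an Azure VM to list pending maintenance events.
import requests

IMDS_SCHEDULED_EVENTS = "http://169.254.169.254/metadata/scheduledevents"

def get_scheduled_events(api_version: str = "2020-07-01") -> dict:
    response = requests.get(
        IMDS_SCHEDULED_EVENTS,
        params={"api-version": api_version},
        headers={"Metadata": "true"},  # IMDS requires this header
        timeout=5,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    for event in get_scheduled_events().get("Events", []):
        print(event.get("EventType"), "not before", event.get("NotBefore"))
```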
Availability zones are unique physical locations within an Azure region. Each zo
An availability zone is a combination of a fault domain and an update domain. If you create three or more VMs across three zones in an Azure region, your VMs are effectively distributed across three fault domains and three update domains. The Azure platform recognizes this distribution across update domains to make sure that VMs in different zones are not updated at the same time.
-Each infrastructure update rolls out zone by zone, within a single region. But, you can have deployment going on in Zone 1, and different deployment going in Zone 2, at the same time. Deployments are not all serialized. But, a single deployment only rolls out one zone at a time to reduce risk.
+Each infrastructure update rolls out zone by zone within a single region. But you can have a deployment going on in Zone 1 and a different deployment going on in Zone 2 at the same time. Deployments are not all serialized, but a single deployment that requires a reboot only rolls out one zone at a time to reduce risk. In general, updates that require a reboot are avoided when possible, and Azure attempts to use Live Migration or to give customers control.
#### Virtual machine scale sets
virtual-network Tutorial Migrate Ilip Nat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/tutorial-migrate-ilip-nat.md
Previously updated : 2/07/2022 Last updated : 5/25/2022
In this section, you'll create a NAT gateway with the IP address you previousl
2. In **NAT gateways**, select **+ Create**.
-3. In **Create network address translation (NAT) gateway**, enter or select the following information.
+3. In **Create network address translation (NAT) gateway**, enter or select the following information in the **Basics** tab.
| Setting | Value |
| - | -- |
virtual-network Tutorial Migrate Outbound Nat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/tutorial-migrate-outbound-nat.md
Previously updated : 1/11/2022 Last updated : 5/25/2022
In this tutorial, you learn how to:
* The load balancer name used in the examples is **myLoadBalancer**.

> [!NOTE]
-> Virtual Network NAT provides outbound connectivity for standard internal load balancers. To configure create a NAT gateway resource and associate it to your subnet. For more information on integrating a NAT gateway with your internal load balancers, see [Tutorial: Integrate NAT gateway with an internal load balancer - Azure portal - Virtual Network NAT](tutorial-nat-gateway-load-balancer-internal-portal.md).
+> Virtual Network NAT provides outbound connectivity for standard internal load balancers. For more information on integrating a NAT gateway with your internal load balancers, see [Tutorial: Integrate a NAT gateway with an internal load balancer using Azure portal](tutorial-nat-gateway-load-balancer-internal-portal.md).
## Migrate default outbound access
In this section, you'll learn how to change your outbound connectivity method
3. In **NAT gateways**, select **+ Create**.
-4. In **Create network address translation (NAT) gateway**, enter or select the following information.
+4. In **Create network address translation (NAT) gateway**, enter or select the following information in the **Basics** tab.
| Setting | Value |
| - | -- |
In this section, you'll create a NAT gateway with the IP address previously us
2. In **NAT gateways**, select **+ Create**.
-3. In **Create network address translation (NAT) gateway**, enter or select the following information.
+3. In **Create network address translation (NAT) gateway**, enter or select the following information in the **Basics** tab.
| Setting | Value |
| - | -- |
virtual-network Tutorial Nat Gateway Load Balancer Internal Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/tutorial-nat-gateway-load-balancer-internal-portal.md
Previously updated : 08/04/2021 Last updated : 05/24/2022
In this section, you'll create a virtual network and subnet.
8. Select **Save**.
-9. Select the **Security** tab.
+9. Select the **Security** tab or select the **Next: Security** button at the bottom of the page.
10. Under **BastionHost**, select **Enable**. Enter this information:
During the creation of the load balancer, you'll configure:
| **Instance details** |  |
| Name | Enter **myLoadBalancer** |
| Region | Select **(US) East US**. |
- | Type | Select **Internal**. |
| SKU | Leave the default **Standard**. |
+ | Type | Select **Internal**. |
4. Select **Next: Frontend IP configuration** at the bottom of the page.
-5. In **Frontend IP configuration**, select **+ Add a frontend IP**.
+5. In **Frontend IP configuration**, select **+ Add a frontend IP configuration**.
6. Enter **LoadBalancerFrontend** in **Name**.
-7. Select **myBackendSubnet** in **Subnet**.
+7. Select **myVNet** in **Virtual network**.
+
+8. Select **myBackendSubnet** in **Subnet**.
-8. Select **Dynamic** for **Assignment**.
+9. Select **Dynamic** for **Assignment**.
-9. Select **Zone-redundant** in **Availability zone**.
+10. Select **Zone-redundant** in **Availability zone**.
> [!NOTE]
> In regions with [Availability Zones](../../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones), you have the option to select no-zone (default option), a specific zone, or zone-redundant. The choice will depend on your specific domain failure requirements. In regions without Availability Zones, this field won't appear. </br> For more information on availability zones, see [Availability zones overview](../../availability-zones/az-overview.md).
-10. Select **Add**.
+11. Select **Add**.
-11. Select **Next: Backend pools** at the bottom of the page.
+12. Select **Next: Backend pools** at the bottom of the page.
-12. In the **Backend pools** tab, select **+ Add a backend pool**.
+13. In the **Backend pools** tab, select **+ Add a backend pool**.
-13. Enter **myBackendPool** for **Name** in **Add backend pool**.
+14. Enter **myBackendPool** for **Name** in **Add backend pool**.
-14. Select **NIC** or **IP Address** for **Backend Pool Configuration**.
+15. Select **NIC** or **IP Address** for **Backend Pool Configuration**.
-15. Select **IPv4** or **IPv6** for **IP version**.
+16. Select **IPv4** or **IPv6** for **IP version**.
-16. Select **Add**.
+17. Select **Add**.
-17. Select the **Next: Inbound rules** button at the bottom of the page.
+18. Select the **Next: Inbound rules** button at the bottom of the page.
-18. In **Load balancing rule** in the **Inbound rules** tab, select **+ Add a load balancing rule**.
+19. In **Load balancing rule** in the **Inbound rules** tab, select **+ Add a load balancing rule**.
-19. In **Add load balancing rule**, enter or select the following information:
+20. In **Add load balancing rule**, enter or select the following information:
| Setting | Value |
| - | -- |
| Name | Enter **myHTTPRule** |
| IP Version | Select **IPv4** or **IPv6** depending on your requirements. |
| Frontend IP address | Select **LoadBalancerFrontend**. |
+ | Backend pool | Select **myBackendPool**. |
| Protocol | Select **TCP**. |
| Port | Enter **80**. |
| Backend port | Enter **80**. |
- | Backend pool | Select **myBackendPool**. |
| Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe**. </br> Select **HTTP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. |
| Session persistence | Select **None**. |
| Idle timeout (minutes) | Enter or select **15**. |
| TCP reset | Select **Enabled**. |
| Floating IP | Select **Disabled**. |
-20. Select **Add**.
+21. Select **Add**.
-21. Select the blue **Review + create** button at the bottom of the page.
+22. Select the blue **Review + create** button at the bottom of the page.
-22. Select **Create**.
+23. Select **Create**.
## Create virtual machines
In this section, you'll create a NAT gateway and assign it to the subnet in the
In this section, we'll test the NAT gateway. We'll first discover the public IP of the NAT gateway. We'll then connect to the test virtual machine and verify the outbound connection through the NAT gateway.
-1. Find the public IP address for the NAT gateway on the **Overview** screen. Select **All services** in the left-hand menu, select **All resources**, and then select **myPublicIP**.
+1. Select **Resource groups** in the left-hand menu, select the **TutorIntLBNAT-rg** resource group, and then from the resources list, select **myNATgatewayIP**.
2. Make note of the public IP address: :::image type="content" source="./media/tutorial-nat-gateway-load-balancer-internal-portal/find-public-ip.png" alt-text="Screenshot of discover public IP address of NAT gateway." border="true":::
-3. Select **All services** in the left-hand menu, select **All resources**, and then from the resources list, select **myVM1** that is located in the **TutorIntLBNAT-rg** resource group.
+3. Select **Resource groups** in the left-hand menu, select the **TutorIntLBNAT-rg** resource group, and then from the resources list, select **myVM1**.
4. On the **Overview** page, select **Connect**, then **Bastion**.
-5. Select the blue **Use Bastion** button.
-
-6. Enter the username and password entered during VM creation.
+5. Enter the username and password entered during VM creation.
-7. Open **Internet Explorer** on **myVM1**.
+6. Open **Internet Explorer** on **myVM1**.
-8. Enter **https://whatsmyip.com** in the address bar.
+7. Enter **https://whatsmyip.com** in the address bar.
-9. Verify the IP address displayed matches the NAT gateway address you noted in the previous step:
+8. Verify the IP address displayed matches the NAT gateway address you noted in the previous step:
:::image type="content" source="./media/tutorial-nat-gateway-load-balancer-internal-portal/my-ip.png" alt-text="Screenshot of Internet Explorer showing external outbound IP." border="true":::
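If you prefer to script the check in the last few steps rather than use a browser, a small program run from **myVM1** can ask a public IP-echo service which source address it sees and compare it with the NAT gateway address you noted. The echo service URL and the sample address below are illustrative assumptions:

```python
# Sketch: from inside the VM, ask a public IP-echo service which source address
# it sees, then compare it with the NAT gateway's public IP noted earlier.
import requests

def current_outbound_ip(echo_url: str = "https://api.ipify.org") -> str:
    # The service returns the caller's public IP address as plain text.
    return requests.get(echo_url, timeout=10).text.strip()

if __name__ == "__main__":
    expected_nat_ip = "203.0.113.10"  # replace with the myNATgatewayIP address
    observed = current_outbound_ip()
    print("observed outbound IP:", observed)
    print("matches NAT gateway IP:", observed == expected_nat_ip)
```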
virtual-network Tutorial Nat Gateway Load Balancer Public Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/tutorial-nat-gateway-load-balancer-public-portal.md
Previously updated : 03/19/2021 Last updated : 05/24/2022
An Azure account with an active subscription. [Create an account for free](https
In this section, you'll create a virtual network and subnet.
-1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual Networks** in the search results.
+1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
2. In **Virtual networks**, select **+ Create**.
In this section, you'll create a virtual network and subnet.
8. Select **Save**.
-9. Select the **Security** tab.
+9. Select the **Security** tab or select the **Next: Security** button at the bottom of the page.
10. Under **BastionHost**, select **Enable**. Enter this information:
During the creation of the load balancer, you'll configure:
| **Instance details** |  |
| Name | Enter **myLoadBalancer** |
| Region | Select **(US) East US**. |
- | Type | Select **Public**. |
| SKU | Leave the default **Standard**. |
+ | Type | Select **Public**. |
| Tier | Leave the default **Regional**. |

4. Select **Next: Frontend IP configuration** at the bottom of the page.
-5. In **Frontend IP configuration**, select **+ Add a frontend IP**.
+5. In **Frontend IP configuration**, select **+ Add a frontend IP configuration**.
6. Enter **LoadBalancerFrontend** in **Name**.
During the creation of the load balancer, you'll configure:
| Name | Enter **myHTTPRule** |
| IP Version | Select **IPv4** or **IPv6** depending on your requirements. |
| Frontend IP address | Select **LoadBalancerFrontend**. |
+ | Backend pool | Select **myBackendPool**. |
| Protocol | Select **TCP**. |
| Port | Enter **80**. |
| Backend port | Enter **80**. |
- | Backend pool | Select **myBackendPool**. |
| Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe**. </br> Select **HTTP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. |
| Session persistence | Select **None**. |
| Idle timeout (minutes) | Enter or select **15**. |
In this section, you'll create a NAT gateway and assign it to the subnet in the
In this section, we'll test the NAT gateway. We'll first discover the public IP of the NAT gateway. We'll then connect to the test virtual machine and verify the outbound connection through the NAT gateway.
-1. Find the public IP address for the NAT gateway on the **Overview** screen. Select **All services** in the left-hand menu, select **All resources**, and then select **myPublicIP**.
+1. Select **Resource groups** in the left-hand menu, select the **TutorPubLBNAT-rg** resource group, and then from the resources list, select **myNATgatewayIP**.
2. Make note of the public IP address: :::image type="content" source="./media/tutorial-nat-gateway-load-balancer-public-portal/find-public-ip.png" alt-text="Screenshot discover public IP address of NAT gateway." border="true":::
-3. Select **All services** in the left-hand menu, select **All resources**, and then from the resources list, select **myVM1** that is located in the **TutorPubLBNAT-rg** resource group.
+3. Select **Resource groups** in the left-hand menu, select the **TutorPubLBNAT-rg** resource group, and then from the resources list, select **myVM1**.
4. On the **Overview** page, select **Connect**, then **Bastion**.
-5. Select the blue **Use Bastion** button.
-
-6. Enter the username and password entered during VM creation.
+5. Enter the username and password entered during VM creation.
-7. Open **Internet Explorer** on **myVM1**.
+6. Open **Internet Explorer** on **myVM1**.
-8. Enter **https://whatsmyip.com** in the address bar.
+7. Enter **https://whatsmyip.com** in the address bar.
-9. Verify the IP address displayed matches the NAT gateway address you noted in the previous step:
+8. Verify the IP address displayed matches the NAT gateway address you noted in the previous step:
:::image type="content" source="./media/tutorial-nat-gateway-load-balancer-public-portal/my-ip.png" alt-text="Screenshot Internet Explorer showing external outbound IP." border="true":::
web-application-firewall Waf Front Door Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-monitor.md
Azure Web Application Firewall (WAF) monitoring and logging are provided through
## Azure Monitor
-WAF with FrontDoor log is integrated with [Azure Monitor](../../azure-monitor/overview.md). Azure Monitor allows you to track diagnostic information including WAF alerts and logs. You can configure WAF monitoring within the Front Door resource in the portal under the **Diagnostics** tab or through the Azure Monitor service directly.
+Front Door's WAF log is integrated with [Azure Monitor](../../azure-monitor/overview.md). Azure Monitor enables you to track diagnostic information including WAF alerts and logs. You can configure WAF monitoring within the Front Door resource in the portal under the **Diagnostics** tab, through infrastructure as code approaches, or by using the Azure Monitor service directly.
From the Azure portal, go to the Front Door resource type. From the **Monitoring**/**Metrics** tab on the left, you can add **WebApplicationFirewallRequestCount** to track the number of requests that match WAF rules. Custom filters can be created based on action types and rule names.
From Azure portal, go to Front Door resource type. From **Monitoring**/**Metrics
## Logs and diagnostics
-WAF with Front Door provides detailed reporting on each threat it detects. Logging is integrated with Azure Diagnostics logs and alerts are recorded in a json format. These logs can be integrated with [Azure Monitor logs](../../azure-monitor/insights/azure-networking-analytics.md).
+WAF with Front Door provides detailed reporting on each request, and each threat that it detects. Logging is integrated with Azure's diagnostics logs and alerts. These logs can be integrated with [Azure Monitor logs](../../azure-monitor/insights/azure-networking-analytics.md).
![WAFDiag](../media/waf-frontdoor-monitor/waf-frontdoor-diagnostics.png)
-[FrontDoorAccessLog](../../frontdoor/standard-premium/how-to-logs.md#access-log) logs all requests. `FrontDoorWebApplicationFirewalllog` logs any request that matches a WAF rule and each log entry has the following schema.
+Front Door provides two types of logs: access logs and WAF logs.
-For logging on the classic tier, use [FrontdoorAccessLog](../../frontdoor/front-door-diagnostics.md) logs for Front Door requests and `FrontdoorWebApplicationFirewallLog` logs for matched WAF rules using the following schema:
+### Access logs
-| Property | Description |
-| - | - |
-|Action|Action taken on the request. WAF log shows all action values. WAF metrics show all action values, except *Log*.|
-| ClientIp | The IP address of the client that made the request. If there was an X-Forwarded-For header in the request, then the Client IP is picked from the header field. |
-| ClientPort | The IP port of the client that made the request. |
-| Details|Additional details on the matched request |
-|| matchVariableName: http parameter name of the request matched, for example, header names (max chars 100)|
-|| matchVariableValue: values that triggered the match (max chars 100)|
-| Host | The host header of the matched request |
-| Policy | The name of the WAF policy that the request matched. |
-| PolicyMode | Operations mode of the WAF policy. Possible values are "Prevention" and "Detection" |
-| RequestUri | Full URI of the matched request. |
-| RuleName | The name of the WAF rule that the request matched. |
-| SocketIp | The source IP address seen by WAF. This IP address is based on TCP session, independent of any request headers.|
-| TrackingReference | The unique reference string that identifies a request served by Front Door, also sent as X-Azure-Ref header to the client. Required for searching details in the access logs for a specific request. |
+
+The **FrontDoorAccessLog** includes all requests that go through Front Door. For more information on the Front Door access log, including the log schema, see [Azure Front Door logs](../../frontdoor/standard-premium/how-to-logs.md#access-log).
-The following query example returns WAF logs on blocked requests:
::: zone pivot="front-door-classic"
-``` WAFlogQuery
+
+The **FrontdoorAccessLog** includes all requests that go through Front Door. For more information on the Front Door access log, including the log schema, see [Monitoring metrics and logs in Azure Front Door (classic)](../../frontdoor/front-door-diagnostics.md).
++
+The following example query returns the access log entries:
++
+```kusto
AzureDiagnostics
-| where ResourceType == "FRONTDOORS" and Category == "FrontdoorWebApplicationFirewallLog"
-| where action_s == "Block"
+| where ResourceProvider == "MICROSOFT.CDN" and Category == "FrontDoorAccessLog"
``` ::: zone-end
-``` WAFlogQuery
-AzureDiagnostics
-| where ResourceProvider == "MICROSOFT.CDN" and Category == "FrontDoorWebApplicationFirewallLog"
-| where action_s == "Block"
+```kusto
+AzureDiagnostics
+| where ResourceType == "FRONTDOORS" and Category == "FrontdoorAccessLog"
```+ ::: zone-end
-Here is an example of a logged request in WAF log:
+The following shows an example log entry:
-``` WAFlogQuerySample
+```json
{
- "time": "2020-06-09T22:32:17.8376810Z",
- "category": "FrontdoorWebApplicationFirewallLog",
- "operationName": "Microsoft.Network/FrontDoorWebApplicationFirewallLog/Write",
- "properties":
- {
- "clientIP":"xxx.xxx.xxx.xxx",
- "clientPort":"52097",
- "socketIP":"xxx.xxx.xxx.xxx",
- "requestUri":"https://wafdemofrontdoorwebapp.azurefd.net:443/?q=%27%20or%201=1",
- "ruleName":"Microsoft_DefaultRuleSet-1.1-SQLI-942100",
- "policy":"WafDemoCustomPolicy",
- "action":"Block",
- "host":"wafdemofrontdoorwebapp.azurefd.net",
- "trackingReference":"08Q3gXgAAAAAe0s71BET/QYwmqtpHO7uAU0pDRURHRTA1MDgANjMxNTAwZDAtOTRiNS00YzIwLTljY2YtNjFhNzMyOWQyYTgy",
- "policyMode":"prevention",
- "details":
- {
- "matches":
- [{
- "matchVariableName":"QueryParamValue:q",
- "matchVariableValue":"' or 1=1"
- }]
- }
- }
+ "time": "2020-06-09T22:32:17.8383427Z",
+ "category": "FrontdoorAccessLog",
+ "operationName": "Microsoft.Network/FrontDoor/AccessLog/Write",
+ "properties": {
+ "trackingReference": "08Q3gXgAAAAAe0s71BET/QYwmqtpHO7uAU0pDRURHRTA1MDgANjMxNTAwZDAtOTRiNS00YzIwLTljY2YtNjFhNzMyOWQyYTgy",
+ "httpMethod": "GET",
+ "httpVersion": "2.0",
+ "requestUri": "https://wafdemofrontdoorwebapp.azurefd.net:443/?q=%27%20or%201=1",
+ "requestBytes": "715",
+ "responseBytes": "380",
+ "userAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4157.0 Safari/537.36 Edg/85.0.531.1",
+ "clientIp": "xxx.xxx.xxx.xxx",
+ "socketIp": "xxx.xxx.xxx.xxx",
+ "clientPort": "52097",
+ "timeTaken": "0.003",
+ "securityProtocol": "TLS 1.2",
+ "routingRuleName": "WAFdemoWebAppRouting",
+ "rulesEngineMatchNames": [],
+ "backendHostname": "wafdemowebappuscentral.azurewebsites.net:443",
+ "sentToOriginShield": false,
+ "httpStatusCode": "403",
+ "httpStatusDetails": "403",
+ "pop": "SJC",
+ "cacheStatus": "CONFIG_NOCACHE"
+ }
} ```
-The following example query returns AccessLogs entries:
+### WAF logs
++
+The **FrontDoorWebApplicationFirewallLog** includes requests that match a WAF rule.
+ ::: zone pivot="front-door-classic"
-``` AccessLogQuery
-AzureDiagnostics
-| where ResourceType == "FRONTDOORS" and Category == "FrontdoorAccessLog"
+The **FrontdoorWebApplicationFirewallLog** includes any request that matches a WAF rule.
++
+The following table shows the values logged for each request:
+
+| Property | Description |
+| - | - |
+| Action | Action taken on the request. Logs include requests with all actions. Metrics include requests with all actions except *Log*. |
+| ClientIp | The IP address of the client that made the request. If there was an `X-Forwarded-For` header in the request, the client IP address is taken from that header field instead. |
+| ClientPort | The IP port of the client that made the request. |
+| Details | Additional details on the request, including any threats that were detected. <br />matchVariableName: HTTP parameter name of the request matched, for example, header names (up to 100 characters maximum).<br /> matchVariableValue: Values that triggered the match (up to 100 characters maximum). |
+| Host | The `Host` header of the request. |
+| Policy | The name of the WAF policy that processed the request. |
+| PolicyMode | Operations mode of the WAF policy. Possible values are `Prevention` and `Detection`. |
+| RequestUri | Full URI of the request. |
+| RuleName | The name of the WAF rule that the request matched. |
+| SocketIp | The source IP address seen by WAF. This IP address is based on the TCP session, and does not consider any request headers. |
+| TrackingReference | The unique reference string that identifies a request served by Front Door. This value is sent to the client in the `X-Azure-Ref` response header. Use this field when searching for a specific request in the log. |
+
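Once WAF log entries are exported (for example, to a storage account), each entry carries the properties in the table above. A minimal sketch that reads one exported entry, shaped like the example shown later in this section, from a local file with an illustrative name:

```python
# Sketch: read an exported Front Door WAF log entry (JSON) and print the fields
# described above, including the rule match details.
import json
from pathlib import Path

def summarize_waf_entry(path: str) -> None:
    props = json.loads(Path(path).read_text()).get("properties", {})
    print("action: ", props.get("action"))
    print("rule:   ", props.get("ruleName"))
    print("policy: ", props.get("policy"), f"({props.get('policyMode')})")
    print("request:", props.get("requestUri"))
    for match in props.get("details", {}).get("matches", []):
        print("matched:", match.get("matchVariableName"), "=", match.get("matchVariableValue"))

if __name__ == "__main__":
    summarize_waf_entry("waf-log-entry.json")  # hypothetical filename
```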
+The following example query shows the requests that were blocked by the Front Door WAF:
++
+```kusto
+AzureDiagnostics
+| where ResourceType == "FRONTDOORS" and Category == "FrontdoorWebApplicationFirewallLog"
+| where action_s == "Block"
```+ ::: zone-end ::: zone pivot="front-door-standard-premium"
-``` AccessLogQuery
-AzureDiagnostics
-| where ResourceProvider == "MICROSOFT.CDN" and Category == "FrontDoorAccessLog"
+```kusto
+AzureDiagnostics
+| where ResourceProvider == "MICROSOFT.CDN" and Category == "FrontDoorWebApplicationFirewallLog"
+| where action_s == "Block"
```+ ::: zone-end
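To run these queries outside the portal, one option is the Azure Monitor query SDK for Python against a Log Analytics workspace that receives the diagnostic logs. The sketch below is an assumption-heavy example: it presumes the `azure-identity` and `azure-monitor-query` packages, a known workspace ID, and it folds the two category names shown above into a single `endswith` filter; check the SDK documentation for the current method names before relying on it.

```python
# Sketch: run the blocked-requests query against a Log Analytics workspace.
# Assumes the azure-identity and azure-monitor-query packages are installed.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

QUERY = """
AzureDiagnostics
| where Category endswith "WebApplicationFirewallLog"
| where action_s == "Block"
"""

def blocked_requests(workspace_id: str) -> None:
    client = LogsQueryClient(DefaultAzureCredential())
    result = client.query_workspace(workspace_id, QUERY, timespan=timedelta(days=1))
    for table in result.tables:
        for row in table.rows:
            print(row)

if __name__ == "__main__":
    blocked_requests("<log-analytics-workspace-id>")  # placeholder workspace ID
```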
-Here is an example of a logged request in Access log:
+The following shows an example log entry, including the reason that the request was blocked:
-``` AccessLogSample
+```json
{
-"time": "2020-06-09T22:32:17.8383427Z",
-"category": "FrontdoorAccessLog",
-"operationName": "Microsoft.Network/FrontDoor/AccessLog/Write",
- "properties":
- {
- "trackingReference":"08Q3gXgAAAAAe0s71BET/QYwmqtpHO7uAU0pDRURHRTA1MDgANjMxNTAwZDAtOTRiNS00YzIwLTljY2YtNjFhNzMyOWQyYTgy",
- "httpMethod":"GET",
- "httpVersion":"2.0",
- "requestUri":"https://wafdemofrontdoorwebapp.azurefd.net:443/?q=%27%20or%201=1",
- "requestBytes":"715",
- "responseBytes":"380",
- "userAgent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4157.0 Safari/537.36 Edg/85.0.531.1",
- "clientIp":"xxx.xxx.xxx.xxx",
- "socketIp":"xxx.xxx.xxx.xxx",
- "clientPort":"52097",
- "timeTaken":"0.003",
- "securityProtocol":"TLS 1.2",
- "routingRuleName":"WAFdemoWebAppRouting",
- "rulesEngineMatchNames":[],
- "backendHostname":"wafdemowebappuscentral.azurewebsites.net:443",
- "sentToOriginShield":false,
- "httpStatusCode":"403",
- "httpStatusDetails":"403",
- "pop":"SJC",
- "cacheStatus":"CONFIG_NOCACHE"
+ "time": "2020-06-09T22:32:17.8376810Z",
+ "category": "FrontdoorWebApplicationFirewallLog",
+ "operationName": "Microsoft.Network/FrontDoorWebApplicationFirewallLog/Write",
+ "properties": {
+ "clientIP": "xxx.xxx.xxx.xxx",
+ "clientPort": "52097",
+ "socketIP": "xxx.xxx.xxx.xxx",
+ "requestUri": "https://wafdemofrontdoorwebapp.azurefd.net:443/?q=%27%20or%201=1",
+ "ruleName": "Microsoft_DefaultRuleSet-1.1-SQLI-942100",
+ "policy": "WafDemoCustomPolicy",
+ "action": "Block",
+ "host": "wafdemofrontdoorwebapp.azurefd.net",
+ "trackingReference": "08Q3gXgAAAAAe0s71BET/QYwmqtpHO7uAU0pDRURHRTA1MDgANjMxNTAwZDAtOTRiNS00YzIwLTljY2YtNjFhNzMyOWQyYTgy",
+ "policyMode": "prevention",
+ "details": {
+ "matches": [
+ {
+ "matchVariableName": "QueryParamValue:q",
+ "matchVariableValue": "' or 1=1"
+ }
+ ]
}
+ }
}- ``` ## Next steps