Updates from: 04/01/2022 01:11:18
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Identity Verification Proofing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-verification-proofing.md
Microsoft partners with the following ISV partners.
| ISV partner | Description and integration walkthroughs |
|:-|:--|
+| ![Screenshot of an eID-Me logo.](./medi) is an identity verification and decentralized digital identity solution for Canadian citizens. It enables organizations to meet Identity Assurance Level (IAL) 2 and Know Your Customer (KYC) requirements. |
|![Screenshot of an Experian logo.](./medi) is an identity verification and proofing provider that performs risk assessments based on user attributes to prevent fraud. |
|![Screenshot of an IDology logo.](./medi) is an identity verification and proofing provider with ID verification solutions, fraud prevention solutions, compliance solutions, and others.|
|![Screenshot of a Jumio logo.](./medi) is an ID verification service that enables real-time automated ID verification, safeguarding customer data. |
## Next steps
-Select a partner in the tables mentioned to learn how to integrate their solution with Azure AD B2C.
+Select a partner in the tables mentioned to learn how to integrate their solution with Azure AD B2C.
active-directory Active Directory V2 Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-v2-protocols.md
Previously updated : 03/23/2022 Last updated : 03/31/2022
-# OAuth 2.0 and OpenID Connect in the Microsoft identity platform
+# OAuth 2.0 and OpenID Connect (OIDC) in the Microsoft identity platform
-The Microsoft identity platform offers authentication and authorization services using standards-compliant implementations of OAuth 2.0 and OpenID Connect (OIDC) 1.0.
+You don't need to learn OAuth or OpenID Connect (OIDC) at the protocol level to use the Microsoft identity platform. You will, however, encounter these and other protocol terms and concepts as you use the identity platform to add auth functionality to your apps.
-You don't need to learn OAuth and OIDC at the protocol level to use the Microsoft identity platform. However, debugging your apps can be made easier by learning a few basics of the protocols and their implementation on the identity platform.
+As you work with the Azure portal, our documentation, and our authentication libraries, knowing a few basics like these can make your integration and debugging tasks easier.
## Roles in OAuth 2.0
Your app's registration also holds information about the authentication and auth
## Endpoints
-Authorization servers like the Microsoft identity platform provide a set of HTTP endpoints for use by the parties in an auth flow to execute the flow.
+The Microsoft identity platform offers authentication and authorization services using standards-compliant implementations of OAuth 2.0 and OpenID Connect (OIDC) 1.0. Standards-compliant authorization servers like the Microsoft identity platform provide a set of HTTP endpoints for use by the parties in an auth flow to execute the flow.
The endpoint URIs for your app are generated for you when you register or configure your app in Azure AD. The endpoints you use in your app's code depend on the application's type and the identities (account types) it should support.
https://login.microsoftonline.com/<issuer>/oauth2/v2.0/token
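For example, for a hypothetical single-tenant app in a tenant named `contoso.onmicrosoft.com` (a placeholder, not a value from this article), the v2.0 authorization and token endpoints would look like:

```
https://login.microsoftonline.com/contoso.onmicrosoft.com/oauth2/v2.0/authorize
https://login.microsoftonline.com/contoso.onmicrosoft.com/oauth2/v2.0/token
```

Apps that also support personal Microsoft accounts use `common` or `consumers` in the issuer segment instead of a tenant identifier.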
To find the endpoints for an application you've registered, in the [Azure portal](https://portal.azure.com) navigate to:
-**Azure Active Directory** > **App registrations** > *{YOUR-APPLICATION}* > **Endpoints**
+**Azure Active Directory** > **App registrations** > \<YOUR-APPLICATION\> > **Endpoints**
## Next steps

Next, learn about the OAuth 2.0 authentication flows used by each application type and the libraries you can use in your apps to perform them:

* [Authentication flows and application scenarios](authentication-flows-app-scenarios.md)
-* [Microsoft authentication libraries](reference-v2-libraries.md)
+* [Microsoft Authentication Library (MSAL)](msal-overview.md)
-Always prefer using an authentication library over making raw HTTP calls to execute auth flows. However, if you have an app that requires it or you'd like to learn more about the identity platform's implementation of OAuth and OIDC, see:
+**We strongly advise against crafting your own library or making raw HTTP calls to execute authentication flows.** A [Microsoft authentication library](reference-v2-libraries.md) is safer and much easier. However, if your scenario prevents you from using our libraries or you'd just like to learn more about the identity platform's implementation, see the protocol reference:
-* [OpenID Connect](v2-protocols-oidc.md) - User sign-in, sign-out, and single sign-on (SSO)
* [Authorization code grant flow](v2-oauth2-auth-code-flow.md) - Single-page apps (SPA), mobile apps, native (desktop) applications
* [Client credentials flow](v2-oauth2-client-creds-grant-flow.md) - Server-side processes, scripts, daemons
-* [On-behalf-of (OBO) flow](v2-oauth2-on-behalf-of-flow.md) - Web APIs that call another web API on a user's behalf
+* [On-behalf-of (OBO) flow](v2-oauth2-on-behalf-of-flow.md) - Web APIs that call another web API on a user's behalf
+* [OpenID Connect](v2-protocols-oidc.md) - User sign-in, sign-out, and single sign-on (SSO)
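If your scenario truly requires raw protocol calls, here's a minimal sketch of a client credentials token request; it's illustrative only, and every angle-bracketed value is a placeholder:

```
curl -X POST "https://login.microsoftonline.com/<TENANT_ID>/oauth2/v2.0/token" \
  -d "grant_type=client_credentials" \
  -d "client_id=<CLIENT_ID>" \
  -d "client_secret=<CLIENT_SECRET>" \
  -d "scope=https://graph.microsoft.com/.default"
```

A successful response is JSON that includes an `access_token` and its `expires_in` lifetime; an authentication library handles this exchange (plus token caching and renewal) for you.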
active-directory Workload Identity Federation Create Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation-create-trust.md
Previously updated : 02/09/2022 Last updated : 03/30/2022 -+ #Customer intent: As an application developer, I want to configure a federated credential on an app registration so I can create a trust relationship with an external identity provider and use workload identity federation to access Azure AD protected resources without managing secrets.

# Configure an app to trust an external identity provider (preview)
-This article describes how to create a trust relationship between an application in Azure Active Directory (Azure AD) and an external identity provider (IdP). You can then configure an external software workload to exchange a token from the external IdP for an access token from Microsoft identity platform. The external workload can access Azure AD protected resources without needing to manage secrets (in supported scenarios). To learn more about the token exchange workflow, read about [workload identity federation](workload-identity-federation.md). You establish the trust relationship by configuring a federated identity credential on your app registration by using Microsoft Graph.
+This article describes how to create a trust relationship between an application in Azure Active Directory (Azure AD) and an external identity provider (IdP). You can then configure an external software workload to exchange a token from the external IdP for an access token from Microsoft identity platform. The external workload can access Azure AD protected resources without needing to manage secrets (in supported scenarios). To learn more about the token exchange workflow, read about [workload identity federation](workload-identity-federation.md). You establish the trust relationship by configuring a federated identity credential on your app registration by using Microsoft Graph or the Azure portal.
Anyone with permissions to create an app registration and add a secret or certificate can add a federated identity credential. If the **Users can register applications** switch in the [User Settings](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/UserSettings) blade is set to **No**, however, you won't be able to create an app registration or configure the federated identity credential. Find an admin to configure the federated identity credential on your behalf. Anyone in the Application Administrator or Application Owner roles can do this.
Find the object ID of the app (not the application (client) ID), which you need
Get the information for your external IdP and software workload, which you need in the following steps.
-The Microsoft Graph beta endpoint (`https://graph.microsoft.com/beta`) exposes REST APIs to create, update, delete [federatedIdentityCredentials](/graph/api/resources/federatedidentitycredential?view=graph-rest-beta&preserve-view=true) on applications. Launch [Azure Cloud Shell](https://portal.azure.com/#cloudshell/) and sign in to your tenant.
+The Microsoft Graph beta endpoint (`https://graph.microsoft.com/beta`) exposes REST APIs to create, update, and delete [federatedIdentityCredentials](/graph/api/resources/federatedidentitycredential?view=graph-rest-beta&preserve-view=true) on applications. Launch [Azure Cloud Shell](https://portal.azure.com/#cloudshell/) and sign in to your tenant to run Microsoft Graph commands from the Azure CLI.
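For example, a minimal sketch of signing in and looking up an app's object ID with the Azure CLI (the tenant ID and client ID are placeholders):

```azurecli
# Sign in to the tenant that contains the app registration.
az login --tenant <TENANT_ID>

# Return the object ID of the app registration (not the application/client ID).
# Depending on your Azure CLI version, the property is named 'objectId' or 'id'.
az ad app show --id <APP_CLIENT_ID> --query objectId --output tsv
```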
-## Configure a federated identity credential
+## Configure a federated identity credential on an app
-Run the Microsoft Graph [create a new federated identity credential](/graph/api/application-post-federatedidentitycredentials?view=graph-rest-beta&preserve-view=true) operation on your app (specified by the object ID of the app).
+When you configure a federated identity credential on an app, there are several important pieces of information to provide.
*issuer* and *subject* are the key pieces of information needed to set up the trust relationship. *issuer* is the URL of the external identity provider and must match the `issuer` claim of the external token being exchanged. *subject* is the identifier of the external software workload and must match the `sub` (`subject`) claim of the external token being exchanged. *subject* has no fixed format, as each IdP uses their own - sometimes a GUID, sometimes a colon delimited identifier, sometimes arbitrary strings. The combination of `issuer` and `subject` must be unique on the app. When the external software workload requests Microsoft identity platform to exchange the external token for an access token, the *issuer* and *subject* values of the federated identity credential are checked against the `issuer` and `subject` claims provided in the external token. If that validation check passes, Microsoft identity platform issues an access token to the external software workload.
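For example, using the sample values from this article's GitHub Actions example, the relevant claims in the external token would need to look like the following (simplified) payload for the exchange to succeed:

```json
{
  "iss": "https://token.actions.githubusercontent.com/",
  "sub": "repo:octo-org/octo-repo:environment:Production",
  "aud": "api://AzureADTokenExchange"
}
```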
Run the Microsoft Graph [create a new federated identity credential](/graph/api/
*description* is the unvalidated, user-provided description of the federated identity credential.

### GitHub Actions example
-Run the following command to [create a new federated identity credential](/graph/api/application-post-federatedidentitycredentials?view=graph-rest-beta&preserve-view=true) on your app (specified by the object ID of the app). The *issuer* identifies GitHub as the external token issuer. *subject* identifies the GitHub organization, repo, and environment for your GitHub Actions workflow. When the GitHub Actions workflow requests Microsoft identity platform to exchange a GitHub token for an access token, the values in the federated identity credential are checked against the provided GitHub token.
+
+# [Azure CLI](#tab/azure-cli)
+
+Run the [create a new federated identity credential](/graph/api/application-post-federatedidentitycredentials?view=graph-rest-beta&preserve-view=true) command on your app (specified by the object ID of the app). Specify the *name*, *issuer*, *subject*, and other parameters.
+
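For illustration, here's a sketch of the call using sample values (a sample app object ID, and a workflow in the `octo-org/octo-repo` repository's `Production` environment); *issuer* identifies GitHub as the external token issuer, and *subject* identifies the organization, repo, and environment:

```azurecli
az rest --method POST --uri 'https://graph.microsoft.com/beta/applications/f6475511-fd81-4965-a00e-41e7792b7b9c/federatedIdentityCredentials' --body '{"name":"Testing","issuer":"https://token.actions.githubusercontent.com/","subject":"repo:octo-org/octo-repo:environment:Production","description":"Testing","audiences":["api://AzureADTokenExchange"]}'
```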
+For examples, see [Configure an app to trust a GitHub repo](workload-identity-federation-create-trust-github.md?tabs=microsoft-graph).
+
+# [Portal](#tab/azure-portal)
+
+Find your app registration in the [App Registrations](https://aka.ms/appregistrations) experience of the Azure portal. Select **Certificates & secrets** in the left nav pane, select the **Federated credentials** tab, and select **Add credential**.
+
+Select the **GitHub Actions deploying Azure resources** scenario from the dropdown menu. Fill in the **Organization**, **Repository**, **Entity type**, and other fields.
+
+For examples, see [Configure an app to trust a GitHub repo](workload-identity-federation-create-trust-github.md?tabs=azure-portal).
+++
+### Kubernetes example
+
+# [Azure CLI](#tab/azure-cli)
+
+Run the following command to configure a federated identity credential on an app and create a trust relationship with a Kubernetes service account. Specify the following parameters:
+
+- *issuer* is your service account issuer URL (the [OIDC issuer URL](/azure/aks/cluster-configuration#oidc-issuer-preview) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster).
+- *subject* is the subject name in the tokens issued to the service account. Kubernetes uses the following format for subject names: `system:serviceaccount:<SERVICE_ACCOUNT_NAMESPACE>:<SERVICE_ACCOUNT_NAME>`.
+- *name* is the name of the federated credential, which cannot be changed later.
+- *audiences* lists the audiences that can appear in the 'aud' claim of the external token. This field is mandatory, and defaults to "api://AzureADTokenExchange".
```azurecli
-az rest --method POST --uri 'https://graph.microsoft.com/beta/applications/f6475511-fd81-4965-a00e-41e7792b7b9c/federatedIdentityCredentials' --body '{"name":"Testing","issuer":"https://token.actions.githubusercontent.com/","subject":"repo:octo-org/octo-repo:environment:Production","description":"Testing","audiences":["api://AzureADTokenExchange"]}'
+az rest --method POST --uri 'https://graph.microsoft.com/beta/applications/f6475511-fd81-4965-a00e-41e7792b7b9c/federatedIdentityCredentials' --body '{"name":"Kubernetes-federated-credential","issuer":"https://aksoicwesteurope.blob.core.windows.net/9d80a3e1-2a87-46ea-ab16-e629589c541c/","subject":"system:serviceaccount:erp8asle:pod-identity-sa","description":"Kubernetes service account federated credential","audiences":["api://AzureADTokenExchange"]}'
```

And you get the response:
"audiences": [ "api://AzureADTokenExchange" ],
- "description": "Testing",
- "id": "1aa3e6a7-464c-4cd2-88d3-90db98132755",
- "issuer": "https://token.actions.githubusercontent.com/",
- "name": "Testing",
- "subject": "repo:octo-org/octo-repo:environment:Production"
+ "description": "Kubernetes service account federated credential",
+ "id": "51ecf9c3-35fc-4519-a28a-8c27c6178bca",
+ "issuer": "https://aksoicwesteurope.blob.core.windows.net/9d80a3e1-2a87-46ea-ab16-e629589c541c/",
+ "name": "Kubernetes-federated-credential",
+ "subject": "system:serviceaccount:erp8asle:pod-identity-sa"
}
```
-### Kubernetes example
-Run the following command to configure a federated identity credential on an app and create a trust relationship with a Kubernetes service account. The *issuer* is your service account issuer URL. *subject* is the subject name in the tokens issued to the service account. Kubernetes uses the following format for subject names: `system:serviceaccount:<SERVICE_ACCOUNT_NAMESPACE>:<SERVICE_ACCOUNT_NAME>`.
+# [Portal](#tab/azure-portal)
+
+Find your app registration in the [App Registrations](https://aka.ms/appregistrations) experience of the Azure portal. Select **Certificates & secrets** in the left nav pane, select the **Federated credentials** tab, and select **Add credential**.
+
+Select the **Kubernetes accessing Azure resources** scenario from the dropdown menu.
+
+Fill in the **Cluster issuer URL**, **Namespace**, **Service account name**, and **Name** fields:
+
+- **Cluster issuer URL** is the [OIDC issuer URL](/azure/aks/cluster-configuration#oidc-issuer-preview) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster.
+- **Service account name** is the name of the Kubernetes service account, which provides an identity for processes that run in a Pod.
+- **Namespace** is the service account namespace.
+- **Name** is the name of the federated credential, which cannot be changed later.
+++
+### Other identity providers example
+
+# [Azure CLI](#tab/azure-cli)
+
+Run the following command to configure a federated identity credential on an app and create a trust relationship with an external identity provider. Specify the following parameters (using a software workload running in Google Cloud as an example):
+
+- *name* is the name of the federated credential, which cannot be changed later.
+- *ObjectID*: the object ID of the app (not the application (client) ID) you previously registered in Azure AD.
+- *subject*: must match the `sub` claim in the token issued by the external identity provider. In this example using Google Cloud, *subject* is the Unique ID of the service account you plan to use.
+- *issuer*: must match the `iss` claim in the token issued by the external identity provider. A URL that complies with the OIDC Discovery spec. Azure AD uses this issuer URL to fetch the keys that are necessary to validate the token. In the case of Google Cloud, the *issuer* is "https://accounts.google.com".
+- *audiences*: must match the `aud` claim in the external token. For security reasons, you should pick a value that is unique for tokens meant for Azure AD. The Microsoft recommended value is "api://AzureADTokenExchange".
```azurecli
-az rest --method POST --uri 'https://graph.microsoft.com/beta/applications/f6475511-fd81-4965-a00e-41e7792b7b9c/federatedIdentityCredentials' --body '{"name":"Kubernetes-federated-credential","issuer":"https://aksoicwesteurope.blob.core.windows.net/9d80a3e1-2a87-46ea-ab16-e629589c541c/","subject":"system:serviceaccount:erp8asle:pod-identity-sa","description":"Kubernetes service account federated credential","audiences":["api://AzureADTokenExchange"]}'
+az rest --method POST --uri 'https://graph.microsoft.com/beta/applications/<ObjectID>/federatedIdentityCredentials' --body '{"name":"GcpFederation","issuer":"https://accounts.google.com","subject":"112633961854638529490","description":"Testing","audiences":["api://AzureADTokenExchange"]}'
```

And you get the response:
"audiences": [ "api://AzureADTokenExchange" ],
- "description": "Kubernetes service account federated credential",
+ "description": "Testing",
"id": "51ecf9c3-35fc-4519-a28a-8c27c6178bca",
- "issuer": "https://aksoicwesteurope.blob.core.windows.net/9d80a3e1-2a87-46ea-ab16-e629589c541c/",
- "name": "Kubernetes-federated-credential",
- "subject": "system:serviceaccount:erp8asle:pod-identity-sa"
+ "issuer": "https://accounts.google.com"",
+ "name": "GcpFederation",
+ "subject": "112633961854638529490"
}
```
+# [Portal](#tab/azure-portal)
+
+Find your app registration in the [App Registrations](https://aka.ms/appregistrations) experience of the Azure portal. Select **Certificates & secrets** in the left nav pane, select the **Federated credentials** tab, and select **Add credential**.
+
+Select the **Other issuer** scenario from the dropdown menu.
+
+Specify the following fields (using a software workload running in Google Cloud as an example):
+
+- **Name** is the name of the federated credential, which cannot be changed later.
+- **Subject identifier**: must match the `sub` claim in the token issued by the external identity provider. In this example using Google Cloud, *subject* is the Unique ID of the service account you plan to use.
+- **Issuer**: must match the `iss` claim in the token issued by the external identity provider. A URL that complies with the OIDC Discovery spec. Azure AD uses this issuer URL to fetch the keys that are necessary to validate the token. In the case of Google Cloud, the *issuer* is "https://accounts.google.com".
+++

## List federated identity credentials on an app
+# [Azure CLI](#tab/azure-cli)
Run the following command to [list the federated identity credential(s)](/graph/api/application-list-federatedidentitycredentials?view=graph-rest-beta&preserve-view=true) for an app (specified by the object ID of the app):

```azurecli
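# A sketch of the list call; <APP_OBJECT_ID> is a placeholder for your app's object ID.
az rest --method GET --uri 'https://graph.microsoft.com/beta/applications/<APP_OBJECT_ID>/federatedIdentityCredentials'
```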
And you get a response similar to:
}
```
+# [Portal](#tab/azure-portal)
+
+Find your app registration in the [App Registrations](https://aka.ms/appregistrations) experience of the Azure portal. Select **Certificates & secrets** in the left nav pane and select the **Federated credentials** tab. The federated credentials that are configured on your app are listed.
+++

## Delete a federated identity credential
+# [Azure CLI](#tab/azure-cli)
+
Run the following command to [delete a federated identity credential](/graph/api/application-list-federatedidentitycredentials?view=graph-rest-beta&preserve-view=true) from an app (specified by the object ID of the app):

```azurecli
az rest -m DELETE -u 'https://graph.microsoft.com/beta/applications/f6475511-fd
```
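Spelled out with placeholders, the delete call takes the ID of the federated identity credential as its final path segment:

```azurecli
az rest --method DELETE --uri 'https://graph.microsoft.com/beta/applications/<APP_OBJECT_ID>/federatedIdentityCredentials/<FEDERATED_CREDENTIAL_ID>'
```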
+# [Portal](#tab/azure-portal)
+
+Find your app registration in the [App Registrations](https://aka.ms/appregistrations) experience of the Azure portal. Select **Certificates & secrets** in the left nav pane and select the **Federated credentials** tab. The federated credentials that are configured on your app are listed.
+
+To delete a federated identity credential, select the **Delete** icon for the credential.
+++

## Next steps

- To learn how to use workload identity federation for Kubernetes, see the [Azure AD Workload Identity for Kubernetes](https://azure.github.io/azure-workload-identity/docs/quick-start.html) open source project.
- To learn how to use workload identity federation for GitHub Actions, see [Configure a GitHub Actions workflow to get an access token](/azure/developer/github/connect-from-azure).
active-directory B2b Direct Connect Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-direct-connect-overview.md
-# B2B direct connect overview
+# B2B direct connect overview (Preview)
Azure Active Directory (Azure AD) B2B direct connect is a feature of External Identities that lets you set up a mutual trust relationship with another Azure AD organization for seamless collaboration. With B2B direct connect, users from both organizations can work together using their home credentials and B2B direct connect-enabled apps, without having to be added to each other's organizations as guests. Use B2B direct connect to share resources with external Azure AD organizations. Or use it to share resources across multiple Azure AD tenants within your own organization.
active-directory Cross Tenant Access Settings B2b Direct Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-settings-b2b-direct-connect.md
-# Configure cross-tenant access settings for B2B direct connect
+# Configure cross-tenant access settings for B2B direct connect (Preview)
> [!NOTE] > Cross-tenant access settings are preview features of Azure Active Directory. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
active-directory External Identities Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/external-identities-pricing.md
Previously updated : 07/13/2021 Last updated : 03/29/2022
To take advantage of MAU billing, your Azure AD tenant must be linked to an Azur
In your Azure AD tenant, guest user collaboration usage is billed based on the count of unique guest users with authentication activity within a calendar month. This model replaces the 1:5 ratio billing model, which allowed up to five guest users for each Azure AD Premium license in your tenant. When your tenant is linked to a subscription and you use External Identities features to collaborate with guest users, you'll be automatically billed using the MAU-based billing model.
+Your first 50,000 MAUs per month are free for both Premium P1 and Premium P2 features. To determine the total number of MAUs, we combine MAUs from all your tenants (both Azure AD and Azure AD B2C) that are linked to the same subscription.
+ The pricing tier that applies to your guest users is based on the highest pricing tier assigned to your Azure AD tenant. For more information, see [Azure Active Directory External Identities Pricing](https://azure.microsoft.com/pricing/details/active-directory/external-identities/).

## Link your Azure AD tenant to a subscription
-An Azure AD tenant must be linked to an Azure subscription for proper billing and access to features. If the directory doesn't already have a subscription you can link to, you'll have the opportunity to add one during this process.
+An Azure AD tenant must be linked to a resource group within an Azure subscription for proper billing and access to features.
1. Sign in to the [Azure portal](https://portal.azure.com/) with an Azure account that's been assigned at least the [Contributor](../../role-based-access-control/built-in-roles.md) role within the subscription or a resource group within the subscription.
-2. Select the directory you want to link: In the Azure portal toolbar, select the **Directory + Subscription** icon, and then select the directory.
-
- ![Select the Directory + Subscription icon](media/external-identities-pricing/portal-mau-pick-directory.png)
+2. Select the directory you want to link: In the Azure portal toolbar, select the **Directories + subscriptions** icon. Then on the **Portal settings | Directories + subscriptions** page, find your directory in the **Directory name** list, and then select **Switch**.
3. Under **Azure Services**, select **Azure Active Directory**.
An Azure AD tenant must be linked to an Azure subscription for proper billing an
![Select the tenant and link a subscription](media/external-identities-pricing/linked-subscriptions.png)
-7. In the Link a subscription pane, select a **Subscription** and a **Resource group**. Then select **Apply**.
-
- > [!NOTE]
- >
- > * Your first 50,000 MAUs per month are free for both Premium P1 and Premium P2 features. To determine the total number of MAUs, we combine MAUs from all your tenants (both Azure AD and Azure AD B2C) that are linked to the same subscription.
- >* If there are no subscriptions listed, you can [associate a subscription to your tenant](../fundamentals/active-directory-how-subscriptions-associated-directory.md). Or, you can add a new subscription by selecting the link **if you don't already have a subscription you may create one here**.
+7. In the **Link a subscription** pane, select a **Subscription** and a **Resource group**. Then select **Apply**. (If there are no subscriptions listed, see [What if I can't find a subscription?](#what-if-i-cant-find-a-subscription).)
![Select a subscription and resource group](media/external-identities-pricing/link-subscription-resource.png)

After you complete these steps, your Azure subscription is billed based on your Azure Direct or Enterprise Agreement details, if applicable.
+## What if I can't find a subscription?
+
+If no subscriptions are available in the **Link a subscription** pane, here are some possible reasons:
+
+- You don't have the appropriate permissions. Be sure to sign in with an Azure account that's been assigned at least the [Contributor](../../role-based-access-control/built-in-roles.md) role within the subscription or a resource group within the subscription.
+
+- A subscription exists, but it hasn't been associated with your directory yet. You can [associate an existing subscription to your tenant](../fundamentals/active-directory-how-subscriptions-associated-directory.md) and then repeat the steps for [linking it to your tenant](#link-your-azure-ad-tenant-to-a-subscription).
+
+- No subscription exists. In the **Link a subscription** pane, you can create a subscription by selecting the link **if you don't already have a subscription you may create one here**. After you create a new subscription, you'll need to [create a resource group](../../azure-resource-manager/management/manage-resource-groups-portal.md) in the new subscription, and then repeat the steps for [linking it to your tenant](#link-your-azure-ad-tenant-to-a-subscription).
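If you prefer the command line for that last step, here's a minimal sketch of creating a resource group in the new subscription with the Azure CLI (the subscription, group name, and region values are placeholders):

```azurecli
# Target the newly created subscription.
az account set --subscription "<SUBSCRIPTION_NAME_OR_ID>"

# Create a resource group that the tenant can be linked to.
az group create --name <RESOURCE_GROUP_NAME> --location westus
```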
+ ## Next steps
-For the latest pricing information, see [Azure Active Directory pricing](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
+For the latest pricing information, see [Azure Active Directory pricing](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
+Learn more about [managing Azure resources](../../azure-resource-manager/management/overview.md).
active-directory Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/troubleshoot.md
Previously updated : 03/21/2022 Last updated : 03/31/2022 tags: active-directory
Here are some remedies for common problems with Azure Active Directory (Azure AD
> - **Starting July 2022**, we'll begin rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. As part of this change, Microsoft will stop creating new, unmanaged ("viral") Azure AD accounts and tenants during B2B collaboration invitation redemption. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, if you don't want to allow this feature to turn on automatically, you can [disable it](one-time-passcode.md#disable-email-one-time-passcode).
+## Guest sign-in fails with error code AADSTS50020
+
+When a guest user from an identity provider (IdP) can't sign in to a resource tenant in Azure AD and receives an error code AADSTS50020, there are several possible causes. See the troubleshooting article for error [AADSTS50020](/troubleshoot/azure/active-directory/error-code-aadsts50020-user-account-identity-provider-does-not-exist).
+
## B2B direct connect user is unable to access a shared channel (error AADSTS90071)

When a B2B direct connect user sees the following error message when trying to access another organization's Teams shared channel, multi-factor authentication trust settings haven't been configured by the external organization:
active-directory Tshoot Connect Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-connectivity.md
This article explains how connectivity between Azure AD Connect and Azure AD wor
Azure AD Connect uses the MSAL library for authentication. The installation wizard and the sync engine proper require machine.config to be properly configured since these two are .NET applications.

>[!NOTE]
->Azure AD Connect v1.6.xx.x uses the ADAL library. The ADAL library is being depricated and support will end in June 2022. Microsot recommendeds that you upgrade to the latest version of [Azure AD Connect v2](whatis-azure-ad-connect-v2.md).
+>Azure AD Connect v1.6.xx.x uses the ADAL library. The ADAL library is being deprecated and support will end in June 2022. Microsoft recommends that you upgrade to the latest version of [Azure AD Connect v2](whatis-azure-ad-connect-v2.md).
In this article, we show how Fabrikam connects to Azure AD through its proxy. The proxy server is named fabrikamproxy and is using port 8080.
active-directory Admin Consent Workflow Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/admin-consent-workflow-overview.md
+
+ Title: Overview of admin consent workflow
+
+description: Learn about the admin consent workflow in Azure Active Directory
+++++++ Last updated : 03/30/2022++++
+#customer intent: As an admin, I want to learn about the admin consent workflow and how it affects end-user and admin consent experience
++
+# Overview of admin consent workflow
+
+There may be situations where your end-users need to consent to permissions for applications that they're creating or using with their work accounts. However, non-admin users aren't allowed to consent to permissions that require admin consent. Also, users can't consent to applications when [user consent](configure-user-consent.md) is disabled in the user's tenant.
+
+In such situations where user consent is disabled, an admin can grant users the ability to request access to applications by enabling the admin consent workflow. In this article, you'll learn about the user and admin experience when the admin consent workflow is disabled versus when it's enabled.
+
+When attempting to sign in, users may see a consent prompt like the one in the following screenshot:
++
+If the user doesn't know who to contact to grant them access, they may be unable to use the application. This situation also requires administrators to create a separate workflow to track requests for applications if they're open to receiving them.
+As an admin, the following options exist for you to determine how users consent to applications:
+- Disable user consent. For example, a high school may want to turn off user consent so that the school IT administration has full control over all the applications that are used in their tenant.
+- Allow users to consent to the required permissions. It's NOT recommended to keep user consent open if you have sensitive data in your tenant.
+- If you still want to retain admin-only consent for certain permissions but want to assist your end-users in onboarding their application, you can use the admin consent workflow to evaluate and respond to admin consent requests. This way, you can have a queue of all the requests for admin consent for your tenant and can track and respond to them directly through the Azure portal.
+To learn how to configure the admin consent workflow, see [Configure the admin consent workflow](configure-admin-consent-workflow.md).
+
+## How the admin consent workflow works
+
+When you configure the admin consent workflow, your end users can request admin consent directly from the consent prompt. The users may see a consent prompt like the one in the following screenshot:
++
+When an administrator responds to a request, the user receives an email alert informing them that the request has been processed.
+
+When the user submits a consent request, the request shows up in the admin consent request page in the Azure portal. Administrators and designated reviewers sign in to [view and act on the new requests](review-admin-consent-requests.md). Reviewers only see consent requests that were created after they were designated as reviewers. Requests show up in the following two tabs in the admin consent requests blade.
+- **My pending**: Shows any active requests that have the signed-in user designated as a reviewer. Although reviewers can block or deny requests, only people with the correct RBAC permissions to consent to the requested permissions can do so.
+- **All (Preview)**: Shows all requests, active or expired, that exist in the tenant.
+Each request includes information about the application and the user(s) requesting the application.
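If you'd rather inspect requests programmatically than in the portal, here's a hedged sketch using the Microsoft Graph `appConsentRequests` API (beta at the time of writing; the required permissions and response shape may differ in your tenant):

```azurecli
az rest --method GET --uri 'https://graph.microsoft.com/beta/identityGovernance/appConsent/appConsentRequests'
```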
+
+## Email notifications
+
+If configured, all reviewers will receive email notifications when:
+
+- A new request has been created
+- A request has expired
+- A request is nearing the expiration date.
+
+Requestors will receive email notifications when:
+
+- They submit a new request for access
+- Their request has expired
+- Their request has been denied or blocked
+- Their request has been approved
+
+## Audit logs
+
+The table below outlines the scenarios and audit values available for the admin consent workflow.
+
+|Scenario |Audit Service |Audit Category |Audit Activity |Audit Actor |Audit log limitations |
+|||||||
+|Admin enabling the consent request workflow |Access Reviews |UserManagement |Create governance policy template |App context |Currently you can't find the user context |
+|Admin disabling the consent request workflow |Access Reviews |UserManagement |Delete governance policy template |App context |Currently you can't find the user context |
+|Admin updating the consent workflow configurations |Access Reviews |UserManagement |Update governance policy template |App context |Currently you can't find the user context |
+|End user creating an admin consent request for an app |Access Reviews |Policy |Create request |App context |Currently you can't find the user context |
+|Reviewers approving an admin consent request |Access Reviews |UserManagement |Approve all requests in business flow |App context |Currently you can't find the user context or the app ID that was granted admin consent. |
+|Reviewers denying an admin consent request |Access Reviews |UserManagement |Approve all requests in business flow |App context | Currently you can't find the user context of the actor that denied an admin consent request |
+
+## Next steps
+
+- [Enable the admin consent request workflow](configure-admin-consent-workflow.md)
+- [Review admin consent request](review-admin-consent-requests.md)
+- [Manage consent requests](manage-consent-requests.md)
active-directory Configure Admin Consent Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-admin-consent-workflow.md
Previously updated : 10/06/2021 Last updated : 03/22/2021
To approve requests, a reviewer must be a global administrator, cloud applicatio
To configure the admin consent workflow, you need:

- An Azure account. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- You must be a global administrator to turn on the workflow.
+- You must be a global administrator to turn on the admin consent workflow.
## Enable the admin consent workflow
Under **Admin consent requests**, select **Yes** for **Users can request admin
:::image type="content" source="media/configure-admin-consent-workflow/enable-admin-consent-workflow.png" alt-text="Configure admin consent workflow settings":::

1. Configure the following settings:
- - **Select users to review admin consent requests** - Select reviewers for this workflow from a set of users that have the global administrator, cloud application administrator, or application administrator roles. You can also add groups and roles that can configure an admin consent workflow. You must designate at least one reviewer before the workflow can be enabled.
+ - **Select users, groups, or roles that will be designated as reviewers for admin consent requests** - Reviewers can view, block, or deny admin consent requests, but only global administrators can approve admin consent requests. People designated as reviewers can view incoming requests in the **My Pending** tab after they have been set as reviewers. Any new reviewers won't be able to act on existing or expired admin consent requests.
- **Selected users will receive email notifications for requests** - Enable or disable email notifications to the reviewers when a request is made.
- **Selected users will receive request expiration reminders** - Enable or disable reminder email notifications to the reviewers when a request is about to expire.
- **Consent request expires after (days)** - Specify how long requests stay valid.
-1. Select **Save**. It can take up to an hour for the feature to become enabled.
+1. Select **Save**. It can take up to an hour for the workflow to become enabled.
> [!NOTE]
-> You can add or remove reviewers for this workflow by modifying the **Select admin consent requests reviewers** list. Note that a current limitation of this feature is that reviewers can retain the ability to review requests that were made while they were designated as a reviewer.
-
-## Email notifications
-
-If configured, all reviewers will receive email notifications when:
-
-- A new request has been created
-- A request has expired
-- A request is nearing the expiration date
-
-Requestors will receive email notifications when:
-
-- They submit a new request for access
-- Their request has expired
-- Their request has been denied or blocked
-- Their request has been approved
-
-## Audit logs
-
-The table below outlines the scenarios and audit values available for the admin consent workflow.
-
-|Scenario |Audit Service |Audit Category |Audit Activity |Audit Actor |Audit log limitations |
-|||||||
-|Admin enabling the consent request workflow |Access Reviews |UserManagement |Create governance policy template |App context |Currently you cannot find the user context |
-|Admin disabling the consent request workflow |Access Reviews |UserManagement |Delete governance policy template |App context |Currently you cannot find the user context |
-|Admin updating the consent workflow configurations |Access Reviews |UserManagement |Update governance policy template |App context |Currently you cannot find the user context |
-|End user creating an admin consent request for an app |Access Reviews |Policy |Create request |App context |Currently you cannot find the user context |
-|Reviewers approving an admin consent request |Access Reviews |UserManagement |Approve all requests in business flow |App context |Currently you cannot find the user context or the app ID that was granted admin consent. |
-|Reviewers denying an admin consent request |Access Reviews |UserManagement |Approve all requests in business flow |App context | Currently you cannot find the user context of the actor that denied an admin consent request |
+> You can add or remove reviewers for this workflow by modifying the **Select admin consent requests reviewers** list. A current limitation of this feature is that a reviewer can retain the ability to review requests that were made while they were designated as a reviewer.
## Next steps
active-directory Grant Admin Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/grant-admin-consent.md
Granting tenant-wide admin consent requires you to sign in as a user that is aut
To grant tenant-wide admin consent, you need:

-- An Azure account with one of the following roles: Global Administrator, Privileged Role Administrator, Cloud Application Administrator, or Application Administrator. A user can also be authorized to grant tenant-wide consent if they are assigned a custom directory role that includes the [permission to grant permissions to applications](../roles/custom-consent-permissions.md).
+- An Azure AD user account with one of the following roles:
+ - Global Administrator or Privileged Role Administrator, for granting consent for apps requesting any permission, for any API.
+ - Cloud Application Administrator or Application Administrator, for granting consent for apps requesting any permission for any API, _except_ Azure AD Graph or Microsoft Graph app roles (application permissions).
+ - A custom directory role that includes the [permission to grant permissions to applications](../roles/custom-consent-permissions.md), for the permissions required by the application.
## Grant tenant-wide admin consent in Enterprise apps
active-directory Review Admin Consent Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/review-admin-consent-requests.md
Previously updated : 11/17/2021 Last updated : 03/22/2021 #customer intent: As an admin, I want to review and take action on admin consent requests.
-# Review and take action on admin consent requests
+# Review admin consent requests
In this article, you learn how to review and take action on admin consent requests. To review and act on consent requests, you must be designated as a reviewer. As a reviewer, you only see admin consent requests that were created after you were designated as a reviewer.
To review and take action on admin consent requests, you need:
To review the admin consent requests and take action:

1. Sign in to the [Azure portal](https://portal.azure.com) as one of the registered reviewers of the admin consent workflow.
-2. Select **All services** at the top of the left-hand navigation menu.
-3. In the filter search box, type "**Azure Active Directory**" and select **Azure Active Directory**.
-4. From the navigation menu, select **Enterprise applications**.
-5. Under **Activity**, select **Admin consent requests**.
-6. Select the application that is being requested.
-7. Review details about the request:
-
+1. Select **All services** at the top of the left-hand navigation menu.
+1. In the filter search box, type **Azure Active Directory**, and then select **Azure Active Directory**.
+1. From the navigation menu, select **Enterprise applications**.
+1. Under **Activity**, select **Admin consent requests**.
+1. Select the application that is being requested.
+1. Review details about the request:
- To see who is requesting access and why, select the **Requested by** tab. - To see what permissions are being requested by the application, select **Review permissions and consent**.-
-8. Evaluate the request and take the appropriate action:
-
- - **Approve the request**. To approve a request, grant admin consent to the application. Once a request is approved, all requestors are notified that they have been granted access. Approving a request allows all users in your tenant to access the application unless otherwise restricted with user assignment.
-
+
+1. Evaluate the request and take the appropriate action:
+ - **Approve the request**. To approve a request, grant admin consent to the application. Once a request is approved, all requestors are notified that they have been granted access. Approving a request allows all users in your tenant to access the application unless otherwise restricted with user assignment.
- **Deny the request**. To deny a request, you must provide a justification that will be provided to all requestors. Once a request is denied, all requestors are notified that they have been denied access to the application. Denying a request won't prevent users from requesting admin consent to the app again in the future.
- **Block the request**. To block a request, you must provide a justification that will be provided to all requestors. Once a request is blocked, all requestors are notified they've been denied access to the application. Blocking a request creates a service principal object for the application in your tenant in a disabled state. Users won't be able to request admin consent to the application in the future.
active-directory Tutorial Manage Certificates For Federated Single Sign On https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/tutorial-manage-certificates-for-federated-single-sign-on.md
+
+ Title: "Tutorial: Manage federation certificates"
+description: In this tutorial, you'll learn how to customize the expiration date for your federation certificates, and how to renew certificates that will soon expire.
++++++++ Last updated : 03/31/2022++++
+#customer intent: As an admin of an application, I want to learn how to manage federated SAML certificates by customizing expiration dates and renewing certificates.
++
+# Tutorial: Manage certificates for federated single sign-on
+
+In this article, we cover common questions and information related to certificates that Azure Active Directory (Azure AD) creates to establish federated single sign-on (SSO) to your software as a service (SaaS) applications. You add these applications from the Azure AD app gallery or by using a non-gallery application template, and then configure them to use the federated SSO option.
+
+This tutorial is relevant only to apps that are configured to use Azure AD SSO through [Security Assertion Markup Language](https://wikipedia.org/wiki/Security_Assertion_Markup_Language) (SAML) federation.
+
+## Auto-generated certificate for gallery and non-gallery applications
+
+When you add a new application from the gallery and configure a SAML-based sign-on (by selecting **Single sign-on** > **SAML** from the application overview page), Azure AD generates a certificate for the application that is valid for three years. To download the active certificate as a security certificate (**.cer**) file, return to that page (**SAML-based sign-on**) and select a download link in the **SAML Signing Certificate** heading. You can choose between the raw (binary) certificate or the Base64 (base 64-encoded text) certificate. For gallery applications, this section might also show a link to download the certificate as federation metadata XML (an **.xml** file), depending on the requirement of the application.
+
+You can also download an active or inactive certificate by selecting the **SAML Signing Certificate** heading's **Edit** icon (a pencil), which displays the **SAML Signing Certificate** page. Select the ellipsis (**...**) next to the certificate you want to download, and then choose which certificate format you want. You have the additional option to download the certificate in privacy-enhanced mail (PEM) format. This format is identical to Base64 but with a **.pem** file name extension, which isn't recognized in Windows as a certificate format.
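To double-check a downloaded certificate's expiration date locally before uploading it to the application, here's a sketch using OpenSSL (assuming the Base64 or PEM download was saved as `sso-cert.pem`; for the raw binary download, add `-inform der`):

```
openssl x509 -in sso-cert.pem -noout -enddate -fingerprint
```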
++
+## Customize the expiration date for your federation certificate and roll it over to a new certificate
+
+By default, Azure configures a certificate to expire after three years when it's created automatically during SAML single sign-on configuration. Because you can't change the date of a certificate after you save it, you have to:
+
+1. Create a new certificate with the desired date.
+1. Save the new certificate.
+1. Download the new certificate in the correct format.
+1. Upload the new certificate to the application.
+1. Make the new certificate active in the Azure Active Directory portal.
+
+The following two sections help you perform these steps.
+
+### Create a new certificate
+
+First, create and save a new certificate with a different expiration date:
+
+1. Sign in to the [Azure Active Directory portal](https://aad.portal.azure.com/). The **Azure Active Directory admin center** page appears.
+1. Select **Enterprise applications**.
+1. From the list of applications, select your desired application.
+1. Under the **Manage** section, select **Single sign-on**.
+1. If the **Select a single sign-on method** page appears, select **SAML**.
+1. In the **Set up Single Sign-On with SAML** page, find the **SAML Signing Certificate** heading and select the **Edit** icon (a pencil). The **SAML Signing Certificate** page appears, which displays the status (**Active** or **Inactive**), expiration date, and thumbprint (a hash string) of each certificate.
+1. Select **New Certificate**. A new row appears below the certificate list, where the expiration date defaults to exactly three years after the current date. (Your changes haven't been saved yet, so you can still modify the expiration date.)
+1. In the new certificate row, hover over the expiration date column and select the **Select Date** icon (a calendar). A calendar control appears, displaying the days of a month of the new row's current expiration date.
+1. Use the calendar control to set a new date. You can set any date between the current date and three years after the current date.
+1. Select **Save**. The new certificate now appears with a status of **Inactive**, the expiration date that you chose, and a thumbprint.
+ > [!NOTE]
+ > When you have an existing certificate that is already expired and you generate a new certificate, the new certificate will be considered for signing tokens, even though you haven't activated it yet.
+1. Select the **X** to return to the **Set up Single Sign-On with SAML** page.
+
+### Upload and activate a certificate
+
+Next, download the new certificate in the correct format, upload it to the application, and make it active in Azure Active Directory:
+
+1. View the application's additional SAML sign-on configuration instructions by either:
+
+ - Selecting the **configuration guide** link to view in a separate browser window or tab, or
+ - Going to the **set up** heading and selecting **View step-by-step instructions** to view in a sidebar.
+
+1. In the instructions, note the encoding format required for the certificate upload.
+1. Follow the instructions in the [Auto-generated certificate for gallery and non-gallery applications](#auto-generated-certificate-for-gallery-and-non-gallery-applications) section earlier. This step downloads the certificate in the encoding format required for upload by the application.
+1. When you want to roll over to the new certificate, go back to the **SAML Signing Certificate** page, and in the newly saved certificate row, select the ellipsis (**...**) and select **Make certificate active**. The status of the new certificate changes to **Active**, and the previously active certificate changes to a status of **Inactive**.
+1. Continue following the application's SAML sign-on configuration instructions that you displayed earlier, so that you can upload the SAML signing certificate in the correct encoding format.
+
+If your application doesn't have any validation for the certificate's expiration, and the certificate matches in both Azure Active Directory and your application, your app is still accessible despite having an expired certificate. Ensure your application can validate the certificate's expiration date.
+
+## Add email notification addresses for certificate expiration
+
+Azure AD will send an email notification 60, 30, and 7 days before the SAML certificate expires. You may add more than one email address to receive notifications. To specify the email address(es) you want the notifications to be sent to:
+
+1. In the **SAML Signing Certificate** page, go to the **notification email addresses** heading. By default, this heading uses only the email address of the admin who added the application.
+1. Below the final email address, type the email address that should receive the certificate's expiration notice, and then press Enter.
+1. Repeat the previous step for each email address you want to add.
+1. For each email address you want to delete, select the **Delete** icon (a garbage can) next to the email address.
+1. Select **Save**.
+
+You can add up to five email addresses to the notification list (including the email address of the admin who added the application). If you need more people to be notified, use distribution list email addresses.
+
+You'll receive the notification email from azure-noreply@microsoft.com. To keep the email out of your spam folder, add this address to your contacts.
+
+## Renew a certificate that will soon expire
+
+If a certificate is about to expire, you can renew it using a procedure that results in no significant downtime for your users. To renew an expiring certificate:
+
+1. Follow the instructions in the [Create a new certificate](#create-a-new-certificate) section earlier, using a date that overlaps with the existing certificate. That date limits the amount of downtime caused by the certificate expiration.
+1. If the application can automatically roll over a certificate, set the new certificate to active by following these steps:
+ 1. Go back to the **SAML Signing Certificate** page.
+ 1. In the newly saved certificate row, select the ellipsis (**...**) and then select **Make certificate active**.
+ 1. Skip the next two steps.
+
+1. If the app can only handle one certificate at a time, pick a downtime interval to perform the next step. (Otherwise, if the application doesn't automatically pick up the new certificate but can handle more than one signing certificate, you can perform the next step anytime.)
+1. Before the old certificate expires, follow the instructions in the [Upload and activate a certificate](#upload-and-activate-a-certificate) section earlier. If your application certificate isn't updated after a new certificate is updated in Azure Active Directory, authentication on your app may fail.
+1. Sign in to the application to make sure that the certificate works correctly.
+
+If your application doesn't validate the certificate expiration configured in Azure Active Directory, and the certificate matches in both Azure Active Directory and your application, your app is still accessible despite having an expired certificate. Ensure your application can validate certificate expiration.
+
+## Related articles
+
+- [Tutorials for integrating SaaS applications with Azure Active Directory](../saas-apps/tutorial-list.md)
+- [Application management with Azure Active Directory](what-is-application-management.md)
+- [Single sign-on to applications in Azure Active Directory](what-is-single-sign-on.md)
+- [Debug SAML-based single sign-on to applications in Azure Active Directory](./debug-saml-sso-issues.md)
active-directory Recommendation Mfa From Known Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-mfa-from-known-devices.md
+
+ Title: Azure Active Directory recommendation - Minimize MFA prompts from known devices in Azure AD | Microsoft Docs
+description: Learn why you should minimize MFA prompts from known devices in Azure AD.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid: 9b88958d-94a2-4f4b-a18c-616f0617a24e
++
+ na
++ Last updated : 03/31/2022++++++
+# Azure AD recommendation: Minimize MFA prompts from known devices
+
+[Azure AD recommendations](overview-recommendations.md) is a feature that provides you with personalized insights and actionable guidance to align your tenant with recommended best practices.
++
+This article covers the recommendation to minimize multi-factor authentication (MFA) prompts from known devices.
++
+## Description
+
+As an admin, you want to maintain security for your company's resources, but you also want your employees to easily access resources as needed.
+
+MFA enables you to enhance the security posture of your tenant. While enabling MFA is a good practice, you should try to keep the number of MFA prompts your users have to go through to a minimum. One option you have to accomplish this goal is to **allow users to remember multi-factor authentication on devices they trust**.
+
+The remember multi-factor authentication feature sets a persistent cookie on the browser when a user selects the **Don't ask again for X days** option at sign-in. The user isn't prompted again for MFA from that browser until the cookie expires. If the user opens a different browser on the same device or clears the cookies, they're prompted again to verify.
+
+![Remember MFA on trusted devices](./media/recommendation-mfa-from-known-devices/remember-mfa-on-trusted-devices.png)
+++
+For more information, see [Configure Azure AD Multi-Factor Authentication settings](../authentication/howto-mfa-mfasettings.md).
++
+## Logic
+
+This recommendation shows up if you've set the remember multi-factor authentication feature to fewer than 30 days.
++
+## Value
+
+This recommendation improves your users' productivity and minimizes sign-in time with fewer MFA prompts. Ensure that your most sensitive resources have the tightest controls, while your least sensitive resources are more freely accessible.
+
+## Action plan
+
+1. Review [configure Azure AD Multi-Factor Authentication settings](../authentication/howto-mfa-mfasettings.md).
+
+2. Set the remember multi-factor authentication feature to 90 days.
+
+
+## Next steps
+
+- [What is Azure Active Directory recommendations](overview-recommendations.md)
+
+- [Azure AD reports overview](overview-reports.md)
active-directory Recommendation Migrate Apps From Adfs To Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-migrate-apps-from-adfs-to-azure-ad.md
+
+ Title: Azure Active Directory recommendation - Migrate apps from ADFS to Azure AD in Azure AD | Microsoft Docs
+description: Learn why you should migrate apps from ADFS to Azure AD in Azure AD
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid: 9b88958d-94a2-4f4b-a18c-616f0617a24e
++
+ na
++ Last updated : 03/31/2022++++++
+# Azure AD recommendation: Migrate apps from ADFS to Azure AD
+
+[Azure AD recommendations](overview-recommendations.md) is a feature that provides you with personalized insights and actionable guidance to align your tenant with recommended best practices.
++
+This article covers the recommendation to migrate apps from ADFS to Azure AD.
++
+## Description
+
+As an admin responsible for managing applications, I want my applications to use Azure AD's security features and maximize their value.
++++
+## Logic
+
+This recommendation shows up if a tenant has apps on AD FS and any of those apps are deemed 100% migratable.
+
+## Value
+
+Using Azure AD gives you granular per-application access controls to secure access to applications. With Azure AD's B2B collaboration, you can increase user productivity. Automated app provisioning automates the user identity lifecycle in cloud SaaS apps such as Dropbox, Salesforce, and more.
+
+## Action plan
+
+1. [Install Azure AD Connect Health](../hybrid/how-to-connect-install-roadmap.md) on your AD FS server.
+
+2. [Review the AD FS application activity report](../manage-apps/migrate-adfs-application-activity.md) to get insights about your AD FS applications.
+
+3. Read the solution guide for [migrating applications to Azure AD](../manage-apps/migrate-adfs-apps-to-azure.md).
+
+4. Migrate applications to Azure AD. For more information, use [the deployment plan for enabling single sign-on](https://go.microsoft.com/fwlink/?linkid=2110877&amp;clcid=0x409).
+
+
+
+
+## Next steps
+
+- [What is Azure Active Directory recommendations](overview-recommendations.md)
+
+- [Azure AD reports overview](overview-reports.md)
active-directory Recommendation Turn Off Per User Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-turn-off-per-user-mfa.md
+
+ Title: Azure Active Directory recommendation - Turn off per user MFA in Azure AD | Microsoft Docs
+description: Learn why you should turn off per user MFA in Azure AD
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid: 9b88958d-94a2-4f4b-a18c-616f0617a24e
++
+ na
++ Last updated : 03/31/2022++++++
+# Azure AD recommendation: Turn off per user MFA
+
+[Azure AD recommendations](overview-recommendations.md) is a feature that provides you with personalized insights and actionable guidance to align your tenant with recommended best practices.
++
+This article covers the recommendation to turn off per user MFA.
++
+## Description
+
+As an admin, you want to maintain security for your company's resources, but you also want your employees to easily access resources as needed.
+
+Multi-factor authentication (MFA) enables you to enhance the security posture of your tenant. In your tenant, you can enable MFA on a per-user basis. In this scenario, your users perform MFA each time they sign in (with some exceptions, such as when they sign in from trusted IP addresses or when the remember MFA on trusted devices feature is turned on).
+
+While enabling MFA is a good practice, you can reduce the number of times your users are prompted for MFA by converting per-user MFA to MFA based on conditional access.
++
+## Logic
+
+This recommendation shows up if:
+
+- You have per-user MFA configured for at least 5% of your users (a quick way to check this is sketched below).
+- Conditional Access policies are active for more than 1% of your users (indicating familiarity with CA policies).
+
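+The sketch below estimates that first figure. It's a minimal sketch that assumes the legacy MSOnline PowerShell module is installed and you've already connected with `Connect-MsolService`:
+
+```powershell
+# Sketch: estimate the share of users with per-user MFA configured (MSOnline module)
+# Install-Module MSOnline; Connect-MsolService
+$users = Get-MsolUser -All
+
+# Users with a per-user MFA state (Enabled or Enforced) have a populated
+# StrongAuthenticationRequirements collection
+$perUserMfa = $users | Where-Object { $_.StrongAuthenticationRequirements.State -ne $null }
+
+# Percentage of users still on per-user MFA
+[math]::Round(($perUserMfa.Count / $users.Count) * 100, 1)
+```
+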
+## Value
+
+This recommendation improves your users' productivity and minimizes sign-in time with fewer MFA prompts. Ensure that your most sensitive resources have the tightest controls, while your least sensitive resources are more freely accessible.
+
+## Action plan
+
+1. To get started, confirm that there's an existing conditional access policy with an MFA requirement. Ensure that you're covering all resources and users you would like to secure with MFA. Review your [conditional access policies](https://portal.azure.com/?Microsoft_AAD_IAM_enableAadvisorFeaturePreview=true&amp%3BMicrosoft_AAD_IAM_enableAadvisorFeature=true#blade/Microsoft_AAD_IAM/PoliciesTemplateBlade).
+
+2. To require MFA using a conditional access policy, follow the steps in [Secure user sign-in events with Azure AD Multi-Factor Authentication](../authentication/tutorial-enable-azure-mfa.md).
+
+3. Ensure that the per-user MFA configuration is turned off. A scripted sketch of this step follows.
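+
+The following is a minimal sketch using the legacy MSOnline module (assuming you've connected with `Connect-MsolService`; the UPN is a placeholder). Test on a pilot account first, and only after the Conditional Access policy requiring MFA is in place:
+
+```powershell
+# Sketch: turn off per-user MFA for a single (placeholder) user
+$upn = 'user@contoso.com'   # placeholder UPN
+
+# Passing an empty requirements collection clears the per-user MFA state
+$sta = @()
+Set-MsolUser -UserPrincipalName $upn -StrongAuthenticationRequirements $sta
+```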
+
+
+
+## Next steps
+
+- [Tutorials for integrating SaaS applications with Azure Active Directory](../saas-apps/tutorial-list.md)
+- [Azure AD reports overview](overview-reports.md)
active-directory F5 Big Ip Sap Erp Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/f5-big-ip-sap-erp-easy-button.md
+
+ Title: Configure F5 BIG-IP Easy Button for SSO to SAP ERP using Azure AD
+description: Learn to secure SAP ERP using Azure Active Directory, through F5's BIG-IP Easy Button guided configuration.
++++++++ Last updated : 03/28/2022+++
+# Tutorial: Configure F5's BIG-IP Easy Button for SSO to SAP ERP using Azure AD
+
+In this article, learn to secure SAP ERP using Azure Active Directory (Azure AD), through F5's BIG-IP Easy Button guided configuration.
+
+Integrating a BIG-IP with Azure Active Directory (Azure AD) provides many benefits, including:
+
+* [Improved Zero Trust governance](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/) through Azure AD pre-authentication and [Conditional Access](../conditional-access/overview.md)
+
+* Full SSO between Azure AD and BIG-IP published services
+
+* Manage identities and access from a single control plane, the [Azure portal](https://portal.azure.com/)
+
+To learn about all the benefits, see the article on [F5 BIG-IP and Azure AD integration](/azure/active-directory/manage-apps/f5-aad-integration) and [what is application access and single sign-on with Azure AD](/azure/active-directory/active-directory-appssoaccess-whatis).
+
+## Scenario description
+
+This scenario looks at the classic **SAP ERP application using Kerberos authentication** to manage access to protected content.
+
+Being legacy, the application lacks modern protocols to support a direct integration with Azure AD. The application can be modernized, but it is costly, requires careful planning, and introduces risk of potential downtime. Instead, an F5 BIG-IP Application Delivery Controller (ADC) is used to bridge the gap between the legacy application and the modern ID control plane, through protocol transitioning.
+
+Having a BIG-IP in front of the application enables us to overlay the service with Azure AD pre-authentication and headers-based SSO, significantly improving the overall security posture of the application.
+
+## Scenario architecture
+
+The SHA solution for this scenario is made up of the following:
+
+**SAP ERP application:** BIG-IP published service to be protected by Azure AD SHA.
+
+**Azure AD:** Security Assertion Markup Language (SAML) Identity Provider (IdP) responsible for verification of user credentials, Conditional Access (CA), and SAML based SSO to the BIG-IP.
+
+**BIG-IP:** Reverse proxy and SAML service provider (SP) to the application, delegating authentication to the SAML IdP before performing header-based SSO to the SAP service.
+
+SHA for this scenario supports both SP and IdP initiated flows. The following image illustrates the SP initiated flow.
+
+![Secure hybrid access - SP initiated flow](./media/f5-big-ip-easy-button-sap-erp/sp-initiated-flow.png)
+
+| Steps| Description|
+| -- |-|
+| 1| User connects to application endpoint (BIG-IP) |
+| 2| BIG-IP APM access policy redirects user to Azure AD (SAML IdP) |
+| 3| Azure AD pre-authenticates user and applies any enforced Conditional Access policies |
+| 4| User is redirected to BIG-IP (SAML SP) and SSO is performed using issued SAML token |
+| 5| BIG-IP requests Kerberos ticket from KDC |
+| 6| BIG-IP sends request to backend application, along with Kerberos ticket for SSO |
+| 7| Application authorizes request and returns payload |
+
+## Prerequisites
+Prior BIG-IP experience isn't necessary, but you will need:
+
+* An Azure AD free subscription or above
+
+* An existing BIG-IP or [deploy a BIG-IP Virtual Edition (VE) in Azure](/azure/active-directory/manage-apps/f5-big-ip-kerberos-advanced/f5-bigip-deployment-guide)
+
+* Any of the following F5 BIG-IP license offers
+
+ * F5 BIG-IP® Best bundle
+
+ * F5 BIG-IP APM standalone license
+
+ * F5 BIG-IP APM add-on license on an existing BIG-IP F5 BIG-IP® Local Traffic Manager™ (LTM)
+
+ * 90-day BIG-IP full feature [trial license](https://www.f5.com/trial/big-ip-trial.php).
+
+* User identities [synchronized](../hybrid/how-to-connect-sync-whatis.md) from an on-premises directory to Azure AD, or created directly within Azure AD and flowed back to your on-premises directory
+
+* An account with Azure AD Application admin [permissions](/azure/active-directory/users-groups-roles/directory-assign-admin-roles#application-administrator)
+
+* An [SSL Web certificate](/azure/active-directory/manage-apps/f5-bigip-deployment-guide#ssl-profile) for publishing services over HTTPS, or use default BIG-IP certs while testing
+
+* An existing SAP ERP environment configured for Kerberos authentication
+
+## BIG-IP configuration methods
+
+There are many methods to configure BIG-IP for this scenario, including two template-based options and an advanced configuration. This tutorial covers the latest Guided Configuration 16.1, which offers an Easy Button template.
+
+With the Easy Button, admins no longer go back and forth between Azure AD and a BIG-IP to enable services for SHA. The deployment and policy management are handled directly between the APM's Guided Configuration wizard and Microsoft Graph. This rich integration between BIG-IP APM and Azure AD ensures that applications can quickly and easily support identity federation, SSO, and Azure AD Conditional Access, reducing administrative overhead.
+
+>[!NOTE]
+> All example strings or values referenced throughout this guide should be replaced with those for your actual environment.
+
+## Register Easy Button
+
+Before a client or service can access Microsoft Graph, it must be trusted by the [Microsoft identity platform](../develop/quickstart-register-app.md).
+
+The Easy Button client must also be registered in Azure AD before it's allowed to establish a trust between each SAML SP instance of a BIG-IP published application and Azure AD as the SAML IdP.
+
+1. Sign in to the [Azure AD portal](https://portal.azure.com/) using an account with Application Administrator rights
+
+2. From the left navigation pane, select the **Azure Active Directory** service
+
+3. Under Manage, select **App registrations > New registration**
+
+4. Enter a display name for your application. For example, *F5 BIG-IP Easy Button*
+
+5. Specify who can use the application > **Accounts in this organizational directory only**
+
+6. Select **Register** to complete the initial app registration
+
+7. Navigate to **API permissions** and authorize the following Microsoft Graph **Application permissions**:
+
+ * Application.Read.All
+ * Application.ReadWrite.All
+ * Application.ReadWrite.OwnedBy
+ * Directory.Read.All
+ * Group.Read.All
+ * IdentityRiskyUser.Read.All
+ * Policy.Read.All
+ * Policy.ReadWrite.ApplicationConfiguration
+ * Policy.ReadWrite.ConditionalAccess
+ * User.Read.All
+
+8. Grant admin consent for your organization
+
+9. In the **Certificates & Secrets** blade, generate a new **client secret** and note it down
+
+10. From the **Overview** blade, note the **Client ID** and **Tenant ID**
+
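+If you prefer to script the registration, the sketch below shows the equivalent of steps 1 through 6 and 9 using Az PowerShell. It's a minimal sketch assuming the Az.Resources module; the display name is an example, and the Graph API permissions and admin consent from steps 7 and 8 still need to be granted in the portal:
+
+```powershell
+# Sketch: register the Easy Button client with Az PowerShell (Az.Resources module)
+Connect-AzAccount
+
+# Single-tenant registration (Accounts in this organizational directory only)
+$app = New-AzADApplication -DisplayName 'F5 BIG-IP Easy Button' -SignInAudience 'AzureADMyOrg'
+
+# Client secret, valid for one year - note the value down now, it can't be retrieved later
+$secret = New-AzADAppCredential -ObjectId $app.Id -EndDate (Get-Date).AddYears(1)
+$secret.SecretText
+
+# Client ID and Tenant ID needed later in the Easy Button configuration
+$app.AppId
+(Get-AzContext).Tenant.Id
+```
+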
+## Configure Easy Button
+
+Initiate the APM's **Guided Configuration** to launch the **Easy Button** Template.
+
+1. From a browser, sign in to the **F5 BIG-IP management console**
+
+2. Navigate to **Access > Guided Configuration > Microsoft Integration** and select **Azure AD Application**.
+
+ ![Screenshot for Configure Easy Button- Install the template](./media/f5-big-ip-easy-button-sap-erp/easy-button-template.png)
+
+3. Review the list of configuration steps and select **Next**
+
+ ![Screenshot for Configure Easy Button - List configuration steps](./media/f5-big-ip-easy-button-sap-erp/config-steps.png)
+
+4. Follow the sequence of steps required to publish your application.
+
+ ![Configuration steps flow](./media/f5-big-ip-easy-button-sap-erp/config-steps-flow.png#lightbox)
+
+### Configuration Properties
+
+These are general and service account properties. The **Configuration Properties** tab creates a BIG-IP application config and SSO object. Think of the **Azure Service Account Details** section as representing the client you registered earlier in your Azure AD tenant, as an application. These settings allow a BIG-IP's OAuth client to individually register a SAML SP directly in your tenant, along with the SSO properties you would normally configure manually. Easy Button does this for every BIG-IP service being published and enabled for SHA.
+
+Some of these settings are global, so they can be reused for publishing more applications, further reducing deployment time and effort.
+
+1. Provide a unique **Configuration Name** so admins can easily distinguish between Easy Button configurations
+
+2. Enable **Single Sign-On (SSO) & HTTP Headers**
+
+3. Enter the **Tenant Id, Client ID,** and **Client Secret** you noted when registering the Easy Button client in your tenant
+
+4. Confirm the BIG-IP can successfully connect to your tenant and select **Next**
+
+ ![Screenshot for Configuration General and Service Account properties](./media/f5-big-ip-easy-button-sap-erp/configuration-general-and-service-account-properties.png)
+
+### Service Provider
+
+The Service Provider settings define the properties for the SAML SP instance of the application protected through SHA.
+
+1. Enter **Host**. This is the public FQDN of the application being secured
+
+2. Enter **Entity ID.** This is the identifier Azure AD will use to identify the SAML SP requesting a token
+
+ ![Screenshot for Service Provider settings](./media/f5-big-ip-easy-button-sap-erp/service-provider-settings.png)
+
+    The optional **Security Settings** specify whether Azure AD should encrypt issued SAML assertions. Encrypting assertions between Azure AD and the BIG-IP APM provides additional assurance that the token content can't be intercepted and that personal or corporate data can't be compromised.
+
+3. From the **Assertion Decryption Private Key** list, select **Create New**
+
+ ![Screenshot for Configure Easy Button- Create New import](./media/f5-big-ip-easy-button-sap-erp/configure-security-create-new.png)
+
+4. Select **OK**. This opens the **Import SSL Certificate and Keys** dialog in a new tab
+
+5. Select **PKCS 12 (IIS)** to import your certificate and private key. Once provisioned, close the browser tab to return to the main tab
+
+ ![Screenshot for Configure Easy Button- Import new cert](./media/f5-big-ip-easy-button-sap-erp/import-ssl-certificates-and-keys.png)
+
+6. Check **Enable Encrypted Assertion**
+
+7. If you have enabled encryption, select your certificate from the **Assertion Decryption Private Key** list. This is the private key for the certificate that BIG-IP APM will use to decrypt Azure AD assertions
+
+8. If you have enabled encryption, select your certificate from the **Assertion Decryption Certificate** list. This is the certificate that BIG-IP will upload to Azure AD for encrypting the issued SAML assertions
+
+ ![Screenshot for Service Provider security settings](./media/f5-big-ip-easy-button-sap-erp/service-provider-security-settings.png)
+
+### Azure Active Directory
+
+This section defines all properties that you would normally use to manually configure a new BIG-IP SAML application within your Azure AD tenant.
+
+Easy Button provides a set of pre-defined application templates for Oracle PeopleSoft, Oracle E-Business Suite, Oracle JD Edwards, and SAP ERP, as well as a generic SHA template for any other apps. For this scenario, select **SAP ERP Central Component > Add** to start the Azure configurations.
+
+ ![Screenshot for Azure configuration add BIG-IP application](./media/f5-big-ip-easy-button-sap-erp/azure-config-add-app.png)
+
+#### Azure Configuration
+
+1. Enter the **Display Name** of the app that the BIG-IP creates in your Azure AD tenant, and the icon that users will see in the [MyApps portal](https://myapplications.microsoft.com/)
+
+2. Leave the **Sign On URL (optional)** blank to enable IdP initiated sign-on
+
+ ![Screenshot for Azure configuration add display info](./media/f5-big-ip-easy-button-sap-erp/azure-configuration-add-display-info.png)
+
+3. Select the refresh icon next to the **Signing Key** and **Signing Certificate** to locate the certificate you imported earlier
+
+4. Enter the certificate's password in **Signing Key Passphrase**
+
+5. Enable **Signing Option** (optional). This ensures that BIG-IP only accepts tokens and claims that are signed by Azure AD
+
+ ![Screenshot for Azure configuration - Add signing certificates info](./media/f5-big-ip-easy-button-sap-erp/azure-configuration-sign-certificates.png)
+
+6. **Users and User Groups** are dynamically queried from your Azure AD tenant and used to authorize access to the application. Add a user or group that you can use later for testing, otherwise all access will be denied
+
+ ![Screenshot for Azure configuration - Add users and groups](./media/f5-big-ip-easy-button-sap-erp/azure-configuration-add-user-groups.png)
+
+#### User Attributes & Claims
+
+When a user successfully authenticates, Azure AD issues a SAML token with a default set of claims and attributes uniquely identifying the user. The **User Attributes & Claims** tab shows the default claims to issue for the new application. It also lets you configure more claims.
+
+As our example AD infrastructure is based on a .com domain suffix used both internally and externally, we don't require any additional attributes to achieve a functional KCD SSO implementation. See the [advanced tutorial](/azure/active-directory/manage-apps/f5-big-ip-kerberos-advanced/f5-big-ip-kerberos-advanced) for cases where you have multiple domains or users log in using an alternate suffix.
+
+ ![Screenshot for user attributes and claims](./media/f5-big-ip-easy-button-sap-erp/user-attributes-claims.png)
+
+You can include additional Azure AD attributes, if necessary, but for this scenario SAP ERP only requires the default attributes.
+
+#### Additional User Attributes
+
+The **Additional User Attributes** tab can support a variety of distributed systems requiring attributes stored in other directories, for session augmentation. Attributes fetched from an LDAP source can then be injected as additional SSO headers to further control access based on roles, Partner IDs, etc.
+
+ ![Screenshot for additional user attributes](./media/f5-big-ip-easy-button-sap-erp/additional-user-attributes.png)
+
+>[!NOTE]
+>This feature has no correlation to Azure AD but is another source of attributes.
+
+#### Conditional Access Policy
+
+CA policies are enforced after Azure AD pre-authentication, to control access based on device, application, location, and risk signals.
+
+The **Available Policies** view, by default, lists all CA policies that don't include user-based actions.
+
+The **Selected Policies** view, by default, displays all policies targeting All cloud apps. These policies can't be deselected or moved to the Available Policies list because they're enforced at the tenant level.
+
+To select a policy to be applied to the application being published:
+
+1. Select the desired policy in the **Available Policies** list
+2. Select the right arrow and move it to the **Selected Policies** list
+
+Selected policies should either have an **Include** or **Exclude** option checked. If both options are checked, the selected policy is not enforced.
+
+![Screenshot for CA policies](./media/f5-big-ip-easy-button-sap-erp/conditional-access-policy.png)
+
+>[!NOTE]
+>The policy list is enumerated only once when first switching to this tab. A refresh button is available to manually force the wizard to query your tenant, but this button is displayed only when the application has been deployed.
+
+### Virtual Server Properties
+
+A virtual server is a BIG-IP data plane object represented by a virtual IP address listening for client requests to the application. Any received traffic is processed and evaluated against the APM profile associated with the virtual server, before being directed according to the policy results and settings.
+
+1. Enter **Destination Address**. This is any available IPv4/IPv6 address that the BIG-IP can use to receive client traffic. A corresponding record should also exist in DNS, enabling clients to resolve the external URL of your BIG-IP published application to this IP, instead of the application itself. Using a test PC's localhost DNS is fine for testing
+
+2. Enter **Service Port** as *443* for HTTPS
+
+3. Check **Enable Redirect Port** and then enter **Redirect Port**. It redirects incoming HTTP client traffic to HTTPS
+
+4. The Client SSL Profile enables the virtual server for HTTPS, so that client connections are encrypted over TLS. Select the **Client SSL Profile** you created as part of the prerequisites or leave the default whilst testing
+
+ ![Screenshot for Virtual server](./media/f5-big-ip-easy-button-sap-erp/virtual-server.png)
+
+### Pool Properties
+
+The **Application Pool tab** details the services behind a BIG-IP, represented as a pool containing one or more application servers.
+
+1. Under **Select a Pool**, create a new pool or select an existing one
+
+2. Choose the **Load Balancing Method** as *Round Robin*
+
+3. For **Pool Servers** select an existing server node or specify an IP and port for the backend node hosting the header-based application
+
+ ![Screenshot for Application pool](./media/f5-big-ip-easy-button-sap-erp/application-pool.png)
+
+#### Single Sign-On & HTTP Headers
+
+Enabling SSO allows users to access BIG-IP published services without having to enter credentials. The **Easy Button wizard** supports Kerberos, OAuth Bearer, and HTTP authorization headers for SSO. You will need the Kerberos delegation account created earlier to complete this step.
+
+Enable **Kerberos** and **Show Advanced Setting** to enter the following:
+
+* **Username Source:** Specifies the preferred username to cache for SSO. You can provide any session variable as the source of the user ID, but *session.saml.last.identity* tends to work best as it holds the Azure AD claim containing the logged in user ID
+
+* **User Realm Source:** Required if the user domain is different from the BIG-IP's Kerberos realm. In that case, the APM session variable would contain the logged-in user domain. For example, *session.saml.last.attr.name.domain*
+
+ ![Screenshot for SSO and HTTP headers](./media/f5-big-ip-easy-button-sap-erp/sso-headers.png)
+
+* **KDC:** IP of a domain controller (or FQDN if DNS is configured and efficient)
+
+* **UPN Support:** Enable for the APM to use the UPN for Kerberos ticketing
+
+* **SPN Pattern:** Use HTTP/%h to inform the APM to use the host header of the client request when building the SPN for which it requests a Kerberos token
+
+* **Send Authorization:** Disable for applications that prefer negotiating authentication instead of receiving the Kerberos token in the first request. For example, *Tomcat*
+
+ ![Screenshot for SSO method configuration](./media/f5-big-ip-easy-button-sap-erp/sso-method-config.png)
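+
+For reference, the SPN that the APM builds from HTTP/%h must exist on the BIG-IP's Kerberos delegation account. The following is a sketch of registering and verifying it; the account and host names are placeholders for your environment:
+
+```powershell
+# Sketch: register the HTTP SPN for the application's public FQDN on the
+# delegation account (placeholder names); -S also checks for duplicates
+setspn -S HTTP/sap-erp.contoso.com contoso\big-ip-kcd-svc
+
+# Confirm there are no duplicates for that SPN
+setspn -Q HTTP/sap-erp.contoso.com
+```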
+
+### Session Management
+
+The BIG-IP's session management settings are used to define the conditions under which user sessions are terminated or allowed to continue, limits for users and IP addresses, and corresponding user info. Consult [F5 documentation](https://support.f5.com/csp/article/K18390492) for details on these settings.
+
+What isn't covered here, however, is single log-out (SLO) functionality, which ensures all sessions between the IdP, the BIG-IP, and the user agent are terminated as users log off.
+When the Easy Button deploys a SAML application to your Azure AD tenant, it also populates the Logout Url with the APM's SLO endpoint. That way, IdP-initiated sign-outs from the Microsoft [MyApps portal](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510) also terminate the session between the BIG-IP and a client.
+
+During deployment, the SAML federation metadata for the published application is imported from your tenant, providing the APM with the SAML logout endpoint for Azure AD. This helps SP-initiated sign-outs terminate the session between a client and Azure AD.
+
+## Summary
+
+This last step provides a breakdown of your configurations. Select **Deploy** to commit all settings and verify that the application now exists in your tenant's list of enterprise applications.
+
+## Next steps
+
+From a browser, **connect** to the application's external URL or select the **application's icon** in the [Microsoft MyApps portal](https://myapps.microsoft.com/). After authenticating to Azure AD, you'll be redirected to the BIG-IP virtual server for the application and automatically signed in through SSO.
+
+For increased security, organizations using this pattern could also consider blocking all direct access to the application, thereby forcing a strict path through the BIG-IP.
+
+## Advanced deployment
+
+There may be cases where the Guided Configuration templates lack the flexibility to achieve more specific requirements. For those scenarios, see [Advanced Configuration for Kerberos-based SSO](/azure/active-directory/manage-apps/f5-big-ip-kerberos-advanced).
+
+Alternatively, the BIG-IP gives you the option to disable **Guided Configuration's strict management mode**. This allows you to manually tweak your configurations, even though the bulk of your configurations is automated through the wizard-based templates.
+
+You can navigate to **Access > Guided Configuration** and select the **small padlock icon** on the far right of the row for your application's configs.
+
+ ![Screenshot for Configure Easy Button - Strict Management](./media/f5-big-ip-easy-button-sap-erp/strict-mode-padlock.png)
+
+At that point, changes via the wizard UI are no longer possible, but all BIG-IP objects associated with the published instance of the application will be unlocked for direct management.
+
+>[!NOTE]
+>Re-enabling strict mode and deploying a configuration will overwrite any settings performed outside of the Guided Configuration UI; therefore, we recommend the advanced configuration method for production services.
+
+## Troubleshooting
+
+Failure to access the SHA-protected application can result from any number of factors, including a misconfiguration.
+
+* Kerberos is time sensitive, so it requires that servers and clients are set to the correct time and, where possible, synchronized to a reliable time source
+
+* Ensure the hostnames of the domain controller and web application are resolvable in DNS
+
+* Ensure there are no duplicate SPNs in your AD environment by executing the following query at the command line on a domain PC: `setspn -q HTTP/my_target_SPN`
+
+You can refer to our [App Proxy guidance](../app-proxy/application-proxy-back-end-kerberos-constrained-delegation-how-to.md) to validate an IIS application is configured appropriately for KCD. F5's article on [how the APM handles Kerberos SSO](https://techdocs.f5.com/bigip-15-1-0/big-ip-access-policy-manager-single-sign-on-concepts-configuration/kerberos-single-sign-on-method.html) is also a valuable resource.
+
+### Log analysis
+
+BIG-IP logging can help quickly isolate all sorts of issues with connectivity, SSO, policy violations, or misconfigured variable mappings. Start troubleshooting by increasing the log verbosity level.
+
+1. Navigate to **Access Policy > Overview > Event Logs > Settings**
+
+2. Select the row for your published application, then **Edit > Access System Logs**
+
+3. Select **Debug** from the SSO list, and then select **OK**
+
+Reproduce your issue, then inspect the logs, but remember to switch this back when finished as verbose mode generates lots of data.
+
+If you see a BIG-IP branded error immediately after successful Azure AD pre-authentication, it's possible the issue relates to SSO from Azure AD to the BIG-IP.
+
+1. Navigate to **Access > Overview > Access reports**
+
+2. Run the report for the last hour to see whether the logs provide any clues. The **View session variables** link for your session will also help you understand whether the APM is receiving the expected claims from Azure AD.
+
+If you don't see a BIG-IP error page, then the issue is probably more related to the backend request or SSO from the BIG-IP to the application.
+
+1. Navigate to **Access Policy > Overview > Active Sessions**
+
+2. Select the link for your active session. The **View Variables** link in this location may also help determine the root cause of KCD issues, particularly if the BIG-IP APM fails to obtain the right user and domain identifiers from session variables
+
+See [BIG-IP APM variable assign examples](https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107) and [F5 BIG-IP session variables reference](https://techdocs.f5.com/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html) for more info.
app-service App Service Web Tutorial Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-tutorial-custom-domain.md
In this tutorial, you learn how to:
## 1. Prepare your environment * [Create an App Service app](./index.yml), or use an app that you created for another tutorial. The web app's [App Service plan](overview-hosting-plans.md) must be a paid tier and not **Free (F1)**. See [Scale up an app](manage-scale-up.md#scale-up-your-pricing-tier) to update the tier.
-* Make sure you can edit DNS records for your custom domain. To edit DNS records, you need access to the DNS registry for your domain provider, such as GoDaddy. For example, to add DNS entries for `contoso.com` and `www.contoso.com`, you must be able to configure the DNS settings for the `contoso.com` root domain.
+* Make sure you can edit the DNS records for your custom domain. To edit DNS records, you need access to the DNS registry for your domain provider, such as GoDaddy. For example, to add DNS entries for `contoso.com` and `www.contoso.com`, you must be able to configure the DNS settings for the `contoso.com` root domain. Your custom domains must be in a public DNS zone; a private DNS zone is supported only with an Internal Load Balancer (ILB) App Service Environment (ASE).
* If you don't have a custom domain yet, you can [purchase an App Service domain](manage-custom-dns-buy-domain.md). ## 2. Get a domain verification ID
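For example, if your domain is hosted in Azure DNS, the sketch below creates the two records typically needed for `www.contoso.com`: a CNAME pointing at the app's default hostname and a TXT record carrying the domain verification ID. It assumes the Az.Dns module; the resource names, hostnames, and verification ID are placeholders:

```powershell
# Sketch: CNAME mapping www.contoso.com to the app's default hostname
New-AzDnsRecordSet -ResourceGroupName 'rg-dns' -ZoneName 'contoso.com' `
    -Name 'www' -RecordType CNAME -Ttl 3600 `
    -DnsRecords (New-AzDnsRecordConfig -Cname 'contoso-app.azurewebsites.net')

# Sketch: TXT record carrying the domain verification ID from the portal
New-AzDnsRecordSet -ResourceGroupName 'rg-dns' -ZoneName 'contoso.com' `
    -Name 'asuid.www' -RecordType TXT -Ttl 3600 `
    -DnsRecords (New-AzDnsRecordConfig -Value '<verification-id>')
```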
app-service Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/networking/private-endpoint.md
# Using Private Endpoints for Azure Web App > [!IMPORTANT]
-> Private Endpoint is available for Windows and Linux Web App, containerized or not, hosted on these App Service Plans : **PremiumV2**, **PremiumV3**, **IsolatedV2**, **Functions Premium** (sometimes referred to as the Elastic Premium plan).
+> Private Endpoint is available for Windows and Linux web apps, containerized or not, hosted on these App Service plans: **Basic**, **Standard**, **PremiumV2**, **PremiumV3**, **IsolatedV2**, **Functions Premium** (sometimes referred to as the Elastic Premium plan).
You can use Private Endpoint for your Azure Web App to allow clients located in your private network to securely access the app over Private Link. The Private Endpoint uses an IP address from your Azure VNet address space. Network traffic between a client on your private network and the Web App traverses over the VNet and a Private Link on the Microsoft backbone network, eliminating exposure from the public Internet.
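As an illustration, the following is a minimal sketch of adding a private endpoint to an existing web app with Az PowerShell (Az.Network and Az.Websites modules). The resource names are placeholders, and the target subnet must allow private endpoints:

```powershell
# Sketch: connect a web app to a VNet subnet through a private endpoint
$webApp = Get-AzWebApp -ResourceGroupName 'rg-demo' -Name 'contoso-app'
$vnet   = Get-AzVirtualNetwork -ResourceGroupName 'rg-demo' -Name 'vnet-demo'
$subnet = $vnet.Subnets | Where-Object Name -eq 'subnet-endpoints'

# 'sites' is the Private Link group ID for App Service apps
$connection = New-AzPrivateLinkServiceConnection `
    -Name 'plsc-contoso-app' `
    -PrivateLinkServiceId $webApp.Id `
    -GroupId 'sites'

New-AzPrivateEndpoint `
    -ResourceGroupName 'rg-demo' `
    -Name 'pe-contoso-app' `
    -Location $vnet.Location `
    -Subnet $subnet `
    -PrivateLinkServiceConnection $connection
```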
applied-ai-services Try V3 Csharp Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-csharp-sdk.md
Previously updated : 03/16/2022 Last updated : 03/31/2022 recommendations: false
To interact with the Form Recognizer service, you'll need to create an instance
* [**Prebuilt model**](#prebuilt-model)
-1. [Run your program](#run-your-application).
- > [!IMPORTANT] > > * Remember to remove the key from your code when you're done, and never post it publicly. For production, use secure methods to store and access your credentials. For more information, *see* Cognitive Services [security](../../../cognitive-services/cognitive-services-security.md).
-## Run your application
-
-Once you've added a code sample to your application, choose the green **Start** button next to formRecognizer_quickstart to build and run your program, or press **F5**.
-
- :::image type="content" source="../media/quickstarts/run-visual-studio.png" alt-text="Screenshot: run your Visual Studio program.":::
<!-- ### [.NET Command-line interface (CLI)](#tab/cli)
for (int i = 0; i < result.Tables.Count; i++)
```
+**Run your application**
+
+Once you've added a code sample to your application, choose the green **Start** button next to formRecognizer_quickstart to build and run your program, or press **F5**.
+
+ :::image type="content" source="../media/quickstarts/run-visual-studio.png" alt-text="Screenshot: run your Visual Studio program.":::
+ ### General document model output Here's a snippet of the expected output:
for (int i = 0; i < result.Tables.Count; i++)
```
+**Run your application**
+
+Once you've added a code sample to your application, choose the green **Start** button next to formRecognizer_quickstart to build and run your program, or press **F5**.
+
+ :::image type="content" source="../media/quickstarts/run-visual-studio.png" alt-text="Screenshot: run your Visual Studio program.":::
+ ### Layout model output Here's a snippet of the expected output:
for (int i = 0; i < result.Documents.Count; i++)
```
+**Run your application**
+
+Once you've added a code sample to your application, choose the green **Start** button next to formRecognizer_quickstart to build and run your program, or press **F5**.
+
+ :::image type="content" source="../media/quickstarts/run-visual-studio.png" alt-text="Screenshot: run your Visual Studio program.":::
+ ### Prebuilt model output Here's a snippet of the expected output:
applied-ai-services Try V3 Java Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-java-sdk.md
Previously updated : 03/16/2022 Last updated : 03/31/2022 recommendations: false
To interact with the Form Recognizer service, you'll need to create an instance
* [**Prebuilt Invoice**](#prebuilt-model)
-1. [Build and run your program](#build-and-run-the-application)
- > [!IMPORTANT] > > Remember to remove the key from your code when you're done, and never post it publicly. For production, use secure methods to store and access your credentials. For more information, see* the Cognitive Services [security](../../../cognitive-services/cognitive-services-security.md).
-## Build and run the application
-
-Once you've added a code sample to your application, navigate back to your main project directoryΓÇö**form-recognizer-app**.
-
-1. Build your application with the `build` command:
-
- ```console
- gradle build
- ```
-
-1. Run your application with the `run` command:
-
- ```console
- gradle run
- ```
- ## General document model Extract text, tables, structure, key-value pairs, and named entities from documents.
Extract text, tables, structure, key-value pairs, and named entities from docume
} } ```
+<!-- markdownlint-disable MD036 -->
+
+**Build and run the application**
+
+Once you've added a code sample to your application, navigate back to your main project directory, **form-recognizer-app**.
+
+1. Build your application with the `build` command:
+
+ ```console
+ gradle build
+ ```
+
+1. Run your application with the `run` command:
+
+ ```console
+ gradle run
+ ```
### General document model output
Extract text, selection marks, text styles, table structures, and bounding regio
} ```
+**Build and run the application**
+
+Once you've added a code sample to your application, navigate back to your main project directory, **form-recognizer-app**.
+
+1. Build your application with the `build` command:
+
+ ```console
+ gradle build
+ ```
+
+1. Run your application with the `run` command:
+
+ ```console
+ gradle run
+ ```
+ ### Layout model output Here's a snippet of the expected output:
Analyze and extract common fields from specific document types using a prebuilt
```
+**Build and run the application**
+
+Once you've added a code sample to your application, navigate back to your main project directory, **form-recognizer-app**.
+
+1. Build your application with the `build` command:
+
+ ```console
+ gradle build
+ ```
+
+1. Run your application with the `run` command:
+
+ ```console
+ gradle run
+ ```
+ ### Prebuilt model output Here's a snippet of the expected output:
applied-ai-services Try V3 Javascript Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-javascript-sdk.md
To interact with the Form Recognizer service, you'll need to create an instance
* [**Prebuilt Invoice**](#prebuilt-model)
-1. [Run your program](#run-your-application)
- > [!IMPORTANT] > > Remember to remove the key from your code when you're done, and never post it publicly. For production, use secure methods to store and access your credentials. For more information, see* the Cognitive Services [security](../../../cognitive-services/cognitive-services-security.md).
-## Run your application
-
-Once you've added a code sample to your application, run your program:
-
-1. Navigate to the folder where you have your form recognizer application (form-recognizer-app).
-
-1. Type the following command in your terminal:
-
- ```console
- node index.js
- ```
+<!-- markdownlint-disable MD036 -->
## General document model
Extract text, tables, structure, key-value pairs, and named entities from docume
async function main() { // create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
- const client = new DocumentAnalysisClient(endpoint, new AzureKeyCredential(apiKey));
+ const client = new DocumentAnalysisClient(endpoint, new AzureKeyCredential(key));
const poller = await client.beginAnalyzeDocuments("prebuilt-document", formUrl);
Extract text, tables, structure, key-value pairs, and named entities from docume
}); ```
+**Run your application**
+
+Once you've added a code sample to your application, run your program:
+
+1. Navigate to the folder where you have your form recognizer application (form-recognizer-app).
+
+1. Type the following command in your terminal:
+
+ ```console
+ node index.js
+ ```
+ ### General document model output Here's a snippet of the expected output:
Extract text, selection marks, text styles, table structures, and bounding regio
const formUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf" async function main() {
- const client = new DocumentAnalysisClient(endpoint, new AzureKeyCredential(apiKey));
+ const client = new DocumentAnalysisClient(endpoint, new AzureKeyCredential(key));
const poller = await client.beginAnalyzeDocuments("prebuilt-layout", formUrl);
Extract text, selection marks, text styles, table structures, and bounding regio
```
+**Run your application**
+
+Once you've added a code sample to your application, run your program:
+
+1. Navigate to the folder where you have your form recognizer application (form-recognizer-app).
+
+1. Type the following command in your terminal:
+
+ ```console
+ node index.js
+ ```
+ ### Layout model output Here's a snippet of the expected output:
In this example, we'll analyze an invoice using the **prebuilt-invoice** model.
> * To analyze a given file at a URI, you'll use the `beginAnalyzeDocuments` method and pass `PrebuiltModels.Invoice` as the model Id. The returned value is a `result` object containing data about the submitted document. > * For simplicity, all the key-value pairs that the service returns are not shown here. To see the list of all supported fields and corresponding types, see our [Invoice](../concept-invoice.md#field-extraction) concept page.
-##### Add the following code to your prebuilt invoice application below the `apiKey` variable
- ```javascript
- // Using the PrebuiltModels object, rather than the raw model ID, adds strong typing to the model's output.
+ // using the PrebuiltModels object, rather than the raw model ID, adds strong typing to the model's output
const { PrebuiltModels } = require("@azure/ai-form-recognizer"); // set `<your-endpoint>` and `<your-key>` variables with the values from the Azure portal
In this example, we'll analyze an invoice using the **prebuilt-invoice** model.
async function main() {
- const client = new DocumentAnalysisClient(endpoint, new AzureKeyCredential(apiKey));
+ const client = new DocumentAnalysisClient(endpoint, new AzureKeyCredential(key));
const poller = await client.beginAnalyzeDocuments(PrebuiltModels.Invoice, invoiceUrl);
In this example, we'll analyze an invoice using the **prebuilt-invoice** model.
}); ```
+**Run your application**
+
+Once you've added a code sample to your application, run your program:
+
+1. Navigate to the folder where you have your form recognizer application (form-recognizer-app).
+
+1. Type the following command in your terminal:
+
+ ```console
+ node index.js
+ ```
+ ### Prebuilt model output Here's a snippet of the expected output:
applied-ai-services Try V3 Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-python-sdk.md
Previously updated : 03/15/2022 Last updated : 03/31/2022 recommendations: false
To interact with the Form Recognizer service, you'll need to create an instance
* [**Prebuilt Invoice**](#prebuilt-model)
-1. [Run your program](#run-the-application)
- > [!IMPORTANT] > > Remember to remove the key from your code when you're done, and never post it publicly. For production, use secure methods to store and access your credentials. For more information, *see* Cognitive Services [security](../../../cognitive-services/cognitive-services-security.md).
-## Run the application
-
-Once you've added a code sample to your application, build and run your program:
-
-1. Navigate to the folder where you have your **form_recognizer_quickstart.py** file.
-
-1. Type the following command in your terminal:
-
- ```console
- python form_recognizer_quickstart.py
- ```
+<!-- markdownlint-disable MD036 -->
## General document model
if __name__ == "__main__":
analyze_general_documents() ```
+**Run the application**
+
+Once you've added a code sample to your application, build and run your program:
+
+1. Navigate to the folder where you have your **form_recognizer_quickstart.py** file.
+1. Type the following command in your terminal:
+
+ ```console
+ python form_recognizer_quickstart.py
+ ```
+ ### General document model output Here's a snippet of the expected output:
if __name__ == "__main__":
```
+**Run the application**
+
+Once you've added a code sample to your application, build and run your program:
+
+1. Navigate to the folder where you have your **form_recognizer_quickstart.py** file.
+1. Type the following command in your terminal:
+
+ ```console
+ python form_recognizer_quickstart.py
+ ```
+ ### Layout model output Here's a snippet of the expected output:
if __name__ == "__main__":
analyze_invoice() ```
+**Run the application**
+
+Once you've added a code sample to your application, build and run your program:
+
+1. Navigate to the folder where you have your **form_recognizer_quickstart.py** file.
+1. Type the following command in your terminal:
+
+ ```console
+ python form_recognizer_quickstart.py
+ ```
+ ### Prebuilt model output Here's a snippet of the expected output:
automation Automation Webhooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-webhooks.md
Consider the following strategies:
## Create a webhook
-A webhook requires a published runbook. This walk through uses a modified version of the runbook created from [Create an Azure Automation runbook](./learn/powershell-runbook-managed-identity.md). To follow along, edit your PowerShell runbook with the following code:
-
-```powershell
-param
-(
- [Parameter(Mandatory=$false)]
- [object] $WebhookData
-)
-
-write-output "start"
-write-output ("object type: {0}" -f $WebhookData.gettype())
-write-output $WebhookData
-#write-warning (Test-Json -Json $WebhookData)
-$Payload = $WebhookData | ConvertFrom-Json
-write-output "`n`n"
-write-output $Payload.WebhookName
-write-output $Payload.RequestBody
-write-output $Payload.RequestHeader
-write-output "end"
-
-if ($Payload.RequestBody) {
- $names = (ConvertFrom-Json -InputObject $Payload.RequestBody)
-
- foreach ($x in $names)
- {
- $name = $x.Name
- Write-Output "Hello $name"
- }
-}
-else {
- Write-Output "Hello World!"
-}
-```
-
-Then save and publish the revised runbook. The examples below show to create a webhook using the Azure portal, PowerShell, and REST.
-
-### From the portal
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-1. In the Azure portal, navigate to your Automation account.
-
-1. Under **Process Automation**, select **Runbooks** to open the **Runbooks** page.
-
-1. Select your runbook from the list to open the Runbook **Overview** page.
-
-1. Select **Add webhook** to open the **Add Webhook** page.
-
- :::image type="content" source="media/automation-webhooks/add-webhook-icon.png" alt-text="Runbook overview page with Add webhook highlighted.":::
+> [!NOTE]
+> When you use a webhook with a PowerShell 7 runbook, the webhook input parameter is auto-converted to invalid JSON. For more information, see [Known issues - 7.1 (preview)](/azure/automation/automation-runbook-types#known-issues71-preview). We recommend that you use webhooks with PowerShell 5 runbooks.
-1. On the **Add Webhook** page, select **Create new webhook**.
+1. Create a PowerShell runbook with the following code:
- :::image type="content" source="media/automation-webhooks/add-webhook-page-create.png" alt-text="Add webhook page with create highlighted.":::
+ ```powershell
+ param
+ (
+ [Parameter(Mandatory=$false)]
+ [object] $WebhookData
+ )
+
+ write-output "start"
+ write-output ("object type: {0}" -f $WebhookData.gettype())
+ write-output $WebhookData
+ #write-warning (Test-Json -Json $WebhookData)
+ $Payload = $WebhookData | ConvertFrom-Json
+ write-output "`n`n"
+ write-output $Payload.WebhookName
+ write-output $Payload.RequestBody
+ write-output $Payload.RequestHeader
+ write-output "end"
+
+ if ($Payload.RequestBody) {
+ $names = (ConvertFrom-Json -InputObject $Payload.RequestBody)
+
+ foreach ($x in $names)
+ {
+ $name = $x.Name
+ Write-Output "Hello $name"
+ }
+ }
+ else {
+ Write-Output "Hello World!"
+ }
+ ```
+1. Create a webhook using the Azure portal, PowerShell, or the REST API. A webhook requires a published runbook. This walkthrough uses a modified version of the runbook created from [Create an Azure Automation runbook](./learn/powershell-runbook-managed-identity.md).
-1. Enter in the **Name** for the webhook. The expiration date for the field **Expires** defaults to one year from the current date.
+ # [Azure portal](#tab/portal)
-1. Click the copy icon or press <kbd>Ctrl + C</kbd> copy the URL of the webhook. Then save the URL to a secure location.
+ 1. Sign in to the [Azure portal](https://portal.azure.com/).
- :::image type="content" source="media/automation-webhooks/create-new-webhook.png" alt-text="Creaye webhook page with URL highlighted.":::
+ 1. In the Azure portal, navigate to your Automation account.
- > [!IMPORTANT]
- > Once you create the webhook, you cannot retrieve the URL again. Make sure you copy and record it as above.
+ 1. Under **Process Automation**, select **Runbooks** to open the **Runbooks** page.
-1. Select **OK** to return to the **Add Webhook** page.
+ 1. Select your runbook from the list to open the Runbook **Overview** page.
-1. From the **Add Webhook** page, select **Configure parameters and run settings** to open the **Parameters** page.
+ 1. Select **Add webhook** to open the **Add Webhook** page.
- :::image type="content" source="media/automation-webhooks/add-webhook-page-parameters.png" alt-text="Add webhook page with parameters highlighted.":::
+ :::image type="content" source="media/automation-webhooks/add-webhook-icon.png" alt-text="Runbook overview page with Add webhook highlighted.":::
-1. Review the **Parameters** page. For the example runbook used in this article, no changes are needed. Select **OK** to return to the **Add Webhook** page.
+ 1. On the **Add Webhook** page, select **Create new webhook**.
-1. From the **Add Webhook** page, select **Create**. The webhook is created and you're returned to the Runbook **Overview** page.
+ :::image type="content" source="media/automation-webhooks/add-webhook-page-create.png" alt-text="Add webhook page with create highlighted.":::
-### Using PowerShell
+    1. Enter the **Name** for the webhook. The expiration date in the **Expires** field defaults to one year from the current date.
-1. Verify you have the latest version of the PowerShell [Az Module](/powershell/azure/new-azureps-module-az) installed.
+    1. Click the copy icon or press <kbd>Ctrl + C</kbd> to copy the URL of the webhook. Then save the URL to a secure location.
-1. Sign in to Azure interactively using the [Connect-AzAccount](/powershell/module/Az.Accounts/Connect-AzAccount) cmdlet and follow the instructions.
+       :::image type="content" source="media/automation-webhooks/create-new-webhook.png" alt-text="Create webhook page with URL highlighted.":::
- ```powershell
- # Sign in to your Azure subscription
- $sub = Get-AzSubscription -ErrorAction SilentlyContinue
- if(-not($sub))
- {
- Connect-AzAccount
- }
- ```
+ > [!IMPORTANT]
+ > Once you create the webhook, you cannot retrieve the URL again. Make sure you copy and record it as above.
-1. Use the [New-AzAutomationWebhook](/powershell/module/az.automation/new-azautomationwebhook) cmdlet to create a webhook for an Automation runbook. Provide an appropriate value for the variables and then execute the script.
+ 1. Select **OK** to return to the **Add Webhook** page.
- ```powershell
- # Initialize variables with your relevant values
- $resourceGroup = "resourceGroupName"
- $automationAccount = "automationAccountName"
- $runbook = "runbookName"
- $psWebhook = "webhookName"
-
- # Create webhook
- $newWebhook = New-AzAutomationWebhook `
- -ResourceGroup $resourceGroup `
- -AutomationAccountName $automationAccount `
- -Name $psWebhook `
- -RunbookName $runbook `
- -IsEnabled $True `
- -ExpiryTime "12/31/2022" `
- -Force
-
- # Store URL in variable; reveal variable
- $uri = $newWebhook.WebhookURI
- $uri
- ```
+ 1. From the **Add Webhook** page, select **Configure parameters and run settings** to open the **Parameters** page.
- The output will be a URL that looks similar to: `https://ad7f1818-7ea9-4567-b43a.webhook.wus.azure-automation.net/webhooks?token=uTi69VZ4RCa42zfKHCeHmJa2W9fd`
+ :::image type="content" source="media/automation-webhooks/add-webhook-page-parameters.png" alt-text="Add webhook page with parameters highlighted.":::
-1. You can also verify the webhook with the PowerShell cmdlet [Get-AzAutomationWebhook](/powershell/module/az.automation/get-azautomationwebhook).
+ 1. Review the **Parameters** page. For the example runbook used in this article, no changes are needed. Select **OK** to return to the **Add Webhook** page.
- ```powershell
- Get-AzAutomationWebhook `
- -ResourceGroup $resourceGroup `
- -AutomationAccountName $automationAccount `
- -Name $psWebhook
- ```
+ 1. From the **Add Webhook** page, select **Create**. The webhook is created and you're returned to the Runbook **Overview** page.
-### Using REST
+ # [PowerShell](#tab/powershell)
-The PUT command is documented at [Webhook - Create Or Update](/rest/api/automation/webhook/create-or-update). This example uses the PowerShell cmdlet [Invoke-RestMethod](/powershell/module/microsoft.powershell.utility/invoke-restmethod) to send the PUT request.
+ 1. Verify you have the latest version of the PowerShell [Az Module](/powershell/azure/new-azureps-module-az) installed.
-1. Create a file called `webhook.json` and then paste the following code:
+ 1. Sign in to Azure interactively using the [Connect-AzAccount](/powershell/module/Az.Accounts/Connect-AzAccount) cmdlet and follow the instructions.
- ```json
- {
- "name": "RestWebhook",
- "properties": {
- "isEnabled": true,
- "expiryTime": "2022-03-29T22:18:13.7002872Z",
- "runbook": {
- "name": "runbookName"
+ ```powershell
+ # Sign in to your Azure subscription
+ $sub = Get-AzSubscription -ErrorAction SilentlyContinue
+ if(-not($sub))
+ {
+ Connect-AzAccount
}
- }
- }
- ```
-
- Before running, modify the value for the **runbook:name** property with the actual name of your runbook. Review [Webhook properties](#webhook-properties) for more information about these properties.
-
-1. Verify you have the latest version of the PowerShell [Az Module](/powershell/azure/new-azureps-module-az) installed.
-
-1. Sign in to Azure interactively using the [Connect-AzAccount](/powershell/module/Az.Accounts/Connect-AzAccount) cmdlet and follow the instructions.
+ ```
+
+ 1. Use the [New-AzAutomationWebhook](/powershell/module/az.automation/new-azautomationwebhook) cmdlet to create a webhook for an Automation runbook. Provide an appropriate value for the variables and then execute the script.
+
+ ```powershell
+ # Initialize variables with your relevant values
+ $resourceGroup = "resourceGroupName"
+ $automationAccount = "automationAccountName"
+ $runbook = "runbookName"
+ $psWebhook = "webhookName"
+
+ # Create webhook
+ $newWebhook = New-AzAutomationWebhook `
+ -ResourceGroup $resourceGroup `
+ -AutomationAccountName $automationAccount `
+ -Name $psWebhook `
+ -RunbookName $runbook `
+ -IsEnabled $True `
+ -ExpiryTime "12/31/2022" `
+ -Force
+
+ # Store URL in variable; reveal variable
+ $uri = $newWebhook.WebhookURI
+ $uri
+ ```
+
+ The output will be a URL that looks similar to: `https://ad7f1818-7ea9-4567-b43a.webhook.wus.azure-automation.net/webhooks?token=uTi69VZ4RCa42zfKHCeHmJa2W9fd`
+
+ 1. You can also verify the webhook with the PowerShell cmdlet [Get-AzAutomationWebhook](/powershell/module/az.automation/get-azautomationwebhook).
+
+ ```powershell
+ Get-AzAutomationWebhook `
+ -ResourceGroup $resourceGroup `
+ -AutomationAccountName $automationAccount `
+ -Name $psWebhook
+ ```
+
+ # [REST API](#tab/rest)
+
+ The PUT command is documented at [Webhook - Create Or Update](/rest/api/automation/webhook/create-or-update). This example uses the PowerShell cmdlet [Invoke-RestMethod](/powershell/module/microsoft.powershell.utility/invoke-restmethod) to send the PUT request.
+
+ 1. Create a file called `webhook.json` and then paste the following code:
+
+ ```json
+ {
+ "name": "RestWebhook",
+ "properties": {
+ "isEnabled": true,
+ "expiryTime": "2022-03-29T22:18:13.7002872Z",
+ "runbook": {
+ "name": "runbookName"
+ }
+ }
+ }
+ ```
- ```powershell
- # Sign in to your Azure subscription
- $sub = Get-AzSubscription -ErrorAction SilentlyContinue
- if(-not($sub))
- {
- Connect-AzAccount
- }
- ```
+ Before running, modify the value for the **runbook:name** property with the actual name of your runbook. Review [Webhook properties](#webhook-properties) for more information about these properties.
-1. Provide an appropriate value for the variables and then execute the script.
+ 1. Verify you have the latest version of the PowerShell [Az Module](/powershell/azure/new-azureps-module-az) installed.
- ```powershell
- # Initialize variables
- $subscription = "subscriptionID"
- $resourceGroup = "resourceGroup"
- $automationAccount = "automationAccount"
- $runbook = "runbookName"
- $restWebhook = "webhookName"
- $file = "path\webhook.json"
+ 1. Sign in to Azure interactively using the [Connect-AzAccount](/powershell/module/Az.Accounts/Connect-AzAccount) cmdlet and follow the instructions.
- # consume file
- $body = Get-Content $file
-
- # Craft Uri
- $restURI = "https://management.azure.com/subscriptions/$subscription/resourceGroups/$resourceGroup/providers/Microsoft.Automation/automationAccounts/$automationAccount/webhooks/$restWebhook`?api-version=2015-10-31"
- ```
-
-1. Run the following script to obtain an access token. If your access token expired, you need to rerun the script.
-
- ```powershell
- # Obtain access token
- $azContext = Get-AzContext
- $azProfile = [Microsoft.Azure.Commands.Common.Authentication.Abstractions.AzureRmProfileProvider]::Instance.Profile
- $profileClient = New-Object -TypeName Microsoft.Azure.Commands.ResourceManager.Common.RMProfileClient -ArgumentList ($azProfile)
- $token = $profileClient.AcquireAccessToken($azContext.Subscription.TenantId)
- $authHeader = @{
- 'Content-Type'='application/json'
- 'Authorization'='Bearer ' + $token.AccessToken
- }
- ```
+ ```powershell
+ # Sign in to your Azure subscription
+ $sub = Get-AzSubscription -ErrorAction SilentlyContinue
+ if(-not($sub))
+ {
+ Connect-AzAccount
+ }
+ ```
+
+ 1. Provide an appropriate value for the variables and then execute the script.
+
+ ```powershell
+ # Initialize variables
+ $subscription = "subscriptionID"
+ $resourceGroup = "resourceGroup"
+ $automationAccount = "automationAccount"
+ $runbook = "runbookName"
+ $restWebhook = "webhookName"
+ $file = "path\webhook.json"
+
+ # consume file
+ $body = Get-Content $file
+
+ # Craft Uri
+ $restURI = "https://management.azure.com/subscriptions/$subscription/resourceGroups/$resourceGroup/providers/Microsoft.Automation/automationAccounts/$automationAccount/webhooks/$restWebhook`?api-version=2015-10-31"
+ ```
+
+ 1. Run the following script to obtain an access token. If your access token expired, you need to rerun the script.
+
+ ```powershell
+ # Obtain access token
+ $azContext = Get-AzContext
+ $azProfile = [Microsoft.Azure.Commands.Common.Authentication.Abstractions.AzureRmProfileProvider]::Instance.Profile
+ $profileClient = New-Object -TypeName Microsoft.Azure.Commands.ResourceManager.Common.RMProfileClient -ArgumentList ($azProfile)
+ $token = $profileClient.AcquireAccessToken($azContext.Subscription.TenantId)
+ $authHeader = @{
+ 'Content-Type'='application/json'
+ 'Authorization'='Bearer ' + $token.AccessToken
+ }
+ ```
-1. Run the following script to create the webhook using the REST API.
+ 1. Run the following script to create the webhook using the REST API.
- ```powershell
- # Invoke the REST API
- # Store URL in variable; reveal variable
- $response = Invoke-RestMethod -Uri $restURI -Method Put -Headers $authHeader -Body $body
- $webhookURI = $response.properties.uri
- $webhookURI
- ```
+ ```powershell
+ # Invoke the REST API
+ # Store URL in variable; reveal variable
+ $response = Invoke-RestMethod -Uri $restURI -Method Put -Headers $authHeader -Body $body
+ $webhookURI = $response.properties.uri
+ $webhookURI
+ ```
- The output is a URL that looks similar to: `https://ad7f1818-7ea9-4567-b43a.webhook.wus.azure-automation.net/webhooks?token=uTi69VZ4RCa42zfKHCeHmJa2W9fd`
+ The output is a URL that looks similar to: `https://ad7f1818-7ea9-4567-b43a.webhook.wus.azure-automation.net/webhooks?token=uTi69VZ4RCa42zfKHCeHmJa2W9fd`
-1. You can also use [Webhook - Get](/rest/api/automation/webhook/get) to retrieve the webhook identified by its name. You can run the following PowerShell commands:
+ 1. You can also use [Webhook - Get](/rest/api/automation/webhook/get) to retrieve the webhook identified by its name. You can run the following PowerShell commands:
- ```powershell
- $response = Invoke-RestMethod -Uri $restURI -Method GET -Headers $authHeader
- $response | ConvertTo-Json
- ```
+ ```powershell
+ $response = Invoke-RestMethod -Uri $restURI -Method GET -Headers $authHeader
+ $response | ConvertTo-Json
+ ```
+ ## Use a webhook
This example uses the PowerShell cmdlet [Invoke-WebRequest](/powershell/module/m
-ResourceGroupName $resourceGroup ` -Stream Output ```-
- The output should look similar to the following:
+ When you trigger the runbook through the webhook created in the previous steps, it creates a job, and the output should look similar to the following:
:::image type="content" source="media/automation-webhooks/webhook-job-output.png" alt-text="Output from webhook job.":::
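For quick reference, here's a minimal sketch of triggering the webhook from PowerShell. The URL and request body are illustrative; the body shape depends on what your runbook's `WebhookData` parameter expects:

```powershell
# Trigger the runbook through the webhook (URL and body are illustrative)
$uri  = "<webhook-URL-recorded-earlier>"
$body = ConvertTo-Json -InputObject @( @{ Name = "vm01"; ResourceGroup = "myResourceGroup" } )
$response = Invoke-WebRequest -Method Post -Uri $uri -Body $body -UseBasicParsing
$response.Content   # contains the ID of the runbook job that was started
```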
automation Automation Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/automation-account.md
Title: Troubleshoot Azure Automation account issues description: This article tells how to troubleshoot and resolve issues with an Azure account. Previously updated : 03/24/2020 Last updated : 03/28/2022
This article discusses solutions to problems that you might encounter when you use an Azure Automation account. For general information about Automation accounts, see [Azure Automation account authentication overview](../automation-security-overview.md).
+## Scenario: Unable to create an Automation account when GUID is used as account name
+
+### Issue
+
+When you create an Automation account with a GUID as an account name, you encounter an error.
+
+### Cause
+
+An *accountid* is a unique identifier across all Automation accounts in a region. When the account name is a GUID, both the Automation *accountid* and *name* are kept as that GUID. If you create a new Automation account and specify a GUID as its account name, and that GUID conflicts with any existing Automation *accountid*, you encounter an error.
+
+For example, if you try to create an Automation account named *8a2f48c1-9e99-472c-be1b-dcc11429c9ff* and that value already exists as an Automation *accountid* in that region, the account creation fails and you see the following error:
+
+ ```error
+ {
+   "code": "BadRequest",
+   "message": "Automation account already exists with this account id. AccountId: 8a2f48c1-9e99-472c-be1b-dcc11429c9ff."
+ }
+ ```
+ ### Resolution
+
+Ensure that you create an Automation account with a new name.
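As a sketch, with illustrative names, the replacement account can be created with [New-AzAutomationAccount](/powershell/module/az.automation/new-azautomationaccount):

```powershell
# Create the account with a descriptive, non-GUID name (values are illustrative)
New-AzAutomationAccount `
    -ResourceGroupName "resourceGroupName" `
    -Name "contoso-prod-automation" `
    -Location "EastUS2"
```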
+ ## <a name="rp-register"></a>Scenario: Unable to register Automation Resource Provider for subscriptions ### Issue
availability-zones Az Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/az-overview.md
description: Learn about regions and availability zones and how they work to hel
Previously updated : 1/17/2022 Last updated : 03/30/2022
Some organizations require high availability of availability zones and protectio
## Azure regions with availability zones
-Azure provides the most extensive global footprint of any cloud provider and is rapidly opening new regions and availability zones.
+Azure provides the most extensive global footprint of any cloud provider and is rapidly opening new regions and availability zones. The following regions currently support availability zones.
| Americas | Europe | Africa | Asia Pacific | |--|-||-|
Azure provides the most extensive global footprint of any cloud provider and is
| East US | Norway East | | Korea Central | | East US 2 | UK South | | Southeast Asia | | South Central US | West Europe | | East Asia |
-| US Gov Virginia | Sweden Central| | China North 3 |
-| West US 2 | | | |
+| US Gov Virginia | Sweden Central | | China North 3 |
+| West US 2 | Switzerland North* | | |
| West US 3 | | | |
+\* To learn more about Availability Zones and available services support in these regions, contact your Microsoft sales or customer representative. For the upcoming regions that will support Availability Zones, see [Azure geographies](https://azure.microsoft.com/global-infrastructure/geographies/).
+ ## Next steps - [Microsoft commitment to expand Azure availability zones to more regions](https://azure.microsoft.com/blog/our-commitment-to-expand-azure-availability-zones-to-more-regions/)
availability-zones Az Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/az-region.md
Azure provides the most extensive global footprint of any cloud provider and is
| West US 2 | Switzerland North* | | | | West US 3 | | | |
-\* To learn more about Availability Zones and available services support in these regions, contact your Microsoft sales or customer
-representative. For the upcoming regions that will support Availability Zones, see [Azure geographies](https://azure.microsoft.com/global-infrastructure/geographies/).
+\* To learn more about Availability Zones and available services support in these regions, contact your Microsoft sales or customer representative. For the upcoming regions that will support Availability Zones, see [Azure geographies](https://azure.microsoft.com/global-infrastructure/geographies/).
For a list of Azure services that support availability zones by Azure region, see the [availability zones documentation](az-overview.md).
azure-monitor Agent Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-data-sources.md
description: Data sources define the log data that Azure Monitor collects from a
Previously updated : 02/26/2021 Last updated : 03/31/2022
Last updated 02/26/2021
The data that Azure Monitor collects from virtual machines with the [Log Analytics](./log-analytics-agent.md) agent is defined by the data sources that you configure on the [Log Analytics workspace](../logs/data-platform-logs.md). Each data source creates records of a particular type with each type having its own set of properties. > [!IMPORTANT]
-> This article covers data sources for the [Log Analytics agent](./log-analytics-agent.md) which is one of the agents used by Azure Monitor. Other agents collect different data and are configured differently. See [Overview of Azure Monitor agents](agents-overview.md) for a list of the available agents and the data they can collect.
+> This article covers data sources for the legacy [Log Analytics agent](./log-analytics-agent.md), which is one of the agents used by Azure Monitor. This agent **will be deprecated by August 2024**. Plan to [migrate to the Azure Monitor agent](./azure-monitor-agent-migration.md) before then. Other agents collect different data and are configured differently. See [Overview of Azure Monitor agents](agents-overview.md) for a list of the available agents and the data they can collect.
![Log data collection](media/agent-data-sources/overview.png)
All log data collected by Azure Monitor is stored in the workspace as records.
## Next steps * Learn about [monitoring solutions](../insights/solutions.md) that add functionality to Azure Monitor and also collect data into the workspace. * Learn about [log queries](../logs/log-query-overview.md) to analyze the data collected from data sources and monitoring solutions.
-* Configure [alerts](../alerts/alerts-overview.md) to proactively notify you of critical data collected from data sources and monitoring solutions.
+* Configure [alerts](../alerts/alerts-overview.md) to proactively notify you of critical data collected from data sources and monitoring solutions.
azure-monitor Azure Monitor Agent Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md
# Migrate to Azure Monitor agent from Log Analytics agent
-The [Azure Monitor agent (AMA)](azure-monitor-agent-overview.md) collects monitoring data from the guest operating system of Azure and hybrid virtual machines and delivers it to Azure Monitor where it can be used by different features, insights, and other services such as [Microsoft Sentinel](../../sentintel/../sentinel/overview.md) and [Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-cloud-introduction.md). The Azure Monitor agent is meant to replace the Log Analytics agent (also known as MMA and OMS) for both Windows and Linux machines. This article provides high-level guidance on when and how to migrate to the new Azure Monitor agent (AMA) and the data collection rules (DCR) that define the data the agent should collect.
+The [Azure Monitor agent (AMA)](azure-monitor-agent-overview.md) collects monitoring data from the guest operating system of Azure and hybrid virtual machines and delivers it to Azure Monitor where it can be used by different features, insights, and other services such as [Microsoft Sentinel](../../sentintel/../sentinel/overview.md) and [Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-cloud-introduction.md). All of the data collection configuration is handled via [Data Collection Rules](../essentials/data-collection-rule-overview.md). The Azure Monitor agent is meant to replace the Log Analytics agent (also known as MMA and OMS) for both Windows and Linux machines. This article provides high-level guidance on when and how to migrate to the new Azure Monitor agent (AMA) and the data collection rules (DCR) that define the data the agent should collect.
-The decision to migrate to AMA will be based on the different features and services that you use. Considerations for Azure Monitor and other supported features and services are provided in this article since they should be considered together in your migration strategy.
+## Why should I migrate to the Azure Monitor agent?
+- **Security and performance**
+ - AMA uses Managed Identity or AAD tokens (for clients), which are more secure than the legacy authentication methods.
+ - AMA can provide a higher events-per-second (EPS) upload rate than the legacy agents.
+- **Cost savings** via efficient data collection [using Data Collection Rules](data-collection-rule-azure-monitor-agent.md). This is one of the most useful advantages of using AMA. (A sketch of DCR targeting follows this list.)
+ - DCRs allow granular targeting of the machines that data is collected from, as compared to the "all or nothing" mode of the legacy agents.
+ - Using DCRs, you can filter out data to remove unused events and save additional costs.
+
+- **Simpler management** of data collection, including ease of troubleshooting
+ - **Multihoming** on both Windows and Linux is straightforward.
+ - Every action across the data collection lifecycle, from onboarding/setup to deployment to updates and changes over time, is significantly easier and more scalable, because agent configuration is centralized and 'in the cloud' rather than configured on every machine.
+ - Enabling or disabling additional capabilities or services (Sentinel, Defender for Cloud, VM Insights, and so on) is more transparent and controlled, using the extensibility architecture of AMA.
+- **A single agent** that consolidates all the features necessary to address all telemetry data collection needs across servers and client devices (running Windows 10 or 11), as compared to running multiple monitoring agents. This is the eventual goal, though AMA is still converging toward parity with the Log Analytics agents.
+
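To make the DCR-based targeting concrete, here's a minimal sketch (assuming the Az.Monitor module; resource IDs and names are illustrative) of associating a single VM with an existing data collection rule:

```powershell
# Opt one VM into collection by associating it with a data collection rule (IDs are illustrative)
$vmId  = "/subscriptions/<sub-id>/resourceGroups/rg-demo/providers/Microsoft.Compute/virtualMachines/vm01"
$dcrId = "/subscriptions/<sub-id>/resourceGroups/rg-demo/providers/Microsoft.Insights/dataCollectionRules/dcr-windows-events"
New-AzDataCollectionRuleAssociation `
    -TargetResourceId $vmId `
    -AssociationName "vm01-dcr-association" `
    -RuleId $dcrId
```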
+## When should I migrate to the Azure Monitor agent?
+Your migration plan to the Azure Monitor agent should include the following considerations:
+
+|Consideration |Description |
+|||
+|**Environment requirements** | Verify that your environment is currently supported by the AMA. For more information, see [Supported operating systems](./agents-overview.md#supported-operating-systems). |
+|**Current and new feature requirements** | While the AMA provides [several new features](#current-capabilities), such as filtering, scoping, and multihoming, it isn't yet at parity with the legacy Log Analytics agent. As you plan your migration, make sure that the features your organization requires are already supported by the AMA. You may decide to continue using the Log Analytics agent for now, and migrate at a later date. See [Supported services and features](./azure-monitor-agent-overview.md#supported-services-and-features) for a current status of features that are supported and that may be in preview. |
> [!IMPORTANT] > The Log Analytics agent will be [retired on **31 August, 2024**](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). If you are currently using the Log Analytics agent with Azure Monitor or other supported features and services, you should start planning your migration to the Azure Monitor agent using the information in this article.
Azure Monitor agent currently supports the following core functionality:
> - [Overview of Azure Arc-enabled servers agent](../../azure-arc/servers/agent-overview.md) > - [Plan and deploy Azure Arc-enabled servers at scale](../../azure-arc/servers/plan-at-scale-deployment.md)
-## Plan your migration
-
-You migration plan to the Azure Monitor agent should include the following considerations:
-
-|Consideration |Description |
-|||
-|**Environment requirements** | Verify that your environment is currently supported by the AMA. For more information, see [Supported operating systems](./agents-overview.md#supported-operating-systems). |
-|**Current and new feature requirements** | While the AMA provides [several new features](#current-capabilities), such as filtering, scoping, and multi-homing, it is not yet at parity with the legacy Log Analytics agent.As you plan your migration, make sure that the features your organization requires are already supported by the AMA. You may decide to continue using the Log Analytics agent for now, and migrate at a later date. See [Supported services and features](./azure-monitor-agent-overview.md#supported-services-and-features) for a current status of features that are supported and that may be in preview. |
- ## Gap analysis between agents The following tables show gap analyses for the **log types** that are currently collected by each agent. This will be updated as support for AMA grows towards parity with the Log Analytics agent. For a general comparison of Azure Monitor agents, see [Overview of Azure Monitor agents](../agents/azure-monitor-agent-overview.md).
For more information, see:
- [Overview of the Azure Monitor agents](agents-overview.md) - [AMA migration for Microsoft Sentinel](../../sentinel/ama-migrate.md)-- [Frequently asked questions for AMA migration](/azure/azure-monitor/faq#azure-monitor-agent)
+- [Frequently asked questions for AMA migration](/azure/azure-monitor/faq#azure-monitor-agent)
azure-monitor Azure Monitor Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-overview.md
description: Overview of the Azure Monitor agent, which collects monitoring data
Previously updated : 3/21/2022 Last updated : 3/31/2022
To start transitioning your VMs off the current agents to the new agent, conside
Azure Monitor's Log Analytics agent is retiring on 31 August 2024. The current agents will be supported until the retirement date. ## Coexistence with other agents
-The Azure Monitor agent can coexist (run side by side on the same machine) with the existing agents so that you can continue to use their existing functionality during evaluation or migration. While this allows you to begin transition given the limitations, you must review the below points carefully:
+The Azure Monitor agent can coexist (run side by side on the same machine) with the legacy Log Analytics agents so that you can continue to use their existing functionality during evaluation or migration. While this allows you to begin the transition given the limitations, review the following points carefully:
- Be careful in collecting duplicate data because it could skew query results and affect downstream features like alerts, dashboards or workbooks. For example, VM insights uses the Log Analytics agent to send performance data to a Log Analytics workspace. You might also have configured the workspace to collect Windows events and Syslog events from agents. If you install the Azure Monitor agent and create a data collection rule for these same events and performance data, it will result in duplicate data. As such, ensure you're not collecting the same data from both agents. If you are, ensure they're **collecting from different machines** or **going to separate destinations**. - Besides data duplication, this would also generate more charges for data ingestion and retention.-- Running two telemetry agents on the same machine would result in double the resource consumption, including but not limited to CPU, memory, storage space and network bandwidth.
+- Running two telemetry agents on the same machine would result in double the resource consumption, including but not limited to CPU, memory, storage space and network bandwidth.
+> [!NOTE]
+> When using both agents during evaluation or migration, you can use the **'Category'** column of the [Heartbeat](/azure/azure-monitor/reference/tables/Heartbeat) table in your Log Analytics workspace, and filter for 'Azure Monitor Agent'.
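As a sketch of that check from PowerShell (assuming the Az.OperationalInsights module; the workspace ID is illustrative):

```powershell
# List the agent category each computer last reported from (workspace ID is illustrative)
$query  = 'Heartbeat | summarize LastHeartbeat = max(TimeGenerated) by Computer, Category'
$result = Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $query
$result.Results | Format-Table Computer, Category, LastHeartbeat
```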
## Supported resource types Azure virtual machines, virtual machine scale sets, and Azure Arc-enabled servers are currently supported. Azure Kubernetes Service and other compute resource types aren't currently supported.
The Azure Monitor agent sends data to Azure Monitor Metrics (preview) or a Log A
| Syslog | Log Analytics workspace - [Syslog](/azure/azure-monitor/reference/tables/syslog)<sup>2</sup> table | Information sent to the Linux event logging system | <sup>1</sup> [Click here](../essentials/metrics-custom-overview.md#quotas-and-limits) to review other limitations of using Azure Monitor Metrics. On Linux, using Azure Monitor Metrics as the only destination is supported in v1.10.9.0 or higher.
-<sup>2</sup> Azure Monitor Linux Agent v1.15.2 or higher supports syslog RFC formats including Cisco Meraki, Cisco ASA, Cisco FTD, Sophos XG, Juniper Networks, Corelight Zeek, CipherTrust, NXLog, McAfee and CEF (Common Event Format).
+<sup>2</sup> Azure Monitor Linux Agent v1.15.2 or higher supports syslog RFC formats including **Cisco Meraki, Cisco ASA, Cisco FTD, Sophos XG, Juniper Networks, Corelight Zeek, CipherTrust, NXLog, McAfee and CEF (Common Event Format)**.
## Supported services and features The following table shows the current support for the Azure Monitor agent with other Azure services.
The following table shows the current support for the Azure Monitor agent with o
| Azure service | Current support | More information | |:|:|:| | [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) | Private preview | [Sign-up link](https://aka.ms/AMAgent) |
-| [Microsoft Sentinel](../../sentinel/overview.md) | <ul><li>Linux Syslog CEF (Common Event Format): Private preview</li><li>Windows Forwarding Event (WEF): [Public preview](../../sentinel/data-connectors-reference.md#windows-forwarded-events-preview)</li><li>Windows Security Events: [Generally available](../../sentinel/connect-windows-security-events.md?tabs=AMA)</li></ul> | <ul><li>[Sign-up link](https://aka.ms/AMAgent)</li><li>No sign-up needed </li><li>No sign-up needed</li></ul> |
+| [Microsoft Sentinel](../../sentinel/overview.md) | <ul><li>Windows DNS logs: Private preview</li><li>Linux Syslog CEF (Common Event Format): Private preview</li><li>Windows Forwarding Event (WEF): [Public preview](../../sentinel/data-connectors-reference.md#windows-forwarded-events-preview)</li><li>Windows Security Events: [Generally available](../../sentinel/connect-windows-security-events.md?tabs=AMA)</li></ul> | <ul><li>[Sign-up link](https://aka.ms/AMAgent)</li><li>[Sign-up link](https://aka.ms/AMAgent)</li><li>No sign-up needed </li><li>No sign-up needed</li></ul> |
The following table shows the current support for the Azure Monitor agent with Azure Monitor features.
The Azure Monitor agent extensions for Windows and Linux can communicate either
$settingsString = '{"proxy":{"mode":"application","address":"http://[address]:[port]","auth": true}}'; $protectedSettingsString = '{"proxy":{"username":"[username]","password": "[password]"}}';
-Set-AzVMExtension -ExtensionName AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion 1.0 -Settings $settingsString -ProtectedSettings $protectedSettingsString
+Set-AzVMExtension -ExtensionName AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion 1.0 -SettingString $settingsString -ProtectedSettingString $protectedSettingsString
``` # [Linux VM](#tab/PowerShellLinux) ```powershell
-$settingsHashtable = @{"proxy":{"mode":"application","address":"http://[address]:[port]","auth": true}};
-$protectedSettingsHashtable = @{"proxy":{"username": "[username]","password": "[password]"}};
+$settingsString = '{"proxy":{"mode":"application","address":"http://[address]:[port]","auth": true}}';
+$protectedSettingsString = '{"proxy":{"username":"[username]","password": "[password]"}}';
-Set-AzVMExtension -ExtensionName AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion 1.5 -Settings $settingsString -ProtectedSettings $protectedSettingsString
+Set-AzVMExtension -ExtensionName AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion 1.5 -SettingString $settingsString -ProtectedSettingString $protectedSettingsString
``` # [Windows Arc enabled server](#tab/PowerShellWindowsArc)
Set-AzVMExtension -ExtensionName AzureMonitorLinuxAgent -ExtensionType AzureMoni
$settingsString = '{"proxy":{"mode":"application","address":"http://[address]:[port]","auth": true}}'; $protectedSettingsString = '{"proxy":{"username":"[username]","password": "[password]"}}';
-New-AzConnectedMachineExtension -Name AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> -Settings $settingsString -ProtectedSettings $protectedSettingsString
+New-AzConnectedMachineExtension -Name AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> -Setting $settingsString -ProtectedSetting $protectedSettingsString
``` # [Linux Arc enabled server](#tab/PowerShellLinuxArc)
New-AzConnectedMachineExtension -Name AzureMonitorWindowsAgent -ExtensionType Az
$settingsString = '{"proxy":{"mode":"application","address":"http://[address]:[port]","auth": true}}'; $protectedSettingsString = '{"proxy":{"username":"[username]","password": "[password]"}}';
-New-AzConnectedMachineExtension -Name AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> -Settings $settingsString -ProtectedSettings $protectedSettingsString
+New-AzConnectedMachineExtension -Name AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> -Setting $settingsString -ProtectedSetting $protectedSettingsString
```
azure-monitor It Service Management Connector Secure Webhook Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/it-service-management-connector-secure-webhook-connections.md
Title: IT Service Management Connector - Secure Export in Azure Monitor
description: This article shows you how to connect your ITSM products/services with Secure Export in Azure Monitor to centrally monitor and manage ITSM work items. Last updated 2/23/2022- - # Connect Azure to ITSM tools by using Secure Export This article shows you how to configure the connection between your IT Service Management (ITSM) product or service by using Secure Export.
Secure Export is an updated version of [IT Service Management Connector (ITSMC)]
ITSMC uses username and password credentials. Secure Export has stronger authentication because it uses Azure Active Directory (Azure AD). Azure AD is Microsoft's cloud-based identity and access management service. It helps users sign in and access internal or external resources. Using Azure AD with ITSM helps to identify Azure alerts (through the Azure AD application ID) that were sent to the external system.
-> [!NOTE]
-> The ability to connect Azure to ITSM tools by using Secure Export is in preview.
- ## Secure Export architecture The Secure Export architecture introduces the following new capabilities: * **New action group**: Alerts are sent to the ITSM tool through the Secure Webhook action group, instead of the ITSM action group that ITSMC uses. * **Azure AD authentication**: Authentication occurs through Azure AD instead of username/password credentials.- ## Secure Export data flow The steps of the Secure Export data flow are:
azure-monitor App Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-map.md
# Application Map: Triage Distributed Applications
-Application Map helps you spot performance bottlenecks or failure hotspots across all components of your distributed application. Each node on the map represents an application component or its dependencies; and has health KPI and alerts status. You can click through from any component to more detailed diagnostics, such as Application Insights events. If your app uses Azure services, you can also click through to Azure diagnostics, such as SQL Database Advisor recommendations.
--
+Application Map helps you spot performance bottlenecks or failure hotspots across all components of your distributed application. Each node on the map represents an application component or its dependencies; and has health KPI and alerts status. You can select any component to get more detailed diagnostics, such as Application Insights events. If your app uses Azure services, you can also select Azure diagnostics, such as SQL Database Advisor recommendations.
## What is a Component? Components are independently deployable parts of your distributed/microservices application. Developers and operations teams have code-level visibility or access to telemetry generated by these application components.
-* Components are different from "observed" external dependencies such as SQL, EventHub etc. which your team/organization may not have access to (code or telemetry).
+* Components are different from "observed" external dependencies such as SQL, Event Hubs etc. which your team/organization may not have access to (code or telemetry).
* Components run on any number of server/role/container instances.
-* Components can be separate Application Insights instrumentation keys (even if subscriptions are different) or different roles reporting to a single Application Insights instrumentation key. The preview map experience shows the components regardless of how they are set up.
+* Components can be separate Application Insights resources (even if subscriptions are different) or different roles reporting to a single Application Insights resource. The preview map experience shows the components regardless of how they're set up.
## Composite Application Map You can see the full application topology across multiple levels of related application components. Components could be different Application Insights resources, or different roles in a single resource. The app map finds components by following HTTP dependency calls made between servers with the Application Insights SDK installed.
-This experience starts with progressive discovery of the components. When you first load the application map, a set of queries is triggered to discover the components related to this component. A button at the top-left corner will update with the number of components in your application as they are discovered.
+This experience starts with progressive discovery of the components. When you first load the application map, a set of queries is triggered to discover the components related to this component. A button at the top-left corner will update with the number of components in your application as they're discovered.
On clicking "Update map components", the map is refreshed with all components discovered until that point. Depending on the complexity of your application, this may take a minute to load.
-If all of the components are roles within a single Application Insights resource, then this discovery step is not required. The initial load for such an application will have all its components.
+If all of the components are roles within a single Application Insights resource, then this discovery step isn't required. The initial load for such an application will have all its components.
![Screenshot shows an example of an application map.](media/app-map/app-map-001.png) One of the key objectives with this experience is to be able to visualize complex topologies with hundreds of components.
-Click on any component to see related insights and go to the performance and failure triage experience for that component.
+Select any component to see related insights and go to the performance and failure triage experience for that component.
![Flyout](media/app-map/application-map-002.png)
To troubleshoot performance problems, select **investigate performance**.
### Go to details
-Select **go to details** to explore the end-to-end transaction experience, which can offer views down to the call stack level.
+Select **go to details** to explore the end-to-end transaction experience, which can offer views down to the call stack level.
![Screenshot of go-to-details button](media/app-map/go-to-details.png)
Select **go to details** to explore the end-to-end transaction experience, which
### View Logs (Analytics)
-To query and investigate your applications data further, click **view in Logs (Analytics)**.
+To query and investigate your application's data further, select **view in Logs (Analytics)**.
![Screenshot of view in analytics button](media/app-map/view-logs.png)
namespace CustomInitializer.Telemetry
**ASP.NET apps: Load initializer to the active TelemetryConfiguration**
-In ApplicationInsights.config :
+In ApplicationInsights.config:
```xml <ApplicationInsights>
You can also set the cloud role name using the environment variable ```APPLICATI
**Java SDK**
-If you are using the SDK, starting with Application Insights Java SDK 2.5.0, you can specify the cloud role name
-by adding `<RoleName>` to your `ApplicationInsights.xml` file, e.g.
+If you're using the SDK, starting with Application Insights Java SDK 2.5.0, you can specify the cloud role name
+by adding `<RoleName>` to your `ApplicationInsights.xml` file, for example:
```xml <?xml version="1.0" encoding="utf-8"?>
For the [official definitions](https://github.com/Microsoft/ApplicationInsights-
715: string CloudRoleInstance = "ai.cloud.roleInstance"; ```
-Alternatively, **cloud role instance** can be helpful for scenarios where **cloud role name** tells you the problem is somewhere in your web front-end, but you might be running your web front-end across multiple load-balanced servers so being able to drill in a layer deeper via Kusto queries and knowing if the issue is impacting all web front-end servers/instances or just one can be extremely important.
+Alternatively, **cloud role instance** can be helpful for scenarios where **cloud role name** tells you the problem is somewhere in your web front-end, but you might be running your web front-end across multiple load-balanced servers. Being able to drill in a layer deeper via Kusto queries, and knowing whether the issue impacts all web front-end servers/instances or just one, can be important.
A scenario where you might want to override the value for cloud role instance could be if your app is running in a containerized environment where just knowing the individual server might not be enough information to locate a given issue.
If you're having trouble getting Application Map to work as expected, try these
### Too many nodes on the map
-Application Map constructs an application node for each unique cloud role name present in your request telemetry and a dependency node for each unique combination of type, target, and cloud role name in your dependency telemetry. If there are more than 10,000 nodes in your telemetry, Application Map will not be able to fetch all the nodes and links, so your map will be incomplete. If this happens, a warning message will appear when viewing the map.
+Application Map constructs an application node for each unique cloud role name present in your request telemetry and a dependency node for each unique combination of type, target, and cloud role name in your dependency telemetry. If there are more than 10,000 nodes in your telemetry, Application Map won't be able to fetch all the nodes and links, so your map will be incomplete. If this happens, a warning message will appear when viewing the map.
In addition, Application Map only supports up to 1000 separate ungrouped nodes rendered at once. Application Map reduces visual complexity by grouping dependencies together that have the same type and callers, but if your telemetry has too many unique cloud role names or too many dependency types, that grouping will be insufficient, and the map will be unable to render. To fix this, you'll need to change your instrumentation to properly set the cloud role name, dependency type, and dependency target fields.
-* Dependency target should represent the logical name of a dependency. In many cases, it's equivalent to the server or resource name of the dependency. For example, in the case of HTTP dependencies it is set to the hostname. It should not contain unique IDs or parameters that change from one request to another.
+* Dependency target should represent the logical name of a dependency. In many cases, it's equivalent to the server or resource name of the dependency. For example, in the case of HTTP dependencies it's set to the hostname. It shouldn't contain unique IDs or parameters that change from one request to another.
-* Dependency type should represent the logical type of a dependency. For example, HTTP, SQL or Azure Blob are typical dependency types. It should not contain unique IDs.
+* Dependency type should represent the logical type of a dependency. For example, HTTP, SQL or Azure Blob are typical dependency types. It shouldn't contain unique IDs.
* The purpose of cloud role name is described in the [above section](#set-or-override-cloud-role-name).
azure-monitor Transaction Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/transaction-diagnostics.md
# Unified cross-component transaction diagnostics
-The unified diagnostics experience automatically correlates server-side telemetry from across all your Application Insights monitored components into a single view. It doesn't matter if you have multiple resources with separate instrumentation keys. Application Insights detects the underlying relationship and allows you to easily diagnose the application component, dependency, or exception that caused a transaction slowdown or failure.
-
+The unified diagnostics experience automatically correlates server-side telemetry from across all your Application Insights monitored components into a single view. It doesn't matter if you have multiple resources. Application Insights detects the underlying relationship and allows you to easily diagnose the application component, dependency, or exception that caused a transaction slowdown or failure.
## What is a Component? Components are independently deployable parts of your distributed/microservices application. Developers and operations teams have code-level visibility or access to telemetry generated by these application components.
-* Components are different from "observed" external dependencies such as SQL, EventHub etc. which your team/organization may not have access to (code or telemetry).
+* Components are different from "observed" external dependencies such as SQL, Event Hubs, and so on, which your team/organization may not have access to (code or telemetry).
* Components run on any number of server/role/container instances. * Components can be separate Application Insights instrumentation keys (even if subscriptions are different) or different roles reporting to a single Application Insights instrumentation key. The new experience shows details across all components, regardless of how they have been set up.
This view has four key parts: results list, a cross-component transaction chart,
## Cross-component transaction chart
-This chart provides a timeline with horizontal bars for the duration of requests and dependencies across components. Any exceptions that are collected are also marked on the timeline.
+This chart provides a timeline with horizontal bars representing the duration of requests and dependencies across components. Any exceptions that are collected are also marked on the timeline.
* The top row on this chart represents the entry point, the incoming request to the first component called in this transaction. The duration is the total time taken for the transaction to complete. * Any calls to external dependencies are simple non-collapsible rows, with icons representing the dependency type.
This chart provides a timeline with horizontal bars for the duration of requests
> [!NOTE] > Calls to other components have two rows: one row represents the outbound call (dependency) from the caller component, and the other row corresponds to the inbound request at the called component. The leading icon and distinct styling of the duration bars help differentiate between them.
-## All telemetry with this Operation Id
+## All telemetry with this Operation ID
This section shows flat list view in a time sequence of all the telemetry related to this transaction. It also shows the custom events, and traces that aren't displayed in the transaction chart. You can filter this list to telemetry generated by a specific component/call. You can select any telemetry item in this list to see corresponding [details on the right](#details-of-the-selected-telemetry).
This section shows flat list view in a time sequence of all the telemetry relate
## Details of the selected telemetry
-This collapsible pane shows the detail of any selected item from the transaction chart, or the list. "Show all" lists all of the standard attributes that are collected. Any custom attributes are separately listed below the standard set. Click on the "..." below the stack trace window to get an option to copy the trace. "Open profiler traces" or "Open debug snapshot" shows code level diagnostics in corresponding detail panes.
+This collapsible pane shows the detail of any selected item from the transaction chart, or the list. "Show all" lists all of the standard attributes that are collected. Any custom attributes are separately listed below the standard set. Select the "..." below the stack trace window to get an option to copy the trace. "Open profiler traces" or "Open debug snapshot" shows code level diagnostics in corresponding detail panes.
![Exception detail](media/transaction-diagnostics/exceptiondetail.png) ## Search results
-This collapsible pane shows the other results that meet the filter criteria. Click on any result to update the respective details the 3 sections listed above. We try to find samples that are most likely to have the details available from all components even if sampling is in effect in any of them. These are shown as "suggested" samples.
+This collapsible pane shows the other results that meet the filter criteria. Select any result to update the respective details in the three sections listed above. We try to find samples that are most likely to have the details available from all components even if sampling is in effect in any of them. These are shown as "suggested" samples.
![Search results](media/transaction-diagnostics/searchResults.png) ## Profiler and snapshot debugger
-[Application Insights profiler](./profiler.md) or [snapshot debugger](snapshot-debugger.md) help with code-level diagnostics of performance and failure issues. With this experience, you can see profiler traces or snapshots from any component with a single click.
+[Application Insights profiler](./profiler.md) or [snapshot debugger](snapshot-debugger.md) help with code-level diagnostics of performance and failure issues. With this experience, you can see profiler traces or snapshots from any component with a single selection.
-If you could not get Profiler working, please contact **serviceprofilerhelp\@microsoft.com**
+If you can't get Profiler working, contact **serviceprofilerhelp\@microsoft.com**
-If you could not get Snapshot Debugger working, please contact **snapshothelp\@microsoft.com**
+If you can't get Snapshot Debugger working, contact **snapshothelp\@microsoft.com**
![Profiler Integration](media/transaction-diagnostics/profilerTraces.png)
If you do have access and the components are instrumented with the latest Applic
*I see duplicate rows for the dependencies. Is this expected?*
-At this time, we are showing the outbound dependency call separate from the inbound request. Typically, the two calls look identical with only the duration value being different due to the network round trip. The leading icon and distinct styling of the duration bars help differentiate between them. Is this presentation of the data confusing? Give us your feedback!
+At this time, we're showing the outbound dependency call separately from the inbound request. Typically, the two calls look identical with only the duration value being different due to the network round trip. The leading icon and distinct styling of the duration bars help differentiate between them. Is this presentation of the data confusing? Give us your feedback!
*What about clock skews across different component instances?*
This is by design. All of the related items, across all components, are already
*I see more events than expected in the transaction diagnostics experience when using the Application Insights JavaScript SDK. Is there a way to see fewer events per transaction?*
-The transaction diagnostics experience shows all telemetry in a [single operation](correlation.md#data-model-for-telemetry-correlation) that share an [Operation Id](data-model-context.md#operation-id). By default, the Application Insights SDK for JavaScript creates a new operation for each unique page view. In a Single Page Application (SPA), only one page view event will be generated and a single Operation Id will be used for all telemetry generated, this can result in many events being correlated to the same operation. In these scenarios, you can use Automatic Route Tracking to automatically create new operations for navigation in your single page app. You must turn on [enableAutoRouteTracking](javascript.md#single-page-applications) so a page view is generated every time the URL route is updated (logical page view occurs). If you want to manually refresh the Operation Id, you can do so by calling `appInsights.properties.context.telemetryTrace.traceID = Microsoft.ApplicationInsights.Telemetry.Util.generateW3CId()`. Manually triggering a PageView event will also reset the Operation Id.
+The transaction diagnostics experience shows all telemetry in a [single operation](correlation.md#data-model-for-telemetry-correlation) that shares an [Operation ID](data-model-context.md#operation-id). By default, the Application Insights SDK for JavaScript creates a new operation for each unique page view. In a Single Page Application (SPA), only one page view event will be generated and a single Operation ID will be used for all telemetry generated; this can result in many events being correlated to the same operation. In these scenarios, you can use Automatic Route Tracking to automatically create new operations for navigation in your single page app. You must turn on [enableAutoRouteTracking](javascript.md#single-page-applications) so a page view is generated every time the URL route is updated (logical page view occurs). If you want to manually refresh the Operation ID, you can do so by calling `appInsights.properties.context.telemetryTrace.traceID = Microsoft.ApplicationInsights.Telemetry.Util.generateW3CId()`. Manually triggering a PageView event will also reset the Operation ID.
*Why do transaction detail durations not add up to the top-request duration?*
-Time not explained in the gantt chart, is time that is not covered by a tracked dependency.
-This can be due to either external calls that were not instrumented (automatically or manually), or that the time taken was in process rather than because of an external call.
+Time not explained in the Gantt chart is time that isn't covered by a tracked dependency.
+This can be due either to external calls that weren't instrumented (automatically or manually), or to time spent in process rather than in an external call.
If all calls were instrumented, in process is the likely root cause for the time spent. A useful tool for diagnosing the process is the [Application Insights profiler](./profiler.md).
azure-monitor Data Collection Rule Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-overview.md
Title: Data Collection Rules in Azure Monitor description: Overview of data collection rules (DCRs) in Azure Monitor including their contents and structure and how you can create and work with them. Previously updated : 02/21/2022 Last updated : 03/31/2022
azure-monitor Data Collection Rule Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-transformations.md
print d=parse_json('{"a":123, "b":"hello", "c":[1,2,3], "d":{}}')
### Supported statements #### let statement
-The right-hand side of [let](/data-explorer/kusto/query/letstatement) can be a scalar expression, a tabular expression or a user-defined function. Only user-defined functions with scalar arguments are supported.
+The right-hand side of [let](/azure/data-explorer/kusto/query/letstatement) can be a scalar expression, a tabular expression or a user-defined function. Only user-defined functions with scalar arguments are supported.
#### tabular expression statements The only supported data sources for the KQL statement are as follows:
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
na Previously updated : 03/18/2022 Last updated : 03/31/2022 # Guidelines for Azure NetApp Files network planning
Azure NetApp Files standard network features are supported for the following reg
* North Central US * South Central US * West Europe
+* West US 2
* West US 3 ## Considerations
azure-resource-manager Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/modules.md
If you've [published a module to a registry](bicep-cli.md#publish), you can link
module <symbolic-name> 'br:<registry-name>.azurecr.io/<file-path>:<tag>' = { ``` -- **br** is the schema name for a Bicep registry.
+- **br** is the scheme name for a Bicep registry.
- **file path** is called `repository` in Azure Container Registry. The **file path** can contain segments that are separated by the `/` character. - **tag** is used for specifying a version for the module.
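For context, a module reaches the registry through a publish step; here's a sketch using Az PowerShell (registry name, path, and tag are illustrative):

```powershell
# Publish a local Bicep file to a registry; reference it afterward with the br: scheme shown above
Publish-AzBicepModule -FilePath ./storage.bicep -Target 'br:exampleregistry.azurecr.io/bicep/modules/storage:v1'
```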
azure-resource-manager App Service Move Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-limitations/app-service-move-limitations.md
Title: Move Azure App Service resources
+ Title: Move Azure App Service resources across resource groups or subscriptions
description: Use Azure Resource Manager to move App Service resources to a new resource group or subscription. Previously updated : 08/30/2021 Last updated : 03/31/2022
-# Move guidance for App Service resources
+# Move App Service resources to a new resource group or subscription
-This article describes the steps to move App Service resources. There are specific requirements for moving App Service resources to a new subscription.
+This article describes the steps to move App Service resources between resource groups or Azure subscriptions. There are specific requirements for moving App Service resources to a new subscription.
+
+If you want to move App Services to a new region, see [Move an App Service resource to another region](../../../app-service/manage-move-across-regions.md).
## Move across subscriptions
-When moving a Web App across subscriptions, the following guidance applies:
+When you move a Web App across subscriptions, the following guidance applies:
- Moving a resource to a new resource group or subscription is a metadata change that shouldn't affect anything about how the resource functions. For example, the inbound IP address for an app service doesn't change when moving the app service. - The destination resource group must not have any existing App Service resources. App Service resources include:
azure-resource-manager Networking Move Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-limitations/networking-move-limitations.md
Title: Move Azure Networking resources to new subscription or resource group description: Use Azure Resource Manager to move virtual networks and other networking resources to a new resource group or subscription. Previously updated : 08/16/2021 Last updated : 03/31/2022
-# Move guidance for networking resources
+# Move networking resources to new resource group or subscription
-This article describes how to move virtual networks and other networking resources for specific scenarios.
+This article describes how to move virtual networks and other networking resources to a new resource group or Azure subscription.
During the move, your networking resources will operate without interruption.
+If you want to move networking resources to a new region, see [Tutorial: Move Azure VMs across regions](../../../resource-mover/tutorial-move-region-virtual-machines.md).
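The move can be scripted with the same `az resource move` command used for other resource types. A sketch for moving a virtual network to another resource group and subscription (the names and subscription ID are placeholders):

```bash
# Sketch: move a virtual network; dependent resources may need to move with it.
vnet_id=$(az network vnet show --name MyVNet --resource-group SourceGroup --query id --output tsv)

az resource move \
  --destination-group TargetGroup \
  --destination-subscription-id 00000000-0000-0000-0000-000000000000 \
  --ids "$vnet_id"
```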
+ ## Dependent resources > [!NOTE]
azure-resource-manager Virtual Machines Move Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-limitations/virtual-machines-move-limitations.md
Title: Move Azure VMs to new subscription or resource group description: Use Azure Resource Manager to move virtual machines to a new resource group or subscription. Previously updated : 02/28/2022 Last updated : 03/31/2022
-# Move guidance for virtual machines
+# Move virtual machines to resource group or subscription
-This article describes the scenarios that aren't currently supported and the steps to move virtual machines with backup.
+This article describes how to move a virtual machine to a new resource group or Azure subscription.
+
+If you want to move a virtual machine to a new region, see [Tutorial: Move Azure VMs across regions](../../../resource-mover/tutorial-move-region-virtual-machines.md).
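A scripted move must include the VM's dependent resources. A partial Azure CLI sketch (names are placeholders, and the ID list isn't exhaustive for every VM configuration):

```bash
# Sketch: collect the VM, OS disk, and first NIC IDs, then move them together.
ids=$(az vm show --name MyVM --resource-group SourceGroup \
  --query "[id, storageProfile.osDisk.managedDisk.id, networkProfile.networkInterfaces[0].id]" \
  --output tsv)

az resource move --destination-group TargetGroup --ids $ids
```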
## Scenarios not supported
azure-sql Resource Limits Logical Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/resource-limits-logical-server.md
Previously updated : 03/30/2022 Last updated : 03/31/2022 # Resource management in Azure SQL Database
This query should be executed in the user database, not in the master database.
> [!IMPORTANT] > In Premium and Business Critical service tiers, if the workload attempts to increase combined local storage consumption by data files, transaction log files, and `tempdb` files over the **maximum local storage** limit, an out-of-space error will occur.
-As databases are created, deleted, and increase or decrease in size, local storage consumption on a machine fluctuates over time. If the system detects that available local storage on a machine is low, and a database or an elastic pool is at risk of running out of space, it will move the database or elastic pool to a different machine with sufficient local storage available.
+In service tiers other than Premium and Business Critical, local SSD storage is also used for the `tempdb` database and the Hyperscale RBPEX cache. As databases are created, deleted, and increase or decrease in size, total local storage consumption on a machine fluctuates over time. If the system detects that available local storage on a machine is low, and a database or an elastic pool is at risk of running out of space, it will move the database or elastic pool to a different machine with sufficient local storage available.
This move occurs in an online fashion, similarly to a database scaling operation, and has a similar [impact](single-database-scale.md#impact), including a short (seconds) failover at the end of the operation. This failover terminates open connections and rolls back transactions, potentially impacting applications using the database at that time.
-Because all data is copied to local storage volumes on different machines, moving larger databases may require a substantial amount of time. During that time, if local space consumption by a database or an elastic pool, or by the `tempdb` database grows rapidly, the risk of running out of space increases. The system initiates database movement in a balanced fashion to minimize out-of-space errors while avoiding unnecessary failovers.
-
-> [!NOTE]
-> Database movement due to insufficient local storage only occurs in the Premium or Business Critical service tiers. It does not occur in the Hyperscale, General Purpose, Standard, and Basic service tiers, because in those tiers data files are not stored in local storage.
+Because all data is copied to local storage volumes on different machines, moving larger databases in Premium and Business Critical service tiers may require a substantial amount of time. During that time, if local space consumption by a database or an elastic pool, or by the `tempdb` database grows rapidly, the risk of running out of space increases. The system initiates database movement in a balanced fashion to minimize out-of-space errors while avoiding unnecessary failovers.
## Tempdb sizes
azure-sql Single Database Create Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/single-database-create-quickstart.md
To create a single database in the Azure portal, this quickstart starts at the Azure SQL page.
1. Browse to the [Select SQL Deployment option](https://portal.azure.com/#create/Microsoft.AzureSQL) page. 1. Under **SQL databases**, leave **Resource type** set to **Single database**, and select **Create**.
- ![Add to Azure SQL](./media/single-database-create-quickstart/select-deployment.png)
+ :::image type="content" source="./media/single-database-create-quickstart/select-deployment.png" alt-text="Add to Azure SQL" lightbox="media/single-database-create-quickstart/select-deployment.png":::
1. On the **Basics** tab of the **Create SQL Database** form, under **Project details**, select the desired Azure **Subscription**. 1. For **Resource group**, select **Create new**, enter *myResourceGroup*, and select **OK**. 1. For **Database name**, enter *mySampleDatabase*. 1. For **Server**, select **Create new**, and fill out the **New server** form with the following values: - **Server name**: Enter *mysqlserver*, and add some characters for uniqueness. We can't provide an exact server name to use because server names must be globally unique for all servers in Azure, not just unique within a subscription. So enter something like mysqlserver12345, and the portal lets you know if it's available or not.
+ - **Location**: Select a location from the dropdown list.
+ - **Authentication method**: Select **Use SQL authentication**.
- **Server admin login**: Enter *azureuser*. - **Password**: Enter a password that meets requirements, and enter it again in the **Confirm password** field.
- - **Location**: Select a location from the dropdown list.
+
Select **OK**. 1. Leave **Want to use SQL elastic pool** set to **No**. 1. Under **Compute + storage**, select **Configure database**.
-1. This quickstart uses a serverless database, so select **Serverless**, and then select **Apply**.
+1. This quickstart uses a serverless database, so leave **Service tier** set to **General Purpose (Scalable compute and storage options)** and set **Compute tier** to **Serverless**. Select **Apply**.
- ![configure serverless database](./media/single-database-create-quickstart/configure-database.png)
+ :::image type="content" source="./media/single-database-create-quickstart/configure-database.png" alt-text="configure serverless database" lightbox="media/single-database-create-quickstart/configure-database.png":::
1. Select **Next: Networking** at the bottom of the page.
- ![New SQL database - Basic tab](./media/single-database-create-quickstart/new-sql-database-basics.png)
+ :::image type="content" source="./media/single-database-create-quickstart/new-sql-database-basics.png" alt-text="New SQL database - Basic tab":::
1. On the **Networking** tab, for **Connectivity method**, select **Public endpoint**. 1. For **Firewall rules**, set **Add current client IP address** to **Yes**. Leave **Allow Azure services and resources to access this server** set to **No**.
-1. Select **Next: Additional settings** at the bottom of the page.
+1. Select **Next: Security** at the bottom of the page.
- ![Networking tab](./media/single-database-create-quickstart/networking.png)
+ :::image type="content" source="./media/single-database-create-quickstart/networking.png" alt-text="Networking tab":::
+1. On the **Security** tab, you have the option to enable [Microsoft Defender for SQL](../database/azure-defender-for-sql.md). Select **Next: Additional settings** at the bottom of the page.
1. On the **Additional settings** tab, in the **Data source** section, for **Use existing data**, select **Sample**. This creates an AdventureWorksLT sample database so there are some tables and data to query and experiment with, as opposed to an empty blank database.
-1. Optionally, enable [Microsoft Defender for SQL](../database/azure-defender-for-sql.md).
-1. Optionally, set the [maintenance window](../database/maintenance-window.md) so planned maintenance is performed at the best time for your database.
+ 1. Select **Review + create** at the bottom of the page:
- ![Additional settings tab](./media/single-database-create-quickstart/additional-settings.png)
+ :::image type="content" source="./media/single-database-create-quickstart/additional-settings.png" alt-text="Additional settings tab":::
1. On the **Review + create** page, after reviewing, select **Create**.
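If you'd rather script the deployment, a roughly equivalent Azure CLI sketch follows, reusing the quickstart's example names; the password, location, and client IP are placeholders:

```bash
# Sketch: create the logical server, a serverless sample database, and a firewall rule.
az sql server create --name mysqlserver12345 --resource-group myResourceGroup \
  --location eastus --admin-user azureuser --admin-password '<password>'

az sql db create --resource-group myResourceGroup --server mysqlserver12345 \
  --name mySampleDatabase --edition GeneralPurpose --compute-model Serverless \
  --family Gen5 --capacity 2 --sample-name AdventureWorksLT

# Replace 203.0.113.10 with your client IP address.
az sql server firewall-rule create --resource-group myResourceGroup --server mysqlserver12345 \
  --name AllowMyIp --start-ip-address 203.0.113.10 --end-ip-address 203.0.113.10
```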
Once your database is created, you can use the **Query editor (preview)** in the Azure portal to connect to the database and query data.
1. On the page for your database, select **Query editor (preview)** in the left menu. 1. Enter your server admin login information, and select **OK**.
- ![Sign in to Query editor](./media/single-database-create-quickstart/query-editor-login.png)
+ :::image type="content" source="./media/single-database-create-quickstart/query-editor-login.png" alt-text="Sign in to Query editor":::
1. Enter the following query in the **Query editor** pane.
Once your database is created, you can use the **Query editor (preview)** in the Azure portal to connect to the database and query data.
1. Select **Run**, and then review the query results in the **Results** pane.
- ![Query editor results](./media/single-database-create-quickstart/query-editor-results.png)
+ :::image type="content" source="./media/single-database-create-quickstart/query-editor-results.png" alt-text="Query editor results" lightbox="media/single-database-create-quickstart/query-editor-results.png":::
1. Close the **Query editor** page, and select **OK** when prompted to discard your unsaved edits.
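As an alternative to the portal query editor, you can run the same queries from any machine the server firewall allows; a sketch using the `sqlcmd` client with placeholder credentials:

```bash
# Sketch: connect to the sample database and list a few tables.
sqlcmd -S mysqlserver12345.database.windows.net -d mySampleDatabase \
  -U azureuser -P '<password>' \
  -Q "SELECT TOP 5 name FROM sys.tables;"
```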
azure-vmware Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/introduction.md
Last updated 04/20/2021
# What is Azure VMware Solution?
-Azure VMware Solution provides you with private clouds that contain vSphere clusters built from dedicated bare-metal Azure infrastructure. The minimum initial deployment is three hosts, but additional hosts can be added one at a time, up to a maximum of 16 hosts per cluster. All provisioned private clouds have vCenter Server, vSAN, vSphere, and NSX-T. As a result, you can migrate workloads from your on-premises environments, deploy new virtual machines (VMs), and consume Azure services from your private clouds. In addition, Azure VMware Solution management tools (vCenter Server and NSX Manager) are available at least 99.9% of the time. For more information, see [Azure VMware Solution SLA](https://aka.ms/avs/sla).
+Azure VMware Solution provides you with private clouds that contain VMware vSphere clusters built from dedicated bare-metal Azure infrastructure. The minimum initial deployment is three hosts, but additional hosts can be added one at a time, up to a maximum of 16 hosts per cluster. All provisioned private clouds have VMware vCenter Server, VMware vSAN, VMware vSphere, and VMware NSX-T Data Center. As a result, you can migrate workloads from your on-premises environments, deploy new virtual machines (VMs), and consume Azure services from your private clouds. In addition, Azure VMware Solution management tools (vCenter Server and NSX Manager) are available at least 99.9% of the time. For more information, see [Azure VMware Solution SLA](https://azure.microsoft.com/support/legal/sla/azure-vmware/v1_1/).
-Azure VMware Solution is a VMware validated solution with ongoing validation and testing of enhancements and upgrades. Microsoft manages and maintains private cloud infrastructure and software. It allows you to focus on developing and running workloads in your private clouds.
+Azure VMware Solution is a VMware validated solution with ongoing validation and testing of enhancements and upgrades. Microsoft manages and maintains the private cloud infrastructure and software. It allows you to focus on developing and running workloads in your private clouds to deliver business value.
The diagram shows the adjacency between private clouds and VNets in Azure, Azure services, and on-premises environments. Network access from private clouds to Azure services or VNets provides SLA-driven integration of Azure service endpoints. ExpressRoute Global Reach connects your on-premises environment to your Azure VMware Solution private cloud.
backup Sap Hana Db Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-db-restore.md
Title: Restore SAP HANA databases on Azure VMs description: In this article, discover how to restore SAP HANA databases that are running on Azure Virtual Machines. You can also use Cross Region Restore to restore your databases to a secondary region. Previously updated : 11/30/2021 Last updated : 03/31/2022
To restore the backup data as files instead of a database, choose **Restore as Files**.
The files that are dumped are: * Database backup files
- * Catalog files
* JSON metadata files (for each backup file that's involved) Typically, a network share path, or path of a mounted Azure file share when specified as the destination path, enables easier access to these files by other machines in the same network or with the same Azure file share mounted on them.
To restore the backup data as files instead of a database, choose **Restore as Files**.
su - <sid>adm ```
- 1. Generate the catalog file for restore. Extract the **BackupId** from the JSON metadata file for the full backup, which will be used later in the restore operation. Make sure that the full and log backups are in different folders and delete the catalog files and JSON metadata files in these folders.
+ 1. Generate the catalog file for restore. Extract the **BackupId** from the JSON metadata file for the full backup, which will be used later in the restore operation. Make sure that the full and log backups (not present for Full Backup Recovery) are in different folders and delete the JSON metadata files in these folders.
```bash hdbbackupdiag --generate --dataDir <DataFileDir> --logDirs <LogFilesDir> -d <PathToPlaceCatalogFile>
To restore the backup data as files instead of a database, choose **Restore as Files**.
In the command above:
- * `<DataFileDir>` - the folder that contains the full backups
- * `<LogFilesDir>` - the folder that contains the log backups, differential and incremental backups (if any)
- * `<PathToPlaceCatalogFile>` - the folder where the catalog file generated must be placed
+ * `<DataFileDir>` - the folder that contains the full backups.
+ * `<LogFilesDir>` - the folder that contains the log backups, differential backups, and incremental backups. For a full backup restore, the log folder isn't created; add an empty directory in that case.
+ * `<PathToPlaceCatalogFile>` - the folder where the catalog file generated must be placed.
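For instance, using the illustrative paths that also appear in the restore examples later in this section:

```bash
# Sketch: generate the catalog from the dumped full-backup and log folders.
hdbbackupdiag --generate \
  --dataDir /restore/Data_2022-01-12_08-51-54/ \
  --logDirs /restore/Log/ \
  -d /restore/catalo_gen
```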
1. Restore using the newly generated catalog file through HANA Studio or run the HDBSQL restore query with this newly generated catalog. HDBSQL queries are listed below:
- * To restore to a point in time:
+ * To open the hdbsql prompt, run the following command:
- If you're creating a new restored database, run the HDBSQL command to create a new database `<DatabaseName>` and then stop the database for restore. However, if you're only restoring an existing database, run the HDBSQL command to stop the database.
+ ```bash
+ hdbsql -U AZUREWLBACKUPHANAUSER -d systemDB
+ ```
+
+ * To restore to a point in time:
+
+ If you're creating a new restored database, run the HDBSQL command to create a new database `<DatabaseName>` and then stop the database for restore using the command `ALTER SYSTEM STOP DATABASE <db> IMMEDIATE`. However, if you're only restoring an existing database, run the HDBSQL command to stop the database.
Then run the following command to restore the database: ```hdbsql
- RECOVER DATABASE FOR <DatabaseName> UNTIL TIMESTAMP '<TimeStamp>' CLEAR LOG USING SOURCE '<DatabaseName@HostName>' USING CATALOG PATH ('<PathToGeneratedCatalogInStep3>') USING LOG PATH (' <LogFileDir>') USING DATA PATH ('<DataFileDir>') USING BACKUP_ID <BackupIdFromJsonFile> CHECK ACCESS USING FILE
+ RECOVER DATABASE FOR <db> UNTIL TIMESTAMP <t1> USING CATALOG PATH <path> USING LOG PATH <path> USING DATA PATH <path> USING BACKUP_ID <bkId> CHECK ACCESS USING FILE
``` * `<DatabaseName>` - Name of the new database or existing database that you want to restore
To restore the backup data as files instead of a database, choose **Restore as Files**.
* To restore to a particular full or differential backup:
- If you're creating a new restored database, run the HDBSQL command to create a new database `<DatabaseName>` and then stop the database for restore. However, if you're only restoring an existing database, run the HDBSQL command to stop the database:
+ If you're creating a new restored database, run the HDBSQL command to create a new database `<DatabaseName>` and then stop the database for restore using the command `ALTER SYSTEM STOP DATABASE <db> IMMEDIATE`. However, if you're only restoring an existing database, run the HDBSQL command to stop the database:
```hdbsql RECOVER DATA FOR <DatabaseName> USING BACKUP_ID <BackupIdFromJsonFile> USING SOURCE '<DatabaseName@HostName>' USING CATALOG PATH ('<PathToGeneratedCatalogInStep3>') USING DATA PATH ('<DataFileDir>') CLEAR LOG
To restore the backup data as files instead of a database, choose **Restore as Files**.
* `<DataFileDir>` - the folder that contains the full backups * `<LogFilesDir>` - the folder that contains the log backups, differential and incremental backups (if any) * `<BackupIdFromJsonFile>` - the **BackupId** extracted in **Step C**
+ * To restore using backup ID:
+
+ ```hdbsql
+ RECOVER DATA FOR <db> USING BACKUP_ID <bkId> USING CATALOG PATH <path> USING LOG PATH <path> USING DATA PATH <path> CHECK ACCESS USING FILE
+ ```
+
+ Examples:
+
+ SAP HANA SYSTEM restoration on the same server
+
+ ```hdbsql
+ RECOVER DATABASE FOR SYSTEM UNTIL TIMESTAMP '2022-01-12T08:51:54.023' USING CATALOG PATH ('/restore/catalo_gen') USING LOG PATH ('/restore/Log/') USING DATA PATH ('/restore/Data_2022-01-12_08-51-54/') USING BACKUP_ID 1641977514020 CHECK ACCESS USING FILE
+ ```
+
+ SAP HANA tenant restoration on the same server
+
+ ```hdbsql
+ RECOVER DATABASE FOR DHI UNTIL TIMESTAMP '2022-01-12T08:51:54.023' USING CATALOG PATH ('/restore/catalo_gen') USING LOG PATH ('/restore/Log/') USING DATA PATH ('/restore/Data_2022-01-12_08-51-54/') USING BACKUP_ID 1641977514020 CHECK ACCESS USING FILE
+ ```
+
+ SAP HANA SYSTEM restoration on a different server
+
+ ```hdbsql
+ RECOVER DATABASE FOR SYSTEM UNTIL TIMESTAMP '2022-01-12T08:51:54.023' USING SOURCE <sourceSID> USING CATALOG PATH ('/restore/catalo_gen') USING LOG PATH ('/restore/Log/') USING DATA PATH ('/restore/Data_2022-01-12_08-51-54/') USING BACKUP_ID 1641977514020 CHECK ACCESS USING FILE
+ ```
+
+ SAP HANA tenant restoration on a different server
+
+ ```hdbsql
+ RECOVER DATABASE FOR DHI UNTIL TIMESTAMP '2022-01-12T08:51:54.023' USING SOURCE <sourceSID> USING CATALOG PATH ('/restore/catalo_gen') USING LOG PATH ('/restore/Log/') USING DATA PATH ('/restore/Data_2022-01-12_08-51-54/') USING BACKUP_ID 1641977514020 CHECK ACCESS USING FILE
+ ```
### Restore to a specific point in time
bastion Kerberos Authentication Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/kerberos-authentication-portal.md
+
+ Title: 'Configure Bastion for Kerberos authentication: Azure portal'
+
+description: Learn how to configure Bastion to use Kerberos authentication via the Azure portal.
+++ Last updated : 03/08/2022++++
+# How to configure Bastion for Kerberos authentication using the Azure portal (Preview)
+
+This article shows you how to configure Azure Bastion to use Kerberos authentication. Kerberos authentication can be used with both the Basic and the Standard Bastion SKUs. For more information about Kerberos authentication, see the [Kerberos authentication overview](/windows-server/security/kerberos/kerberos-authentication-overview). For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md)
+
+> [!NOTE]
+> During Preview, the Kerberos setting for Azure Bastion can be configured in the Azure portal only.
+>
+
+## <a name="prereq"></a>Prerequisites
+
+* An Azure account with an active subscription. If you don't have one, [create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). To be able to connect to a VM through your browser using Bastion, you must be able to sign in to the Azure portal.
+
+* An Azure virtual network. For steps to create a VNet, see [Quickstart: Create a virtual network](../virtual-network/quick-create-portal.md).
+
+## <a name="vnet"></a>Update VNet DNS servers
+
+In this section, the following steps help you update your virtual network to specify custom DNS settings.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Go to the virtual network for which you want to deploy the Bastion resources.
+1. Go to the **DNS servers** page for your VNet and select **Custom**. Add the IP address of your Azure-hosted domain controller, and then select **Save**.
+
+ :::image type="content" source="./media/kerberos-authentication-portal/dns-servers.png" alt-text="Screenshot of DNS servers page." lightbox="./media/kerberos-authentication-portal/dns-servers.png":::
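+
+The DNS change in this step can also be scripted; the Kerberos setting itself is portal-only during the preview. A minimal Azure CLI sketch (the VNet name, resource group, and domain controller IP are placeholders):
+
+```bash
+# Sketch: point the VNet's DNS at the Azure-hosted domain controller.
+az network vnet update \
+  --name MyVNet \
+  --resource-group MyResourceGroup \
+  --dns-servers 10.0.0.4
+```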
+
+## <a name="deploy"></a>Deploy Bastion
+
+In this section, the following steps help you deploy Bastion to your virtual network.
+
+1. Deploy Bastion to your VNet using the steps in [Tutorial: Deploy Bastion using manual configuration settings](tutorial-create-host-portal.md). Configure the settings on the **Basics** tab. Then, select the **Advanced** tab.
+
+1. On the **Advanced** tab, select **Kerberos**. Then select **Review + create**, and then select **Create** to deploy Bastion to your virtual network.
+
+ :::image type="content" source="./media/kerberos-authentication-portal/select-kerberos.png" alt-text="Screenshot of Advanced tab." lightbox="./media/kerberos-authentication-portal/select-kerberos.png":::
+
+1. Once the deployment completes, you can use Bastion to sign in to any reachable Windows VM that's joined to the domain served by the custom DNS servers you specified in the earlier steps.
+
+## <a name="modify"></a>To modify an existing Bastion deployment
+
+In this section, the following steps help you modify your virtual network and existing Bastion deployment for Kerberos authentication.
+
+1. [Update the DNS settings](#vnet) for your virtual network.
+1. Go to the portal page for your Bastion deployment and select **Configuration**.
+1. On the Configuration page, select **Kerberos authentication**, then select **Apply**.
+1. Bastion will update with the new configuration settings.
+
+## <a name="verify"></a>To verify Bastion is using Kerberos
+
+Once you have enabled Kerberos on your Bastion resource, you can verify that it's actually using Kerberos for authentication to the target domain-joined VM.
+
+1. Sign in to the target VM (either via Bastion or not). Search for "Edit Group Policy" from the taskbar and open the **Local Group Policy Editor**.
+1. Select **Computer Configuration > Windows Settings > Security Settings > Local Policies > Security Options**.
+1. Find the policy **Network security: Restrict NTLM: Incoming NTLM traffic** and set it to **Deny all domain accounts**. Because Bastion uses NTLM for authentication when Kerberos is disabled, this setting ensures that NTLM-based authentication is unsuccessful for future sign-in attempts on the VM.
+1. End the VM session.
+1. Connect to the target VM again using Bastion. Sign-in should succeed, indicating that Bastion used Kerberos (and not NTLM) for authentication.
+
+## Next steps
+
+For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md)
cdn Cdn Manage Expiration Of Blob Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-manage-expiration-of-blob-content.md
$blob.ICloudBlob.SetProperties()
> ## Setting Cache-Control headers by using .NET
-To specify a blob's `Cache-Control` header by using .NET code, use the [Azure Storage Client Library for .NET](../storage/blobs/storage-quickstart-blobs-dotnet.md) to set the [CloudBlob.Properties.CacheControl](/dotnet/api/microsoft.azure.storage.blob.blobproperties.cachecontrol) property.
+To specify a blob's `Cache-Control` header by using .NET code, use the [Azure Storage Client Library for .NET](../storage/blobs/storage-quickstart-blobs-dotnet.md) to set the [BlobHttpHeaders.CacheControl](/dotnet/api/azure.storage.blobs.models.blobhttpheaders.cachecontrol?view=azure-dotnet) property.
For example: ```csharp
-class Program
-{
- const string connectionString = "<storage connection string>";
- static void Main()
+ using Azure.Storage.Blobs;
+ using Azure.Storage.Blobs.Models;
+
+ class Program
{
- // Retrieve storage account information from connection string
- CloudStorageAccount storageAccount = CloudStorageAccount.Parse(connectionString);
-
- // Create a blob client for interacting with the blob service.
- CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
-
- // Create a reference to the container
- CloudBlobContainer <container name> = blobClient.GetContainerReference("<container name>");
-
- // Create a reference to the blob
- CloudBlob <blob name> = container.GetBlobReference("<blob name>");
-
- // Set the CacheControl property to expire in 1 hour (3600 seconds)
- blob.Properties.CacheControl = "max-age=3600";
-
- // Update the blob's properties in the cloud
- blob.SetProperties();
+ const string containerName = "<container name>";
+ const string blobName = "<blob name>";
+ const string connectionString = "<storage connection string>";
+ static void Main()
+ {
+ // Create a client for the blob container using the connection string
+ BlobContainerClient container = new BlobContainerClient(connectionString, containerName);
+
+ // Get a client for the target blob
+ BlobClient blob = container.GetBlobClient(blobName);
+
+ // Set the Cache-Control header to expire in 1 hour (3600 seconds).
+ // SetHttpHeaders replaces all of the blob's HTTP headers, so include any
+ // other headers you need (such as ContentType) in the same call.
+ blob.SetHttpHeaders(new BlobHttpHeaders { CacheControl = "max-age=3600" });
+ }
}
-}
``` > [!TIP]
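If you manage blobs from the command line, the same header can be set with the Azure CLI; a sketch assuming the `--content-cache-control` parameter of `az storage blob update`, with placeholder account, container, and blob names:

```bash
# Sketch: set Cache-Control on a blob to expire in 1 hour.
az storage blob update \
  --account-name mystorageaccount \
  --container-name mycontainer \
  --name myblob.jpg \
  --content-cache-control "max-age=3600" \
  --auth-mode login
```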
cognitive-services Speech Ssml Phonetic Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-ssml-phonetic-sets.md
For some locales, Speech service defines its own phonetic alphabets, which ordinarily map to the International Phonetic Alphabet (IPA).
See the sections in this article for the phonemes that are specific to each locale.
+## ar-EG/ar-SA
+
+## bg-BG
+ ## ca-ES [!INCLUDE [ca-ES](./includes/phonetic-sets/text-to-speech/ca-es.md)]
-## de-DE
+## cs-CZ
+
+## da-DK
+
+## de-DE/de-CH/de-AT
[!INCLUDE [de-DE](./includes/phonetic-sets/text-to-speech/de-de.md)]
-## en-GB
+## el-GR
+
+## en-GB/en-IE/en-AU
[!INCLUDE [en-GB](./includes/phonetic-sets/text-to-speech/en-gb.md)]
-## en-US
+## en-US/en-CA
[!INCLUDE [en-US](./includes/phonetic-sets/text-to-speech/en-us.md)] ## es-ES
See the sections in this article for the phonemes that are specific to each locale.
## es-MX [!INCLUDE [es-MX](./includes/phonetic-sets/text-to-speech/es-mx.md)]
-## fr-FR
+## fi-FI
+
+## fr-FR/fr-CA/fr-CH
[!INCLUDE [fr-FR](./includes/phonetic-sets/text-to-speech/fr-fr.md)]
+## he-IL
+
+## hr-HR
+
+## hu-HU
+
+## id-ID
+ ## it-IT [!INCLUDE [it-IT](./includes/phonetic-sets/text-to-speech/it-it.md)] ## ja-JP [!INCLUDE [ja-JP](./includes/phonetic-sets/text-to-speech/ja-jp.md)]
+## ko-KR
+
+## ms-MY
+
+## nb-NO
+
+## nl-NL/nl-BE
+
+## pl-PL
+ ## pt-BR [!INCLUDE [pt-BR](./includes/phonetic-sets/text-to-speech/pt-br.md)] ## pt-PT [!INCLUDE [pt-PT](./includes/phonetic-sets/text-to-speech/pt-pt.md)]
+## ro-RO
+ ## ru-RU [!INCLUDE [ru-RU](./includes/phonetic-sets/text-to-speech/ru-ru.md)]
+## sk-SK
+
+## sl-SI
+
+## sv-SE
+
+## th-TH
+
+## tr-TR
+
+## vi-VN
+ ## zh-CN [!INCLUDE [zh-CN](./includes/phonetic-sets/text-to-speech/zh-cn.md)]
communication-services Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/known-issues.md
The following are known issues in the Communication Services Call Automation API
Up to 100 users can join a group call using the JS web calling SDK.
-##Android API emulators
-When utilizing Android API emulators some crashes are expected.
+## Android API emulators
+When using Android API emulators on Android 5.0 (API level 21) and Android 5.1 (API level 22), some crashes are expected.
container-apps Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/billing.md
+
+ Title: Billing in Azure Container Apps preview
+description: Learn how billing is calculated in Azure Container Apps preview
++++ Last updated : 03/09/2022+++
+# Billing in Azure Container Apps preview
+
+Azure Container Apps billing consists of two types of charges:
+
+- **[Resource consumption](#resource-consumption-charges)**: The amount of resources allocated to your container app on a per-second basis, billed in vCPU-seconds and GiB-seconds.
+
+- **[HTTP requests](#request-charges)**: The number of HTTP requests your container app receives.
+
+The following resources are free during each calendar month, per subscription:
+
+- The first 180,000 vCPU-seconds
+- The first 360,000 GiB-seconds
+- The first 2 million HTTP requests
+
+This article describes how to calculate the cost of running your container app. For pricing details in your account's currency, see [Azure Container Apps Pricing](https://azure.microsoft.com/pricing/details/container-apps/).
+
+## Resource consumption charges
+
+Azure Container Apps runs replicas of your application based on the [scaling rules and replica count limits](scale-app.md) you configure. You're charged for the amount of resources allocated to each replica while it's running.
+
+There are two meters for resource consumption:
+
+- **vCPU-seconds**: The amount of vCPU cores allocated to your container app on a per-second basis.
+
+- **GiB-seconds**: The amount of memory allocated to your container app on a per-second basis.
+
+The first 180,000 vCPU-seconds and 360,000 GiB-seconds in each subscription per calendar month are free.
+
+The rate you pay for resource consumption depends on the state of your container app and replicas. By default, replicas are charged at an *active* rate. However, in certain conditions, a replica can enter an *idle* state. While in an *idle* state, resources are billed at a reduced rate.
+
+### No replicas are running
+
+When your container app is scaled down to zero replicas, no resource consumption charges are incurred.
+
+### Minimum number of replicas are running
+
+Idle usage charges are applied when your replicas are running under a specific set of circumstances. The criteria for idle charges include:
+
+- Your container app<sup>1</sup> is configured with a [minimum replica count](scale-app.md) of at least one.
+- The app is scaled down to the minimum replica count.
+
+Usage charges are calculated individually for each replica. A replica is considered idle when *all* of the following conditions are true:
+
+- All of the containers in the replica have started and are running.
+- The replica isn't processing any HTTP requests.
+- The replica is using less than 0.01 vCPU cores.
+- The replica is receiving less than 1,000 bytes per second of network traffic.
+
+When a replica is idle, resource consumption charges are calculated at the reduced idle rates. When a replica is not idle, the active rates apply.
+
+### More than the minimum number of replicas are running
+
+When your container app<sup>1</sup> is scaled above the [minimum replica count](scale-app.md), all running replicas are charged for resource consumption at the active rate.
+
+<sup>1</sup> For container apps in multiple revision mode, charges are based on the current replica count in a revision relative to its configured minimum replica count.
+
+## Request charges
+
+In addition to resource consumption, Azure Container Apps also charges based on the number of HTTP requests received by your container app.
+
+The first 2 million requests in each subscription per calendar month are free.
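+
+As a rough sketch of how the consumption meters combine, the following shell calculation estimates a month of active usage for a fixed replica count; the per-second rates at the end are placeholders, not actual prices:
+
+```bash
+# Sketch: estimate active resource consumption for replicas running all month.
+replicas=2; vcpu=0.5; mem_gib=1.0
+seconds=$((30 * 24 * 3600))   # roughly one month
+
+vcpu_s=$(echo "$replicas * $vcpu * $seconds" | bc)
+gib_s=$(echo "$replicas * $mem_gib * $seconds" | bc)
+
+# Apply the monthly free grants (180,000 vCPU-seconds, 360,000 GiB-seconds).
+bill_vcpu_s=$(echo "v = $vcpu_s - 180000; if (v < 0) v = 0; v" | bc)
+bill_gib_s=$(echo "g = $gib_s - 360000; if (g < 0) g = 0; g" | bc)
+
+# Placeholder active rates in $/second; see the pricing page for real rates.
+echo "Estimated charge: \$$(echo "$bill_vcpu_s * 0.000024 + $bill_gib_s * 0.000003" | bc)"
+```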
+
cost-management-billing Aws Integration Set Up Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/aws-integration-set-up-configure.md
description: This article walks you through setting up and configuring AWS Cost and Usage report integration with Cost Management. Previously updated : 01/10/2022 Last updated : 03/30/2022 -+ # Set up and configure AWS Cost and Usage report integration
The policy JSON should resemble the following example. Replace `bucketname` with
"Effect": "Allow", "Action": [ "s3:GetObject",
- "s3:ListBucket"
+ "s3:ListBucket",
"iam:GetPolicyVersion", "iam:ListPolicyVersions",
- "iam:ListAttachedRolePolicies",
+ "iam:ListAttachedRolePolicies"
], "Resource": [ "arn:aws:s3:::bucketname",
- "arn:aws:s3:::bucketname/*"
+ "arn:aws:s3:::bucketname/*",
"arn:aws:iam::accountnumber:policy/*", "arn:aws:iam::accountnumber:role/rolename" ]
data-factory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/whats-new.md
The Azure Data Factory service is improved on an ongoing basis. To stay up to date with the most recent developments, this article provides you with information about the latest releases.
This page is updated monthly, so revisit it regularly.
+## March 2022
+<br>
+<table>
+<tr><td><b>Service Category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
+
<tr><td rowspan=5><b>Data Flow</b></td><td>ScriptLines and Parameterized Linked Service support added to mapping data flows</td><td>It is now super-easy to detect changes to your data flow script in Git with ScriptLines in your data flow JSON definition. Parameterized Linked Services can now also be used inside your data flows for flexible generic connection patterns.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/adf-mapping-data-flows-adds-scriptlines-and-link-service/ba-p/3249929#M589">Learn more</a></td></tr>
+
+<tr><td>Flowlets General Availability (GA)</td><td>Flowlets is now generally available to create reusable portions of data flow logic that you can share in other pipelines as inline transformations. Flowlets enable ETL jobs to be composed of custom or common logic components.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/flowlets-and-change-feed-now-ga-in-azure-data-factory/ba-p/3267450">Learn more</a></td></tr>
+
+<tr><td>Change Feed connectors are available in 5 data flow source transformations</td><td>Change Feed connectors are available in data flow source transformations for Cosmos DB, Blob store, ADLS Gen1, ADLS Gen2, and CDM.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/flowlets-and-change-feed-now-ga-in-azure-data-factory/ba-p/3267450">Learn more</a></td></tr>
+
+<tr><td>Data Preview and Debug Improvements in Mapping Data Flows</td><td>A few new exciting features were added to data preview and the debug experience in Mapping Data Flows.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/data-preview-and-debug-improvements-in-mapping-data-flows/ba-p/3268254">Learn more</a></td></tr>
+
+<tr><td>SFTP connector for Mapping Data Flow</td><td>The SFTP connector is now available for Mapping Data Flows.<br><a href="connector-sftp.md?tabs=data-factory#mapping-data-flow-properties">Learn more</a></td></tr>
+
+<tr><td><b>Data Movement</b></td><td>Support Always Encrypted for SQL related connectors in Lookup Activity under Managed VNET</td><td>Always Encrypted is supported for SQL Server, Azure SQL DB, Azure SQL MI, Azure Synapse Analytics in Lookup Activity under Managed VNET.<br><a href="control-flow-lookup-activity.md">Learn more</a></td></tr>
+
+<tr><td><b>Integration Runtime</b></td><td>New UI layout in Azure integration runtime creation and edit page</td><td>The UI layout of the integration runtime creation/edit page has been changed to tab style including Settings, Virtual Network and Data flow runtime.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/new-ui-layout-in-azure-integration-runtime-creation-and-edit/ba-p/3248237">Learn more</a></td></tr>
+
+<tr><td rowspan=2><b>Orchestration</b></td><td>Transform data using the Script activity</td><td>You can use a Script activity to invoke a SQL script in Azure SQL Database, Azure Synapse Analytics, SQL Server Database, Oracle, or Snowflake.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/execute-sql-statements-using-the-new-script-activity-in-azure/ba-p/3239969">Learn more</a></td></tr>
+
+<tr><td>Web activity timeout improvement</td><td>You can configure response timeout in a Web activity to prevent it from timing out if the response period is more than 1 minute, especially in the case of synchronous APIs.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/web-activity-response-timeout-improvement/ba-p/3260307">Learn more</a></td></tr>
+
+</table>
+ ## February 2022 <br> <table>
defender-for-cloud Defender For Cloud Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-introduction.md
Title: Microsoft Defender for Cloud - an introduction description: Use Microsoft Defender for Cloud to protect your Azure, hybrid, and multi-cloud resources and workloads. --++ Previously updated : 02/28/2022 Last updated : 03/31/2022 # What is Microsoft Defender for Cloud?
Defender for Cloud fills three vital needs as you manage the security of your resources and workloads in the cloud and on-premises.
## Posture management and workload protection
-Microsoft Defender for Cloud's features cover the two broad pillars of cloud security: cloud security posture management and cloud workload protection.
+Microsoft Defender for Cloud's features cover the two broad pillars of cloud security: cloud security posture management and cloud workload protection.
### Cloud security posture management (CSPM)
The central feature in Defender for Cloud that enables you to achieve those goal
When you open Defender for Cloud for the first time, it will meet the visibility and strengthening goals as follows:
-1. **Generate a secure score** for your subscriptions based on an assessment of your connected resources compared with the guidance in [Azure Security Benchmark](/security/benchmark/azure/overview). Use the score to understand your security posture, and the compliance dashboard to review your compliance with the built-in benchmark. When you've enabled the enhanced security features, you can customize the standards used to assess your compliance, and add other regulations (such as NIST and Azure CIS) or organization-specific security requirements.
+1. **Generate a secure score** for your subscriptions based on an assessment of your connected resources compared with the guidance in [Azure Security Benchmark](/security/benchmark/azure/overview). Use the score to understand your security posture, and the compliance dashboard to review your compliance with the built-in benchmark. When you've enabled the enhanced security features, you can customize the standards used to assess your compliance, and add other regulations (such as NIST and Azure CIS) or organization-specific security requirements. You can also apply recommendations and scoring based on the AWS Foundational Security Best Practices standard.
1. **Provide hardening recommendations** based on any identified security misconfigurations and weaknesses. Use these security recommendations to strengthen the security posture of your organization's Azure, hybrid, and multi-cloud resources.
Learn more on the following pages:
It's a security basic to know and make sure your workloads are secure, and it starts with having tailored security policies in place. Because policies in Defender for Cloud are built on top of Azure Policy controls, you're getting the full range and flexibility of a **world-class policy solution**. In Defender for Cloud, you can set your policies to run on management groups, across subscriptions, and even for a whole tenant.
-Defender for Cloud continuously discovers new resources that are being deployed across your workloads and assesses whether they are configured according to security best practices. If not, they're flagged and you get a prioritized list of recommendations for what you need to fix. Recommendations help you reduce the attack surface across each of your resources.
+Defender for Cloud continuously discovers new resources that are being deployed across your workloads and assesses whether they're configured according to security best practices. If not, they're flagged and you get a prioritized list of recommendations for what you need to fix. Recommendations help you reduce the attack surface across each of your resources.
The list of recommendations is enabled and supported by the Azure Security Benchmark. This Microsoft-authored, Azure-specific, benchmark provides a set of guidelines for security and compliance best practices based on common compliance frameworks. Learn more in [Introduction to Azure Security Benchmark](/security/benchmark/azure/introduction).
defender-for-cloud Enhanced Security Features Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enhanced-security-features-overview.md
Title: Understand the enhanced security features of Microsoft Defender for Cloud description: Learn about the benefits of enabling enhanced security in Microsoft Defender for Cloud --++ Last updated 02/24/2022
You can use any of the following ways to enable enhanced security for your subsc
### Can I enable Microsoft Defender for servers on a subset of servers in my subscription? No. When you enable [Microsoft Defender for servers](defender-for-servers-introduction.md) on a subscription, all the machines in the subscription will be protected by Defender for servers.
-An alternative is to enable Microsoft Defender for servers at the Log Analytics workspace level. If you do this, only servers reporting to that workspace will be protected and billed. However, several capabilities will be unavailable. These include Microsoft Defender for Endpoint, VA solution (TVM/ Qualys), just-in-time VM access, and more.
+An alternative is to enable Microsoft Defender for servers at the Log Analytics workspace level. If you do this, only servers reporting to that workspace will be protected and billed. However, several capabilities will be unavailable. These include Microsoft Defender for Endpoint, VA solution (TVM/Qualys), just-in-time VM access, and more.
### If I already have a license for Microsoft Defender for Endpoint can I get a discount for Defender for servers? If you've already got a license for **Microsoft Defender for Endpoint for Servers**, you won't have to pay for that part of your Microsoft Defender for servers license. Learn more about [this license](/microsoft-365/security/defender-endpoint/minimum-requirements#licensing-requirements).
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 03/30/2022 Last updated : 03/31/2022 # What's new in Microsoft Defender for Cloud?
To learn about *planned* changes that are coming soon to Defender for Cloud, see
Updates in March include:
+- [Global availability of Secure Score for AWS and GCP environments](#global-availability-of-secure-score-for-aws-and-gcp-environments)
- [Deprecated the recommendations to install the network traffic data collection agent](#deprecated-the-recommendations-to-install-the-network-traffic-data-collection-agent) - [Defender for Containers can now scan for vulnerabilities in Windows images (preview)](#defender-for-containers-can-now-scan-for-vulnerabilities-in-windows-images-preview) - [New alert for Microsoft Defender for Storage (preview)](#new-alert-for-microsoft-defender-for-storage-preview)
Updates in March include:
- [Deprecated Microsoft Defender for IoT device recommendations](#deprecated-microsoft-defender-for-iot-device-recommendations) - [Deprecated Microsoft Defender for IoT device alerts](#deprecated-microsoft-defender-for-iot-device-alerts) - [Posture management and threat protection for AWS and GCP released for general availability (GA)](#posture-management-and-threat-protection-for-aws-and-gcp-released-for-general-availability-ga)
+- [Registry scan for Windows images in ACR added support for national clouds](#registry-scan-for-windows-images-in-acr-added-support-for-national-clouds)
+
+### Global availability of Secure Score for AWS and GCP environments
+
+The cloud security posture management capabilities provided by Microsoft Defender for Cloud now include support for your AWS and GCP environments within your secure score.
+
+Enterprises can now view their overall security posture across environments such as Azure, AWS, and GCP.
+
+The Secure Score page has been replaced with the Security posture dashboard. The Security posture dashboard allows you to view an overall combined score for all of your environments, or a breakdown of your security posture based on any combination of environments that you choose.
+
+The Recommendations page has also been redesigned to provide new capabilities such as cloud environment selection, advanced filters based on content (resource group, AWS account, GCP project, and more), an improved user interface at low resolutions, support for open query in Azure Resource Graph, and more. You can learn more about your overall [security posture](secure-score-security-controls.md) and [security recommendations](review-security-recommendations.md).
### Deprecated the recommendations to install the network traffic data collection agent
-Changes in our roadmap and priorities have removed the need for the network traffic data collection agent. Consequently, the following two recommendations and their related policies were deprecated.
+Changes in our roadmap and priorities have removed the need for the network traffic data collection agent. The following two recommendations and their related policies were deprecated.
|Recommendation |Description |Severity | ||||
See more alerts for [Resource Manager](alerts-reference.md#alerts-resourcemanage
The recommendation `Vulnerabilities in container security configurations should be remediated` has been moved from the secure score section to best practices section.
-The current user experience only provides the score when all compliance checks have passed. Most customers have difficulties with meeting all the required checks. We are working on an improved experience for this recommendation, and once released the recommendation will be moved back to the secure score.
+The current user experience only provides the score when all compliance checks have passed. Most customers have difficulties with meeting all the required checks. We're working on an improved experience for this recommendation, and once released the recommendation will be moved back to the secure score.
### Deprecated the recommendation to use service principals to protect your subscriptions
The following recommendations are deprecated:
### Deprecated Microsoft Defender for IoT device alerts
-All Microsoft Defender for IoT device alerts are no longer visible in Microsoft Defender for Cloud. These alerts are still available on Microsoft Defender for IoT's Alert page, and in Microsoft Sentinel.
+All Microsoft Defender for IoT device alerts are no longer visible in Microsoft Defender for Cloud. These alerts are still available on Microsoft Defender for IoT's Alert page, and in Microsoft Sentinel.
### Posture management and threat protection for AWS and GCP released for general availability (GA)
All Microsoft Defender for IoT device alerts are no longer visible in Microsoft
Learn how to protect and connect your [AWS environment](quickstart-onboard-aws.md) and [GCP organization](quickstart-onboard-gcp.md) with Microsoft Defender for Cloud.
+### Registry scan for Windows images in ACR added support for national clouds
+
+Registry scan for Windows images is now supported in Azure Government and Azure China 21Vianet. This addition is currently in preview.
+
+Learn more about our [feature's availability](supported-machines-endpoint-solutions-clouds-containers.md).
+ ## February 2022 Updates in February include:
Updates in January include:
### Microsoft Defender for Resource Manager updated with new alerts and greater emphasis on high-risk operations mapped to MITRE ATT&CK® Matrix
-The cloud management layer is a crucial service connected to all your cloud resources. Because of this, it's also a potential target for attackers. Consequently, we recommend security operations teams closely monitor the resource management layer.
+The cloud management layer is a crucial service connected to all your cloud resources. Because of this, it's also a potential target for attackers. We recommend security operations teams closely monitor the resource management layer.
Microsoft Defender for Resource Manager automatically monitors the resource management operations in your organization, whether they're performed through the Azure portal, Azure REST APIs, Azure CLI, or other Azure programmatic clients. Defender for Cloud runs advanced security analytics to detect threats and alerts you about suspicious activity.
The two recommendations, which both offer automated remediation (the 'Fix' actio
Defender for Cloud uses the Log Analytics agent to gather security-related data from machines. The agent reads various security-related configurations and event logs and copies the data to your workspace for analysis.
-Defender for Cloud's auto provisioning settings have a toggle for each type of supported extension, including the Log Analytics agent.
+Defender for Cloud's auto provisioning settings include a toggle for each type of supported extension, including the Log Analytics agent.
In a further expansion of our hybrid cloud features, we've added an option to auto provision the Log Analytics agent to machines connected to Azure Arc.
With this release, the availability and presentation of Defender for Kubernetes
- Existing subscriptions - Wherever they appear in the Azure portal, the plans are shown as **Deprecated** with instructions for how to upgrade to the newer plan :::image type="content" source="media/release-notes/defender-plans-deprecated-indicator.png" alt-text="Defender for container registries and Defender for Kubernetes plans showing 'Deprecated' and upgrade information.":::
-The new plan is free for the month of December 2021. For the potential changes to the billing from the old plans to Defender for Containers, and for more details on the benefits introduced with this plan, see [Introducing Microsoft Defender for Containers](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/introducing-microsoft-defender-for-containers/ba-p/2952317).
+The new plan is free for the month of December 2021. For the potential changes to the billing from the old plans to Defender for Containers, and for more information on the benefits introduced with this plan, see [Introducing Microsoft Defender for Containers](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/introducing-microsoft-defender-for-containers/ba-p/2952317).
For more information, see:
defender-for-cloud Review Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/review-security-recommendations.md
Title: Security recommendations in Microsoft Defender for Cloud description: This document walks you through how recommendations in Microsoft Defender for Cloud help you protect your Azure resources and stay in compliance with security policies. Previously updated : 11/09/2021 Last updated : 03/31/2022 # Review your security recommendations [!INCLUDE [Banner for top of topics](./includes/banner.md)]
-This topic explains how to view and understand the recommendations in Microsoft Defender for Cloud to help you protect your Azure resources.
+This article explains how to view and understand the recommendations in Microsoft Defender for Cloud to help you protect your multi-cloud resources.
-## Monitor recommendations <a name="monitor-recommendations"></a>
+## View your recommendations <a name="monitor-recommendations"></a>
Defender for Cloud analyzes the security state of your resources to identify potential vulnerabilities.
-1. From Defender for Cloud's menu, open the **Recommendations** page to see the recommendations applicable to your environment. Recommendations are grouped into security controls.
+**To view your Secure score recommendations**:
- :::image type="content" source="./media/review-security-recommendations/view-recommendations.png" alt-text="Recommendations grouped by security control." lightbox="./media/review-security-recommendations/view-recommendations.png":::
+1. Sign in to the [Azure portal](https://portal.azure.com).
-1. To find recommendations specific to the resource type, severity, environment, or other criteria that are important to you, use the optional filters above the list of recommendations.
+1. Navigate to **Microsoft Defender for Cloud** > **Recommendations**.
- :::image type="content" source="media/review-security-recommendations/recommendation-list-filters.png" alt-text="Filters for refining the list of Microsoft Defender for Cloud recommendations.":::
+ :::image type="content" source="media/review-security-recommendations/recommendations-view.png" alt-text="Screenshot of the recommendations page.":::
-1. Expand a control and select a specific recommendation to view the recommendation details page.
+ Here you'll see the recommendations applicable to your environment(s). Recommendations are grouped into security controls.
- :::image type="content" source="./media/review-security-recommendations/recommendation-details-page.png" alt-text="Recommendation details page." lightbox="./media/review-security-recommendations/recommendation-details-page.png":::
+1. Select **Secure score recommendations**.
- The page includes:
+ :::image type="content" source="media/review-security-recommendations/secure-score-recommendations.png" alt-text="Screenshot showing the location of the secure score recommendations tab.":::
+
+ > [!NOTE]
+ > Custom recommendations can be found under the All recommendations tab. Learn how to [Create custom security initiatives and policies](custom-security-policies.md).
+
+ Secure score recommendations affect the secure score and are mapped to the various security controls. The All recommendations tab allows you to see all of the recommendations, including recommendations that are part of different regulatory compliance standards.
+
+1. (Optional) Select the relevant environment(s).
+
+ :::image type="content" source="media/review-security-recommendations/environment-filter.png" alt-text="Screenshot of the environment filter, to select your filters.":::
+
+1. Select the :::image type="icon" source="media/review-security-recommendations/drop-down-arrow.png" border="false"::: to expand the control, and view a list of recommendations.
+
+ :::image type="content" source="media/review-security-recommendations/list-recommendations.png" alt-text="Screenshot showing how to see the full list of recommendations by selecting the drop-down menu icon." lightbox="media/review-security-recommendations/list-recommendations-expanded.png":::
+
+1. Select a specific recommendation to view the recommendation details page.
+
+ :::image type="content" source="./media/review-security-recommendations/recommendation-details-page.png" alt-text="Recommendation details page." lightbox="./media/review-security-recommendations/recommendation-details-page-expanded.png":::
1. For supported recommendations, the top toolbar shows any or all of the following buttons: - **Enforce** and **Deny** (see [Prevent misconfigurations with Enforce/Deny recommendations](prevent-misconfigurations.md)).
Defender for Cloud analyzes the security state of your resources to identify pot
1. **Severity indicator**. 1. **Freshness interval** (where relevant). 1. **Count of exempted resources** if exemptions exist for a recommendation, this shows the number of resources that have been exempted with a link to view the specific resources.
- 1. **Mapping to MITRE ATT&CK ® tactics and techniques** if a recommendation has defined tactics and techniques, select the icon for links to the relevant pages on MITRE's site.
+ 1. **Mapping to MITRE ATT&CK® tactics and techniques**: if a recommendation has defined tactics and techniques, select the icon for links to the relevant pages on MITRE's site. This applies only to Azure scored recommendations.
:::image type="content" source="media/review-security-recommendations/tactics-window.png" alt-text="Screenshot of the MITRE tactics mapping for a recommendation.":::
Defender for Cloud analyzes the security state of your resources to identify pot
The relationship types are: - **Prerequisite** - A recommendation that must be completed before the selected recommendation
- - **Alternative** - A different recommendation which provides another way of achieving the goals of the selected recommendation
+ - **Alternative** - A different recommendation that provides another way of achieving the goals of the selected recommendation
- **Dependent** - A recommendation for which the selected recommendation is a prerequisite For each related recommendation, the number of unhealthy resources is shown in the "Affected resources" column.
Defender for Cloud analyzes the security state of your resources to identify pot
1. **Remediation steps** - A description of the manual steps required to remediate the security issue on the affected resources. For recommendations with the **Fix** option, you can select **View remediation logic** before applying the suggested fix to your resources. 1. **Affected resources** - Your resources are grouped into tabs:
- - **Healthy resources** – Relevant resources which either aren't impacted or on which you've already remediated the issue.
- - **Unhealthy resources** – Resources which are still impacted by the identified issue.
+ - **Healthy resources** – Relevant resources, which either aren't impacted or on which you've already remediated the issue.
+ - **Unhealthy resources** – Resources that are still impacted by the identified issue.
 - **Not applicable resources** – Resources for which the recommendation can't give a definitive answer. The not applicable tab also includes reasons for each resource. :::image type="content" source="./media/review-security-recommendations/recommendations-not-applicable-reasons.png" alt-text="Not applicable resources with reasons."::: 1. Action buttons to remediate the recommendation or trigger a logic app.
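The health statuses behind these tabs come from Defender for Cloud's assessment objects, which are also exposed programmatically. As a minimal sketch (not from the article), the snippet below lists assessments and their status codes over the Azure REST API; `<subscription-id>` is a placeholder, and the `Microsoft.Security/assessments` endpoint with API version `2020-01-01` is an assumption based on the published assessments API.

```python
# Minimal sketch: list Defender for Cloud assessments (the data behind the
# recommendation tabs) at subscription scope via the Azure REST API.
# Assumes the azure-identity and requests packages; <subscription-id> is a placeholder.
import requests
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()
token = credential.get_token("https://management.azure.com/.default").token

url = (
    "https://management.azure.com/subscriptions/<subscription-id>"
    "/providers/Microsoft.Security/assessments?api-version=2020-01-01"  # assumed version
)
resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()

# Each assessment carries a status code - Healthy, Unhealthy, or NotApplicable -
# matching the resource tabs described above.
for item in resp.json().get("value", []):
    props = item.get("properties", {})
    print(props.get("status", {}).get("code"), "-", props.get("displayName"))
```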
+## Search for a recommendation
+
+You can search for specific recommendations by name. The search box and filters above the list of recommendations can be used to help locate a specific recommendation.
+
+Custom recommendations only appear under the All recommendations tab.
+
+**To search for recommendations**:
+
+1. On the recommendations page, select an environment from the environment filter.
+
+ :::image type="content" source="media/review-security-recommendations/environment-filter.png" alt-text="Screenshot of the environmental filter on the recommendation page.":::
+
+ You can select one, several, or all of the options at a time. The page's results automatically reflect your choice.
+
+1. Enter a name in the search box, or select one of the available filters.
+
+ :::image type="content" source="media/review-security-recommendations/search-filters.png" alt-text="Screenshot of the search box and filter list.":::
+
+1. Select :::image type="icon" source="media/review-security-recommendations/add-filter.png" border="false"::: to add more filter(s).
+
+1. Select a filter from the drop-down menu.
+
+ :::image type="content" source="media/review-security-recommendations/filter-drop-down.png" alt-text="Screenshot of the available filters to select.":::
+
+1. Select a value from the drop-down menu.
+
+1. Select **OK**.
+ ## Review recommendation data in Azure Resource Graph Explorer (ARG)
-The toolbar on the recommendation details page includes an **Open query** button to explore the details in [Azure Resource Graph (ARG)](../governance/resource-graph/index.yml), an Azure service that provides the ability to query - across multiple subscriptions - Defender for Cloud's security posture data.
+You can review recommendations in ARG from either the recommendations page or an individual recommendation's details page.
+
+The toolbar on the recommendation details page includes an **Open query** button to explore the details in [Azure Resource Graph (ARG)](../governance/resource-graph/index.yml), an Azure service that gives you the ability to query - across multiple subscriptions - Defender for Cloud's security posture data.
ARG is designed to provide efficient resource exploration with the ability to query at scale across your cloud environments with robust filtering, grouping, and sorting capabilities. It's a quick and efficient way to query information across Azure subscriptions programmatically or from within the Azure portal. Using the [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/), you can cross-reference Defender for Cloud data with other resource properties.
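As an illustrative sketch (not taken from the article), the same kind of query can be run outside the portal with the Azure Resource Graph SDK for Python. The KQL below against the `securityresources` table follows the pattern Defender for Cloud uses for assessments; treat the exact fields as assumptions to verify in your own environment.

```python
# Minimal sketch: query Defender for Cloud recommendation data in Azure Resource Graph.
# Assumes the azure-identity and azure-mgmt-resourcegraph packages are installed;
# <subscription-id> is a placeholder for a real subscription ID.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

client = ResourceGraphClient(DefaultAzureCredential())

# KQL: count unhealthy resources per recommendation, across subscriptions.
query = """
securityresources
| where type == 'microsoft.security/assessments'
| where properties.status.code == 'Unhealthy'
| summarize unhealthyCount = count() by recommendation = tostring(properties.displayName)
| order by unhealthyCount desc
"""

response = client.resources(QueryRequest(subscriptions=["<subscription-id>"], query=query))
for row in response.data:  # data is a list of dicts in recent SDK versions
    print(f"{row['unhealthyCount']:>5}  {row['recommendation']}")
```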
-For example, this recommendation details page shows fifteen affected resources:
+For example, this recommendation details page shows 15 affected resources:
:::image type="content" source="./media/review-security-recommendations/open-query.png" alt-text="The **Open Query** button on the recommendation details page.":::
-When you open the underlying query, and run it, Azure Resource Graph Explorer returns the same fifteen resources and their health status for this recommendation:
+When you open the underlying query, and run it, Azure Resource Graph Explorer returns the same 15 resources and their health status for this recommendation:
:::image type="content" source="./media/review-security-recommendations/run-query.png" alt-text="Azure Resource Graph Explorer showing the results for the recommendation shown in the previous screenshot.":::
-## Preview recommendations
+## Recommendation insights
+
+The Insights column of the page gives you more details for each recommendation. The options available in this section include:
+
+| Icon | Name | Description |
+|--|--|--|
+| :::image type="icon" source="media/secure-score-security-controls/preview-icon.png" border="false"::: | *Preview recommendation** | This recommendation won't affect your secure score until it's GA. |
+| :::image type="icon" source="media/secure-score-security-controls/fix-icon.png" border="false"::: | **Fix** | From within the recommendation details page, you can use 'Fix' to resolve this issue. |
+| :::image type="icon" source="media/secure-score-security-controls/enforce-icon.png" border="false"::: | **Enforce** | From within the recommendation details page, you can automatically deploy a policy to fix this issue whenever someone creates a non-compliant resource. |
+| :::image type="icon" source="media/secure-score-security-controls/deny-icon.png" border="false"::: | **Deny** | From within the recommendation details page, you can prevent new resources from being created with this issue. |
+
+Recommendations that aren't included in the calculations of your secure score should still be remediated wherever possible, so that when the preview period ends they'll contribute towards your score instead of against it.
+
+## Download recommendations in a CSV report
+
+Recommendations can be downloaded to a CSV report from the Recommendations page.
+
+**To download a CSV report of your recommendations**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to **Microsoft Defender for Cloud** > **Recommendations**.
+
+1. Select **Download CSV report**.
+
+ :::image type="content" source="media/review-security-recommendations/download-csv.png" alt-text="Screenshot showing you where to select the Download CSV report from.":::
+
+A pop-up notification lets you know that the report is being prepared.
-Recommendations flagged as **Preview** aren't included in the calculations of your secure score.
-They should still be remediated wherever possible, so that when the preview period ends they'll contribute towards your score.
+When the report is ready, you'll be notified by a second pop-up.
-An example of a preview recommendation:
-
## Next steps In this document, you were introduced to security recommendations in Defender for Cloud. For related information:
defender-for-cloud Secure Score Security Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/secure-score-security-controls.md
Title: Secure score in Microsoft Defender for Cloud
+ Title: Security posture for Microsoft Defender for Cloud
description: Description of Microsoft Defender for Cloud's secure score and its security controls --++ Previously updated : 03/23/2022 Last updated : 03/31/2022
-# Secure score in Microsoft Defender for Cloud
+# Security posture for Microsoft Defender for Cloud
[!INCLUDE [Banner for top of topics](./includes/banner.md)]
Microsoft Defender for Cloud has two main goals:
- to help you understand your current security situation - to help you efficiently and effectively improve your security
-The central feature in Defender for Cloud that enables you to achieve those goals is **secure score**.
+The central feature in Defender for Cloud that enables you to achieve those goals is the **secure score**.
-Defender for Cloud continually assesses your resources, subscriptions, and organization for security issues. It then aggregates all the findings into a single score so that you can tell, at a glance, your current security situation: the higher the score, the lower the identified risk level.
+Defender for Cloud continually assesses your cross-cloud resources for security issues. It then aggregates all the findings into a single score so that you can tell, at a glance, your current security situation: the higher the score, the lower the identified risk level.
- In the Azure portal pages, the secure score is shown as a percentage value and the underlying values are also clearly presented:
Defender for Cloud continually assesses your resources, subscriptions, and organ
To increase your security, review Defender for Cloud's recommendations page and remediate the recommendation by implementing the remediation instructions for each issue. Recommendations are grouped into **security controls**. Each control is a logical group of related security recommendations, and reflects your vulnerable attack surfaces. Your score only improves when you remediate *all* of the recommendations for a single resource within a control. To see how well your organization is securing each individual attack surface, review the scores for each security control.
-For more information, see [How your secure score is calculated](secure-score-security-controls.md#how-your-secure-score-is-calculated) below.
+For more information, see [How your secure score is calculated](secure-score-security-controls.md#how-your-secure-score-is-calculated) below.
+
+## Manage your security posture
+
+On the Security posture page, you can see the secure score for your entire subscription and for each environment in your subscription. By default, all environments are shown.
++
+| Page section | Description |
+|--|--|
+| :::image type="content" source="media/secure-score-security-controls/select-environment.png" alt-text="Screenshot showing the different environment options."::: | Select your environment to see its secure score, and details. Multiple environments can be selected at once. The page will change based on your selection here.|
+| :::image type="content" source="media/secure-score-security-controls/environment.png" alt-text="Screenshot of the environment section of the security posture page."::: | Shows the total number of subscriptions, accounts and projects that affect your overall score. It also shows how many unhealthy resources and how many recommendations exist in your environments. |
+
+The bottom half of the page lets you view and manage all of your individual subscriptions, accounts, and projects, including their individual secure scores, numbers of unhealthy resources, and recommendations.
+
+You can group this section by environment by selecting the **Group by Environment** checkbox.
+ ## How your secure score is calculated The contribution of each security control towards the overall secure score is shown on the recommendations page. To get all the possible points for a security control, all of your resources must comply with all of the security recommendations within the security control. For example, Defender for Cloud has multiple recommendations regarding how to secure your management ports. You'll need to remediate them all to make a difference to your secure score. ### Example scores for a control - In this example:
-| # | Name | Description |
-| :: | - | |
-| 1 | **Remediate vulnerabilities security control** | This control contains multiple recommendations related to discovering and resolving known vulnerabilities. |
-| 2 | **Max score** | The maximum number of points you can get by fulfilling all recommendations within a control. The maximum score for a control indicates the relative significance of that control and is fixed for every environment. Use the max score values to triage the issues to work on first.<br>For a list of all controls and their max scores, see [Security controls and their recommendations](#security-controls-and-their-recommendations). |
-| 3 | **Number of resources** | There are 35 resources affected by this control.<br>To understand the possible contribution of every resource, divide the max score by the number of resources.<br>For this example, 6/35=0.1714<br>**Every resource contributes 0.1714 points.** |
-| 4 | **Current score** | The current score for this control.<br>Current score=[Score per resource]*[Number of healthy resources]<br> 0.1714 x 5 healthy resources = 0.86<br>Each control contributes towards the total score. In this example, the control is contributing 0.86 points to current total secure score. |
-| 5 | **Potential score increase** | The remaining points available to you within the control. If you remediate all the recommendations in this control, your score will increase by 9%.<br>Potential score increase=[Score per resource]*[Number of unhealthy resources]<br> 0.1714 x 30 unhealthy resources = 5.14<br> |
+- **Remediate vulnerabilities security control** - This control groups multiple recommendations related to discovering and resolving known vulnerabilities.
+- **Max score** - :::image type="icon" source="media/secure-score-security-controls/max-score.png" border="false":::
-### Calculations - understanding your score
+ The maximum number of points you can gain by completing all recommendations within a control. The maximum score for a control indicates the relative significance of that control and is fixed for every environment. Use the max score values to triage the issues to work on first.<br>For a list of all controls and their max scores, see [Security controls and their recommendations](#security-controls-and-their-recommendations).
+
+- **Current score** - :::image type="icon" source="media/secure-score-security-controls/current-score.png" border="false":::
+
+ The current score for this control.<br>Current score=[Score per resource]*[Number of healthy resources].
-| Metric | Formula and example |
-| | |
-| **Security control's current score** | <br>![Equation for calculating a security control's score.](media/secure-score-security-controls/secure-score-equation-single-control.png)<br><br>Each individual security control contributes towards the Security Score. Each resource affected by a recommendation within the control, contributes towards the control's current score. The current score for each control is a measure of the status of the resources *within* the control.<br>![Tooltips showing the values used when calculating the security control's current score](media/secure-score-security-controls/security-control-scoring-tooltips.png)<br>In this example, the max score of 6 would be divided by 78 because that's the sum of the healthy and unhealthy resources.<br>6 / 78 = 0.0769<br>Multiplying that by the number of healthy resources (4) results in the current score:<br>0.0769 * 4 = **0.31**<br><br> |
-| **Secure score**<br>Single subscription | <br>![Equation for calculating a subscription's secure score](media/secure-score-security-controls/secure-score-equation-single-sub.png)<br><br>![Single subscription secure score with all controls enabled](media/secure-score-security-controls/secure-score-example-single-sub.png)<br>In this example, there is a single subscription with all security controls available (a potential maximum score of 60 points). The score shows 28 points out of a possible 60 and the remaining 32 points are reflected in the "Potential score increase" figures of the security controls.<br>![List of controls and the potential score increase](media/secure-score-security-controls/secure-score-example-single-sub-recs.png) |
-| **Secure score**<br>Multiple subscriptions | <br>![Equation for calculating the secure score for multiple subscriptions.](media/secure-score-security-controls/secure-score-equation-multiple-subs.png)<br><br>When calculating the combined score for multiple subscriptions, Defender for Cloud includes a *weight* for each subscription. The relative weights for your subscriptions are determined by Defender for Cloud based on factors such as the number of resources.<br>The current score for each subscription is calculated in the same way as for a single subscription, but then the weight is applied as shown in the equation.<br>When viewing multiple subscriptions, secure score evaluates all resources within all enabled policies and groups their combined impact on each security control's maximum score.<br>![Secure score for multiple subscriptions with all controls enabled](media/secure-score-security-controls/secure-score-example-multiple-subs.png)<br>The combined score is **not** an average; rather it's the evaluated posture of the status of all resources across all subscriptions.<br>Here too, if you go to the recommendations page and add up the potential points available, you will find that it's the difference between the current score (24) and the maximum score available (60). |
+   Each control contributes towards the total score. In this example, the control is contributing 2.00 points to the current total secure score.
+- **Potential score increase** - :::image type="icon" source="media/secure-score-security-controls/potential-increase.png" border="false":::
+ The remaining points available to you within the control. If you remediate all the recommendations in this control, your score will increase by 9%.
+
+ For example, Potential score increase=[Score per resource]*[Number of unhealthy resources] or 0.1714 x 30 unhealthy resources = 5.14.
+
+- **Insights** - :::image type="icon" source="media/secure-score-security-controls/insights.png" border="false":::
+
+   Gives you extra details for each recommendation, which can be:
+
+ - :::image type="icon" source="media/secure-score-security-controls/preview-icon.png" border="false"::: Preview recommendation - This recommendation won't affect your secure score until it's GA.
+
+ - :::image type="icon" source="media/secure-score-security-controls/fix-icon.png" border="false"::: Fix - From within the recommendation details page, you can use 'Fix' to resolve this issue.
+
+ - :::image type="icon" source="media/secure-score-security-controls/enforce-icon.png" border="false"::: Enforce - From within the recommendation details page, you can automatically deploy a policy to fix this issue whenever someone creates a non-compliant resource.
+
+ - :::image type="icon" source="media/secure-score-security-controls/deny-icon.png" border="false"::: Deny - From within the recommendation details page, you can prevent new resources from being created with this issue.
+
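Written out symbolically, the arithmetic described above reduces to the following (a reconstruction from the text; the article itself renders these equations as images):

```latex
\begin{aligned}
\text{score per resource} &= \frac{\text{max score}}{\text{healthy resources} + \text{unhealthy resources}} \\
\text{current score} &= \text{score per resource} \times \text{healthy resources} \\
\text{potential score increase} &= \text{score per resource} \times \text{unhealthy resources}
\end{aligned}
```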
+### Calculations - understanding your score
+
+|Metric|Formula and example|
+|-|-|
+|**Security control's current score**|<br>![Equation for calculating a security control's score.](media/secure-score-security-controls/secure-score-equation-single-control.png)<br><br>Each individual security control contributes towards the Security Score. Each resource affected by a recommendation within the control, contributes towards the control's current score. The current score for each control is a measure of the status of the resources *within* the control.<br>![Tooltips showing the values used when calculating the security control's current score](media/secure-score-security-controls/security-control-scoring-tooltips.png)<br>In this example, the max score of 6 would be divided by 78 because that's the sum of the healthy and unhealthy resources.<br>6 / 78 = 0.0769<br>Multiplying that by the number of healthy resources (4) results in the current score:<br>0.0769 * 4 = **0.31**<br><br>|
+|**Secure score**<br>Single subscription or connector|<br>![Equation for calculating a subscription's secure score](media/secure-score-security-controls/secure-score-equation-single-sub.png)<br><br>![Single subscription secure score with all controls enabled](media/secure-score-security-controls/secure-score-example-single-sub.png)<br>In this example, there's a single subscription or connector with all security controls available (a potential maximum score of 60 points). The score shows 28 points out of a possible 60, and the remaining 32 points are reflected in the "Potential score increase" figures of the security controls.<br>![List of controls and the potential score increase](media/secure-score-security-controls/secure-score-example-single-sub-recs.png)<br>The same equation applies to a connector; just replace the word *subscription* with *connector*. |
+|**Secure score**<br>Multiple subscriptions and connectors|<br>![Equation for calculating the secure score for multiple subscriptions.](media/secure-score-security-controls/secure-score-equation-multiple-subs.png)<br><br>When calculating the combined score for multiple subscriptions and connectors, Defender for Cloud includes a *weight* for each subscription and connector. The relative weights for your subscriptions and connectors are determined by Defender for Cloud based on factors such as the number of resources.<br>The current score for each subscription and connector is calculated in the same way as for a single subscription or connector, but then the weight is applied as shown in the equation.<br>When viewing multiple subscriptions and connectors, the secure score evaluates all resources within all enabled policies and groups their combined impact on each security control's maximum score.<br>![Secure score for multiple subscriptions with all controls enabled](media/secure-score-security-controls/secure-score-example-multiple-subs.png)<br>The combined score is **not** an average; rather it's the evaluated posture of the status of all resources across all subscriptions and connectors.<br><br>Here too, if you go to the recommendations page and add up the potential points available, you'll find that it's the difference between the current score (22) and the maximum score available (58).|
+
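To make the numbers concrete, here's a small sketch that reproduces the arithmetic from the examples above; the figures (max score 6, 4 healthy out of 78 resources in the table, 5 healthy out of 35 resources in the control walkthrough) are taken directly from the text.

```python
# Minimal sketch reproducing the secure score arithmetic described above.

def control_scores(max_score: float, healthy: int, unhealthy: int) -> tuple[float, float]:
    """Return (current score, potential score increase) for one security control."""
    per_resource = max_score / (healthy + unhealthy)
    return per_resource * healthy, per_resource * unhealthy

# Table example: max score 6, 4 healthy out of 78 resources -> current score 0.31.
current, increase = control_scores(max_score=6, healthy=4, unhealthy=74)
print(round(current, 2), round(increase, 2))   # 0.31 5.69

# Control walkthrough example: max score 6, 5 healthy out of 35 resources.
current, increase = control_scores(max_score=6, healthy=5, unhealthy=30)
print(round(current, 2), round(increase, 2))   # 0.86 5.14
```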
### Which recommendations are included in the secure score calculations? Only built-in recommendations have an impact on the secure score.
-Recommendations flagged as **Preview** aren't included in the calculations of your secure score. We recommend that you remediate preview recommendations so that they contribute towards your score when the preview period ends.
+Recommendations flagged as **Preview** aren't included in the calculations of your secure score. They should still be remediated wherever possible, so that when the preview period ends they'll contribute towards your score.
An example of a preview recommendation:
We recommend every organization carefully reviews their assigned Azure Policy in
> [!TIP] > For details about reviewing and editing your initiatives, see [Working with security policies](tutorial-security-policy.md).
-Even though Defender for Cloud's default security initiative is based on industry best practices and standards, there are scenarios in which the built-in recommendations listed below might not completely fit your organization. Consequently, it is sometimes necessary to adjust the default initiative - without compromising security - to ensure it's aligned with your organization's own policies, industry standards, regulatory standards, and benchmarks.<br><br>
+Even though Defender for Cloud's default security initiative is based on industry best practices and standards, there are scenarios in which the built-in recommendations listed below might not completely fit your organization. It's sometimes necessary to adjust the default initiative - without compromising security - to ensure it's aligned with your organization's own policies, industry standards, regulatory standards, and benchmarks.<br><br>
[!INCLUDE [security-center-controls-and-recommendations](../../includes/asc/security-control-recommendations.md)]
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
If you're looking for the latest release notes, you'll find them in the [What's
| Planned change | Estimated date for change | |--|--| | [Changes to recommendations for managing endpoint protection solutions](#changes-to-recommendations-for-managing-endpoint-protection-solutions) | March 2022 |
-| [AWS and GCP recommendations to GA](#aws-and-gcp-recommendations-to-ga) | March 2022 |
| [Relocation of custom recommendations](#relocation-of-custom-recommendations) | March 2022 | | [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | May 2022 |
Learn more:
- [Defender for Cloud's supported endpoint protection solutions](supported-machines-endpoint-solutions-clouds-servers.md#endpoint-supported) - [How these recommendations assess the status of your deployed solutions](endpoint-protection-recommendations-technical.md)
-### AWS and GCP recommendations to GA
-
-**Estimated date for change:** March 2022
-
-There are currently AWS and GCP recommendations in the preview stage. These recommendations come from the AWS Foundational Security Best Practices and GCP default standards which are assigned by default. All of the recommendations will become Generally Available (GA) in March 2022.
-
-When these recommendations go live, their impact will be included in the calculations of your secure score. Expect changes to your secure score.
-
-#### AWS recommendations
-
-**To find these recommendations**:
-
-1. Navigate to **Environment settings** > **`AWS connector`** > **Standards (preview)**.
-1. Right click on **AWS Foundational Security Best Practices (preview)**, and select **view assessments**.
--
-#### GCP recommendations
-
-**To find these recommendations**:
-
-1. Navigate to **Environment settings** > **`GCP connector`** > **Standards (preview)**.
-1. Right click on **GCP Default (preview)**, and select **view assessments**.
-- ### Relocation of custom recommendations **Estimated date for change:** March 2022
defender-for-iot Edge Security Module Deprecation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/edge-security-module-deprecation.md
This article describes Microsoft Defender for IoT features and support for different capabilities within Defender for IoT.
+## Legacy Defender for IoT micro-agent
+
+The Defender-IoT-micro-agent has been replaced by our newer micro-agent experience.
+
+For more information, see [Tutorial: Create a DefenderIotMicroAgent module twin (Preview)](tutorial-create-micro-agent-module-twin.md) and [Tutorial: Install the Defender for IoT micro agent (Preview)](tutorial-standalone-agent-binary-installation.md).
+
+### Timeline
+
+Microsoft Defender for IoT will continue to support the legacy agent until March 31, 2023.
+ ## Defender for IoT C, C#, and Edge Defender-IoT-micro-agent deprecation The new micro agent will replace the current C, C#, and Edge Defender-IoT-micro-agent.ΓÇ»
defender-for-iot How To Deploy Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/how-to-deploy-edge.md
Use the following steps to deploy a Defender for IoT security module for IoT Edg
1. From the Azure portal, open **Marketplace**.
-1. Select **Internet of Things**, then search for **Microsoft Defender for IoT** and select it.
+1. Select **Internet of Things**, then search for **Azure Security Center for IoT** and select it.
:::image type="content" source="media/howto/edge-onboarding.png" alt-text="Select Defender for IoT":::
devtest-labs Activity Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/activity-logs.md
Title: Activity logs description: This article provides steps to view activity logs for Azure DevTest Labs. ++ Last updated 07/10/2020
devtest-labs Add Artifact Repository https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/add-artifact-repository.md
Title: Add an artifact repository to your lab description: Learn how to add a private artifact repository to your lab to store your custom artifacts. ++ Last updated 01/11/2022
devtest-labs Add Artifact Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/add-artifact-vm.md
Title: Add an artifact to a VM description: Learn how to add an artifact to a virtual machine in a lab in Azure DevTest Labs. ++ Last updated 01/11/2022
devtest-labs Add Vm Use Shared Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/add-vm-use-shared-image.md
Title: Add a VM using a shared image description: Learn how to add a virtual machine (VM) using an image from the attached shared image gallery in Azure DevTest Labs ++ Last updated 06/26/2020
devtest-labs Automate Add Lab User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/automate-add-lab-user.md
Title: Automate adding a lab user description: This article shows you how to automate adding a user to a lab in Azure DevTest Labs using Azure Resource Manager templates, PowerShell, and CLI. ++ Last updated 06/26/2020
devtest-labs Best Practices Distributive Collaborative Development Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/best-practices-distributive-collaborative-development-environment.md
Title: Distributed collaborative development of Azure DevTest Labs resources description: Provides best practices for setting up a distributed and collaborative development environment to develop DevTest Labs resources. ++ Last updated 06/26/2020
devtest-labs Configure Lab Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/configure-lab-identity.md
Title: Configure a lab identity description: Learn how to configure a lab identity in Azure DevTest. ++ Last updated 08/20/2020
devtest-labs Configure Lab Remote Desktop Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/configure-lab-remote-desktop-gateway.md
Title: Configure a lab to use a remote desktop gateway description: Learn how to configure a remote desktop gateway in Azure DevTest Labs for secure access to lab VMs without exposing RDP ports. ++ Last updated 03/07/2022
devtest-labs Configure Shared Image Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/configure-shared-image-gallery.md
Title: Configure a shared image gallery description: Learn how to configure a shared image gallery in Azure DevTest Labs, which enables users to access images from a shared location while creating lab resources. ++ Last updated 06/26/2020
devtest-labs Connect Environment Lab Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/connect-environment-lab-virtual-network.md
Title: Connect environments to a lab's vnet description: Learn how to connect an environment (like a Service Fabric cluster) to your lab's virtual network in Azure DevTest Labs ++ Last updated 06/26/2020
devtest-labs Connect Linux Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/connect-linux-virtual-machine.md
Title: Connect to your Linux virtual machines description: Learn how to connect to your Linux virtual machine in a lab (Azure DevTest Labs) ++ Last updated 07/17/2020
devtest-labs Connect Virtual Machine Through Browser https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/connect-virtual-machine-through-browser.md
Title: Connect to lab virtual machines through Browser connect description: Learn how to connect to lab virtual machines (VMs) through a browser if Browser connect is enabled for the lab. ++ Last updated 03/14/2022
devtest-labs Connect Windows Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/connect-windows-virtual-machine.md
Title: Connect to your Windows virtual machines description: Learn how to connect to your Windows virtual machine in a lab (Azure DevTest Labs) ++ Last updated 07/17/2020
devtest-labs Create Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/create-alerts.md
Title: Create activity log alerts for labs description: This article provides steps to create activity log alerts for lab in Azure DevTest Labs. ++ Last updated 07/10/2020
devtest-labs Create Environment Service Fabric Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/create-environment-service-fabric-cluster.md
Title: Create a Service Fabric cluster environment description: Learn how to create an environment with a self-contained Service Fabric cluster. See how to start and stop the cluster by using schedules. ++ Last updated 06/26/2020
devtest-labs Create Lab Windows Vm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/create-lab-windows-vm-template.md
Title: Create a lab in Azure DevTest Labs by using an Azure Resource Manager template description: Use an Azure Resource Manager (ARM) template to create a lab that has a virtual machine in Azure DevTest Labs. ++ Last updated 01/03/2022
devtest-labs Deliver Proof Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/deliver-proof-concept.md
Title: Deliver a proof of concept description: Use a proof of concept or pilot deployment to investigate incorporating Azure DevTest Labs into an enterprise environment. ++ Last updated 03/22/2022
devtest-labs Deploy Nested Template Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/deploy-nested-template-environments.md
Title: Deploy nested ARM template environments description: Learn how to nest Azure Resource Manager (ARM) templates to deploy Azure DevTest Labs environments. ++ Last updated 01/26/2022
devtest-labs Devtest Lab Add Claimable Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-add-claimable-vm.md
Title: Create and manage claimable VMs description: Learn how to use the Azure portal to add a claimable virtual machine in Azure DevTest Labs and see the processes to follows to claim/unclaim a virtual machine. ++ Last updated 06/26/2020
devtest-labs Devtest Lab Add Devtest User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-add-devtest-user.md
Title: Add lab owners and users with role-based access control (RBAC) description: Learn about the Azure DevTest Labs Owner, Contributor, and DevTest Labs User roles, and how to add members to lab roles by using the Azure portal or Azure PowerShell. ++ Last updated 01/26/2022
devtest-labs Devtest Lab Add Tag https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-add-tag.md
Title: Add tags to a lab description: Learn how to create custom tags in Azure DevTest Labs and use tags to categorize resources. You can see all the resources in your subscription that have a tag. ++ Last updated 06/26/2020
devtest-labs Devtest Lab Add Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-add-vm.md
Title: Create and add a virtual machine to a lab description: Learn how to use the Azure portal to add a virtual machine (VM) to a lab in Azure DevTest Labs. Configure basic settings, artifacts, and advanced settings. ++ Last updated 03/03/2022
devtest-labs Devtest Lab Announcements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-announcements.md
Title: Post an announcement to a lab description: Learn how to post a custom announcement in an existing lab to notify users about recent changes or additions to the lab in Azure DevTest Labs. ++ Last updated 06/26/2020
devtest-labs Devtest Lab Artifact Author https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-artifact-author.md
Title: Create custom artifacts for virtual machines description: Learn how to create and use artifacts to deploy and set up applications on DevTest Labs virtual machines. ++ Last updated 01/11/2022
devtest-labs Devtest Lab Attach Detach Data Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-attach-detach-data-disk.md
Title: Attach & detach data disks for lab VMs description: Learn how to attach or detach a data disk for a lab virtual machine in Azure DevTest Labs. ++ Last updated 03/29/2022
devtest-labs Devtest Lab Auto Shutdown https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-auto-shutdown.md
Title: Configure auto shutdown policy for labs and virtual machines description: Learn how to set auto shutdown schedules and policies for Azure DevTest Labs or for individual virtual machines (VMs) to shut down the VMs at a specific time daily. ++ Last updated 11/01/2021
devtest-labs Devtest Lab Auto Startup Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-auto-startup-vm.md
Title: Configure auto-start settings for a VM description: Learn how to configure auto-start settings for VMs in a lab. This setting allows VMs in the lab to be automatically started on a schedule. ++ Last updated 03/29/2022
devtest-labs Devtest Lab Comparing Vm Base Image Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-comparing-vm-base-image-types.md
Title: Comparing custom images and formulas description: Learn about the differences between custom images and formulas as VM bases so you can decide which one best suits your environment. ++ Last updated 08/26/2021
devtest-labs Devtest Lab Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-concepts.md
Title: Azure DevTest Labs concepts description: Learn definitions of some basic DevTest Labs concepts related to labs, virtual machines (VMs), and environments. ++ Last updated 03/03/2022
devtest-labs Devtest Lab Configure Cost Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-configure-cost-management.md
Title: View the monthly estimated lab cost trend description: This article provides information on how to track the cost of your lab (monthly estimated cost trend chart) in Azure DevTest Labs. ++ Last updated 06/26/2020
devtest-labs Devtest Lab Configure Marketplace Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-configure-marketplace-images.md
Title: Configure Azure Marketplace image settings description: Configure which Azure Marketplace images can be used when creating a VM in Azure DevTest Labs ++ Last updated 06/26/2020
devtest-labs Devtest Lab Configure Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-configure-vnet.md
Title: Configure a virtual network description: Learn how to configure an existing virtual network and subnet to use for creating virtual machines in Azure DevTest Labs. ++ Last updated 02/15/2022
devtest-labs Devtest Lab Create Custom Image From Vhd Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-create-custom-image-from-vhd-using-powershell.md
Title: Create a custom image from VHD file by using Azure PowerShell description: Automate creation of a custom image in Azure DevTest Labs from a VHD file by using PowerShell. ++ Last updated 10/24/2021
devtest-labs Devtest Lab Create Custom Image From Vm Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-create-custom-image-from-vm-using-portal.md
Title: Create a custom image from a lab VM description: Learn how to create a custom image from a provisioned virtual machine in Azure DevTest Labs by using the Azure portal. ++ Last updated 02/15/2022
devtest-labs Devtest Lab Create Environment From Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-create-environment-from-arm.md
Title: Use ARM templates to create multi-VM environments and PaaS resources description: Learn how to use Azure Resource Manager (ARM) templates to create multi-VM, platform-as-a-service (PaaS) environments and resources in Azure DevTest Labs. ++ Last updated 01/03/2022
devtest-labs Devtest Lab Create Lab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-create-lab.md
Title: 'Quickstart: Create a lab in the Azure portal' description: Learn how to quickly create a lab in Azure DevTest Labs by using the Azure portal. ++ Last updated 03/03/2022
devtest-labs Devtest Lab Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-create-template.md
Title: Create an Azure DevTest Labs virtual machine custom image from a VHD file description: Learn how to use a VHD file to create an Azure DevTest Labs virtual machine custom image in the Azure portal. ++ Last updated 01/04/2022
devtest-labs Devtest Lab Delete Lab Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-delete-lab-vm.md
Title: Delete a lab virtual machine or a lab description: Learn how to delete a virtual machine from a lab or delete a lab in Azure DevTest Labs. ++ Last updated 03/14/2022
devtest-labs Devtest Lab Dev Ops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-dev-ops.md
Title: Integrate Azure DevTest Labs with DevOps CI/CD pipelines description: Learn how to use Azure DevTest Labs with continuous integration (CI) and continuous delivery (CD) pipelines in an enterprise environment. ++ Last updated 11/16/2021
devtest-labs Devtest Lab Enable Licensed Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-enable-licensed-images.md
Title: Enable a licensed image in your lab description: Learn how to enable a licensed image in Azure DevTest Labs using the Azure portal ++ Last updated 06/26/2020
devtest-labs Devtest Lab Grant User Permissions To Specific Lab Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-grant-user-permissions-to-specific-lab-policies.md
Title: Grant user permissions to specific lab policies description: Learn how to grant user permissions to specific lab policies in DevTest Labs based on each user's needs ++ Last updated 06/26/2020
devtest-labs Devtest Lab Guidance Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-guidance-get-started.md
Title: Popular scenarios for using Azure DevTest Labs description: This article describes primary Azure DevTest Labs scenarios, and how an organization can begin exploring DevTest Labs. ++ Last updated 02/03/2022
devtest-labs Devtest Lab Guidance Governance Application Migration Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-guidance-governance-application-migration-integration.md
Title: Application migration and integration description: This article provides governance guidance for Azure DevTest Labs infrastructure. The context is application migration and integration. ++ Last updated 06/26/2020
devtest-labs Devtest Lab Guidance Governance Cost Ownership https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-guidance-governance-cost-ownership.md
Title: Manage cost and ownership description: This article provides information that helps you optimize for cost and align ownership across your environment. ++ Last updated 06/26/2020
devtest-labs Devtest Lab Guidance Governance Policy Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-guidance-governance-policy-compliance.md
Title: Company policy and compliance description: This article provides guidance on governing company policy and compliance for Azure DevTest Labs infrastructure. ++ Last updated 06/26/2020
devtest-labs Devtest Lab Guidance Governance Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-guidance-governance-resources.md
Title: Governance of Azure DevTest Labs infrastructure description: This article addresses the alignment and management of resources for Azure DevTest Labs within your organization. ++ Last updated 06/26/2020
devtest-labs Devtest Lab Guidance Orchestrate Implementation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-guidance-orchestrate-implementation.md
Title: Orchestrate implementation description: This article provides guidance for orchestrating implementation of Azure DevTest Labs in your organization. ++ Last updated 06/26/2020
devtest-labs Devtest Lab Guidance Prescriptive Adoption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-guidance-prescriptive-adoption.md
Title: Adopt Azure DevTest Labs for your enterprise description: This article provides prescriptive guidance for using Azure DevTest Labs in your enterprise. ++ Last updated 06/26/2020
devtest-labs Devtest Lab Guidance Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-guidance-scale.md
Title: Scale up your Azure DevTest Labs infrastructure description: See information and guidance about scaling up your Azure DevTest Labs infrastructure. ++ Last updated 06/26/2020
devtest-labs Devtest Lab Integrate Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-integrate-ci-cd.md
Title: Integrate Azure DevTest Labs into Azure Pipelines description: Learn how to integrate Azure DevTest Labs into Azure Pipelines continuous integration and delivery (CI/CD) pipelines. ++ Last updated 11/16/2021
devtest-labs Devtest Lab Internal Support Message https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-internal-support-message.md
Title: Add an internal support statement to a lab description: Learn how to post an internal support statement to a lab in Azure DevTest Labs ++ Last updated 06/26/2020
devtest-labs Devtest Lab Manage Formulas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-manage-formulas.md
Title: Manage formulas in Azure DevTest Labs to create VMs description: This article illustrates how to create a formula from either a base (custom image, Marketplace image, or another formula) or an existing VM. ++ Last updated 06/26/2020
devtest-labs Devtest Lab Mandatory Artifacts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-mandatory-artifacts.md
Title: Specify mandatory artifacts for lab virtual machines description: Learn how to specify mandatory artifacts to install at creation of every lab virtual machine (VM) in Azure DevTest Labs. ++ Last updated 01/12/2022
devtest-labs Devtest Lab Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-overview.md
Title: What is Azure DevTest Labs? description: Learn how DevTest Labs makes it easy to create, manage, and monitor Azure virtual machines and environments. ++ Last updated 03/03/2022
devtest-labs Devtest Lab Redeploy Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-redeploy-vm.md
Title: Redeploy a VM in a lab description: Learn how to redeploy a virtual machine (move from one Azure node to another) in Azure DevTest Labs. ++ Last updated 06/26/2020
devtest-labs Devtest Lab Reference Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-reference-architecture.md
Title: Enterprise reference architecture description: See a reference architecture and considerations for Azure DevTest Labs in an enterprise. ++ Last updated 03/14/2022
devtest-labs Devtest Lab Resize Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-resize-vm.md
Title: Stop and resize lab VMs description: Learn how to change the size of a virtual machine (VM) in Azure DevTest Labs based on changing needs for CPU, network, or disk performance. ++ Last updated 02/15/2022
devtest-labs Devtest Lab Restart Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-restart-vm.md
Title: Restart a VM in a lab description: This article provides steps to quickly and easily restart virtual machines (VM) in Azure DevTest Labs. ++ Last updated 06/26/2020
devtest-labs Devtest Lab Scale Lab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-scale-lab.md
Title: Scale quotas and limits in your lab description: This article describes how you can scale your lab in Azure DevTest Labs. View your usage quotas and limits, and request for an increase. ++ Last updated 06/26/2020
devtest-labs Devtest Lab Set Lab Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-set-lab-policy.md
Title: Control costs with lab policies description: Learn how to define lab policies such as VM sizes, maximum VMs per user, and shutdown automation. ++ Last updated 02/14/2022
devtest-labs Devtest Lab Shared Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-shared-ip.md
Title: Understand shared IP addresses description: Learn how Azure DevTest Labs uses shared IP addresses to minimize the public IP addresses you need to access your lab VMs. ++ Last updated 11/08/2021
devtest-labs Devtest Lab Store Secrets In Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-store-secrets-in-key-vault.md
Title: Store secrets in a key vault description: Learn how to store secrets in an Azure Key Vault and use them while creating a VM, formula, or an environment. ++ Last updated 06/26/2020
devtest-labs Devtest Lab Troubleshoot Apply Artifacts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-troubleshoot-apply-artifacts.md
Title: Troubleshooting issues with artifacts description: Learn how to troubleshoot issues that occur when applying artifacts in an Azure DevTest Labs virtual machine. ++ Last updated 11/04/2021
devtest-labs Devtest Lab Troubleshoot Artifact Failure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-troubleshoot-artifact-failure.md
Title: Diagnose artifact failures in an Azure DevTest Labs virtual machine description: DevTest Labs provide information that you can use to diagnose an artifact failure. This article shows you how to troubleshoot artifact failures. ++ Last updated 06/26/2020
devtest-labs Devtest Lab Upload Vhd Using Azcopy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-upload-vhd-using-azcopy.md
Title: Upload VHD file to Azure DevTest Labs using AzCopy description: This article provides a walkthrough to use the AzCopy command-line utility to upload a VHD file to a lab's storage account in Azure DevTest Labs. ++ Last updated 06/26/2020
devtest-labs Devtest Lab Upload Vhd Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-upload-vhd-using-powershell.md
Title: Upload VHD file to Azure DevTest Labs using PowerShell description: This article provides a walkthrough that shows you how to upload a VHD file to Azure DevTest Labs using PowerShell. ++ Last updated 06/26/2020
devtest-labs Devtest Lab Upload Vhd Using Storage Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-upload-vhd-using-storage-explorer.md
Title: Upload a VHD file by using Storage Explorer description: Upload a VHD file to a DevTest Labs lab storage account by using Microsoft Azure Storage Explorer. ++ Last updated 11/05/2021
devtest-labs Devtest Lab Use Arm And Powershell For Lab Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-use-arm-and-powershell-for-lab-resources.md
Title: Create and deploy labs with Azure Resource Manager (ARM) templates description: Learn how Azure DevTest Labs uses Azure Resource Manager (ARM) templates to create and configure lab virtual machines (VMs) and environments. ++ Last updated 01/11/2022
devtest-labs Devtest Lab Use Claim Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-use-claim-capabilities.md
Title: Use claim capabilities description: Learn about different scenarios for using claim/unclaim capabilities of Azure DevTest Labs ++ Last updated 06/26/2020
devtest-labs Devtest Lab Use Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-use-resource-manager-template.md
Title: Create VMs by using ARM templates description: Learn how to view, edit, save, and store ARM virtual machine (VM) templates, and connect template repositories to Azure DevTest Labs. ++ Last updated 01/11/2022
devtest-labs Devtest Lab Vm Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-vm-powershell.md
Title: Create a lab virtual machine by using Azure PowerShell description: Learn how to use Azure PowerShell to create and manage virtual machines in Azure DevTest Labs. ++ Last updated 03/17/2022
devtest-labs Devtest Lab Vmcli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-vmcli.md
Title: Create and manage virtual machines in Azure DevTest Labs with Azure CLI description: Learn how to use Azure DevTest Labs to create and manage virtual machines with Azure CLI ++ Last updated 06/26/2020
devtest-labs Enable Browser Connection Lab Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/enable-browser-connection-lab-virtual-machines.md
Title: Enable browser connection to Azure DevTest Labs virtual machines description: Integrate Azure Bastion with DevTest Labs to enable accessing lab virtual machines (VMs) through a browser. ++ Last updated 11/02/2021
devtest-labs Enable Managed Identities Lab Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/enable-managed-identities-lab-vms.md
Title: Enable managed identities on your lab VMs description: This article shows how a lab owner can enable user-assigned managed identities on your lab virtual machines. ++ Last updated 06/26/2020
devtest-labs Encrypt Disks Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/encrypt-disks-customer-managed-keys.md
Title: Encrypt disks using customer-managed keys description: Learn how to encrypt disks using customer-managed keys in Azure DevTest Labs. ++ Last updated 09/29/2021
devtest-labs Encrypt Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/encrypt-storage.md
Title: Manage storage accounts for labs description: Learn about DevTest Labs storage accounts, encryption, customer-managed keys, and setting expiration dates for artifact results storage. ++ Last updated 03/15/2022
devtest-labs Environment Security Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/environment-security-alerts.md
Title: Security alerts for environments description: This article shows you how to view security alerts for an environment in DevTest Labs and take an appropriate action. ++ Last updated 06/26/2020
devtest-labs Extend Devtest Labs Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/extend-devtest-labs-azure-functions.md
Title: Extend Azure DevTest Labs using Azure Functions description: Learn how to extend Azure DevTest Labs using Azure Functions. ++ Last updated 06/26/2020
devtest-labs How To Move Labs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/how-to-move-labs.md
Title: Move a DevTest lab to another region description: Shows you how to move a lab to another region. ++ Last updated 03/03/2022
devtest-labs Image Factory Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/image-factory-create.md
Title: Create an image factory description: This article shows you how to set up a custom image factory by using sample scripts available in the Git repository (Azure DevTest Labs). ++ Last updated 06/26/2020
devtest-labs Image Factory Save Distribute Custom Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/image-factory-save-distribute-custom-images.md
Title: Save and distribute images description: This article gives you the steps to save custom images from the already created virtual machines (VMs) in Azure DevTest Labs. ++ Last updated 06/26/2020
devtest-labs Image Factory Set Retention Policy Cleanup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/image-factory-set-retention-policy-cleanup.md
Title: Set up retention policy description: Learn how to configure a retention policy, clean up the factory, and retire old images from DevTest Labs. ++ Last updated 06/26/2020
devtest-labs Image Factory Set Up Devops Lab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/image-factory-set-up-devops-lab.md
Title: Run an image factory from Azure DevOps description: This article covers all the preparations needed to run the image factory from Azure DevOps (formerly Visual Studio Team Services). ++ Last updated 06/26/2020
devtest-labs Import Virtual Machines From Another Lab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/import-virtual-machines-from-another-lab.md
Title: Import virtual machines from another lab description: Learn how to import virtual machines from one lab to another in Azure DevTest Labs. ++ Last updated 11/08/2021
devtest-labs Integrate Environments Devops Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/integrate-environments-devops-pipeline.md
Title: Integrate DevTest Labs environments into Azure Pipelines description: Learn how to integrate Azure DevTest Labs environments into Azure Pipelines continuous integration (CI) and continuous delivery (CD) pipelines. ++ Last updated 11/17/2021
devtest-labs Lab Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/lab-services-overview.md
Title: Azure Lab Services vs. Azure DevTest Labs description: Compare features, scenarios, and use cases for Azure DevTest Labs and Azure Lab Services. ++ Last updated 11/15/2021
devtest-labs Network Isolation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/network-isolation.md
Title: Network isolation description: Learn how to enable and configure network isolation for labs in Azure DevTest Labs. ++ Last updated 03/21/2022
devtest-labs Personal Data Delete Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/personal-data-delete-export.md
Title: How to delete and export personal data description: Learn how to delete and export personal data from the Azure DevTest Labs service to support your obligations under the General Data Protection Regulation (GDPR). ++ Last updated 06/26/2020
devtest-labs Create Lab Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/quickstarts/create-lab-rest.md
Title: 'Quickstart: Create a lab with REST API' description: In this quickstart, you create a lab in Azure DevTest Labs by using an Azure REST API. ++ Last updated 10/27/2021 #Customer intent: As an administrator, I want to set up a lab so that my developers have a test environment.
devtest-labs Report Usage Across Multiple Labs Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/report-usage-across-multiple-labs-subscriptions.md
Title: Azure DevTest Labs usage across multiple labs and subscriptions description: Learn how to report Azure DevTest Labs usage across multiple labs and subscriptions. ++ Last updated 06/26/2020
devtest-labs Resource Group Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/resource-group-control.md
Title: Specify resource group for Azure VMs in DevTest Labs description: Learn how to specify a resource group for VMs in a lab in Azure DevTest Labs. ++ Last updated 10/18/2021
devtest-labs Samples Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/samples-cli.md
Title: Azure CLI Samples description: Learn about Azure CLI scripts. With these samples, you can create a virtual machine and then start, stop, and delete it in Azure DevTest Labs. ++ Last updated 02/02/2022
devtest-labs Samples Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/samples-powershell.md
Title: Azure PowerShell Samples description: Learn about Azure PowerShell scripts. These samples help you manage labs in Azure DevTest Labs. ++ Last updated 02/02/2022
devtest-labs Start Machines Use Automation Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/start-machines-use-automation-runbooks.md
Title: Define VM start order with Azure Automation description: Learn how to start virtual machines in a specific order by using Azure Automation runbooks in Azure DevTest Labs. ++ Last updated 03/17/2022
devtest-labs Test App Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/test-app-azure.md
Title: Set up an app for testing on a lab VM description: Learn how to publish an app to an Azure file share for testing from a DevTest Labs virtual machine. ++ Last updated 03/29/2022
devtest-labs Troubleshoot Vm Environment Creation Failures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/troubleshoot-vm-environment-creation-failures.md
Title: Troubleshoot VM and environment failures description: Learn how to troubleshoot virtual machine (VM) and environment creation failures in Azure DevTest Labs. ++ Last updated 06/26/2020
devtest-labs Tutorial Create Custom Lab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/tutorial-create-custom-lab.md
Title: Create a lab tutorial
-description: In this tutorial, you create a lab in Azure DevTest Labs by using the Azure portal. A lab admin sets up a lab, creates VMs in the lab, and configures policies.
+ Title: Set up a lab & lab VM & lab user
+description: Use the Azure portal to create a lab, create a virtual machine in the lab, and add a lab user in Azure DevTest Labs.
Previously updated : 11/03/2021++ Last updated : 03/30/2022
-# Tutorial: Set up a lab in DevTest Labs using the Azure portal
+# Tutorial: Create a DevTest Labs lab and VM and add a user in the Azure portal
-In this tutorial, you create a lab by using the Azure portal. A lab admin sets up a lab in an organization, creates Azure virtual machines (VMs) in the lab, and configures policies. Lab users (for example: developer and testers) claim VMs in the lab, connect to them, and use them.
-
-In this tutorial, you learn how to:
+In this Azure DevTest Labs tutorial, you learn how to:
> [!div class="checklist"]
-> * Create a lab
-> * Add an Azure virtual machine (VM) to the lab
-> * Add a user and assign it to the **DevTest Labs User** role
+> * Create a lab in DevTest Labs.
+> * Add an Azure virtual machine (VM) to the lab.
+> * Add a user in the DevTest Labs User role.
+> * Delete the lab when no longer needed.
+
+In the [next tutorial](tutorial-use-custom-lab.md), lab users, such as developers, testers, and trainees, learn how to connect to the lab VM and claim and unclaim lab VMs.
+
+## Prerequisites
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- To create a lab, you need at least the [Contributor](/azure/role-based-access-control/built-in-roles#contributor) role in an Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- To add users to a lab, you must have the [User Access Administrator](/azure/role-based-access-control/built-in-roles#user-access-administrator) or [Owner](/azure/role-based-access-control/built-in-roles#owner) role in the subscription the lab is in.
## Create a lab
-These steps illustrate how to use the Azure portal to create a lab in Azure DevTest Labs.
+To create a lab in Azure DevTest Labs, follow these steps.
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. In the [Azure portal](https://portal.azure.com), search for and select **DevTest Labs**.
-1. Enter `DevTest Labs` in the search text box, and then select **DevTest Labs** from the results.
+ :::image type="content" source="./media/tutorial-create-custom-lab/portal-search-devtest-labs.png" alt-text="Screenshot of searching for DevTest Labs in the portal.":::
- :::image type="content" source="./media/tutorial-create-custom-lab/portal-search-devtest-labs.png" alt-text="Screenshot of portal search for DevTest Labs.":::
+1. On the **DevTest Labs** page, select **Create**.
-1. On the **DevTest Labs** page, select **+ Create**.
+1. On the **Create Devtest Lab** page, on the **Basic Settings** tab, provide the following information:
-1. On the **Create Devtest Lab** page, under the **Basic Settings** tab, provide the following information:
+ |Setting|Value|
+ |||
+ |**Subscription**|Keep the default, or select a different subscription to use for the lab.|
+ |**Resource group**|Select an existing resource group from the dropdown list, or select **Create new** to create a new resource group, which makes cleanup easier later.|
+ |**Lab name**|Enter a name for the lab.|
+ |**Location**|If you're creating a new resource group, select an Azure region for the resource group and lab.|
+ |**Public environments**|Leave **On** for access to the [DevTest Labs public environment repository](https://github.com/Azure/azure-devtestlab/tree/master/Environments). Set to **Off** to disable access. For more information, see [Enable public environments when you create a lab](devtest-lab-create-environment-from-arm.md#enable-public-environments-when-you-create-a-lab).|
- |Property | Description |
- |||
- |Subscription| From the drop-down list, select the Azure subscription to be used for the lab.|
- |Resource&nbsp;group| From the drop-down list, select your existing resource group, or select **Create new**.|
- |Lab name| Enter a name for the lab.|
- |Location| From the drop-down list, select a location that's used for the lab.|
- |Public environments| Leave the default value of **On**. Public environment repository contains a list of curated Azure Resource Manager templates that enable lab users to create PaaS resources within Labs.|
+ :::image type="content" source="./media/tutorial-create-custom-lab/create-custom-lab-blade.png" alt-text="Screenshot of the Basic Settings tab of the Create DevTest Labs form.":::
- :::image type="content" source="./media/tutorial-create-custom-lab/create-custom-lab-blade.png" alt-text="Screenshot of Basic Settings tab for Create DevTest Labs.":::
+1. Optionally, select the [Auto-shutdown](devtest-lab-create-lab.md#auto-shutdown-tab), [Networking](devtest-lab-create-lab.md#networking-tab), or [Tags](devtest-lab-create-lab.md#tags-tab) tabs at the top of the page, and customize those settings. You can also apply or change most of these settings after lab creation.
-1. Select **Review + create** to validate the configuration, and then select **Create**. For this tutorial, the default values for the other tabs are sufficient.
+1. After you complete all settings, select **Review + create** at the bottom of the page.
-1. After the creation process finishes, from the deployment notification, select **Go to resource**.
+1. If the settings are valid, **Succeeded** appears at the top of the **Review + create** page. Review the settings, and then select **Create**.
- :::image type="content" source="./media/tutorial-create-custom-lab/creation-notification.png" alt-text="Screenshot of DevTest Labs deployment notification.":::
+ > [!TIP]
+ > Select **Download a template for automation** at the bottom of the page to view and download the lab configuration as an Azure Resource Manager (ARM) template. You can use the ARM template to create more labs.
-1. The lab's **Overview** page looks similar to the following image:
+1. After the creation process finishes, from the deployment notification, select **Go to resource**.
- :::image type="content" source="./media/tutorial-create-custom-lab/lab-home-page.png" alt-text="Screenshot of DevTest Labs overview page.":::
+ :::image type="content" source="./media/tutorial-create-custom-lab/creation-notification.png" alt-text="Screenshot of the DevTest Labs deployment notification.":::
## Add a VM to the lab
-1. On the **DevTest Lab** page, select **+ Add** on the toolbar.
+To add a VM to the lab, follow these steps. For more information, see [Create lab virtual machines in Azure DevTest Labs](devtest-lab-add-vm.md).
+
+1. On the new lab's **Overview** page, select **Add** on the toolbar.
- :::image type="content" source="./media/tutorial-create-custom-lab/add-vm-to-lab-button.png" alt-text="Screenshot of DevTest Labs overview page and add button.":::
+ :::image type="content" source="./media/tutorial-create-custom-lab/add-vm-to-lab-button.png" alt-text="Screenshot of a lab Overview page with Add highlighted.":::
-1. On the **Choose a base** page, select a marketplace image for the VM. This guide use **Windows Server 2019 Datacenter**. Certain options may differ if you use a different image.
+1. On the **Choose a base** page, select **Windows Server 2019 Datacenter** as a Marketplace image for the VM. Some of the following options might be different if you use a different image.
-1. From the **Basics Settings** tab, provide the following information:
+1. On the **Basic Settings** tab of the **Create lab resource** screen, provide the following information:
- |Property |Description |
- |||
- |Virtual&nbsp;machine&nbsp;name| The text box is pre-filled with a unique autogenerated name. The name corresponds to the user name within your email address followed by a unique three-digit number. Leave as-is, or enter a unique name of your choosing.|
- |User Name| The text box is pre-filled with a unique autogenerated name. The name corresponds to the user name within your email address. Leave as-is, or enter a name of your choosing. The user is granted **administrator** privileges on the virtual machine.|
- |Use a saved secret| For this walk-through, leave the box unchecked. You can save secrets in Azure Key Vault first and then use it here. For more information, see [Store secrets in a key vault](devtest-lab-store-secrets-in-key-vault.md). If you prefer to use a saved secret, check the box and then select the secret from the **Secret** drop-down list.|
- |Password|Enter a password between 8 and 123 characters long.|
- |Save as default password| Select the checkbox to save the password in the Azure Key Vault associated with the lab.|
- |Virtual machine size| Keep the default value or select **Change Size** to select different physical components. This walk-through uses **Standard_D4_v3**.|
- |OS disk type|Keep the default value or select a different option from the drop-down list.|
- |Artifacts| Not used for this tutorial.|
+ |Setting|Value|
+ |||
+ |**Virtual machine name**|Keep the autogenerated name, or enter another unique VM name.|
+ |**User name**|Keep the autogenerated user name, or enter another user name. This user gets administrator privileges on the VM.|
+ |**Use a saved secret**|You can select this checkbox to use a secret from Azure Key Vault instead of a password to access the VM. For more information, see [Store secrets in a key vault](devtest-lab-store-secrets-in-key-vault.md). For this tutorial, don't select the checkbox.|
+ |**Password**|If you don't use a secret, enter a VM password between 8 and 123 characters long.|
+ |**Save as default password**|Select this checkbox to save the password in the Key Vault associated with the lab.|
+ |**Virtual machine size**|Keep the default value for the base, or select **Change Size** to select a different size.|
+ |**OS disk type**|Keep the default value for the base, or select a different option from the dropdown list.|
+ |**Artifacts**|Optionally, select **Add or Remove Artifacts** to [select and configure artifacts](devtest-lab-add-vm.md#add-artifacts-during-installation) to add to the VM.|
- :::image type="content" source="./media/tutorial-create-custom-lab/portal-lab-vm-basic-settings.png" alt-text="Screenshot of virtual machine basic settings page.":::
+ :::image type="content" source="./media/tutorial-create-custom-lab/portal-lab-vm-basic-settings.png" alt-text="Screenshot of the Basic Settings tab of the Create lab resource page.":::
-1. Select the **Advanced Settings** tab and provide the following information:
+1. Select the **Advanced Settings** tab on the **Create lab resource** screen, and change any of the following values:
- |Property |Description |
- |||
- |Virtual network| Leave as-is or select a different network from the drop-down list.|
- |Subnet&nbsp;Selector| Leave as-is or select a different subnet from the drop-down list.|
- |IP address| For this walk-through, leave the default value **Shared**. When **Shared** is selected, Azure DevTest Labs automatically enables RDP for Windows VMs and SSH for Linux VMs. If you select **Public**, RDP and SSH are enabled without any changes from DevTest Labs. |
- |Expiration date| Leave as is for no expiration date, or select the calendar icon to set an expiration date.|
- |Make this machine claimable| Leave as is at **No**. To make the VM claimable by a lab user, select **Yes**. Marking the machine as claimable means that it won't be assigned ownership at the time of creation. |
- |Number of instances| Leave as-is at **1**. The number of virtual machine instances to be created.|
- |Automation | Optional. Selecting **View ARM Template** will open the template in a new page. You can copy and save the template to create the same virtual machine later. Once saved, you can use the Azure Resource Manager template to [deploy new VMs with Azure PowerShell](../azure-resource-manager/templates/overview.md).|
+ |Setting|Value|
+ |||
+ |**Virtual network**|Keep the default, or select a network from the dropdown list. For more information, see [Add a virtual network](devtest-lab-configure-vnet.md).|
+ |**Subnet**|If necessary, select a different subnet from the dropdown list.|
+ |**IP address**|Leave at **Shared**, or select **Public** or **Private**. For more information, see [Understand shared IP addresses](devtest-lab-shared-ip.md).|
+ |**Expiration date**|Leave at **Will not expire**, or [set an expiration date](devtest-lab-use-resource-manager-template.md#set-vm-expiration-date) and time for the VM.|
+ |**Make this machine claimable**|The default is **No**, to keep the VM creator as the owner of the VM. For this tutorial, select **Yes**, so that another lab user can claim the VM after creation. For more information, see [Create and manage claimable VMs](devtest-lab-add-claimable-vm.md).|
+ |**Number of instances**|To create more than one VM with this configuration, enter the number of VMs to create.|
+ |**View ARM template**|Select to view and save the VM configuration as an Azure Resource Manager (ARM) template. You can use the ARM template to [deploy new VMs with Azure PowerShell](../azure-resource-manager/templates/overview.md).|
- :::image type="content" source="./media/tutorial-create-custom-lab/portal-lab-vm-advanced-settings.png" alt-text="Virtual machine advanced settings page.":::
+ :::image type="content" source="./media/tutorial-create-custom-lab/portal-lab-vm-advanced-settings.png" alt-text="Screenshot of the Advanced Settings tab of the Create lab resource page.":::
-1. Return to the **Basic Settings** tab and then select **Create**.
+1. After you configure all settings, on the **Basic Settings** tab of the **Create lab resource** screen, select **Create**.
-1. You're returned to the **DevTest Lab** page. Under **My Lab**, select **Claimable virtual machines**.
+During VM deployment, you can select the **Notifications** icon at the top of the screen to see progress. Creating a VM takes a while.
- :::image type="content" source="./media/tutorial-create-custom-lab/portal-lab-vm-creation-status.png" alt-text="Screenshot of lab VM creation status page.":::
+From the lab **Overview** page, you can select **Claimable virtual machines** in the left navigation to see the VM listed on the **Claimable virtual machines** page. Select **Refresh** if the VM doesn't appear. To take ownership of a VM in the claimable list, see [Use a claimable VM](devtest-lab-add-claimable-vm.md#use-a-claimable-vm).
-1. After a few minutes, select **Refresh** if your virtual machines don't appear. Installation times will vary based on the selected hardware, base image, and artifact(s). The installation for the configurations used in this walk-through was approximately 12 minutes.
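As a scripted alternative to the portal steps above, a claimable lab VM can be created with Azure CLI. This is a minimal sketch; the lab name, resource group, VM name, size, and credentials are placeholders:

```azurecli
# Create a claimable Windows Server 2019 VM from a gallery (Marketplace) image.
az lab vm create \
  --lab-name MyLab \
  --resource-group MyLabRG \
  --name myvm \
  --image "Windows Server 2019 Datacenter" \
  --image-type gallery \
  --size Standard_D4_v3 \
  --admin-username labadmin \
  --admin-password '<secure-password>' \
  --allow-claim
```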
## Add a user to the DevTest Labs User role
-1. Navigate to the resource group that contains the lab you created. You must be [User Access Administrator](../role-based-access-control/built-in-roles.md#user-access-administrator) or [Owner](../role-based-access-control/built-in-roles.md#owner).
+To add users to a lab, you must be a [User Access Administrator](/azure/role-based-access-control/built-in-roles#user-access-administrator) or [Owner](/azure/role-based-access-control/built-in-roles#owner) of the subscription the lab is in. For more information, see [Add lab owners, contributors, and users in Azure DevTest Labs](devtest-lab-add-devtest-user.md).
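The portal steps follow. If you prefer scripting, the same role assignment can be made with Azure CLI; here's a minimal sketch in which the lab name, resource group, and user are placeholders:

```azurecli
# Look up the lab's resource ID so the role is scoped to the lab only.
labId=$(az lab get --name MyLab --resource-group MyLabRG --query id --output tsv)

# Assign the DevTest Labs User role to a user at the lab scope.
az role assignment create \
  --assignee user@contoso.com \
  --role "DevTest Labs User" \
  --scope "$labId"
```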
-1. In the left menu, select **Access control (IAM)**.
+1. On the lab's **Overview** page, under **Settings**, select **Configuration and policies**.
-1. Select **+ Add** > **Add role assignment**.
+1. On the **Configuration and policies** page, select **Access control (IAM)** from the left navigation.
- ![Access control (IAM) page with Add role assignment menu open.](../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png)
+1. Select **Add**, and then select **Add role assignment**.
+
+ :::image type="content" source="media/tutorial-create-custom-lab/add-role-assignment-menu-generic.png" alt-text="Screenshot of the Access control (IAM) page with the Add role assignment menu open.":::
1. On the **Role** tab, select the **DevTest Labs User** role.
- ![Add role assignment page with Role tab selected.](../../includes/role-based-access-control/media/add-role-assignment-role-generic.png)
+ :::image type="content" source="media/tutorial-create-custom-lab/add-role-assignment-role-generic.png" alt-text="Screenshot of the Add role assignment page with the Role tab selected.":::
-1. On the **Members** tab, select the user you want to assign the role to.
+1. On the **Members** tab, select the user to assign the role to.
1. On the **Review + assign** tab, select **Review + assign** to assign the role.

## Clean up resources
-Delete resources to avoid charges for running the lab and VM on Azure. If you plan to go through the next tutorial to access the VM in the lab, you can clean up the resources after you finish that tutorial. Otherwise, follow these steps:
+Use this lab for the next tutorial, [Access a lab in Azure DevTest Labs](tutorial-use-custom-lab.md). When you're done using the lab, delete it and its resources to avoid further charges.
+
+1. On the lab **Overview** page, select **Delete** from the top menu.
-1. Return to the home page for the lab you created.
+ :::image type="content" source="./media/tutorial-create-custom-lab/portal-lab-delete.png" alt-text="Screenshot of the lab Delete button.":::
-1. From the top menu, select **Delete**.
+1. On the **Are you sure you want to delete it** page, enter the lab name, and then select **Delete**.
- :::image type="content" source="./media/tutorial-create-custom-lab/portal-lab-delete.png" alt-text="Screenshot of lab delete button.":::
+ During the deletion process, you can select **Notifications** at the top of your screen to view progress. Deleting a lab can take a while.
-1. On the **Are you sure you want to delete it** page, enter the lab name in the text box and then select **Delete**.
+If you created the lab in an existing resource group, deleting the lab removes all of the lab resources.
-1. During the deletion, you can select **Notifications** at the top of your screen to view progress. Deleting the lab takes a while. Continue to the next step once the lab is deleted.
+If you created a resource group for the lab, you can now delete that resource group. You can't delete a resource group that has a lab in it. Deleting the resource group that contained the lab deletes all resources in the resource group. To delete the resource group:
-1. If you created the lab in an existing resource group, then all of the lab resources have been removed. If you created a new resource group for this tutorial, it's now empty and can be deleted. It wouldn't have been possible to have deleted the resource group earlier while the lab was still in it.
+1. Select the resource group that contained the lab from your subscription's **Resource groups** list.
+1. At the top of the page, select **Delete resource group**.
+1. On the **Are you sure you want to delete "\<resource group name>"** page, enter the resource group name, and then select **Delete**.
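Cleanup can also be scripted with Azure CLI; a minimal sketch with placeholder names:

```azurecli
# Delete just the lab and the resources it created.
az lab delete --name MyLab --resource-group MyLabRG

# Or, if you created a resource group for the lab, delete the whole group.
az group delete --name MyLabRG
```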
## Next steps
-In this tutorial, you: created a lab, added a VM, and then gave a user access to the lab. To learn about how to access the lab as a lab user, advance to the next tutorial:
+To learn how to access the lab and VMs as a lab user, go on to the next tutorial:
> [!div class="nextstepaction"]
> [Tutorial: Access the lab](tutorial-use-custom-lab.md)
devtest-labs Tutorial Use Custom Lab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/tutorial-use-custom-lab.md
Title: Access a lab
-description: In this tutorial, you access a lab in Azure DevTest Labs. You use a virtual machine, unclaim it, and then claim it.
+ Title: Access a lab and lab VM
+description: Learn how to access a lab in Azure DevTest Labs, and claim, connect to, and unclaim a lab virtual machine.
Previously updated : 11/03/2021++ Last updated : 03/30/2022

# Tutorial: Access a lab in Azure DevTest Labs
-In this tutorial, you use the lab that was created in the [Tutorial: Create a lab in Azure DevTest Labs](tutorial-create-custom-lab.md).
-
-In this tutorial, you do the following actions:
+In this tutorial, you learn how to:
> [!div class="checklist"]
-> * Connect to the lab VM
-> * Unclaim the lab VM
-> * Claim the lab virtual machine (VM)
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+> * Claim a lab virtual machine (VM) in Azure DevTest Labs.
+> * Connect to the lab VM.
+> * Unclaim the lab VM.
+> * Delete the lab VM when no longer needed.
## Prerequisites
-A [lab in DevTest Labs with an Azure virtual machine](tutorial-create-custom-lab.md).
+You need at least [DevTest Labs User](/azure/role-based-access-control/built-in-roles#devtest-labs-user) access to the lab created in [Tutorial: Set up a lab in Azure DevTest Labs](tutorial-create-custom-lab.md), or to another lab that has a claimable VM.
-## Connect to the lab VM
+The owner or administrator of the lab can give you the URL to access the lab in the Azure portal, and the username and password to access the lab VM.
-1. Sign in to the [Azure portal](https://portal.azure.com).
+## Claim a lab VM
-1. Navigate to your lab in **DevTest Labs**.
+To claim a lab VM, follow these steps. For more information about claiming VMs, see [Use claim capabilities in Azure DevTest Labs](devtest-lab-use-claim-capabilities.md).
-1. Under **My virtual machines**, select your VM.
+1. Go to the URL for your lab in the Azure portal.
- :::image type="content" source="./media/tutorial-use-custom-lab/my-virtual-machines.png" alt-text="Screenshot of VM under My virtual machines.":::
+1. On the lab **Overview** page, select **Claimable virtual machines** under **My Lab** in the left navigation.
-1. From the top menu, select **Connect**. Then select the `.rdp` file that downloads to your machine.
+1. On the **Claimable virtual machines** page, select the ellipsis **...** next to the listing for an available VM, and select **Claim machine** from the context menu.
- :::image type="content" source="./media/tutorial-use-custom-lab/vm-connect.png" alt-text="Screenshot of VM connect button.":::
+ :::image type="content" source="./media/tutorial-use-custom-lab/claimable-virtual-machines-claimed.png" alt-text="Screenshot showing Claim machine in the context menu.":::
-1. On the **Remote Desktop Connection** dialog box, select **Connect**
+1. On the lab **Overview** page, confirm that you now see the VM in the list under **My virtual machines**.
-1. On the **Enter your credentials** dialog box, enter the password, and then select **OK**.
+ :::image type="content" source="./media/tutorial-use-custom-lab/my-virtual-machines-2.png" alt-text="Screenshot showing the claimed V M in the My virtual machines list.":::
-1. If you receive a dialog box that states, **The identity of the remote computer cannot be verified**, select the check box for **Don't ask me again for connections to this computer**. Then select **Yes**.
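Claiming can also be done from the command line. Here's a minimal Azure CLI sketch, where the lab name, resource group, and VM name are placeholders:

```azurecli
# List the VMs in the lab that are currently available to claim.
az lab vm list --lab-name MyLab --resource-group MyLabRG --claimable --output table

# Claim a specific VM by name.
az lab vm claim --lab-name MyLab --resource-group MyLabRG --name myvm
```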
+## Connect to a lab VM
- :::image type="content" source="./media/tutorial-use-custom-lab/remote-computer-verification.png" alt-text="Screenshot of remote computer verification.":::
+You can connect to any running lab VM. A claimable but unclaimed VM is stopped, so you must claim it to connect to it.
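If a VM you own has been stopped, you can start it from its **Overview** page, or with Azure CLI, sketched here with placeholder names:

```azurecli
# Start a stopped lab VM that you own.
az lab vm start --lab-name MyLab --resource-group MyLabRG --name myvm

# Stop it again when you're done working, to avoid compute charges.
az lab vm stop --lab-name MyLab --resource-group MyLabRG --name myvm
```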
-For steps to connect to a Linux VM, see [Connect to a Linux VM in Azure](../virtual-machines/linux/use-remote-desktop.md).
+To connect to a Windows machine through Remote Desktop Protocol (RDP), follow these steps. For steps to connect to a Linux VM, see [Connect to a Linux VM in your lab](connect-linux-virtual-machine.md).
-## Unclaim the lab VM
+1. On the lab **Overview** page, select the VM from the list under **My virtual machines**.
-After you're done with the VM, unclaim the VM by following these steps:
+ :::image type="content" source="./media/tutorial-use-custom-lab/my-virtual-machines.png" alt-text="Screenshot of VM under My virtual machines.":::
-1. Select your VM from DevTest Labs using the same earlier steps.
+1. On the VM's **Overview** page, select **Connect** from the top menu.
-1. On the **virtual machine** page, from the top menu, select **Unclaim**.
+1. Open the *\*.rdp* file that downloads to your machine.
- :::image type="content" source="./media/tutorial-use-custom-lab/virtual-machine-unclaim.png" alt-text="Screenshot of unclaim option.":::
+ :::image type="content" source="./media/tutorial-use-custom-lab/vm-connect.png" alt-text="Screenshot of the V M Connect button and the downloaded R D P file.":::
-1. The VM is shut down before it's unclaimed. You can monitor the status of this operation in **Notifications**.
+1. On the **Remote Desktop Connection** dialog box, select **Connect**.
-1. Close the **virtual machine** page to be returned to the **DevTest Lab Overview** page.
+1. On the **Enter your credentials** dialog box, enter the username and password for the VM, and then select **OK**.
-1. Under **My Lab**, select **Claimable virtual machines**. The VM is now available to be claimed.
+1. If you receive a dialog box that states, **The identity of the remote computer cannot be verified**, select the check box for **Don't ask me again for connections to this computer**. Then select **Yes**.
- :::image type="content" source="./media/tutorial-use-custom-lab/claimable-virtual-machines.png" alt-text="Screenshot of options under claimable virtual machines.":::
+ :::image type="content" source="./media/tutorial-use-custom-lab/remote-computer-verification.png" alt-text="Screenshot of remote computer verification.":::
-## Claim a lab VM
+Once you connect to the VM, you can use it to do your work. You have the [Owner](/azure/role-based-access-control/built-in-roles#owner) role on all lab VMs you claim or create, unless you unclaim them.
-You can claim the VM again if you need to use it.
+## Unclaim a lab VM
-1. In the list of **Claimable virtual machines**, select **...** (ellipsis), and select **Claim machine**.
+After you're done using the VM, unclaim it so that someone else can claim it. Follow these steps:
- :::image type="content" source="./media/tutorial-use-custom-lab/claimable-virtual-machines-claimed.png" alt-text="Screenshot of claim option.":::
+1. On the lab **Overview** page, select the VM from the list under **My virtual machines**.
-1. Confirm that you see the VM in the list **My virtual machines**.
+1. On the VM's **Overview** page, select **Unclaim** from the top menu.
- :::image type="content" source="./media/tutorial-use-custom-lab/my-virtual-machines-2.png" alt-text="Screenshot showing vm returned to my virtual machines.":::
+ :::image type="content" source="./media/tutorial-use-custom-lab/virtual-machine-unclaim.png" alt-text="Screenshot of Unclaim on the V M's Overview page.":::
-## Clean up resources
+1. The VM is shut down and unclaimed. You can select the **Notifications** icon at the top of the screen to see progress.
-Delete resources to avoid charges for running the lab and VM on Azure. If you plan to go through the next tutorial to access the VM in the lab, you can clean up the resources after you finish that tutorial. Otherwise, follow these steps:
+1. Return to the lab **Overview** page and confirm that the VM no longer appears under **My virtual machines**.
-1. Return to the home page for the lab you created.
+1. Select **Claimable virtual machines** in the left navigation and confirm that the VM is now available to be claimed.
-1. From the top menu, select **Delete**.
+ :::image type="content" source="./media/tutorial-use-custom-lab/claimable-virtual-machines.png" alt-text="Screenshot of the Claimable virtual machines page.":::
- :::image type="content" source="./media/tutorial-use-custom-lab/portal-lab-delete.png" alt-text="Screenshot of lab delete button.":::
+## Delete a lab VM
-1. On the **Are you sure you want to delete it** page, enter the lab name in the text box and then select **Delete**.
+When you're done using a VM, you can delete it. Or, the lab owner can delete the entire lab when it's no longer needed, which deletes all lab VMs and resources. To delete an individual lab VM, follow these steps:
-1. During the deletion, you can select **Notifications** at the top of your screen to view progress. Deleting the lab takes a while. Continue to the next step once the lab is deleted.
+1. Select the ellipsis **...** next to the VM in the **My virtual machines** list or on the **Claimable virtual machines** page, and select **Delete** from the context menu.
-1. If you created the lab in an existing resource group, then all of the lab resources have been removed. If you created a new resource group for this tutorial, it's now empty and can be deleted. It wouldn't have been possible to have deleted the resource group earlier while the lab was still in it.
-
-## Next steps
+1. On the **Are you sure you want to delete it** page, select **Delete**.
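Deleting a single VM can also be scripted; a minimal Azure CLI sketch with placeholder names:

```azurecli
# Delete one lab VM without affecting the rest of the lab.
az lab vm delete --lab-name MyLab --resource-group MyLabRG --name myvm
```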
-In this tutorial, you learned how to access and use a lab in Azure DevTest Labs. For more information about accessing and using VMs in a lab, see:
+## Next steps
-> [!div class="nextstepaction"]
-> [How to: Use VMs in a lab](devtest-lab-add-vm.md)
+In this tutorial, you learned how to claim and connect to claimable VMs in Azure DevTest Labs. To create your own lab VMs, see [Create lab virtual machines in Azure DevTest Labs](devtest-lab-add-vm.md).
devtest-labs Use Command Line Start Stop Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/use-command-line-start-stop-virtual-machines.md
Title: Start & stop lab VMs with command lines description: Use Azure PowerShell or Azure CLI command lines and scripts to start and stop Azure DevTest Labs virtual machines. ++ Last updated 03/29/2022 ms.devlang: azurecli
devtest-labs Use Devtest Labs Build Release Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/use-devtest-labs-build-release-pipelines.md
Title: Use DevTest Labs in Azure Pipelines build and release pipelines description: Learn how to use Azure DevTest Labs in Azure Pipelines build and release pipelines. ++ Last updated 06/26/2020
devtest-labs Use Managed Identities Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/use-managed-identities-environments.md
Title: Use Azure managed identities to create environments description: Learn how to use managed identities in Azure to deploy environments in a lab in Azure DevTest Labs. ++ Last updated 06/26/2020
devtest-labs Use Paas Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/use-paas-services.md
Title: Use platform-as-a-service (PaaS) environments in labs description: Learn about platform-as-a-service (PaaS) environments in Azure DevTest Labs. ++ Last updated 03/22/2022
digital-twins Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/overview.md
Azure Digital Twins can be used to design a digital twin architecture that represents actual IoT devices in a wider cloud solution, and which connects to IoT Hub device twins to send and receive live data.

> [!NOTE]
-> IoT Hub device twins differ from digital twins in the Azure Digital Twins service. While *IoT Hub device twins* are maintained by your IoT hub for each IoT device that you connect to it, *digital twins* can be representations of anything defined by digital models and instantiated within Azure Digital Twins.
+> IoT Hub **device twins** are different from Azure Digital Twins **digital twins**. While *IoT Hub device twins* are maintained by your IoT hub for each IoT device that you connect to it, *digital twins* in Azure Digital Twins can be representations of anything defined by digital models and instantiated within Azure Digital Twins.
Take advantage of your domain expertise on top of Azure Digital Twins to build customized, connected solutions that: * Model any environment, and bring digital twins to life in a scalable and secure manner
In Azure Digital Twins, you define the digital entities that represent the peopl
You can think of these model definitions as a specialized vocabulary to describe your business. For a building management solution, for example, you might define models such as Building, Floor, and Elevator. You can then create digital twins based on these models to represent your specific environment. - *Models* are defined in a JSON-like language called [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md), and they describe twins by their state properties, telemetry events, commands, components, and relationships. Here are some other capabilities of models: * Models define semantic *relationships* between your entities so that you can connect your twins into a graph that reflects their interactions. You can think of the models as nouns in a description of your world, and the relationships as verbs. * You can specialize twins using model *inheritance*. One model can inherit from another. * You can design your own model sets from scratch, or get started with a pre-existing set of [DTDL industry ontologies](concepts-ontologies.md) based on common vocabulary for your industry.
-DTDL is also used for data models throughout other Azure IoT services, including [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) and [Time Series Insights](../time-series-insights/overview-what-is-tsi.md). This type of commonality helps you keep your Azure Digital Twins solution connected and compatible with other parts of the Azure ecosystem.
+DTDL is also used for data models throughout other Azure IoT services, including [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) and [Time Series Insights](../time-series-insights/overview-what-is-tsi.md). This compatibility helps you connect your Azure Digital Twins solution with other parts of the Azure ecosystem.
### Live execution environment
digital-twins Quickstart Azure Digital Twins Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/quickstart-azure-digital-twins-explorer.md
The Azure Digital Twins example graph you'll be working with represents a buildi
You'll need an Azure subscription to complete this quickstart. If you don't have one already, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) now.
-You'll also need to download the materials for the sample graph used in the quickstart. Use the links and instructions below to download the three required files from the [digital-twins-explorer GitHub repository](https://github.com/Azure-Samples/digital-twins-explorer). Later, you'll follow more instructions to upload them to Azure Digital Twins.
+You'll also need to download the materials for the sample graph used in the quickstart. Use the instructions below to download the three required files. Later, you'll follow more instructions to upload them to Azure Digital Twins.
* [Room.json](https://raw.githubusercontent.com/Azure-Samples/digital-twins-explorer/main/client/examples/Room.json): This is a model file representing a room in a building. Navigate to the link, right-click anywhere on the screen, and select **Save as** in your browser's right-click menu. Use the following Save As window to save the file somewhere on your machine with the name *Room.json*.
-* [Floor.json](https://raw.githubusercontent.com/Azure-Samples/digital-twins-explorer/main/client/examples/Floor.json): This is a model file representing a floor in a building. Navigate to the link, right-click anywhere on the screen, and select **Save as** in your browser's right-click menu. Use the following Save As window to save the file to the same location as Room.json, under the name *Floor.json*.
+* [Floor.json](https://raw.githubusercontent.com/Azure-Samples/digital-twins-explorer/main/client/examples/Floor.json): This is a model file representing a floor in a building. Navigate to the link, right-click anywhere on the screen, and select **Save as** in your browser's right-click menu. Use the following Save As window to save the file to the same location as *Room.json*, under the name *Floor.json*.
* [buildingScenario.xlsx](https://github.com/Azure-Samples/digital-twins-explorer/raw/main/client/examples/buildingScenario.xlsx): This file contains a graph of room and floor twins, and relationships between them. Depending on your browser settings, selecting this link may download the *buildingScenario.xlsx* file automatically to your default download location, or it may open the file in your browser with an option to download. Here is what that download option looks like in Microsoft Edge: :::image type="content" source="media/quickstart-azure-digital-twins-explorer/download-building-scenario.png" alt-text="Screenshot of the buildingScenario.xlsx file viewed in a Microsoft Edge browser. A button saying Download is highlighted." lightbox="media/quickstart-azure-digital-twins-explorer/download-building-scenario.png":::
+>[!TIP]
+> These files are from the [Azure Digital Twins Explorer repository in GitHub](https://github.com/Azure-Samples/digital-twins-explorer). You can visit the repo for other sample files, explorer code, and more.
+ ## Set up Azure Digital Twins The first step in working with Azure Digital Twins is to create an Azure Digital Twins instance. After you create an instance of the service, you can connect to the instance in Azure Digital Twins Explorer, which you'll use to work with the instance throughout the quickstart.
The rest of this section walks you through these steps.
[!INCLUDE [digital-twins-setup-portal.md](../../includes/digital-twins-setup-portal.md)]
-3. Fill in the fields on the **Basics** tab of setup, including your Subscription, Resource group, Location, and a Resource name for your new instance. Check the **Assign Azure Digital Twins Data Owner Role** box to give yourself permissions to manage data in the instance.
+3. Fill in the fields on the **Basics** tab of setup, including your Subscription, Resource group, a Resource name for your new instance, and Region. Check the **Assign Azure Digital Twins Data Owner Role** box to give yourself permissions to manage data in the instance.
+
+ :::image type="content" source="media/quickstart-azure-digital-twins-explorer/create-azure-digital-twins-basics.png" alt-text="Screenshot of the Create Resource process for Azure Digital Twins in the Azure portal. The described values are filled in.":::
>[!NOTE] > If the Assign Azure Digital Twins Data Owner Role box is greyed out, it means you don't have permissions in your Azure subscription to manage user access to resources. You can continue creating the instance in this section, and then should have someone with the necessary permissions [assign you this role on the instance](how-to-set-up-instance-portal.md#assign-the-role-using-azure-identity-management-iam) before completing the rest of this quickstart.
The rest of this section walks you through these steps.
> Common roles that meet this requirement are **Owner**, **Account admin**, or the combination of **User Access Administrator** and **Contributor**.

4. Select **Review + Create** to finish creating your instance.
- :::image type="content" source="media/quickstart-azure-digital-twins-explorer/create-azure-digital-twins-basics.png" alt-text="Screenshot of the Create Resource process for Azure Digital Twins in the Azure portal. The described values are filled in.":::
5. You will see a summary page showing the details you've entered. Confirm and create the instance by selecting **Create**.
-This will take you to an Overview page tracking deployment status of the instance.
+This will take you to an Overview page tracking the deployment status of the instance.
++
+Wait for the page to say that your deployment is complete.
### Open instance in Azure Digital Twins Explorer
-When the instance is finished deploying, use the **Go to resource** button to navigate to the instance's Overview page in the portal.
+After deployment completes, use the **Go to resource** button to navigate to the instance's Overview page in the portal.
[!INCLUDE [digital-twins-access-explorer.md](../../includes/digital-twins-access-explorer.md)]
Next, you'll import the sample models and graph into Azure Digital Twins Explore
### Models
-The first step in an Azure Digital Twins solution is to define the vocabulary for your environment. You'll create custom [models](concepts-models.md) that describe the types of entity that exist in your environment.
+The first step in an Azure Digital Twins solution is to define the vocabulary for your environment. You'll create custom *models* that describe the types of entity that exist in your environment.
-Each model is written in a language like JSON-LD called Digital Twin Definition Language (DTDL). Each model describes a single type of entity in terms of its properties, telemetry, relationships, and components. Later, you'll use these models as the basis for digital twins that represent specific instances of these types.
+Each model is written in *Digital Twins Definition Language (DTDL)*, a language similar to [JSON-LD](https://json-ld.org/). Each model describes a single type of entity in terms of its properties, telemetry, relationships, and components. Later, you'll use these models as the basis for digital twins that represent specific instances of these types.
Typically, when you create a model, you'll complete three steps:
Follow these steps to upload models (the *.json* files you downloaded earlier).
1. In the Open window that appears, navigate to the folder containing the *Room.json* and *Floor.json* files that you downloaded earlier. 1. Select *Room.json* and *Floor.json*, and select **Open** to upload them both.
-Azure Digital Twins Explorer will upload these model files to your Azure Digital Twins instance. They should show up in the **Models** panel and display their friendly names and full model IDs. You can select **View Model** for either model to see the DTDL code behind it.
+Azure Digital Twins Explorer will upload these model files to your Azure Digital Twins instance. They should show up in the **Models** panel and display their friendly names and full model IDs.
+
+You can select **View Model** for either model to see the DTDL code behind it.
:::row::: :::column:::
Azure Digital Twins Explorer will upload these model files to your Azure Digital
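As an alternative to uploading through Azure Digital Twins Explorer, the model files can be uploaded with Azure CLI. A minimal sketch, assuming the azure-iot extension and a placeholder instance name:

```azurecli
# Upload each downloaded model file to the instance.
az dt model create --dt-name my-dt-instance --models ./Room.json
az dt model create --dt-name my-dt-instance --models ./Floor.json

# Confirm that both models are now in the instance.
az dt model list --dt-name my-dt-instance --output table
```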
### Twins and the twin graph
-Now that some models have been uploaded to your Azure Digital Twins instance, you can add [digital twins](concepts-twins-graph.md) that follow the model definitions.
+Now that some models have been uploaded to your Azure Digital Twins instance, you can add *digital twins* based on the model definitions.
-Digital twins represent the actual entities within your business environment. They can be things like sensors on a farm, lights in a car, or, in this quickstart, rooms on a building floor. You can create many twins of any given model type, such as multiple rooms that all use the Room model. You connect them with relationships into a *twin graph* that represents the full environment.
+*Digital twins* represent the actual entities within your business environment. They can be things like sensors on a farm, lights in a car, or, in this quickstart, rooms on a building floor. You can create many twins of any given model type, such as multiple rooms that all use the Room model. You connect them with relationships into a *twin graph* that represents the full environment.
In this section, you'll upload pre-created twins that are connected into a pre-created graph. The graph contains two floors and two rooms, connected in the following layout:
Follow these steps to import the graph (the *.xlsx* file you downloaded earlier)
After a few seconds, Azure Digital Twins Explorer opens an **Import** view that shows a preview of the graph to be loaded.
-3. To confirm the graph upload, select the **Save** icon in the upper-right corner of the graph preview panel.
+3. To finish importing the graph, select the **Save** icon in the upper-right corner of the graph preview panel.
:::image type="content" source="media/quickstart-azure-digital-twins-explorer/graph-preview-save.png" alt-text="Screenshot of the Azure Digital Twins Explorer highlighting the Save icon in the Graph Preview pane." lightbox="media/quickstart-azure-digital-twins-explorer/graph-preview-save.png":::
-4. Azure Digital Twins Explorer will use the uploaded file to create the requested twins and relationships between them. A dialog box appears when it's finished. Select **Close**.
+4. Azure Digital Twins Explorer will use the uploaded file to create the requested twins and relationships between them. Make sure you see the following dialog box indicating that the import was successful before moving on.
:::row::: :::column:::
Follow these steps to import the graph (the *.xlsx* file you downloaded earlier)
:::column-end::: :::row-end:::
+ Select **Close**.
+ The graph has now been uploaded to Azure Digital Twins Explorer, and the **Twin Graph** panel will reload. It will appear empty. 6. To see the graph, select the **Run Query** button in the **Query Explorer** panel, near the top of the Azure Digital Twins Explorer window.
Now you can see the uploaded graph of the sample scenario.
The circles (graph "nodes") represent digital twins. The lines represent relationships. The Floor0 twin contains Room0, and the Floor1 twin contains Room1.
-If you're using a mouse, you can drag pieces of the graph to move them around.
+If you're using a mouse, you can click and drag in the graph to move elements around.
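The spreadsheet import is a feature of Azure Digital Twins Explorer; twins and relationships can also be created one at a time. Here's a minimal Azure CLI sketch, where the instance name, model ID (DTMI), twin IDs, and relationship name are hypothetical placeholders:

```azurecli
# Create a twin from an uploaded model (the DTMI here is a placeholder).
az dt twin create --dt-name my-dt-instance \
  --dtmi "dtmi:example:Room;1" --twin-id Room2

# Relate an existing floor twin to the new room twin.
az dt twin relationship create --dt-name my-dt-instance \
  --twin-id Floor1 --target Room2 \
  --relationship-id Floor1ContainsRoom2 --relationship contains
```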
### View twin properties
-You can select a twin to see a list of its properties and their values in the **Properties** panel.
+You can select a twin to see a list of its properties and their values in the **Twin Properties** panel.
Here are the properties of Room0: :::row::: :::column:::
- :::image type="content" source="media/quickstart-azure-digital-twins-explorer/properties-room0.png" alt-text="Screenshot of the Azure Digital Twins Explorer highlighting the Properties panel, which shows $dtId, Temperature, and Humidity properties for Room0." lightbox="media/quickstart-azure-digital-twins-explorer/properties-room0.png":::
+ :::image type="content" source="media/quickstart-azure-digital-twins-explorer/properties-room0.png" alt-text="Screenshot of the Azure Digital Twins Explorer highlighting the Twin Properties panel, which shows $dtId, Temperature, and Humidity properties for Room0." lightbox="media/quickstart-azure-digital-twins-explorer/properties-room0.png":::
:::column-end::: :::column::: :::column-end:::
Here are the properties of Room1:
:::row::: :::column:::
- :::image type="content" source="media/quickstart-azure-digital-twins-explorer/properties-room1.png" alt-text="Screenshot of the Azure Digital Twins Explorer highlighting the Properties panel, which shows $dtId, Temperature, and Humidity properties for Room1." lightbox="media/quickstart-azure-digital-twins-explorer/properties-room1.png":::
+ :::image type="content" source="media/quickstart-azure-digital-twins-explorer/properties-room1.png" alt-text="Screenshot of the Azure Digital Twins Explorer highlighting the Twin Properties panel, which shows $dtId, Temperature, and Humidity properties for Room1." lightbox="media/quickstart-azure-digital-twins-explorer/properties-room1.png":::
:::column-end::: :::column::: :::column-end:::
Room1 has a temperature of 80.
### Query the graph
-A main feature of Azure Digital Twins is the ability to [query](concepts-query-language.md) your twin graph easily and efficiently to answer questions about your environment.
+In Azure Digital Twins, you can query your twin graph to answer questions about your environment, using the SQL-style *Azure Digital Twins query language*.
-One way to query the twins in your graph is by their properties. Querying based on properties can help answer a variety of questions. For example, you can find outliers in your environment that might need attention.
+One way to query the twins in your graph is by their properties. Querying based on properties can help answer questions about your environment. For example, you can find outliers in your environment that might need attention.
In this section, you'll run a query to answer the question of how many twins in your environment have a temperature above 75.
To see the answer, run the following query in the **Query Explorer** panel.
:::code language="sql" source="~/digital-twins-docs-samples/queries/examples.sql" id="TemperatureQuery":::
-Recall from viewing the twin properties earlier that Room0 has a temperature of 70, and Room1 has a temperature of 80. For this reason, only Room1 shows up in the results here.
+Recall from viewing the twin properties earlier that Room0 has a temperature of 70, and Room1 has a temperature of 80. The Floor twins don't have a Temperature property at all. For these reasons, only Room1 shows up in the results here.
:::image type="content" source="media/quickstart-azure-digital-twins-explorer/result-query-property-before.png" alt-text="Screenshot of the Azure Digital Twins Explorer showing the results of property query, which shows only Room1." lightbox="media/quickstart-azure-digital-twins-explorer/result-query-property-before.png":::
Recall from viewing the twin properties earlier that Room0 has a temperature of
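The same kind of query can be run outside the Explorer; here's a sketch with Azure CLI, where the instance name is a placeholder and the query mirrors the one above:

```azurecli
# Return all twins whose Temperature property is greater than 75.
az dt twin query --dt-name my-dt-instance \
  --query-command "SELECT * FROM DIGITALTWINS T WHERE T.Temperature > 75"
```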
## Edit data in the graph
-You can use Azure Digital Twins Explorer to edit the properties of the twins represented in your graph. In this section, we'll raise the temperature of Room0 to 76.
+In a fully connected Azure Digital Twins solution, the twins in your graph can receive live updates from real IoT devices and update their properties to stay synchronized with your real-world environment. You can also manually set the properties of the twins in your graph, using Azure Digital Twins Explorer or another development interface (like the APIs or Azure CLI).
+
+For simplicity, you'll use Azure Digital Twins Explorer here to manually set the temperature of Room0 to 76.
-To start, rerun the following query to select all digital twins. This will display the full graph once more in the **Twin Graph** panel.
+First, rerun the following query to select all digital twins. This will display the full graph once more in the **Twin Graph** panel.
:::code language="sql" source="~/digital-twins-docs-samples/queries/examples.sql" id="GetAllTwins":::
-Select **Room0** to bring up its property list in the **Properties** panel.
+Select **Room0** to bring up its property list in the **Twin Properties** panel.
-The properties in this list are editable. Select the temperature value of **70** to enable entering a new value. Enter *76*, and select the **Save** icon to update the temperature to 76.
+The properties in this list are editable. Select the temperature value of **70** to enable entering a new value. Enter *76* and select the **Save** icon to update the temperature.
:::row::: :::column:::
- :::image type="content" source="media/quickstart-azure-digital-twins-explorer/new-properties-room0.png" alt-text="Screenshot of the Azure Digital Twins Explorer highlighting that the Properties panel is showing properties that can be edited for Room0." lightbox="media/quickstart-azure-digital-twins-explorer/new-properties-room0.png":::
+ :::image type="content" source="media/quickstart-azure-digital-twins-explorer/new-properties-room0.png" alt-text="Screenshot of the Azure Digital Twins Explorer highlighting that the Twin Properties panel is showing properties that can be edited for Room0." lightbox="media/quickstart-azure-digital-twins-explorer/new-properties-room0.png":::
:::column-end::: :::column::: :::column-end::: :::row-end:::
-Now, you'll see a **Patch Information** window where the patch code appears that was used behind the scenes with the Azure Digital Twins [APIs](concepts-apis-sdks.md) to make the update. Select **Close**.
+After a successful property update, you'll see a **Patch Information** box showing the patch code that was used behind the scenes with the [Azure Digital Twins APIs](concepts-apis-sdks.md) to make the update.
+
+ :::column:::
+ :::image type="content" source="media/quickstart-azure-digital-twins-explorer/patch-information.png" alt-text="Screenshot of the Azure Digital Twins Explorer showing Patch Information for the temperature update." lightbox="media/quickstart-azure-digital-twins-explorer/patch-information.png":::
+ :::column-end:::
+ :::column:::
+ :::column-end:::
+
+**Close** the patch information.
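For reference, the update is expressed as a JSON Patch document. The following sketch shows the shape such a patch likely takes, together with a hypothetical equivalent manual update through the Azure CLI `azure-iot` extension (the exact patch shown in the box and your instance name may differ).

```azurepowershell
# Assumed patch shape; the exact patch shown in the box may differ.
$patch = '[ { "op": "replace", "path": "/Temperature", "value": 76 } ]'

# Hypothetical instance name; requires the Azure CLI azure-iot extension.
az dt twin update --dt-name "myADT" --twin-id "Room0" --json-patch $patch
```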
### Query to see the result
To clean up after this quickstart, choose which resources you want to remove bas
[!INCLUDE [digital-twins-cleanup-clear-instance.md](../../includes/digital-twins-cleanup-clear-instance.md)]
+* If you don't need your Azure Digital Twins instance anymore, you can delete it using the Azure portal.
+
+ Navigate back to the instance's **Overview** page in the portal. (If you've already closed that tab, you can find the instance again by searching for its name in the Azure portal search bar and selecting it from the search results.)
+
+ Select **Delete** to delete the instance, including all of its models and twins.
+
+ :::image type="content" source="media/quickstart-azure-digital-twins-explorer/delete-instance.png" alt-text="Screenshot of the Overview page for an Azure Digital Twins instance in the Azure portal. The Delete button is highlighted.":::
You may also want to delete the sample project folder from your local machine.
event-hubs Apache Kafka Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/apache-kafka-configurations.md
Title: Recommended configurations for Apache Kafka clients - Azure Event Hubs description: This article provides recommended Apache Kafka configurations for clients interacting with Azure Event Hubs for Apache Kafka. Previously updated : 03/03/2021 Last updated : 03/30/2022 # Recommended configurations for Apache Kafka clients
Property | Recommended Values | Permitted Range | Notes
|:---|:---|:---|:---|
`max.request.size` | 1000000 | < 1046528 | The service will close connections if requests larger than 1,046,528 bytes are sent. *This value **must** be changed and will cause issues in high-throughput produce scenarios.*
`retries` | > 0 | | May require increasing delivery.timeout.ms value, see documentation.
-`request.timeout.ms` | 30000 .. 60000 | > 20000 | EH will internally default to a minimum of 20,000 ms. *While requests with lower timeout values are accepted, client behavior isn't guaranteed.*
+`request.timeout.ms` | 60000 | > 20000 | Event Hubs will internally default to a minimum of 20,000 ms. *While requests with lower timeout values are accepted, client behavior isn't guaranteed.* <p>Make sure that your **request.timeout.ms** is at least the recommended value of 60000 and your **session.timeout.ms** is at least the recommended value of 30000. Having these settings too low could cause consumer timeouts, which then cause rebalances (which then cause more timeouts, which cause more rebalancing, and so on).</p>
`metadata.max.idle.ms` | 180000 | > 5000 | Controls how long the producer will cache metadata for a topic that's idle. If the elapsed time since a topic was last produced exceeds the metadata idle duration, then the topic's metadata is forgotten and the next access to it will force a metadata fetch request.
`linger.ms` | > 0 | | For high throughput scenarios, linger value should be equal to the highest tolerable value to take advantage of batching.
`delivery.timeout.ms` | | | Set according to the formula (`request.timeout.ms` + `linger.ms`) * `retries`.
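As a quick sanity check of the `delivery.timeout.ms` formula above, here's a minimal sketch with illustrative values (the numbers are assumptions for the example, not service recommendations):

```azurepowershell
# Illustrative values only.
$requestTimeoutMs = 60000   # request.timeout.ms
$lingerMs         = 100     # linger.ms
$retries          = 3       # retries

# delivery.timeout.ms = (request.timeout.ms + linger.ms) * retries
$deliveryTimeoutMs = ($requestTimeoutMs + $lingerMs) * $retries
$deliveryTimeoutMs  # 180300
```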
Consumer configs can be found [here](https://kafka.apache.org/documentation/#con
Property | Recommended Values | Permitted Range | Notes
|:---|---:|:---|:---|
`heartbeat.interval.ms` | 3000 | | 3000 is the default value and shouldn't be changed.
-`session.timeout.ms` | 30000 |6000 .. 300000| Start with 30000, increase if seeing frequent rebalancing because of missed heartbeats.
+`session.timeout.ms` | 30000 |6000 .. 300000| Start with 30000, increase if seeing frequent rebalancing because of missed heartbeats.<p>Make sure that your request.timeout.ms is at least the recommended value of 60000 and your session.timeout.ms is at least the recommended value of 30000. Having these settings too low could cause consumer timeouts, which then cause rebalances (which then cause more timeouts, which cause more rebalancing, and so on).</p>
## librdkafka configuration properties
Property | Recommended Values | Permitted Range | Notes
|:---|---:|:---|:---|
`retries` | > 0 | | Default is 2. We recommend that you keep this value.
-`request.timeout.ms` | 30000 .. 60000 | > 20000| EH will internally default to a minimum of 20,000 ms. `librdkafka` default value is 5000, which can be problematic. *While requests with lower timeout values are accepted, client behavior isn't guaranteed.*
+`request.timeout.ms` | 30000 .. 60000 | > 20000| Event Hubs will internally default to a minimum of 20,000 ms. `librdkafka` default value is 5000, which can be problematic. *While requests with lower timeout values are accepted, client behavior isn't guaranteed.*
`partitioner` | `consistent_random` | See librdkafka documentation | `consistent_random` is default and best. Empty and null keys are handled ideally for most cases.
`compression.codec` | `none` | | Compression currently not supported.
event-hubs Event Hubs Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-features.md
The following examples show the consumer group URI convention:
The following figure shows the Event Hubs stream processing architecture:
-![Event Hubs architecture](./media/event-hubs-about/event_hubs_architecture.svg)
+![Event Hubs architecture](./media/event-hubs-about/event_hubs_architecture.png)
### Stream offsets
expressroute About Fastpath https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/about-fastpath.md
While FastPath supports most configurations, it doesn't support the following fe
* Private Link: If you connect to a [private endpoint](../private-link/private-link-overview.md) in your virtual network from your on-premises network, the connection will go through the virtual network gateway.
+### IP address limits
+
+| ExpressRoute SKU | Bandwidth | FastPath IP limit |
+| -- | -- | -- |
+| ExpressRoute Direct Port | 100 Gbps | 200,000 |
+| ExpressRoute Direct Port | 10 Gbps | 100,000 |
+| ExpressRoute provider circuit | 10 Gbps and lower | 25,000 |
+
+> [!NOTE]
+> * ExpressRoute Direct has a cumulative limit at the port level.
+> * Traffic will flow through the ExpressRoute gateway when these limits are reached.
+>
+ ## Public preview The following FastPath features are in Public preview:
frontdoor Front Door Url Rewrite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-url-rewrite.md
zone_pivot_groups: front-door-tiers
::: zone pivot="front-door-standard-premium"
-Azure Front Door supports URL rewrite to change the path of a request that is being routed to your origin. URL rewrite also allows you to add conditions to make sure that the URL or the specified headers gets rewritten only when certain conditions gets met. These conditions are based on the request and response information.
+Azure Front Door supports URL rewrite to change the path of a request that is being routed to your origin. URL rewrite also allows you to add conditions to make sure that the URL or the specified headers get rewritten only when certain conditions are met. These conditions are based on the request and response information.
With this feature, you can redirect users to different origins based on scenarios, device types, or the requested file type.
For example, if I set **Preserve unmatched path to No**.
Azure Front Door (classic) supports URL rewrite by configuring an optional **Custom Forwarding Path** to use when constructing the request to forward to the backend. By default, if a custom forwarding path isn't provided, the Front Door will copy the incoming URL path to the URL used in the forwarded request. The Host header used in the forwarded request is as configured for the selected backend. Read [Backend Host Header](front-door-backend-pool.md#hostheader) to learn what it does and how you can configure it.
-The robust part of URL rewrite is that the custom forwarding path will copy any part of the incoming path that matches the wildcard path to the forwarded path (these path segments are the **green** segments in the example below):
+The robust part of URL rewrite is that the custom forwarding path will copy any part of the incoming path that matches the wildcard path to the forwarded path.
+For example, if you have a **match path** of `/foo/*` and configured `/fwd/` for the **custom forwarding path**, any path segment from the wildcard onward will be copied to the forwarding path, shown in orange in the diagram below. In this example, when the **incoming URL path** is `/foo/a/b/c`, you'll have a **forwarded path** of `/fwd/a/b/c`. Notice that the `a/b/c` path segment replaces the wildcard, which is shown in green in the diagram below.
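To make the wildcard copy behavior concrete, here's a minimal sketch of the described logic. The function name and parameters are hypothetical, for illustration only; this isn't Front Door's implementation.

```azurepowershell
# Illustrative only: copy everything after the wildcard prefix onto the
# custom forwarding path.
function Get-ForwardedPath([string]$incoming, [string]$matchPrefix, [string]$forwardPath) {
    $suffix = $incoming.Substring($matchPrefix.Length)
    return $forwardPath.TrimEnd('/') + '/' + $suffix
}

Get-ForwardedPath -incoming '/foo/a/b/c' -matchPrefix '/foo/' -forwardPath '/fwd/'
# Returns '/fwd/a/b/c'
```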
+ ## URL rewrite example
frontdoor Tier Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/tier-comparison.md
Azure Front Door is offered in 2 different tiers, Azure Front Door Standard and
:::image type="content" source="../media/tier-comparison/architecture.png" alt-text="Diagram of Azure Front Door architecture.":::
+> [!NOTE]
+> In order to switch between tiers, you will need to recreate the Azure Front Door profile.
+>
+ ## Feature comparison between tiers | Features and optimization | Standard | Premium | Classic |
Azure Front Door is offered in 2 different tiers, Azure Front Door Standard and
## Next steps
-Learn how to [create a Front Door](create-front-door-portal.md)
+* Learn how to [create an Azure Front Door](create-front-door-portal.md)
+* Learn about the [Azure Front Door architecture](../front-door-routing-architecture.md)
frontdoor Troubleshoot Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/troubleshoot-issues.md
The cause of this problem can be one of three things:
- How to disable `EnforceCertificateNameCheck` from the Azure portal:
- In the portal, use a toggle button to turn this setting on or off in the Azure Front Door **Design** pane.
+ In the portal, use a toggle button to turn this setting on or off in the Azure Front Door (classic) **Design** pane.
![Screenshot that shows the toggle button.](https://user-images.githubusercontent.com/63200992/148067710-1b9b6053-efe3-45eb-859f-f747de300653.png)
+ For Azure Front Door Standard and Premium tier, this setting can be found in the origin settings when you add an origin to an origin group or configure a route.
+
+ :::image type="content" source="./media/troubleshoot-issues/validation-checkbox.png" alt-text="Screenshot of the certificate subject name validation checkbox.":::
+ * The backend server returns a certificate that doesn't match the FQDN of the Azure Front Door backend pool. To resolve this issue, you have two options: - The returned certificate must match the FQDN.
hdinsight Domain Joined Authentication Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/domain-joined-authentication-issues.md
Title: Authentication issues in Azure HDInsight
description: Authentication issues in Azure HDInsight Previously updated : 08/24/2020 Last updated : 03/31/2022 # Authentication issues in Azure HDInsight
This error occurs intermittently when users try to access the ADLS Gen2 using AC
## Next steps
hdinsight Apache Hadoop Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-introduction.md
description: An introduction to HDInsight, and the Apache Hadoop technology stac
Previously updated : 02/27/2020 Last updated : 03/31/2022 #Customer intent: As a data analyst, I want understand what is Hadoop and how it is offered in Azure HDInsight so that I can decide on using HDInsight instead of on premises clusters.
For examples of using Hadoop streaming with HDInsight, see the following documen
## Next steps * [Create Apache Hadoop cluster in HDInsight using the portal](../hadoop/apache-hadoop-linux-create-cluster-get-started-portal.md)
-* [Create Apache Hadoop cluster in HDInsight using ARM template](../hadoop/apache-hadoop-linux-tutorial-get-started.md)
+* [Create Apache Hadoop cluster in HDInsight using ARM template](../hadoop/apache-hadoop-linux-tutorial-get-started.md)
hdinsight Hdinsight Use Hive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/hdinsight-use-hive.md
description: Apache Hive is a data warehouse system for Apache Hadoop. You can q
Previously updated : 02/28/2020 Last updated : 03/31/2022 # What is Apache Hive and HiveQL on Azure HDInsight?
hdinsight Apache Hbase Tutorial Get Started Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-tutorial-get-started-linux.md
description: Follow this Apache HBase tutorial to start using hadoop on HDInsigh
Previously updated : 01/22/2021 Last updated : 03/31/2022 # Tutorial: Use Apache HBase in Azure HDInsight
hdinsight Hdinsight Hadoop Linux Use Ssh Unix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-linux-use-ssh-unix.md
description: "You can access HDInsight using Secure Shell (SSH). This document p
Previously updated : 02/28/2020 Last updated : 03/31/2022 # Connect to HDInsight (Apache Hadoop) using SSH
hdinsight Hdinsight Hadoop Use Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-use-blob-storage.md
description: Learn how to query data from Azure storage and Azure Data Lake Stor
Previously updated : 04/21/2020 Last updated : 03/31/2022 # Use Azure storage with Azure HDInsight clusters
For more information, see:
* [Use Azure Data Lake Storage Gen2 with Azure HDInsight clusters](hdinsight-hadoop-use-data-lake-storage-gen2.md) * [Upload data to HDInsight](hdinsight-upload-data.md) * [Tutorial: Extract, transform, and load data using Interactive Query in Azure HDInsight](./interactive-query/interactive-query-tutorial-analyze-flight-data.md)
-* [Use Azure Storage Shared Access Signatures to restrict access to data with HDInsight](hdinsight-storage-sharedaccesssignature-permissions.md)
+* [Use Azure Storage Shared Access Signatures to restrict access to data with HDInsight](hdinsight-storage-sharedaccesssignature-permissions.md)
hdinsight Hdinsight Hadoop Use Data Lake Storage Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2.md
description: Learn how to use Azure Data Lake Storage Gen2 with Azure HDInsight
Previously updated : 04/24/2020 Last updated : 03/31/2022 # Use Azure Data Lake Storage Gen2 with Azure HDInsight clusters
hdinsight Hdinsight Restrict Outbound Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-restrict-outbound-traffic.md
description: Learn how to configure outbound network traffic restriction for Azu
Previously updated : 04/17/2020 Last updated : 03/31/2022 # Configure outbound network traffic for Azure HDInsight clusters using Firewall
hdinsight Apache Kafka Producer Consumer Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-producer-consumer-api.md
description: Learn how to use the Apache Kafka Producer and Consumer APIs with K
Previously updated : 05/19/2020 Last updated : 03/31/2022 #Customer intent: As a developer, I need to create an application that uses the Kafka consumer/producer API with Kafka on HDInsight
To remove the resource group using the Azure portal:
In this document, you learned how to use the Apache Kafka Producer and Consumer API with Kafka on HDInsight. Use the following to learn more about working with Kafka: * [Use Kafka REST Proxy](rest-proxy.md)
-* [Analyze Apache Kafka logs](apache-kafka-log-analytics-operations-management.md)
+* [Analyze Apache Kafka logs](apache-kafka-log-analytics-operations-management.md)
hdinsight Apache Kafka Ssl Encryption Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-ssl-encryption-authentication.md
description: Set up TLS encryption for communication between Kafka clients and K
Previously updated : 05/01/2019 Last updated : 03/31/2022 # Set up TLS encryption and authentication for Apache Kafka in Azure HDInsight
iot-hub-device-update Import Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/import-schema.md
File hashes.
### fileHashes object
-Base64-encoded file hashes with algorithm name as key. At least SHA-256 algorithm must be specified, and additional algorithm may be specified if supported by agent.
+Base64-encoded file hashes with the algorithm name as the key. At least the SHA-256 algorithm must be specified, and additional algorithms may be specified if supported by the agent. For an example of how to calculate the hash correctly, see the [AduUpdate.psm1 script](https://github.com/Azure/iot-hub-device-update/blob/main/tools/AduCmdlets/AduUpdate.psm1).
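For illustration, the following is a minimal PowerShell sketch of producing such a hash; the file name is hypothetical, and the linked script remains the authoritative reference.

```azurepowershell
# Compute the SHA-256 digest of the payload file and base64-encode the
# raw digest bytes (assumes a local file named ./update-payload.bin).
$sha256 = [System.Security.Cryptography.SHA256]::Create()
$bytes  = [System.IO.File]::ReadAllBytes("./update-payload.bin")
$fileHashes = @{ sha256 = [System.Convert]::ToBase64String($sha256.ComputeHash($bytes)) }
$fileHashes.sha256
```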
**Properties**
key-vault Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/quick-create-cli.md
In this quickstart, you create a key vault in Azure Key Vault with Azure CLI. Az
## Create a resource group ## Create a key vault
Now, you have created a Key Vault, stored a certificate, and retrieved it.
## Clean up resources ## Next steps
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/quick-create-powershell.md
Login-AzAccount
## Create a resource group ## Create a key vault
Set-AzKeyVaultAccessPolicy -VaultName <KeyVaultName> -ObjectId <AzureObjectID> -
## Clean up resources ## Next steps
In this quickstart you created a Key Vault and stored a certificate in it. To le
- Read an [Overview of Azure Key Vault](../general/overview.md) - See the reference for the [Azure PowerShell Key Vault cmdlets](/powershell/module/az.keyvault/) - Review the [Key Vault security overview](../general/security-features.md)
key-vault Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/quick-create-cli.md
Azure Key Vault is a cloud service that provides a secure store for [keys](../ke
## Create a resource group ## Create a key vault
Azure Key Vault is a cloud service that provides a secure store for [keys](../ke
## Clean up resources ## Next steps
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/quick-create-powershell.md
Login-AzAccount
## Create a resource group ## Create a key vault
Login-AzAccount
## Clean up resources ## Next steps
In this quickstart you created a Key Vault using Azure PowerShell. To learn more
- Read an [Overview of Azure Key Vault](overview.md) - See the reference for the [Azure PowerShell Key Vault cmdlets](/powershell/module/az.keyvault/) - Review the [Azure Key Vault security overview](security-features.md)
key-vault Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-cli.md
In this quickstart, you create a key vault in Azure Key Vault with Azure CLI. Az
## Create a resource group ## Create a key vault
Now, you have created a Key Vault, stored a key, and retrieved it.
## Clean up resources ## Next steps
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-powershell.md
Login-AzAccount
## Create a resource group ## Create a key vault
Now, you have created a Key Vault, stored a key, and retrieved it.
## Clean up resources ## Next steps
In this quickstart you created a Key Vault and stored a certificate in it. To le
- Read an [Overview of Azure Key Vault](../general/overview.md) - See the reference for the [Azure PowerShell Key Vault cmdlets](/powershell/module/az.keyvault/) - Review the [Key Vault security overview](../general/security-features.md)
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/quick-create-powershell.md
After successfully downloading the security domain, your HSM will be in an activ
## Clean up resources ## Next steps
key-vault Security Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/security-domain.md
Title: About Azure Managed HSM security domain
-description: Overview of the Managed HSM Security Domain, a set of core credentials needed to recover a Managed HSM
+description: Overview of the Managed HSM Security Domain, a set of artifacts needed to recover a Managed HSM
Previously updated : 09/15/2020 Last updated : 03/28/2022 + # About the Managed HSM Security Domain
-The Managed HSM Security Domain (SD) is a set of core credentials needed to recover a Managed HSM if there is a disaster. The Security Domain is generated in the Managed HSM hardware and the service software enclaves and represents "ownership" of the HSM.
+A managed HSM is a single-tenant, [FIPS (Federal Information Processing Standards) 140-2 validated](https://csrc.nist.gov/publications/detail/fips/140/2/final), highly available hardware security module (HSM), with a customer-controlled security domain.
+
+Every managed HSM must have a security domain to operate. The security domain is an encrypted blob file that contains artifacts such as the HSM backup, user credentials, the signing key, and the data encryption key unique to your managed HSM. It serves the following purposes:
+
+- Establishes "ownership" by cryptographically tying each managed HSM to root of trust keys under your sole control. This ensures that Microsoft does not have access to your cryptographic key material on the managed HSM.
+- Sets the cryptographic boundary for key material in a managed HSM instance
+- Allows you to fully recover a managed HSM instance if there is a disaster. The following disaster scenarios are covered:
+ - A catastrophic failure where all member HSM instances of a managed HSM instance are destroyed.
+ - The managed HSM instance was soft deleted by a customer and the resource was purged after the expiry of the mandatory retention period.
+ - The end customer archived a project by performing a backup that included the managed HSM instance and all data, then deleted all Azure resources associated with the project.
+
+Without the security domain, disaster recovery is not possible. Microsoft has no way of recovering the security domain, nor can it access your keys without it. Protection of the security domain is therefore of the utmost importance for business continuity, and to ensure you are not cryptographically locked out.
+
+## Security domain protection best practices
+
+### Downloading the encrypted security domain
+
+The security domain is generated in the managed HSM hardware and the service software enclaves at initialization time. Once the managed HSM is provisioned, you must create at least 3 RSA key pairs and send the public keys to the service when requesting the security domain download. You also need to specify the minimum number of keys required (the quorum) to decrypt the security domain in the future. The managed HSM will initialize the security domain and encrypt it with the public keys you provide, using Shamir's Secret Sharing Algorithm. Once the security domain is downloaded, the managed HSM moves into an activated state and is ready for consumption.
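As a sketch of what the download step can look like with the Az.KeyVault PowerShell module, assuming an HSM named *ContosoMHSM* and three certificate files you generated beforehand (all names here are hypothetical):

```azurepowershell
# Request the security domain, encrypted to three RSA public keys; any 2 of
# the 3 matching private keys (the quorum) will be required to decrypt it.
Export-AzKeyVaultSecurityDomain -Name "ContosoMHSM" `
  -Certificates "sd1.cer", "sd2.cer", "sd3.cer" `
  -OutputPath "sd.ps.json" `
  -Quorum 2
```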
+
+### Storing the security domain keys
+
+The keys to a security domain must be held in offline storage (such as an encrypted USB drive), with each split of the quorum on a separate storage device. The storage devices must be held at separate geographical locations, and in a physical safe or a lock box. For ultra-sensitive and high assurance use cases, you may even choose to store your security domain private keys on your on-premises, offline HSM.
+
+It is especially important to periodically review your security policy around the managed HSM quorum. Your security policy must be accurate, you must have up-to-date records of where the security domain and its private keys are stored, and you must know who has control of the security domain.
+
+Security domain key handling prohibitions:
+- One person should never be allowed to have physical access to all quorum keys. In other words, `m` must be greater than 1 (and should ideally be >= 3).
+- The security domain keys must never be stored on a computer with an internet connection. A computer with an internet connection is exposed to various threats, such as viruses and malicious hackers. You significantly reduce your risk by storing the security domain keys offline.
+
+### Establishing a security domain quorum
+
+The best way to protect a security domain and prevent crypto lockout is to implement multi-person control, using the managed HSM concept called a "quorum". A quorum is a split-secret threshold: the key that encrypts the security domain is divided among multiple persons to enforce multi-person control. In this way, the security domain is not dependent on a single person, who could leave the organization or have malicious intent.
+
+We recommend implementing a quorum of `m` persons, where `m` is greater than or equal to 3. The maximum quorum size of the security domain for the managed HSM is 10.
+
+Although a greater (`m`) size provides additional security, it imposes additional administrative overhead in terms of handling the security domain. It is therefore imperative that the security domain quorum be carefully chosen, with at least `m` >= 3. The security domain quorum size should also be periodically reviewed and updated (in the case of personnel changes, for example). It is especially important to keep records of security domain holders; your records should document every hand-off or change of possession, and your policy should enforce a rigorous adherence to quorum and documentation requirements.
+
+The security domain private keys must be held by trusted, key employees of your organization, as they protect the most sensitive and critical information of your managed HSM. Security domain holders should have separate roles and be geographically separated within your organization.
+
+For example, a security domain quorum could comprise four key pairs, with each private key given to a different person. A minimum of two people would have to come together to reconstruct a security domain. The parts could be given to key personnel, such as:
+
+- Business Unit Technical Lead
+- Security Architect
+- Security Engineer
+- Application Developer
+
+Every organization is different and enforces a different security policy based on their needs. We recommend that you periodically review your security policy for compliance and for making decisions on the quorum and its size.
+
+The security domain quorum must be periodically reviewed. Timing depends on your organization, but we recommend that you conduct a security domain review at least once every quarter, as well as when:
-Every Managed HSM must have a security domain to operate. When you request a new Managed HSM, it is provisioned but is not activated until you download the Security Domain. When a Managed HSM is in provisioned, but not activated, state, there are two ways to activate it:
-- Downloading your Security Domain is the default method, and allows you safely to store the Security Domain either to use with another Managed HSM or to recover from a total disaster.-- Upload an existing Security Domain you already have, which allows you to create multiple Managed HSM instances that share the same Security Domain.
+- A member of the quorum leaves the organization.
+- A new or emerging threat makes you decide to increase the size of the quorum.
+- There is a process change in implementing the quorum.
+- A USB drive or HSM belonging to a member of the security domain quorum is lost or compromised.
-## Download your security domain
+### Security domain compromise or loss
-When a Managed HSM is provisioned but not activated, downloading the Security Domain captures the core credentials needed to recover from a complete loss of all hardware. To download the Security Domain, you must create at least 3 (maximum 10) RSA key pairs and send these public keys to the service when requesting the Security Domain download. You also need to specify the minimum number of keys required (quorum) to decrypt the Security Domain in the future. The Managed HSM will initialize the Security Domain and encrypt it with the public keys you provide using [Shamir's Secret Sharing algorithm](https://dl.acm.org/doi/10.1145/359168.359176). The Security Domain blob you download can only be decrypted when at least a quorum of private keys are available; you must keep the private keys safe to ensure the security of the Security Domain. Once the download is complete, the Managed HSM will be in activated state ready for use.
+If your security domain is compromised, a malicious actor could use it to create their own managed HSM instance. The malicious actor could then use access to the key backups to start decrypting the data protected with the keys on the managed HSM instance. A lost security domain is considered compromised.
-> [!IMPORTANT]
-> For a full disaster recovery, you must have the Security Domain, and the quorum of private keys that were used to protect it, and a full HSM backup. If you lose the Security Domain or sufficient of the RSA keys (private key) to lose quorum, and no running instances of the Managed HSM are present, disaster recovery will not be possible.
+After a security domain compromise, all data encrypted with the current managed HSM instance must be decrypted with current key material. A new managed HSM instance must be provisioned, and a new security domain, pointing to the new URL, must be implemented.
-## Upload a security domain
+Because there is no way to migrate key material from a managed HSM instance to another with a different security domain, implementing the security domain must be well thought-out, and protected with accurate, periodically reviewed record keeping.
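For completeness, recovering a managed HSM with its security domain involves uploading the encrypted security domain blob to a new, not-yet-activated instance and supplying a quorum of the original private keys. A sketch with the Az.KeyVault PowerShell module, using hypothetical instance and file names:

```azurepowershell
# Provide a quorum of the original private keys to decrypt the security
# domain and activate the replacement instance.
$keys = @{PublicKey = "sd1.cer"; PrivateKey = "sd1.key"},
        @{PublicKey = "sd2.cer"; PrivateKey = "sd2.key"}

Import-AzKeyVaultSecurityDomain -Name "ContosoMHSM2" -Keys $keys -SecurityDomainPath "sd.ps.json"
```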
-When a Managed HSM is provisioned but not activated, you can initiate a Security Domain recovery process. Managed HSM will generate an RSA key pair and return the public key. Then you can upload the Security Domain to the Managed HSM. Before uploading, the client (Azure CLI or PowerShell) will need to be provided with the minimum quorum number of private keys you used while downloading the security domain. The client will decrypt the Security Domain using this quorum and re-encrypt it using the public key you downloaded when you requested recovery. Once the upload is complete, the Managed HSM will be in activated state.
+## Summary
-## Backup and restore behavior
+The security domain and its corresponding private keys play an important part in managed HSM operations. These artifacts are analogous to the combination of a safe, and poor management can easily compromise strong algorithms and systems. If a safe's combination is known to an adversary, the strongest safe provides no security. The proper management of the security domain and its private keys is essential to the effective use of the managed HSM.
-Backups (either full backup or a single key backup) can only be successfully restored if the source Managed HSM (where the backup was created) and the destination Managed HSM (where the backup will be restored) share the same Security Domain. In this way, a Security Domain also defines a cryptographic boundary for each Managed HSM.
+It is highly recommended that you review [NIST Special Publication 800-57](https://csrc.nist.gov/publications/detail/sp/800-57-part-1/rev-5/final) for key management best practices, before developing and implementing the policies, systems, and standards necessary to meet and enhance your organization's security goals.
## Next steps
key-vault Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-cli.md
This quickstart requires version 2.0.4 or later of the Azure CLI. If using Azure
## Create a resource group ## Create a key vault
Now, you have created a Key Vault, stored a secret, and retrieved it.
## Clean up resources ## Next steps
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-powershell.md
Connect-AzAccount
## Create a resource group ## Create a key vault
load-balancer Virtual Network Ipv4 Ipv6 Dual Stack Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/virtual-network-ipv4-ipv6-dual-stack-powershell.md
+
+ Title: Deploy IPv6 dual stack application - Basic Load Balancer - PowerShell
+
+description: This article shows how deploy an IPv6 dual stack application in Azure virtual network using Azure PowerShell.
+++ Last updated : 03/31/2022++++
+# Deploy an IPv6 dual stack application using Basic Load Balancer - PowerShell
+
+This article shows you how to deploy a dual stack (IPv4 + IPv6) application with Basic Load Balancer using Azure PowerShell that includes a dual stack virtual network and subnet, a Basic Load Balancer with dual (IPv4 + IPv6) front-end configurations, VMs with NICs that have a dual IP configuration, network security group, and public IPs.
+
+To deploy a dual stack (IPV4 + IPv6) application using Standard Load Balancer, see [Deploy an IPv6 dual stack application with Standard Load Balancer using Azure PowerShell](../virtual-network-ipv4-ipv6-dual-stack-standard-load-balancer-powershell.md).
+++
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 6.9.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
++
+## Create a resource group
+
+Before you can create your dual-stack virtual network, you must create a resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup). The following example creates a resource group named *dsRG1* in the *east us* location:
+
+```azurepowershell-interactive
+ $rg = New-AzResourceGroup `
+ -ResourceGroupName "dsRG1" `
+ -Location "east us"
+```
+
+## Create IPv4 and IPv6 public IP addresses
+To access your virtual machines from the Internet, you need IPv4 and IPv6 public IP addresses for the load balancer. Create public IP addresses with [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress). The following example creates IPv4 and IPv6 public IP addresses named *dsPublicIP_v4* and *dsPublicIP_v6* in the *dsRG1* resource group:
+
+```azurepowershell-interactive
+$PublicIP_v4 = New-AzPublicIpAddress `
+ -Name "dsPublicIP_v4" `
+ -ResourceGroupName $rg.ResourceGroupName `
+ -Location $rg.Location `
+ -AllocationMethod Dynamic `
+ -IpAddressVersion IPv4
+
+$PublicIP_v6 = New-AzPublicIpAddress `
+ -Name "dsPublicIP_v6" `
+ -ResourceGroupName $rg.ResourceGroupName `
+ -Location $rg.Location `
+ -AllocationMethod Dynamic `
+ -IpAddressVersion IPv6
+```
+To access your virtual machines using an RDP connection, create IPv4 public IP addresses for the virtual machines with [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress).
+
+```azurepowershell-interactive
+ $RdpPublicIP_1 = New-AzPublicIpAddress `
+ -Name "RdpPublicIP_1" `
+ -ResourceGroupName $rg.ResourceGroupName `
+ -Location $rg.Location `
+ -AllocationMethod Dynamic `
+ -IpAddressVersion IPv4
+
+ $RdpPublicIP_2 = New-AzPublicIpAddress `
+ -Name "RdpPublicIP_2" `
+ -ResourceGroupName $rg.ResourceGroupName `
+ -Location $rg.Location `
+ -AllocationMethod Dynamic `
+ -IpAddressVersion IPv4
+```
+
+## Create Basic Load Balancer
+
+In this section, you configure dual frontend IP (IPv4 and IPv6) and the back-end address pool for the load balancer and then create a Basic Load Balancer.
+
+### Create front-end IP
+
+Create a front-end IP with [New-AzLoadBalancerFrontendIpConfig](/powershell/module/az.network/new-azloadbalancerfrontendipconfig). The following example creates IPv4 and IPv6 frontend IP configurations named *dsLbFrontEnd_v4* and *dsLbFrontEnd_v6*:
+
+```azurepowershell-interactive
+$frontendIPv4 = New-AzLoadBalancerFrontendIpConfig `
+ -Name "dsLbFrontEnd_v4" `
+ -PublicIpAddress $PublicIP_v4
+
+$frontendIPv6 = New-AzLoadBalancerFrontendIpConfig `
+ -Name "dsLbFrontEnd_v6" `
+ -PublicIpAddress $PublicIP_v6
+
+```
+
+### Configure back-end address pool
+
+Create back-end address pools with [New-AzLoadBalancerBackendAddressPoolConfig](/powershell/module/az.network/new-azloadbalancerbackendaddresspoolconfig). The VMs attach to these back-end pools in the remaining steps. The following example creates back-end address pools named *dsLbBackEndPool_v4* and *dsLbBackEndPool_v6* to include VMs with both IPv4 and IPv6 NIC configurations:
+
+```azurepowershell-interactive
+$backendPoolv4 = New-AzLoadBalancerBackendAddressPoolConfig `
+-Name "dsLbBackEndPool_v4"
+
+$backendPoolv6 = New-AzLoadBalancerBackendAddressPoolConfig `
+-Name "dsLbBackEndPool_v6"
+```
+### Create a health probe
+Use [New-AzLoadBalancerProbeConfig](/powershell/module/az.network/new-azloadbalancerprobeconfig) to create a health probe to monitor the health of the VMs.
+```azurepowershell
+$probe = New-AzLoadBalancerProbeConfig -Name MyProbe -Protocol tcp -Port 3389 -IntervalInSeconds 15 -ProbeCount 2
+```
+### Create a load balancer rule
+
+A load balancer rule is used to define how traffic is distributed to the VMs. You define the frontend IP configuration for the incoming traffic and the backend IP pool to receive the traffic, along with the required source and destination port. To make sure only healthy VMs receive traffic, you can optionally define a health probe. Basic Load Balancer uses an IPv4 probe to assess health for both IPv4 and IPv6 endpoints on the VMs. Standard Load Balancer includes support for explicit IPv6 health probes.
+
+Create a load balancer rule with [New-AzLoadBalancerRuleConfig](/powershell/module/az.network/new-azloadbalancerruleconfig). The following example creates load balancer rules named *dsLBrule_v4* and *dsLBrule_v6* and balances traffic on *TCP* port *80* to the IPv4 and IPv6 frontend IP configurations:
+
+```azurepowershell-interactive
+$lbrule_v4 = New-AzLoadBalancerRuleConfig `
+ -Name "dsLBrule_v4" `
+ -FrontendIpConfiguration $frontendIPv4 `
+ -BackendAddressPool $backendPoolv4 `
+ -Protocol Tcp `
+ -FrontendPort 80 `
+ -BackendPort 80 `
+ -probe $probe
+
+$lbrule_v6 = New-AzLoadBalancerRuleConfig `
+ -Name "dsLBrule_v6" `
+ -FrontendIpConfiguration $frontendIPv6 `
+ -BackendAddressPool $backendPoolv6 `
+ -Protocol Tcp `
+ -FrontendPort 80 `
+ -BackendPort 80 `
+ -probe $probe
+```
+
+### Create load balancer
+
+Create the Basic Load Balancer with [New-AzLoadBalancer](/powershell/module/az.network/new-azloadbalancer). The following example creates a public Basic Load Balancer named *MyLoadBalancer* using the IPv4 and IPv6 frontend IP configurations, backend pools, and load-balancing rules that you created in the preceding steps:
+
+```azurepowershell-interactive
+$lb = New-AzLoadBalancer `
+-ResourceGroupName $rg.ResourceGroupName `
+-Location $rg.Location `
+-Name "MyLoadBalancer" `
+-Sku "Basic" `
+-FrontendIpConfiguration $frontendIPv4,$frontendIPv6 `
+-BackendAddressPool $backendPoolv4,$backendPoolv6 `
+-LoadBalancingRule $lbrule_v4,$lbrule_v6
+
+```
+
+## Create network resources
+Before you deploy VMs and can test your load balancer, you must create the supporting network resources: an availability set, a network security group, a virtual network, and virtual NICs.
+### Create an availability set
+To improve the high availability of your app, place your VMs in an availability set.
+
+Create an availability set with [New-AzAvailabilitySet](/powershell/module/az.compute/new-azavailabilityset). The following example creates an availability set named *dsAVset*:
+
+```azurepowershell-interactive
+$avset = New-AzAvailabilitySet `
+ -ResourceGroupName $rg.ResourceGroupName `
+ -Location $rg.Location `
+ -Name "dsAVset" `
+ -PlatformFaultDomainCount 2 `
+ -PlatformUpdateDomainCount 2 `
+ -Sku aligned
+```
+
+### Create network security group
+
+Create a network security group for the rules that will govern inbound and outbound communication in your VNET.
+
+#### Create a network security group rule for port 3389
+
+Create a network security group rule to allow RDP connections through port 3389 with [New-AzNetworkSecurityRuleConfig](/powershell/module/az.network/new-aznetworksecurityruleconfig).
+
+```azurepowershell-interactive
+$rule1 = New-AzNetworkSecurityRuleConfig `
+-Name 'myNetworkSecurityGroupRuleRDP' `
+-Description 'Allow RDP' `
+-Access Allow `
+-Protocol Tcp `
+-Direction Inbound `
+-Priority 100 `
+-SourceAddressPrefix * `
+-SourcePortRange * `
+-DestinationAddressPrefix * `
+-DestinationPortRange 3389
+```
+#### Create a network security group rule for port 80
+
+Create a network security group rule to allow internet connections through port 80 with [New-AzNetworkSecurityRuleConfig](/powershell/module/az.network/new-aznetworksecurityruleconfig).
+
+```azurepowershell-interactive
+$rule2 = New-AzNetworkSecurityRuleConfig `
+ -Name 'myNetworkSecurityGroupRuleHTTP' `
+ -Description 'Allow HTTP' `
+ -Access Allow `
+ -Protocol Tcp `
+ -Direction Inbound `
+ -Priority 200 `
+ -SourceAddressPrefix * `
+ -SourcePortRange * `
+ -DestinationAddressPrefix * `
+ -DestinationPortRange 80
+```
+#### Create a network security group
+
+Create a network security group with [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup).
+
+```azurepowershell-interactive
+$nsg = New-AzNetworkSecurityGroup `
+-ResourceGroupName $rg.ResourceGroupName `
+-Location $rg.Location `
+-Name "dsNSG1" `
+-SecurityRules $rule1,$rule2
+```
+### Create a virtual network
+
+Create a virtual network with [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork). The following example creates a virtual network named *dsVnet* with a subnet named *dsSubnet*:
+
+```azurepowershell-interactive
+# Create dual stack subnet
+$subnet = New-AzVirtualNetworkSubnetConfig `
+-Name "dsSubnet" `
+-AddressPrefix "10.0.0.0/24","fd00:db8:deca:deed::/64"
+
+# Create the virtual network
+$vnet = New-AzVirtualNetwork `
+ -ResourceGroupName $rg.ResourceGroupName `
+ -Location $rg.Location `
+ -Name "dsVnet" `
+ -AddressPrefix "10.0.0.0/16","fd00:db8:deca::/48" `
+ -Subnet $subnet
+```
+
+### Create NICs
+
+Create virtual NICs with [New-AzNetworkInterface](/powershell/module/az.network/new-aznetworkinterface). The following example creates two virtual NICs both with IPv4 and IPv6 configurations. (One virtual NIC for each VM you create for your app in the following steps).
+
+```azurepowershell-interactive
+ $Ip4Config=New-AzNetworkInterfaceIpConfig `
+ -Name dsIp4Config `
+ -Subnet $vnet.subnets[0] `
+ -PrivateIpAddressVersion IPv4 `
+ -LoadBalancerBackendAddressPool $backendPoolv4 `
+ -PublicIpAddress $RdpPublicIP_1
+
+ $Ip6Config=New-AzNetworkInterfaceIpConfig `
+ -Name dsIp6Config `
+ -Subnet $vnet.subnets[0] `
+ -PrivateIpAddressVersion IPv6 `
+ -LoadBalancerBackendAddressPool $backendPoolv6
+
+ $NIC_1 = New-AzNetworkInterface `
+ -Name "dsNIC1" `
+ -ResourceGroupName $rg.ResourceGroupName `
+ -Location $rg.Location `
+ -NetworkSecurityGroupId $nsg.Id `
+ -IpConfiguration $Ip4Config,$Ip6Config
+
+ $Ip4Config=New-AzNetworkInterfaceIpConfig `
+ -Name dsIp4Config `
+ -Subnet $vnet.subnets[0] `
+ -PrivateIpAddressVersion IPv4 `
+ -LoadBalancerBackendAddressPool $backendPoolv4 `
+ -PublicIpAddress $RdpPublicIP_2
+
+ $NIC_2 = New-AzNetworkInterface `
+ -Name "dsNIC2" `
+ -ResourceGroupName $rg.ResourceGroupName `
+ -Location $rg.Location `
+ -NetworkSecurityGroupId $nsg.Id `
+ -IpConfiguration $Ip4Config,$Ip6Config
+
+```
+
+### Create virtual machines
+
+Set an administrator username and password for the VMs with [Get-Credential](/powershell/module/microsoft.powershell.security/get-credential):
+
+```azurepowershell-interactive
+$cred = get-credential -Message "DUAL STACK VNET SAMPLE: Please enter the Administrator credential to log into the VMs."
+```
+
+Now you can create the VMs with [New-AzVM](/powershell/module/az.compute/new-azvm). The following example creates two VMs and the required virtual network components if they do not already exist.
+
+```azurepowershell-interactive
+$vmsize = "Standard_A2"
+$ImagePublisher = "MicrosoftWindowsServer"
+$imageOffer = "WindowsServer"
+$imageSKU = "2019-Datacenter"
+
+$vmName= "dsVM1"
+$VMconfig1 = New-AzVMConfig -VMName $vmName -VMSize $vmsize -AvailabilitySetId $avset.Id 3> $null | Set-AzVMOperatingSystem -Windows -ComputerName $vmName -Credential $cred -ProvisionVMAgent 3> $null | Set-AzVMSourceImage -PublisherName $ImagePublisher -Offer $imageOffer -Skus $imageSKU -Version "latest" 3> $null | Set-AzVMOSDisk -Name "$vmName.vhd" -CreateOption fromImage 3> $null | Add-AzVMNetworkInterface -Id $NIC_1.Id 3> $null
+$VM1 = New-AzVM -ResourceGroupName $rg.ResourceGroupName -Location $rg.Location -VM $VMconfig1
+
+$vmName= "dsVM2"
+$VMconfig2 = New-AzVMConfig -VMName $vmName -VMSize $vmsize -AvailabilitySetId $avset.Id 3> $null | Set-AzVMOperatingSystem -Windows -ComputerName $vmName -Credential $cred -ProvisionVMAgent 3> $null | Set-AzVMSourceImage -PublisherName $ImagePublisher -Offer $imageOffer -Skus $imageSKU -Version "latest" 3> $null | Set-AzVMOSDisk -Name "$vmName.vhd" -CreateOption fromImage 3> $null | Add-AzVMNetworkInterface -Id $NIC_2.Id 3> $null
+$VM2 = New-AzVM -ResourceGroupName $rg.ResourceGroupName -Location $rg.Location -VM $VMconfig2
+```
+
+## Determine IP addresses of the IPv4 and IPv6 endpoints
+Get all network interface objects in the resource group to summarize the IPs used in this deployment with `Get-AzNetworkInterface`. Also, get the load balancer's frontend IPv4 and IPv6 endpoint addresses with `Get-AzPublicIpAddress`.
+
+```azurepowershell-interactive
+$rgName= "dsRG1"
+$NICsInRG= get-AzNetworkInterface -resourceGroupName $rgName
+write-host `nSummary of IPs in this Deployment:
+write-host ******************************************
+foreach ($NIC in $NICsInRG) {
+
+ $VMid= $NIC.virtualmachine.id
+ $VMnamebits= $VMid.split("/")
+ $VMname= $VMnamebits[($VMnamebits.count-1)]
+ write-host `nPrivate IP addresses for $VMname
+ $IPconfigsInNIC= $NIC.IPconfigurations
+ foreach ($IPconfig in $IPconfigsInNIC) {
+
+ $IPaddress= $IPconfig.privateipaddress
+ write-host " "$IPaddress
+ IF ($IPconfig.PublicIpAddress.ID) {
+
+ $IDbits= ($IPconfig.PublicIpAddress.ID).split("/")
+ $PipName= $IDbits[($IDbits.count-1)]
+ $PipObject= get-azPublicIpAddress -name $PipName -resourceGroup $rgName
+ write-host " "RDP address: $PipObject.IpAddress
+ }
+ }
+ }
+
+
+
+ write-host `nPublic IP addresses on Load Balancer:
+
+ (get-AzpublicIpAddress -resourcegroupname $rgName | where { $_.name -notlike "RdpPublicIP*" }).IpAddress
+```
+The following figure shows a sample output that lists the private IPv4 and IPv6 addresses of the two VMs, and the frontend IPv4 and IPv6 IP addresses of the Load Balancer.
+
+![IP summary of dual stack (IPv4/IPv6) application deployment in Azure](./media/virtual-network-ipv4-ipv6-dual-stack-powershell/dual-stack-application-summary.png)
+
+## View IPv6 dual stack virtual network in Azure portal
+You can view the IPv6 dual stack virtual network in Azure portal as follows:
+1. In the portal's search bar, enter *dsVnet*.
+2. When **dsVnet** appears in the search results, select it. This launches the **Overview** page of the dual stack virtual network named *dsVnet*. The dual stack virtual network shows the two NICs with both IPv4 and IPv6 configurations located in the dual stack subnet named *dsSubnet*.
+
+ ![IPv6 dual stack virtual network in Azure](./media/virtual-network-ipv4-ipv6-dual-stack-powershell/dual-stack-vnet.png)
++
+## Clean up resources
+
+When no longer needed, you can use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) command to remove the resource group, VM, and all related resources.
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name dsRG1
+```
+
+## Next steps
+
+In this article, you created a Basic Load Balancer with a dual frontend IP configuration (IPv4 and IPv6). You also created two virtual machines with NICs that have dual IP configurations (IPv4 + IPv6), which were added to the back-end pools of the load balancer. To learn more about IPv6 support in Azure virtual networks, see [What is IPv6 for Azure Virtual Network?](../../virtual-network/ip-services/ipv6-overview.md)
load-testing How To Export Test Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-export-test-results.md
Previously updated : 11/30/2021 Last updated : 03/31/2022
In this article, you'll learn how to download the test results from Azure Load Testing Preview in the Azure portal. You might use these results for reporting in third-party tools.
-The test results contain a comma-separated values (CSV) file with details of each application request. In addition, all files for running the Apache JMeter dashboard locally are included.
+The test results contain a comma-separated values (CSV) file with details of each application request. See [Apache JMeter CSV log format](https://jmeter.apache.org/usermanual/listeners.html#csvlogformat) and the [Apache JMeter Glossary](https://jmeter.apache.org/usermanual/glossary.html) for details about the different fields.
+
+You can also use the test results to diagnose errors during a load test. The `responseCode` and `responseMessage` fields give you more information about failed requests. For more information about investigating errors, see [Troubleshoot test execution errors](./how-to-find-download-logs.md).
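For example, here's a minimal sketch that tallies response codes from the downloaded CSV file, assuming *testreport.csv* is in the current folder:

```azurepowershell
# Count requests per response code to spot failed requests quickly.
$results = Import-Csv ./testreport.csv
$results | Group-Object responseCode | Select-Object Name, Count
```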
+
+In addition, all files for running the Apache JMeter dashboard locally are included.
> [!NOTE] > The Apache JMeter dashboard generation is temporarily disabled. You can download the CSV files with the test results.
In this section, you'll retrieve and download the Azure Load Testing results fil
:::image type="content" source="media/how-to-export-test-results/test-results-zip.png" alt-text="Screenshot that shows the test results zip file in the downloads list.":::
- The *testreport.csv* file contains the individual requests that the test engine executed during the load test. The Apache JMeter dashboard, which is also included in the zip file, uses this file for its graphs.
+ The *testreport.csv* file contains details of each request that the test engine executed during the load test. The Apache JMeter dashboard, which is also included in the zip file, uses this file for its graphs.
## Next steps
+- Learn more about [Troubleshooting test execution errors](./how-to-find-download-logs.md).
- For information about comparing test results, see [Compare multiple test results](./how-to-compare-multiple-test-runs.md).
- To learn about performance test automation, see [Configure automated performance testing](./tutorial-cicd-azure-pipelines.md).
logic-apps Logic Apps Using Sap Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-using-sap-connector.md
This article explains how you can access your SAP resources from Azure Logic App
For more information, review [SAP Note 1850230 - GW: "Registration of tp &lt;program ID&gt; not allowed"](https://userapps.support.sap.com/sap/support/knowledge/en/1850230).
- * Set up your SAP gateway security logging to help find Access Control List (ACL) issues. For more information, review the [SAP help topic for setting up gateway logging](https://help.sap.com/erp_hcm_ias2_2015_02/helpdata/en/48/b2a710ca1c3079e10000000a42189b/frameset.htm).
+ * Set up your SAP gateway security logging to help find Access Control List (ACL) issues. For more information, review the [SAP help topic for setting up gateway logging](https://help.sap.com/viewer/62b4de4187cb43668d15dac48fc00732/7.31.25/en-US/48b2a710ca1c3079e10000000a42189b.html).
* In the **Configuration of RFC Connections** (T-Code SM59) dialog box, create an RFC connection with the **TCP/IP** type. The **Activation Type** must be **Registered Server Program**. Set the RFC connection's **Communication Type with Target System** value to **Unicode**.
The SAP connector supports the following message and data integration types from
The SAP connector uses the [SAP .NET Connector (NCo) library](https://support.sap.com/en/product/connectors/msnet.html).
-To use the available [SAP trigger](#triggers) and [SAP actions](#actions), you need to first authenticate your connection. You can authenticate your connection with a username and password. The SAP connector also supports [SAP Secure Network Communications (SNC)](https://help.sap.com/doc/saphelp_nw70/7.0.31/e6/56f466e99a11d1a5b00000e835363f/content.htm?no_cache=true) for authentication. You can use SNC for SAP NetWeaver single sign-on (SSO), or for additional security capabilities from external products. If you use SNC, review the [SNC prerequisites](#snc-prerequisites) and the [SNC prerequisites for the ISE connector](#snc-prerequisites-ise).
+To use the available [SAP trigger](#triggers) and [SAP actions](#actions), you need to first authenticate your connection. You can authenticate your connection with a username and password. The SAP connector also supports [SAP Secure Network Communications (SNC)](https://help.sap.com/viewer/e73bba71770e4c0ca5fb2a3c17e8e229/7.31.25/en-US/e656f466e99a11d1a5b00000e835363f.html) for authentication. You can use SNC for SAP NetWeaver single sign-on (SSO), or for additional security capabilities from external products. If you use SNC, review the [SNC prerequisites](#snc-prerequisites) and the [SNC prerequisites for the ISE connector](#snc-prerequisites-ise).
### Network prerequisites
logic-apps Workflow Definition Language Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/workflow-definition-language-functions-reference.md
formatDateTime('<timestamp>', '<format>'?, '<locale>'?)
```

```
formatDateTime('03/15/2018') // Returns '2018-03-15T00:00:00.0000000'.
formatDateTime('03/15/2018 12:00:00', 'yyyy-MM-ddTHH:mm:ss') // Returns '2018-03-15T12:00:00'.
formatDateTime('01/31/2016', 'dddd MMMM d') // Returns 'Sunday January 31'.
formatDateTime('01/31/2016', 'dddd MMMM d', 'fr-fr') // Returns 'dimanche janvier 31'.
formatDateTime('01/31/2016', 'dddd MMMM d', 'fr-FR') // Returns 'dimanche janvier 31'.
```
nthIndexOf('<text>', '<searchText>', <occurrence>)
|--|-||-|
| <*text*> | Yes | String | The string that contains the substring to find |
| <*searchText*> | Yes | String | The substring to find |
-| <*ocurrence*> | Yes | Integer | A positive number that specifies the *n*th occurrence of the substring to find.|
+| <*occurrence*> | Yes | Integer | A positive number that specifies the *n*th occurrence of the substring to find. |
|||||

| Return value | Type | Description |
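For example, here's a quick sketch of `nthIndexOf` in use (assuming the same zero-based indexing that `indexOf` uses; the strings are illustrative):

```
nthIndexOf('123456789123465789', '1', 1)  // Returns 0: the first occurrence of '1'.
nthIndexOf('123456789123465789', '12', 2) // Returns 9: the second occurrence of '12'.
```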
And returns this result: `"Sophia Owen"`
Return the timestamp from a string that contains a timestamp. ```
-parseDateTime('<timestamp>', '<locale>', '<format>'?)
+parseDateTime('<timestamp>', '<locale>'?, '<format>'?)
```

| Parameter | Required | Type | Description |
|--|-||-|
| <*timestamp*> | Yes | String | The string that contains the timestamp |
-| <*locale*> | Yes | String | The locale to use. If *locale* isn't a valid value, an error is generated that the provided locale isn't valid or doesn't have an associated locale. |
-| <*format*> | Yes | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated that the provided format isn't valid and must be a numeric format string. |
+| <*locale*> | No | String | The locale to use. <br><br>If not specified, the default locale is used. <br><br>If *locale* isn't a valid value, an error is generated that the provided locale isn't valid or doesn't have an associated locale. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If not specified, parsing is attempted with multiple formats compatible with the provided locale. <br><br>If the format isn't a valid value, an error is generated that the provided format isn't valid and must be a numeric format string. |
||||

| Return value | Type | Description |
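For example, a sketch of locale-aware parsing (the results shown assume the default "o" output format described above):

```
parseDateTime('20/10/2014', 'fr-fr') // Returns '2014-10-20T00:00:00.0000000'.
parseDateTime('20 octobre 2014', 'fr-FR', 'd MMMM yyyy') // Returns '2014-10-20T00:00:00.0000000'.
```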
And returns this array with the remaining items: `[1,2,3]`
Return a substring by specifying the starting and ending position or value. ```
-slice('<text>', <startIndex>, <endIndex>)
+slice('<text>', <startIndex>, <endIndex>?)
```

| Parameter | Required | Type | Description |
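For example, a sketch of typical `slice` calls (indices are zero-based, the character at *endIndex* itself isn't included, and negative indices count back from the end of the string):

```
slice('Hello World', 6)    // Returns 'World'.
slice('Hello World', 0, 5) // Returns 'Hello'.
slice('Hello World', -5)   // Returns 'World'.
```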
machine-learning Latent Dirichlet Allocation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/latent-dirichlet-allocation.md
This component requires a dataset that contains a column of text, either raw or
1. Add the **Latent Dirichlet Allocation** component to your pipeline.
+ In the list of assets under *Text Analytics*, drag and drop the **Latent Dirichlet Allocation** component onto the canvas.
+
2. As input for the component, provide a dataset that contains one or more text columns.

3. For **Target columns**, choose one or more columns that contain text to analyze.
machine-learning Dsvm Tutorial Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-tutorial-resource-manager.md
Title: 'Quickstart: Create a Data Science VM - Resource Manager template'
description: In this quickstart, you use an Azure Resource Manager template to quickly deploy a Data Science Virtual Machine --++ Last updated 06/10/2020
machine-learning Dsvm Ubuntu Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-ubuntu-intro.md
Title: 'Quickstart: Create an Ubuntu Data Science Virtual Machine'
description: Configure and create a Data Science Virtual Machine for Linux (Ubuntu) to do analytics and machine learning. --++ Last updated 03/10/2020
machine-learning Provision Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/provision-vm.md
description: Configure and create a Data Science Virtual Machine on Azure for analytics and machine learning. --++ Last updated 12/31/2019
machine-learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/release-notes.md
Main changes:
- Further `Log4j` vulnerability mitigation - although not used, we moved all `log4j` to version 2; we removed the old `log4j` 1.0 jars and moved to `log4j` version 2.0 jars.
- Azure CLI to version 2.33.1
- Redesign of Conda environments - we're continuing with alignment and refining the Conda environments, so we created:
- - `azureml_py38`: environment based on Python 3.8 with preinstalled [AzureML SDK](/python/api/overview/azure/ml/?view=azure-ml-py) containing also [AutoML](/azure/machine-learning/concept-automated-ml) environment
+ - `azureml_py38`: environment based on Python 3.8 with preinstalled [AzureML SDK](/python/api/overview/azure/ml/?view=azure-ml-py&preserve-view=true) containing also [AutoML](/azure/machine-learning/concept-automated-ml) environment
  - `azureml_py38_PT_TF`: complementary environment to `azureml_py38`, preinstalled with the latest TensorFlow and PyTorch
  - `py38_default`: default system environment based on Python 3.8
- We removed the `azureml_py36_tensorflow`, `azureml_py36_pytorch`, `py38_tensorflow`, and `py38_pytorch` environments.
machine-learning How To Access Terminal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-terminal.md
Any of the [available Jupyter Kernels](https://github.com/jupyter/jupyter/wiki/J
Select **View active sessions** in the terminal toolbar to see a list of all active terminal sessions. When there are no active sessions, this tab will be disabled.
-Close any unused sessions to preserve your compute instance's resources.
+> [!WARNING]
+> Make sure you close any unused sessions to preserve your compute instance's resources. Idle terminals may impact the performance of compute instances.
machine-learning How To Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-assign-roles.md
Previously updated : 10/21/2021 Last updated : 03/23/2022
The following table is a summary of Azure Machine Learning activities and the pe
| Activity | Subscription-level scope | Resource group-level scope | Workspace-level scope | | -- | -- | -- | -- |
-| Create new workspace | Not required | Owner or contributor | N/A (becomes Owner or inherits higher scope role after creation) |
+| Create new workspace <sub>1</sub> | Not required | Owner or contributor | N/A (becomes Owner or inherits higher scope role after creation) |
| Request subscription level Amlcompute quota or set workspace level quota | Owner, or contributor, or custom role </br>allowing `/locations/updateQuotas/action`</br> at subscription scope | Not Authorized | Not Authorized | | Create new compute cluster | Not required | Not required | Owner, contributor, or custom role allowing: `/workspaces/computes/write` | | Create new compute instance | Not required | Not required | Owner, contributor, or custom role allowing: `/workspaces/computes/write` | | Submitting any type of run | Not required | Not required | Owner, contributor, or custom role allowing: `"/workspaces/*/read", "/workspaces/environments/write", "/workspaces/experiments/runs/write", "/workspaces/metadata/artifacts/write", "/workspaces/metadata/snapshots/write", "/workspaces/environments/build/action", "/workspaces/experiments/runs/submit/action", "/workspaces/environments/readSecrets/action"` | | Publishing pipelines and endpoints | Not required | Not required | Owner, contributor, or custom role allowing: `"/workspaces/endpoints/pipelines/*", "/workspaces/pipelinedrafts/*", "/workspaces/modules/*"` |
+| Attach an AKS resource <sub>2</sub> | Not required | Owner or contributor on the resource group that contains AKS | |
| Deploying a registered model on an AKS/ACI resource | Not required | Not required | Owner, contributor, or custom role allowing: `"/workspaces/services/aks/write", "/workspaces/services/aci/write"` | | Scoring against a deployed AKS endpoint | Not required | Not required | Owner, contributor, or custom role allowing: `"/workspaces/services/aks/score/action", "/workspaces/services/aks/listkeys/action"` (when you are not using Azure Active Directory auth) OR `"/workspaces/read"` (when you are using token auth) | | Accessing storage using interactive notebooks | Not required | Not required | Owner, contributor, or custom role allowing: `"/workspaces/computes/read", "/workspaces/notebooks/samples/read", "/workspaces/notebooks/storage/*", "/workspaces/listStorageAccountKeys/action", "/workspaces/listNotebookAccessToken/read"`| | Create new custom role | Owner, contributor, or custom role allowing `Microsoft.Authorization/roleDefinitions/write` | Not required | Owner, contributor, or custom role allowing: `/workspaces/computes/write` |
-> [!TIP]
-> If you receive a failure when trying to create a workspace for the first time, make sure that your role allows `Microsoft.MachineLearningServices/register/action`. This action allows you to register the Azure Machine Learning resource provider with your Azure subscription.
+1: If you receive a failure when trying to create a workspace for the first time, make sure that your role allows `Microsoft.MachineLearningServices/register/action`. This action allows you to register the Azure Machine Learning resource provider with your Azure subscription.
+
+2: When attaching an AKS cluster, you also need the [Azure Kubernetes Service Cluster Admin Role](/azure/role-based-access-control/built-in-roles#azure-kubernetes-service-cluster-admin-role) on the cluster.
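For example, a minimal Azure CLI sketch of granting that role (the assignee ID, resource group, and cluster name are placeholders to replace with your own values):

```azurecli
# Grant the built-in AKS Cluster Admin role at the scope of the cluster itself
az role assignment create \
  --role "Azure Kubernetes Service Cluster Admin Role" \
  --assignee <user-or-service-principal-id> \
  --scope $(az aks show --resource-group <resource-group> --name <aks-cluster> --query id --output tsv)
```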
+
+### Create a workspace using a customer-managed key
+
+When using a customer-managed key (CMK), an Azure Key Vault is used to store the key. The user or service principal used to create the workspace must have owner or contributor access to the key vault.
+
+Within the key vault, the user or service principal must have create, get, delete, and purge access to the key through a key vault access policy. For more information, see [Azure Key Vault security](/azure/key-vault/general/security-features#controlling-access-to-key-vault-data).
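For example, a minimal sketch of granting those key permissions with the Azure CLI (the vault name and object ID are placeholders):

```azurecli
# Give the workspace creator the key permissions required for a CMK workspace
az keyvault set-policy \
  --name <key-vault-name> \
  --object-id <creator-object-id> \
  --key-permissions create get delete purge
```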
### User-assigned managed identity with Azure ML compute cluster
machine-learning How To Configure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-cli.md
If your Azure Machine Learning workspace uses a private endpoint and virtual net
* If you are __OK__ with the CLI v2 communication over the public internet, use the following `--public-network-access` parameter for the `az ml workspace update` command to enable public network access. For example, the following command updates a workspace for public network access: ```azurecli
- az ml workspace update --name myworkspace --public-network-access
+ az ml workspace update --name myworkspace --public-network-access enabled
``` * If you are __not OK__ with the CLI v2 communication over the public internet, you can use an Azure Private Link to increase security of the communication. Use the following links to secure communications with Azure Resource Manager by using Azure Private Link.
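To confirm the update took effect, you could query the workspace afterwards; a sketch, assuming a default resource group is configured and that the property is surfaced as `public_network_access` (as in the CLI v2 YAML schema):

```azurecli
# Show the workspace's public network access setting (property name assumed)
az ml workspace show --name myworkspace --query public_network_access
```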
machine-learning How To Create Attach Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-kubernetes.md
Azure Machine Learning can deploy trained machine learning models to Azure Kuber
The AML control plane does not talk to this Public IP. It talks to the AKS control plane for deployments.
+- To attach an AKS cluster, the service principal/user performing the operation must be assigned the __Owner or contributor__ Azure role-based access control (Azure RBAC) role on the Azure resource group that contains the cluster. The service principal/user must also be assigned [Azure Kubernetes Service Cluster Admin Role](/azure/role-based-access-control/built-in-roles#azure-kubernetes-service-cluster-admin-role) on the cluster.
+ - If you **attach** an AKS cluster, which has an [Authorized IP range enabled to access the API server](../aks/api-server-authorized-ip-ranges.md), enable the AML control plane IP ranges for the AKS cluster. The AML control plane is deployed across paired regions and deploys inference pods on the AKS cluster. Without access to the API server, the inference pods cannot be deployed. Use the [IP ranges](https://www.microsoft.com/download/confirmation.aspx?id=56519) for both the [paired regions](../availability-zones/cross-region-replication-azure.md) when enabling the IP ranges in an AKS cluster. Authorized IP ranges only works with Standard Load Balancer.
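For example, a sketch of updating an existing cluster's authorized ranges with the Azure CLI (placeholders throughout; note the command replaces the whole list, so include your existing ranges alongside the AML control plane ranges):

```azurecli
# Overwrite the cluster's authorized API server IP ranges with the combined list
az aks update \
  --resource-group <resource-group> \
  --name <aks-cluster> \
  --api-server-authorized-ip-ranges <existing-ranges>,<aml-control-plane-ranges>
```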
machine-learning How To Manage Workspace Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace-cli.md
If your Azure Machine Learning workspace uses a private endpoint and virtual net
* If you are __OK__ with the CLI v2 communication over the public internet, use the following `--public-network-access` parameter for the `az ml workspace update` command to enable public network access. For example, the following command updates a workspace for public network access: ```azurecli
- az ml workspace update --name myworkspace --public-network-access
+ az ml workspace update --name myworkspace --public-network-access enabled
``` * If you are __not OK__ with the CLI v2 communication over the public internet, you can use an Azure Private Link to increase security of the communication. Use the following links to secure communications with Azure Resource Manager by using Azure Private Link.
machine-learning How To Run Jupyter Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-run-jupyter-notebooks.md
Using the following keystroke shortcuts, you can more easily navigate and run co
## Troubleshooting
-* If you can't connect to a notebook, ensure that web socket communication is **not** disabled. For compute instance Jupyter functionality to work, web socket communication must be enabled. Ensure your [network allows websocket connections](how-to-access-azureml-behind-firewall.md?tabs=ipaddress#microsoft-hosts) to *.instances.azureml.net and *.instances.azureml.ms.
+* **Connecting to a notebook**: If you can't connect to a notebook, ensure that web socket communication is **not** disabled. For compute instance Jupyter functionality to work, web socket communication must be enabled. Ensure your [network allows websocket connections](how-to-access-azureml-behind-firewall.md?tabs=ipaddress#microsoft-hosts) to *.instances.azureml.net and *.instances.azureml.ms.
-* When a compute instance is deployed in a workspace with a private endpoint, it can be only be [accessed from within virtual network](./how-to-secure-training-vnet.md). If you are using custom DNS or hosts file, add an entry for < instance-name >.< region >.instances.azureml.ms with the private IP address of your workspace private endpoint. For more information see the [custom DNS](./how-to-custom-dns.md?tabs=azure-cli) article.
+* **Private endpoint**: When a compute instance is deployed in a workspace with a private endpoint, it can only be [accessed from within the virtual network](./how-to-secure-training-vnet.md). If you are using a custom DNS or hosts file, add an entry for < instance-name >.< region >.instances.azureml.ms with the private IP address of your workspace private endpoint. For more information, see the [custom DNS](./how-to-custom-dns.md?tabs=azure-cli) article.
-* If your kernel crashed and was restarted, you can run the following command to look at jupyter log and find out more details: `sudo journalctl -u jupyter`. If kernel issues persist, consider using a compute instance with more memory.
+* **Kernel crash**: If your kernel crashed and was restarted, you can run the following command to look at the Jupyter log and find out more details: `sudo journalctl -u jupyter`. If kernel issues persist, consider using a compute instance with more memory.
-* If you run into an expired token issue, sign out of your Azure ML studio, sign back in, and then restart the notebook kernel.
+* **Kernel not found** or **Kernel operations were disabled**: When using the default Python 3.8 kernel on a compute instance, you may get an error such as "Kernel not found" or "Kernel operations were disabled". To fix, use one of the following methods:
+ * Create a new compute instance. This will use a new image where this problem has been resolved.
+ * Use the Py 3.6 kernel on the existing compute instance.
+ * From a terminal in the default py38 environment, run ```pip install ipykernel==6.6.0``` OR ```pip install ipykernel==6.0.3```
-* When uploading a file through the notebook's file explorer, you are limited files that are smaller than 5TB. If you need to upload a file larger than this, we recommend that you use one of the following methods:
+* **Expired token**: If you run into an expired token issue, sign out of your Azure ML studio, sign back in, and then restart the notebook kernel.
+
+* **File upload limit**: When uploading a file through the notebook's file explorer, you are limited to files that are smaller than 5 TB. If you need to upload a file larger than this, we recommend that you use one of the following methods:
  * Use the SDK to upload the data to a datastore. For more information, see the [Upload the data](./tutorial-1st-experiment-bring-data.md#upload) section of the tutorial.
  * Use [Azure Data Factory](how-to-data-ingest-adf.md) to create a data ingestion pipeline.
+
## Next steps

* [Run your first experiment](tutorial-1st-experiment-sdk-train.md)
machine-learning How To Secure Workspace Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-workspace-vnet.md
This article is part of a series on securing an Azure Machine Learning workflow.
* [Enable studio functionality](how-to-enable-studio-virtual-network.md) * [Use custom DNS](how-to-custom-dns.md) * [Use a firewall](how-to-access-azureml-behind-firewall.md)
+* [Tutorial: Create a secure workspace](tutorial-create-secure-workspace.md)
+* [Tutorial: Create a secure workspace using a template](tutorial-create-secure-workspace-template.md)
machine-learning How To Use Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-event-grid.md
---++ Last updated 10/21/2021
machine-learning How To Use Reinforcement Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-reinforcement-learning.md
description: Learn how to use Azure Machine Learning reinforcement learning (pre
--++ Last updated 10/21/2021
machine-learning Overview What Is Azure Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/overview-what-is-azure-machine-learning.md
--++ Last updated 08/03/2021 adobe-target: true
marketplace Analytics System Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/analytics-system-queries.md
description: Learn about system queries you can use to programmatically get anal
Previously updated : 02/23/2022 Last updated : 03/30/2022
The following sections provide various report queries.
**Report query**:
-`Date,OfferName,ReferralDomain,CountryName,PageVisits,GetItNow,ContactMe,TestDrive,FreeTrial FROM ISVMarketplaceInsights TIMESPAN LAST_6_MONTHS`
+`SELECT Date,OfferName,ReferralDomain,CountryName,PageVisits,GetItNow,ContactMe,TestDrive,FreeTrial FROM ISVMarketplaceInsights TIMESPAN LAST_6_MONTHS`
## Revenue report query **Report description**: Revenue report for the last 6M
-**QueryID**: `6fd7624b-aa9f-42df-a61d-67d42fd00e92`
+**QueryID**: `bf54dde4-7dc4-492f-a69a-f45de049bfcb`
**Report query**:
mysql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-read-replicas.md
Last updated 06/17/2021
MySQL is one of the popular database engines for running internet-scale web and mobile applications. Many of our customers use it for their online education services, video streaming services, digital payment solutions, e-commerce platforms, gaming services, news portals, government, and healthcare websites. These services are required to serve and scale as the traffic on the web or mobile application increases.
-On the applications side, the application is typically developed in Java or php and migrated to run on Azure virtual machine scale sets or Azure App Services or are containerized to run on Azure Kubernetes Service (AKS). With virtual machine scale set, App Service or AKS as underlying infrastructure, application scaling is simplified by instantaneously provisioning new VMs and replicating the stateless components of applications to cater to the requests but often, database ends up being a bottleneck as centralized stateful component.
+On the applications side, the application is typically developed in Java or PHP and migrated to run on Azure virtual machine scale sets or Azure App Service, or is containerized to run on Azure Kubernetes Service (AKS). With virtual machine scale sets, App Service, or AKS as the underlying infrastructure, application scaling is simplified by instantaneously provisioning new VMs and replicating the stateless components of applications to cater to the requests. Often, however, the database ends up being the bottleneck as the centralized stateful component.
The read replica feature allows you to replicate data from an Azure Database for MySQL flexible server to a read-only server. You can replicate from the source server to up to **10** replicas. Replicas are updated asynchronously using the MySQL engine's native binary log (binlog) file position-based replication technology. To learn more about binlog replication, see the [MySQL binlog replication overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html).
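For example, a sketch of creating a replica with the Azure CLI (server and resource group names are placeholders):

```azurecli
# Create a read replica of an existing flexible server; binlog replication starts automatically
az mysql flexible-server replica create \
  --replica-name <replica-server-name> \
  --resource-group <resource-group> \
  --source-server <source-server-name>
```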
network-watcher Network Watcher Troubleshoot Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-troubleshoot-overview.md
The resource troubleshooting log files are stored in a storage account after res
> [!NOTE]
> 1. In some cases, only a subset of the log files is written to storage.
-> 2. For newer Gateway versions, the IkeErrors.txt, Scrubbed-wfpdiag.txt and wfpdiag.txt.sum have been replaced by a IkeLogs.txt file that contains the whole IKE activity (not just errors).
+> 2. For newer Gateway versions, the IkeErrors.txt, Scrubbed-wfpdiag.txt and wfpdiag.txt.sum have been replaced by an IkeLogs.txt file that contains the whole IKE activity (not just errors).
For instructions on downloading files from Azure storage accounts, refer to [Get started with Azure Blob storage using .NET](../storage/blobs/storage-quickstart-blobs-dotnet.md). Another tool that can be used is Storage Explorer. More information about Storage Explorer can be found here at the following link: [Storage Explorer](https://storageexplorer.com/)
Elapsed Time 330 sec
To learn how to diagnose a problem with a gateway or gateway connection, see [Diagnose communication problems between networks](diagnose-communication-problem-between-networks.md). <!--Image references-->
-[1]: ./media/network-watcher-troubleshoot-overview/GatewayTenantWorkerLogs.png
+[1]: ./media/network-watcher-troubleshoot-overview/gateway-tenant-worker-logs-new.png
[2]: ./media/network-watcher-troubleshoot-overview/portal.png
openshift Quickstart Openshift Arm Bicep Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/quickstart-openshift-arm-bicep-template.md
The following example shows how your ARM template should look when configured fo
The template defines three Azure resources: * [**Microsoft.Network/virtualNetworks**](/azure/templates/microsoft.network/virtualnetworks)
-* [**Microsoft.Network/virtualNetworks/providers/roleAssignments**](/azure/templates/microsoft.network/virtualnetworks/providers/roleassignments)
+* [**Microsoft.Network/virtualNetworks/providers/roleAssignments**](/azure/templates/microsoft.authorization/roleassignments)
* [**Microsoft.RedHatOpenShift/OpenShiftClusters**](/azure/templates/microsoft.redhatopenshift/openshiftclusters) More Azure Red Hat OpenShift template samples can be found on the [Red Hat OpenShift web site](https://docs.openshift.com/container-platform/4.9/installing/installing_azure/installing-azure-user-infra.html).
The following example shows how your Azure Bicep file should look when configure
The Bicep file defines three Azure resources: * [Microsoft.Network/virtualNetworks](/azure/templates/microsoft.network/virtualnetworks)
-* [Microsoft.Network/virtualNetworks/providers/roleAssignments](/azure/templates/microsoft.network/virtualnetworks/providers/roleassignments)
+* [Microsoft.Network/virtualNetworks/providers/roleAssignments](/azure/templates/microsoft.authorization/roleassignments)
* [Microsoft.RedHatOpenShift/OpenShiftClusters](/azure/templates/microsoft.redhatopenshift/openshiftclusters) More Azure Red Hat OpenShift templates can be found on the [Red Hat OpenShift web site](https://docs.openshift.com/container-platform/4.9/installing/installing_azure/installing-azure-user-infra.html).
Advance to the next article to learn how to configure the cluster for authentica
* [Configure authentication with Azure Active Directory using the command line](configure-azure-ad-cli.md)
-* [Configure authentication with Azure Active Directory using the Azure portal and OpenShift web console](configure-azure-ad-cli.md)i
+* [Configure authentication with Azure Active Directory using the Azure portal and OpenShift web console](configure-azure-ad-cli.md)
private-5g-core Activate Sims https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/activate-sims.md
- Title: Activate SIMs-
-description: This how-to guide shows how to activate SIMs used by user equipment so they can use your private mobile network.
---- Previously updated : 01/17/2022---
-# Activate SIMs for Azure Private 5G Core Preview
-
-SIM resources represent physical or eSIMs used by user equipment (UE). Activating a SIM resource allows the UE to use the corresponding physical or eSIM to access your private mobile network. In this how-to guide, you'll learn how to activate the SIMs you've provisioned.
-
-## Prerequisites
-
-- Ensure you can sign in to the Azure portal using an account with access to the active subscription you identified in [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md).
-- Ensure you've provisioned and assigned a SIM policy to the SIMs you want to activate, as described in [Provision SIMs - Azure portal](provision-sims-azure-portal.md).
-- Get the name of the private mobile network containing the SIMs you want to activate.
-
-## Activate your chosen SIMs
-
-1. Sign in to the Azure portal at [https://aka.ms/AP5GCPortal](https://aka.ms/AP5GCPortal).
-1. Search for and select the Mobile Network resource representing the private mobile network for which you want to activate SIMs.
-
- :::image type="content" source="media/mobile-network-search.png" alt-text="Screenshot of the Azure portal. It shows the results of a search for a mobile network resource.":::
-
-1. In the **Resource** menu, select **SIMs**.
-1. You're shown a list of provisioned SIMs in the private mobile network. Tick the checkbox next to the name of each SIM you want to activate.
-3. In the **Command** bar, select **Activate**.
-4. In the pop-up that appears, select **Activate** to confirm that you want to activate your chosen SIMs.
-5. The activation process can take a few minutes. During this time, the value in the **Activation** status column for each SIM will display as **Activating**. Keep selecting **Refresh** in the command bar until the **Activation** status field for all of the relevant SIMs changes to **Activated**.
-
-## Next steps
--- [Learn more about policy control](policy-control.md)
private-5g-core Collect Required Information For A Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/collect-required-information-for-a-site.md
Azure Private 5G Core Preview private mobile networks include one or more sites.
## Prerequisites
-You must have completed all of the steps in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and [Order and set up your Azure Stack Edge Pro device(s)](complete-private-mobile-network-prerequisites.md#order-and-set-up-your-azure-stack-edge-pro-devices) for your new site.
+You must have completed all of the steps in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses), [Allocate User Equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md#allocate-user-equipment-ue-ip-address-pools), and [Order and set up your Azure Stack Edge Pro device(s)](complete-private-mobile-network-prerequisites.md#order-and-set-up-your-azure-stack-edge-pro-devices) for your new site.
## Collect Mobile Network Site resource values
Collect all the values in the following table to define the packet core instance
Collect all the values in the following table to define the packet core instance's connection to the data network over the N6 interface. > [!IMPORTANT]
-> Where noted, you must use the same values you used when deploying the AKS-HCI cluster on your Azure Stack Edge Pro device. You did this as part of the steps in [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md).
+> Where noted, you must use the same values you used when deploying the AKS-HCI cluster on your Azure Stack Edge Pro device. You did this as part of the steps in [Order and set up your Azure Stack Edge Pro device(s)](complete-private-mobile-network-prerequisites.md#order-and-set-up-your-azure-stack-edge-pro-devices).
|Value |Field name in Azure portal |
|||
|The name of the data network. |**Data network**|
|The network address of the data subnet in CIDR notation. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. |**N6 subnet**|
|The data subnet default gateway. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and it must match the value you used when deploying the AKS-HCI cluster. |**N6 gateway**|
- | The network address of the subnet from which IP addresses must be allocated to User Equipment (UEs), given in CIDR notation. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses). The following example shows the network address format. </br></br>`198.51.100.0/24` </br></br>Note that the UE subnets aren't related to the access subnet. |**UE IP subnet**|
+ | The network address of the subnet from which dynamic IP addresses must be allocated to User Equipment (UEs), given in CIDR notation. You won't need this if you don't want to support dynamic IP address allocation. You identified this in [Allocate User Equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md#allocate-user-equipment-ue-ip-address-pools). The following example shows the network address format. </br></br>`198.51.100.0/24` </br></br>Note that the UE subnets aren't related to the access subnet. |**Dynamic UE IP pool prefixes**|
+ | The network address of the subnet from which static IP addresses must be allocated to User Equipment (UEs), given in CIDR notation. You won't need this if you don't want to support static IP address allocation. You identified this in [Allocate User Equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md#allocate-user-equipment-ue-ip-address-pools). The following example shows the network address format. </br></br>`198.51.100.0/24` </br></br>Note that the UE subnets aren't related to the access subnet. |**Static UE IP pool prefixes**|
|Whether Network Address and Port Translation (NAPT) should be enabled for this data network. NAPT allows you to translate a large pool of private IP addresses for UEs to a small number of public IP addresses. The translation is performed at the point where traffic enters the core network, maximizing the utility of a limited supply of public IP addresses. |**NAPT**|

## Next steps
private-5g-core Collect Required Information For Private Mobile Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/collect-required-information-for-private-mobile-network.md
As part of creating your private mobile network, you can provision one or more S
If you want to provision SIMs as part of deploying your private mobile network, you must choose one of the following provisioning methods:

- Manually entering values for each SIM into fields in the Azure portal. This option is best when provisioning a small number of SIMs.
-- Importing a JSON file containing values for one or more SIM resources. This option is best when provisioning a large number of SIMs. The file format required for this JSON file is given in [Provision SIM resources through the Azure portal using a JSON file](#provision-sim-resources-through-the-azure-portal-using-a-json-file).
+- Importing a JSON file containing values for one or more SIM resources. This option is best when provisioning a large number of SIMs. The file format required for this JSON file is given in [JSON file format for provisioning SIMs](#json-file-format-for-provisioning-sims).
You must then collect each of the values given in the following table for each SIM resource you want to provision.
You must then collect each of the values given in the following table for each S
|The derived operator code (OPc). This is derived from the SIM's Ki and the network's operator code (OP), and is used by the packet core to authenticate a user using a standards-based algorithm. This must be a 32-character string, containing hexadecimal characters only. |**Opc**|`operatorKeyCode`| |The type of device that is using this SIM. This is an optional, free-form string. You can use it as required to easily identify device types that are using the enterprise's mobile networks. |**Device type**|`deviceType`|
-### Provision SIM resources through the Azure portal using a JSON file
+### JSON file format for provisioning SIMs
The following example shows the file format you'll need if you want to provision your SIM resources using a JSON file. It contains the parameters required to provision two SIMs (SIM1 and SIM2).
The following example shows the file format you'll need if you want to provision
] ```
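The full example isn't reproduced here, but the overall shape is a JSON array with one object per SIM. The following is only an illustrative sketch: `operatorKeyCode` and `deviceType` come from the table above, while the other property names and all values are assumptions.

```json
[
  {
    "simName": "SIM1",
    "integratedCircuitCardIdentifier": "8912345678901234566",
    "internationalMobileSubscriberIdentity": "001019990010001",
    "authenticationKey": "00112233445566778899AABBCCDDEEFF",
    "operatorKeyCode": "00112233445566778899AABBCCDDEEFF",
    "deviceType": "Cellphone"
  }
]
```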
+## Decide whether you want to use the default service and SIM policy
+
+You'll be given the option of creating a default service and SIM policy as part of deploying your private mobile network. They allow all traffic in both directions for all the SIMs you provision. They're designed to allow you to quickly deploy a private mobile network and bring SIMs into service automatically, without the need to design your own policy control configuration.
+
+Decide whether the default service and SIM policy are suitable for the initial use of your private mobile network. You can find information on each of the specific settings for these resources in [Default service and SIM policy](default-service-sim-policy.md) if you need it.
+
+If they aren't suitable, you can choose to deploy the private mobile network without any services or SIM policies. In this case, any SIMs you provision won't be brought into service when you create your private mobile network. You'll need to create your own services and SIM policies later.
+
+For detailed information on services and SIM policies, see [Policy control](policy-control.md).
+ ## Next steps You can now use the information you've collected to deploy your private mobile network.
private-5g-core Complete Private Mobile Network Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/complete-private-mobile-network-prerequisites.md
For each of the following networks, allocate a subnet and then identify the list
### Management network - Network address in Classless Inter-Domain Routing (CIDR) notation. -- Default gateway.
+- Default gateway.
- One IP address for the Azure Stack Edge Pro device's management port. - Three sequential IP addresses for the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster nodes. - One IP address for accessing local monitoring tools for the packet core instance.
For each of the following networks, allocate a subnet and then identify the list
- One IP address for port 6 on the Azure Stack Edge Pro device. - One IP address for the packet core instance's N6 interface.
-### User Equipment (UE) IP address pool
+## Allocate user equipment (UE) IP address pools
+
+Azure Private 5G Core supports the following IP address allocation methods for UEs.
+
+- Dynamic. Dynamic IP address allocation automatically assigns a new IP address to a UE each time it connects to the private mobile network.
+
+- Static. Static IP address allocation ensures that a UE receives the same IP address every time it connects to the private mobile network. This is useful when you want Internet of Things (IoT) applications to be able to consistently connect to the same device. For example, you may configure a video analysis application with the IP addresses of the cameras providing video streams. If these cameras have static IP addresses, you will not need to reconfigure the video analysis application with new IP addresses each time the cameras restart. You'll allocate static IP addresses to a UE as part of [provisioning its SIM](provision-sims-azure-portal.md).
+
+You can choose to support one or both of these methods for each site in your private mobile network.
+
+For each site you're deploying, do the following:
+
+- Decide which IP address allocation methods you want to support.
+- For each method you want to support, identify an IP address pool from which IP addresses can be allocated to UEs. You'll need to provide each IP address pool in CIDR notation.
-- IP address pool in CIDR notation. This should contain IP addresses for each UE that will be served by the private mobile network.
+ If you decide to support both methods for a particular site, ensure that the IP address pools are of the same size and do not overlap.
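For example, a site supporting both allocation methods might use two equally sized, non-overlapping pools (the prefixes below are purely illustrative):

```
Dynamic UE IP pool prefix: 198.51.100.0/25
Static UE IP pool prefix:  198.51.100.128/25
```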
## Order and set up your Azure Stack Edge Pro device(s)
-You must do the following for each site you want to add to your private mobile network. Detailed instructions for how to carry out each step are included in the **Detailed instructions** column where applicable.
+Do the following for each site you want to add to your private mobile network. Detailed instructions for how to carry out each step are included in the **Detailed instructions** column where applicable.
| Step No. | Description | Detailed instructions | |--|--|--|
private-5g-core Configure Sim Policy Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/configure-sim-policy-azure-portal.md
- Ensure you can sign in to the Azure portal using an account with access to the active subscription you identified in [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md). This account must have the built-in Contributor role at the subscription scope.
- Identify the name of the Mobile Network resource corresponding to your private mobile network.
- Collect all the configuration values in [Collect the required information for a SIM policy](collect-required-information-for-sim-policy.md) for your chosen SIM policy.
-- Decide whether you want to assign this SIM policy to any SIMs as part of configuring it. If you do, you must have provisioned these SIMs following the instructions in [Provision SIMs - Azure portal](provision-sims-azure-portal.md) and ensured they aren't currently active.
+- Decide whether you want to assign this SIM policy to any SIMs as part of configuring it. If you do, you must have provisioned these SIMs following the instructions in [Provision SIMs - Azure portal](provision-sims-azure-portal.md).
## Configure the SIM policy
## Next steps

-- If you assigned this SIM policy to some SIMs, [activate the SIMs so they can access your private mobile network](activate-sims.md).
+- [Learn more about policy control](policy-control.md)
private-5g-core Create A Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/create-a-site.md
Azure Private 5G Core private mobile networks include one or more *sites*. Each
## Prerequisites

-- Complete the steps in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses) and [Order and set up your Azure Stack Edge Pro device(s)](complete-private-mobile-network-prerequisites.md#order-and-set-up-your-azure-stack-edge-pro-devices) for your new site.
+- Complete the steps in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses), [Allocate user equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md#allocate-user-equipment-ue-ip-address-pools), and [Order and set up your Azure Stack Edge Pro device(s)](complete-private-mobile-network-prerequisites.md#order-and-set-up-your-azure-stack-edge-pro-devices) for your new site.
- Collect all of the information in [Collect the required information for a site](collect-required-information-for-a-site.md). - Ensure you can sign in to the Azure portal using an account with access to the active subscription you used to create your private mobile network. This account must have the built-in Contributor or Owner role at the subscription scope.
In this step, you'll create the **Mobile Network Site** resource representing th
- A **Packet Core Data Plane** resource representing the data plane function of the packet core instance in the site. - An **Attached Data Network** resource representing the site's view of the data network.
- :::image type="content" source="media/how-to-guide-deploy-a-private-mobile-network-azure-portal/site-and-related-resources.png" alt-text="Screenshot of the Azure portal showing a resource group containing a site and its related resources." lightbox="media/how-to-guide-deploy-a-private-mobile-network-azure-portal/site-and-related-resources.png":::
+ :::image type="content" source="media/how-to-guide-deploy-a-private-mobile-network-azure-portal/site-related-resources.png" alt-text="Screenshot of the Azure portal showing a resource group containing a site and its related resources." lightbox="media/how-to-guide-deploy-a-private-mobile-network-azure-portal/site-related-resources.png":::
## Next steps
private-5g-core Default Service Sim Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/default-service-sim-policy.md
+
+ Title: Default service and SIM policy
+
+description: Information on the default service and SIM policy that can be created as part of deploying a private mobile network.
++++ Last updated : 03/18/2022+++
+# Default service and SIM policy
+
+You're given the option of creating a default service and SIM policy when you first create a private mobile network using the instructions in [Deploy a private mobile network through Azure Private 5G Core Preview - Azure portal](how-to-guide-deploy-a-private-mobile-network-azure-portal.md).
+
+- The default service allows all traffic in both directions.
+- The default SIM policy is automatically assigned to all SIMs you provision as part of creating the private mobile network, and applies the default service to these SIMs.
+
+They're designed to allow you to quickly deploy a private mobile network and bring SIMs into service automatically, without the need to design your own policy control configuration.
+
+The following sections provide the settings for the default service and SIM policy. You can use these to decide whether they're suitable for the initial deployment of your private mobile network. If you need more information on any of these settings, see [Collect the required information for a service](collect-required-information-for-service.md) and [Collect the required information for a SIM policy](collect-required-information-for-sim-policy.md).
+
+## Default service
+
+The following tables provide the settings for the default service and its associated data flow policy rule and data flow policy template.
+
+### Service settings
+
+|Setting |Value |
+|||
+|The service name. |*Allow-all-traffic* |
+|A precedence value that the packet core instance must use to decide between services when identifying the QoS values to offer.|*253* |
+|The Maximum Bit Rate (MBR) for uploads across all service data flows that will be included in data flow policy rules configured on this service.|*2 Gbps* |
+|The Maximum Bit Rate (MBR) for downloads across all service data flows that will be included in data flow policy rules configured on this service. |*2 Gbps* |
+|The default QoS Flow Allocation and Retention Policy (ARP) priority level.| *9* |
+|The default 5G QoS Indicator (5QI) value for this service. The 5QI identifies a set of 5G QoS characteristics that control QoS forwarding treatment for QoS Flows, such as limits for Packet Error Rate. | *9* |
+|The default QoS Flow preemption capability for QoS Flows for this service. The preemption capability of a QoS Flow controls whether it can preempt another QoS Flow with a lower priority level. |*May not preempt* |
+|The default QoS Flow preemption vulnerability for QoS Flows for this service. The preemption vulnerability of a QoS Flow controls whether it can be preempted by another QoS Flow with a higher priority level. |*Preemptable* |
+
+### Data flow policy rule settings
+
+|Setting |Value |
+|||
+|The name of the rule. | *All-traffic* |
+|A precedence value that the packet core instance must use to decide between data flow policy rules. | *253* |
+|A traffic control setting determining whether flows that match the data flow template on this data flow policy rule are permitted. | *Enabled* |
+
+### Data flow template settings
+
+|Setting |Value |
+|||
+|The name of the template. | *Any-traffic* |
+|A list of allowed protocol(s) for this flow. | *All* |
+|The direction of this flow. | *Bidirectional* |
+|The remote IP address(es) to which SIMs will connect for this flow. | *any* |
+
+## Default SIM policy
+
+The following tables provide the settings for the default SIM policy and its associated network scope.
+
+### SIM policy settings
+
+|Setting |Value |
+|||
+|The SIM policy name. | *Default-policy* |
+|The UE Aggregated Maximum Bit Rate (UE-AMBR) for uplink traffic (traveling away from SIMs) across all Non-GBR QoS Flows for a SIM to which this SIM policy is assigned. | *2 Gbps* |
+|The UE Aggregated Maximum Bit Rate (UE-AMBR) for downlink traffic (traveling towards SIMs) across all Non-GBR QoS Flows for a SIM to which this SIM policy is assigned. | *2 Gbps* |
+|The interval between UE registrations for SIMs to which this SIM policy is assigned, given in seconds. | *3240* |
+
+### Network scope settings
+
+|Setting |Value |
+|||
+|The names of the services permitted on this data network. | *Allow-all-traffic* |
+|The maximum bitrate for uplink traffic (traveling away from SIMs) across all Non-GBR QoS Flows of a given PDU session on this data network. | *2 Gbps* |
+|The maximum bitrate for downlink traffic (traveling towards SIMs) across all Non-GBR QoS Flows of a given PDU session on this data network. | *2 Gbps* |
+|The default 5G QoS Indicator (5QI) value for this data network. The 5QI identifies a set of 5G QoS characteristics that control QoS forwarding treatment for QoS Flows, such as limits for Packet Error Rate. | *9* |
+|The default QoS Flow Allocation and Retention Policy (ARP) priority level for this data network. Flows with a higher ARP priority level preempt those with a lower ARP priority level. | *1* |
+|The default QoS Flow preemption capability for QoS Flows on this data network. The preemption capability of a QoS Flow controls whether it can preempt another QoS Flow with a lower priority level. | *May not preempt* |
+|The default QoS Flow preemption vulnerability for QoS Flows on this data network. The preemption vulnerability of a QoS Flow controls whether it can be preempted by another QoS Flow with a higher priority level. | *Preemptable* |
+
+## Next steps
+
+Once you've decided whether the default service and SIM policy are suitable, you can start deploying your private mobile network.
+
+- [Collect the required information to deploy a private mobile network](collect-required-information-for-private-mobile-network.md)
+- [Deploy a private mobile network through Azure Private 5G Core Preview - Azure portal](how-to-guide-deploy-a-private-mobile-network-azure-portal.md)
private-5g-core Distributed Tracing Share Traces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/distributed-tracing-share-traces.md
+
+ Title: Export, upload and share traces
+
+description: In this how-to guide, learn how to export, upload and share your detailed traces for diagnostics.
++++ Last updated : 03/03/2022+++
+# Export, upload and share traces
+
+Azure Private 5G Core Preview offers a distributed tracing web GUI, which you can use to collect detailed traces for signaling flows involving packet core instances. You can export these traces and upload them to a storage account to allow your support representative to access them and provide troubleshooting assistance.
+
+## Prerequisites
+
+- Ensure you can sign in to the Azure portal using an account with access to the active subscription you identified in [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md). This account must have the built-in Contributor role at the subscription scope.
+- Ensure you can sign in to the distributed tracing web GUI as described in [Distributed tracing](distributed-tracing.md).
+
+## Create a storage account and blob container in Azure
+
+ When uploading and sharing a trace for the first time, you'll need to create a storage account and a container resource to store your traces. You can skip this step if this has already been done.
+
+1. [Create a storage account](../storage/common/storage-account-create.md) with the following additional configuration:
+ 1. In the **Advanced** tab, select **Enable storage account key access**. This will allow your support representative to download traces stored in this account using the URLs you share with them.
+ 1. In the **Data protection** tab, under **Access control**, select **Enable version-level immutability support**. This will allow you to specify a time-based retention policy for the account in the next step.
+1. If you would like the traces in your storage account to be automatically deleted after a period of time, [configure a default time-based retention policy](../storage/blobs/immutable-policy-configure-version-scope.md#configure-a-default-time-based-retention-policy) for your storage account.
+1. [Create a container](../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) for your traces.
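If you prefer scripting these steps, here's a minimal Azure CLI sketch (names are placeholders; the key-access and immutability settings from step 1 still need to be configured as described above):

```azurecli
# Create a storage account and a container to hold exported traces
az storage account create --name <storageaccount> --resource-group <resource-group> --location <region>
az storage container create --account-name <storageaccount> --name traces
```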
+
+## Export trace from the distributed tracing web GUI
+
+In this step, you'll export the trace from the distributed tracing web GUI and save it locally.
+
+1. Sign in to the distributed tracing web GUI at https://*\<LocalMonitoringIP\>*/sas, where *\<LocalMonitoringIP\>* is the IP address you set up for accessing local monitoring tools.
+1. In the **Search** tab, specify the SUPI and time for the event you're interested in and select **Search**.
+
+ :::image type="content" source="media\distributed-tracing-share-traces\distributed-tracing-search.png" alt-text="Screenshot of the Search display in the distributed tracing web G U I, showing the S U P I search field and date and time range options.":::
+
+1. Find the relevant trace in the **Diagnostics Search Results** tab and select it.
+
+ :::image type="content" source="media\distributed-tracing\distributed-tracing-search-results.png" alt-text="Screenshot of search results on a specific S U P I in the distributed tracing web G U I. It shows matching Successful P D U Session Establishment records.":::
+
+1. Select **Export** and save the file locally.
+
+ :::image type="content" source="media\distributed-tracing-share-traces\distributed-tracing-summary-view-export.png" alt-text="Screenshot of the Summary view of a specific trace in the distributed tracing web G U I, providing information on a Successful P D U Session Establishment record. The Export button in the top ribbon is highlighted." lightbox="media\distributed-tracing-share-traces\distributed-tracing-summary-view-export.png":::
+
+## Upload trace to your blob container
+
+You can now upload the trace to the container you created in [Create a storage account and blob container in Azure](#create-a-storage-account-and-blob-container-in-azure).
+
+1. Sign in to the Azure portal at [https://aka.ms/AP5GCPortal](https://aka.ms/AP5GCPortal).
+1. Navigate to your Storage account resource.
+1. In the **Resource** menu, select **Containers**.
+
+ :::image type="content" source="media\distributed-tracing-share-traces\containers-resource-menu.png" alt-text="Screenshot of the Azure portal showing the Containers option in the resource menu of a Storage account resource." lightbox="media\distributed-tracing-share-traces\containers-resource-menu.png":::
+
+1. Select the container you created for your traces.
+1. Select **Upload**. In the **Upload blob** window, search for the trace file you exported in the previous step and upload it.
+
+ :::image type="content" source="media\distributed-tracing-share-traces\upload-blob-tab.png" alt-text="Screenshot of the Azure portal showing the Overview display of a Container resource. The Upload button is highlighted." lightbox="media\distributed-tracing-share-traces\upload-blob-tab.png":::
+
+## Create URL for sharing the trace
+
+You'll now generate a shared access signature (SAS) URL for your trace. Once you create the URL, you can share it with your support representative for assistance with troubleshooting.
+
+1. Navigate to your Container resource.
+
+ :::image type="content" source="media\distributed-tracing-share-traces\container-overview-tab.png" alt-text="Screenshot of the Azure portal showing the Overview display of a Container resource.":::
+
+1. Select the trace you'd like to share.
+1. Select the **Generate SAS** tab.
+
+ :::image type="content" source="media\distributed-tracing-share-traces\generate-shared-access-signature-tab.png" alt-text="Screenshot of the Azure portal showing the container overview and the trace blob information window. The Generate S A S tab is highlighted." lightbox="media\distributed-tracing-share-traces\generate-shared-access-signature-tab.png":::
+
+1. Fill out the fields with the following configuration:
+ 1. Under **Signing method**, select **Account key**. This means anyone with access to the URL you generate will be able to paste it into a browser and download the trace.
+ 1. Under **Permissions**, select **Read**.
+ 1. Under **Start and expiry date/time**, set an expiration window of 48 hours for your token and URL.
+ 1. If you know the IP address from which your support representative will download the trace, set it under **Allowed IP addresses**. Otherwise, you can leave this blank.
+
+1. Select **Generate SAS token and URL**.
+
+ :::image type="content" source="media\distributed-tracing-share-traces\generate-shared-access-signature-token-url.png" alt-text="Screenshot of the Azure portal showing the Generate S A S tab in the trace blob information window. The Generate S A S token and U R L button is highlighted." lightbox="media\distributed-tracing-share-traces\generate-shared-access-signature-token-url.png":::
+
+1. Copy the contents of the **Blob SAS URL** field and share the URL with your support representative.
+
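+As an alternative to the portal steps above, you can generate an equivalent SAS URL with the Azure CLI. This is a minimal sketch: the angle-bracketed values are placeholders, signing uses the storage account key (matching the **Account key** signing method), and you should set the expiry time within the 48-hour window described earlier.
+
+```azurecli
+# Generate a read-only SAS URL for the uploaded trace, signed with the account key.
+# All angle-bracketed values are placeholders; set <expiry> to a UTC timestamp
+# within 48 hours, for example 2022-04-03T00:00:00Z. You can pass the key
+# explicitly with --account-key if it isn't available from your login context.
+az storage blob generate-sas \
+  --account-name <storage-account> \
+  --container-name <container-name> \
+  --name <trace-file> \
+  --permissions r \
+  --expiry <expiry> \
+  --https-only \
+  --full-uri
+```
+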
+## Delete trace
+
+You should free up space in your blob storage by deleting traces you no longer need. To delete a trace:
+
+1. Navigate to your Container resource.
+1. Choose the trace you want to delete.
+1. Select **Delete**.
++
+## Next steps
+
+- [Learn more about the distributed tracing web GUI](distributed-tracing.md)
private-5g-core Distributed Tracing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/distributed-tracing.md
Azure Private 5G Core Preview offers a *distributed tracing web GUI*, which you can use to collect detailed traces for signaling flows involving packet core instances. You can use *traces* to diagnose many common configuration, network, and interoperability problems affecting user service.
-## Searching for specific information
+## Access the distributed tracing web GUI
+
+To sign in to the distributed tracing web GUI:
+
+1. In your browser, enter https://*\<LocalMonitoringIP\>*/sas, where *\<LocalMonitoringIP\>* is the IP address for accessing the local monitoring tools that you set up in [Management network](complete-private-mobile-network-prerequisites.md#management-network).
+
+ :::image type="content" source="media\distributed-tracing\distributed-tracing-sign-in.png" alt-text="Screenshot of the distributed tracing web G U I sign in page, with fields for the username and password.":::
+
+1. Sign in using your credentials.
+
+   If you're accessing the distributed tracing web GUI for the first time after installing the packet core instance, fill in the fields with the default username and password shown below. Afterwards, follow the prompts to set up a new password that you'll use for subsequent sign-ins.
+
+ - **Name**: *admin*
+ - **Password**: *packetCoreAdmin*
+
+Once you're signed in to the distributed tracing web GUI, you can use the top-level menu to sign out or change your credentials. Select **Logout** to end your current session, and **Change Password** to update your password.
+
+## Search for specific information
The distributed tracing web GUI provides two search tabs to allow you to search for diagnostics.
Long search ranges result in slower searches, so it's recommended that you keep
> [!TIP] > You can select the **cog icon** next to the **Date/time range** heading to customize the date and time format, default search period, and time zone according to your preferences.
-Once you've entered your chosen search parameters, select **Search** to begin your search. The following image shows an example of the results returned for a search on a particular SUPI.
+Once you've entered your chosen search parameters, select **Search**. The following image shows an example of the results returned for a search on a particular SUPI.
:::image type="content" source="media\distributed-tracing\distributed-tracing-search-results.png" alt-text="Screenshot of search results on a specific S U P I in the distributed tracing web G U I. It shows matching Successful P D U Session Establishment records.":::
-You can view more information on any result by selecting it.
+You can select an entry in the search results to view detailed information for that call flow or error.
-## Viewing diagnostics details
+## View diagnostics details
-When you select on a specific result, the display shows the following tabs containing different categories of information.
+When you select a specific result, the display shows the following tabs containing different categories of information.
> [!NOTE] > In addition to the tabs described below, the distributed tracing web GUI also includes a **User Experience** tab. This tab is not used by Azure Private 5G Core Preview and will not display any information.
The **Summary** view displays a description of the flow or error.
The **Detailed Timeline** view shows the sequence of operations and events that occurred during the flow or error. Each entry in the list shows summary information for a specific event that occurred during the flow or error. Each entry includes the date and time at which the event occurred and the name of the component on which it occurred. When you select a specific entry in this list, the panel at the bottom of the screen provides more detail about the selected event.
The **Call Flow** view shows the sequence of messages flowing between components
The vertical lines in the diagram show the network components involved in the flow. -- Black lines indicate packet core Network Functions that have logged sending or receiving messages for this flow.-- Grey lines indicate other components that don't log messages.
+- **Black lines** indicate packet core Network Functions that have logged sending or receiving messages for this flow.
+- **Gray lines** indicate other components that don't log messages.
You can customize the view by showing or hiding individual columns and giving them more descriptive display names. To view these options, select the current column name and then select the **+** (plus) sign that appears to the right of it to open a dropdown menu. Additionally, you can select multiple columns by holding down the Ctrl key as you select each column; the **+** (plus) sign remains next to the latest column that you selected.
The messages appear in the diagram in the order in which they occurred. An axis
If the call flow diagram is too large to fit in the browser window, you can use the vertical and horizontal scrollbars to move around the display.
-## Viewing help information
+## View help information
To view help information, select the **Options** symbol in the upper-right corner and choose **Help**. The help information appears in a panel at the bottom of the display. To hide this panel, select the **X** symbol at the upper-right corner of the panel. ## Next steps -- [Learn more about how you can monitor your deployment using the packet core dashboards](packet-core-dashboards.md)
+- [Learn how to export, upload and share your traces for diagnostics](distributed-tracing-share-traces.md)
+- [Learn more about how you can monitor your deployment using the packet core dashboards](packet-core-dashboards.md)
private-5g-core Enable Log Analytics For Private 5G Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/enable-log-analytics-for-private-5g-core.md
+
+ Title: Enable Log Analytics for a packet core instance
+
+description: In this how-to guide, you'll learn how to enable Log Analytics to allow you to monitor and analyze activity for a packet core instance.
++++ Last updated : 03/08/2022+++
+# Enable Log Analytics for a packet core instance
+
+Log Analytics is a tool in the Azure portal used to edit and run log queries with data in Azure Monitor Logs. You can write queries to retrieve records or visualize data in charts, allowing you to monitor and analyze activity in your private mobile network. In this how-to guide, you'll learn how to enable Log Analytics for a packet core instance.
+
+> [!IMPORTANT]
+> Log Analytics is part of Azure Monitor and is chargeable. [Estimate costs](monitor-private-5g-core-with-log-analytics.md#estimate-costs) provides information on estimating the cost of using Log Analytics to monitor your private mobile network. You shouldn't enable Log Analytics if you don't want to incur any costs. If you don't enable Log Analytics, you can still monitor your packet core instances from the local network using the [packet core dashboards](packet-core-dashboards.md).
+
+## Prerequisites
+
+- Identify the Kubernetes - Azure Arc resource representing the Azure Arc-enabled Kubernetes cluster on which your packet core instance is running.
+- Ensure you have [Contributor](../role-based-access-control/built-in-roles.md#contributor) role assignment on the Azure subscription containing the Kubernetes - Azure Arc resource.
+- Ensure your local machine has kubectl access to the Azure Arc-enabled Kubernetes cluster.
+
+## Create an Azure Monitor extension
+
+Follow the steps in [Azure Monitor Container Insights for Azure Arc-enabled Kubernetes clusters](../azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md) to create an Azure Monitor extension for the Azure Arc-enabled Kubernetes cluster. Ensure that you use the instructions for the Azure CLI, and that you choose **Option 4 - On Azure Stack Edge** when you carry out [Create extension instance using Azure CLI](../azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md#create-extension-instance-using-azure-cli).
+
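+For orientation, the extension creation command described in that article follows the general shape below. This is a sketch only, not a substitute for the linked instructions; the angle-bracketed values are placeholders, and Azure Stack Edge clusters need the additional `--configuration-settings` values listed under Option 4.
+
+```azurecli
+# Create the Azure Monitor Container Insights extension on the Arc-enabled cluster.
+# All angle-bracketed values are placeholders.
+az k8s-extension create \
+  --name azuremonitor-containers \
+  --cluster-name <arc-cluster-name> \
+  --resource-group <resource-group> \
+  --cluster-type connectedClusters \
+  --extension-type Microsoft.AzureMonitor.Containers \
+  --configuration-settings logAnalyticsWorkspaceResourceID=<workspace-resource-id>
+```
+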
+## Configure and deploy the ConfigMap
+
+In this step, you'll configure and deploy a ConfigMap that allows Container Insights to collect Prometheus metrics from the Azure Arc-enabled Kubernetes cluster.
+
+1. Copy the following YAML into a text editor and save it as *99-azure-monitoring-configmap.yml*.
+
+ ```yml
+ kind: ConfigMap
+ apiVersion: v1
+ data:
+ schema-version:
+      # String. Used by the agent to parse the config. Supported versions are {v1}. Configs with other schema versions
+      # will be rejected by the agent.
+ v1
+ config-version:
+      # String. Used by the customer to keep track of this config file's version in their source control/repository
+      # (max allowed 10 chars; other chars will be truncated)
+ ver1
+ log-data-collection-settings: |-
+ # Log data collection settings
+ # Any errors related to config map settings can be found in the KubeMonAgentEvents table in the Log Analytics
+ # workspace that the cluster is sending data to.
+
+ [log_collection_settings]
+ [log_collection_settings.stdout]
+          # In the absence of this configmap, default value for enabled is true
+ enabled = false
+          # The exclude_namespaces setting applies only if enabled is set to true.
+ # kube-system log collection is disabled by default in the absence of 'log_collection_settings.stdout'
+ # setting. If you want to enable kube-system, remove it from the following setting.
+          # If you want to continue to disable kube-system log collection, keep this namespace in the following setting
+ # and add any other namespace you want to disable log collection to the array.
+          # In the absence of this configmap, default value for exclude_namespaces = ["kube-system"].
+ exclude_namespaces = ["kube-system"]
+
+ [log_collection_settings.stderr]
+ # Default value for enabled is true
+ enabled = false
+          # The exclude_namespaces setting applies only if enabled is set to true.
+ # kube-system log collection is disabled by default in the absence of 'log_collection_settings.stderr'
+ # setting. If you want to enable kube-system, remove it from the following setting.
+          # If you want to continue to disable kube-system log collection, keep this namespace in the following setting
+ # and add any other namespace you want to disable log collection to the array.
+          # In the absence of this configmap, default value for exclude_namespaces = ["kube-system"].
+ exclude_namespaces = ["kube-system"]
+
+ [log_collection_settings.env_var]
+          # In the absence of this configmap, default value for enabled is true
+ enabled = false
+
+ [log_collection_settings.enrich_container_logs]
+          # In the absence of this configmap, default value for enrich_container_logs is false.
+ # When this is enabled (enabled = true), every container log entry (both stdout & stderr)
+ # will be enriched with container Name & container Image.
+ enabled = false
+
+ [log_collection_settings.collect_all_kube_events]
+          # In the absence of this configmap, default value for collect_all_kube_events is false.
+ # When the setting is set to false, only the kube events with !normal event type will be collected.
+ # When this is enabled (enabled = true), all kube events including normal events will be collected.
+ enabled = false
+
+ prometheus-data-collection-settings: |-
+ # Custom Prometheus metrics data collection settings
+ [prometheus_data_collection_settings.cluster]
+ # Cluster level scrape endpoint(s). These metrics will be scraped from agent's Replicaset (singleton)
+ # Any errors related to prometheus scraping can be found in the KubeMonAgentEvents table in the Log Analytics
+ # workspace that the cluster is sending data to.
+
+          # Interval specifying how often to scrape for metrics. This is a duration of time and can be specified for
+ # supporting settings by combining an integer value and time unit as a string value. Valid time units are ns,
+          # us (or µs), ms, s, m, h.
+ interval = "1m"
+
+ ## Uncomment the following settings with valid string arrays for prometheus scraping
+ fieldpass = ["subscribers_count", "amf_registered_subscribers", "amf_registered_subscribers_connected", "amf_connected_gnb", "subgraph_counts", "cppe_bytes_total", "amfcc_mm_initial_registration_failure", "amfcc_n1_auth_failure", "amfcc_n1_auth_reject", "amfn2_n2_pdu_session_resource_setup_request", "amfn2_n2_pdu_session_resource_setup_response", "amfn2_n2_pdu_session_resource_modify_request", "amfn2_n2_pdu_session_resource_modify_response", "amfn2_n2_pdu_session_resource_release_command", "amfn2_n2_pdu_session_resource_release_response", "amfcc_n1_service_reject", "amfn2_n2_pathswitch_request_failure", "amfn2_n2_handover_failure"]
+
+ #fielddrop = ["metric_to_drop"]
+
+ # An array of urls to scrape metrics from.
+ # urls = ["http://myurl:9101/metrics"]
+
+ # An array of Kubernetes services to scrape metrics from.
+ # kubernetes_services = ["http://my-service-dns.my-namespace:9102/metrics"]
+
+ # When monitor_kubernetes_pods = true, replicaset will scrape Kubernetes pods for the following prometheus
+ # annotations:
+ # - prometheus.io/scrape: Enable scraping for this pod
+ # - prometheus.io/scheme: If the metrics endpoint is secured then you will need to
+ # set this to `https` & most likely set the tls config.
+ # - prometheus.io/path: If the metrics path is not /metrics, define it with this annotation.
+ # - prometheus.io/port: If port is not 9102 use this annotation
+ monitor_kubernetes_pods = true
+
+ ## Restricts Kubernetes monitoring to namespaces for pods that have annotations set and are scraped using the
+ ## monitor_kubernetes_pods setting.
+ ## This will take effect when monitor_kubernetes_pods is set to true
+ ## ex: monitor_kubernetes_pods_namespaces = ["default1", "default2", "default3"]
+ # monitor_kubernetes_pods_namespaces = ["default1"]
+
+ [prometheus_data_collection_settings.node]
+ # Node level scrape endpoint(s). These metrics will be scraped from agent's DaemonSet running in every node in
+ # the cluster
+ # Any errors related to prometheus scraping can be found in the KubeMonAgentEvents table in the Log Analytics
+ # workspace that the cluster is sending data to.
+
+          # Interval specifying how often to scrape for metrics. This is a duration of time and can be specified for
+ # supporting settings by combining an integer value and time unit as a string value. Valid time units are ns,
+          # us (or µs), ms, s, m, h.
+ interval = "1m"
+
+ ## Uncomment the following settings with valid string arrays for prometheus scraping
+
+          # An array of urls to scrape metrics from. $NODE_IP (all upper case) will be substituted with the running
+          # node's IP address
+ # urls = ["http://$NODE_IP:9103/metrics"]
+
+ #fieldpass = ["metric_to_pass1", "metric_to_pass12"]
+
+ #fielddrop = ["metric_to_drop"]
+
+ metric_collection_settings: |-
+ # Metrics collection settings for metrics sent to Log Analytics and MDM
+ [metric_collection_settings.collect_kube_system_pv_metrics]
+          # In the absence of this configmap, default value for collect_kube_system_pv_metrics is false
+ # When the setting is set to false, only the persistent volume metrics outside the kube-system namespace will be
+ # collected
+ enabled = false
+ # When this is enabled (enabled = true), persistent volume metrics including those in the kube-system namespace
+ # will be collected
+
+ alertable-metrics-configuration-settings: |-
+ # Alertable metrics configuration settings for container resource utilization
+ [alertable_metrics_configuration_settings.container_resource_utilization_thresholds]
+          # The threshold (type float) will be rounded off to 2 decimal points
+ # Threshold for container cpu, metric will be sent only when cpu utilization exceeds or becomes equal to the
+ # following percentage
+ container_cpu_threshold_percentage = 95.0
+ # Threshold for container memoryRss, metric will be sent only when memory rss exceeds or becomes equal to the
+ # following percentage
+ container_memory_rss_threshold_percentage = 95.0
+ # Threshold for container memoryWorkingSet, metric will be sent only when memory working set exceeds or becomes
+ # equal to the following percentage
+ container_memory_working_set_threshold_percentage = 95.0
+
+ # Alertable metrics configuration settings for persistent volume utilization
+ [alertable_metrics_configuration_settings.pv_utilization_thresholds]
+ # Threshold for persistent volume usage bytes, metric will be sent only when persistent volume utilization
+ # exceeds or becomes equal to the following percentage
+ pv_usage_threshold_percentage = 60.0
+ integrations: |-
+ [integrations.azure_network_policy_manager]
+ collect_basic_metrics = false
+ collect_advanced_metrics = false
+ metadata:
+ name: container-azm-ms-agentconfig
+ namespace: kube-system
+ ```
+1. In a command line with kubectl access to the Azure Arc-enabled Kubernetes cluster, navigate to the folder containing the *99-azure-monitoring-configmap.yml* file and run the following command.
+
+ `kubectl apply -f 99-azure-monitoring-configmap.yml`
+
+   The configuration change can take a few minutes to take effect, and all omsagent pods in the cluster will restart. The restart is rolling, so not all of the omsagent pods restart at the same time. When the restarts are finished, a message similar to the following is displayed, including the result: `configmap "container-azm-ms-agentconfig" created`.
+
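+You can confirm that the ConfigMap was applied and watch the rolling restart with standard kubectl commands, for example:
+
+```bash
+# Confirm the ConfigMap exists in the kube-system namespace.
+kubectl get configmap container-azm-ms-agentconfig -n kube-system
+
+# Watch the pods in kube-system; the omsagent pods restart one at a time.
+kubectl get pods -n kube-system --watch
+```
+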
+## Run a query
+
+In this step, you'll run a query in the Log Analytics workspace to confirm that you can retrieve logs for the packet core instance.
+
+1. Sign in to the Azure portal at [https://aka.ms/AP5GCPortal](https://aka.ms/AP5GCPortal).
+1. Search for and select the Log Analytics workspace you used when creating the Azure Monitor extension in [Create an Azure Monitor extension](#create-an-azure-monitor-extension).
+1. Select **Logs** from the resource menu.
+ :::image type="content" source="media/log-analytics-workspace.png" alt-text="Screenshot of the Azure portal showing a Log Analytics workspace resource. The Logs option is highlighted.":::
+1. If it appears, select **X** to dismiss the **Queries** window.
+1. Select **Select scope**.
+
+ :::image type="content" source="media/enable-log-analytics-for-private-5g-core/select-scope.png" alt-text="Screenshot of the Log Analytics interface. The Select scope option is highlighted.":::
+
+1. Under **Select a scope**, deselect the Log Analytics workspace.
+1. Search for and select the **Kubernetes - Azure Arc** resource representing the Azure Arc-enabled Kubernetes cluster.
+1. Select **Apply**.
+
+ :::image type="content" source="media/enable-log-analytics-for-private-5g-core/select-kubernetes-cluster-scope.png" alt-text="Screenshot of the Azure portal showing the Select a scope screen. The search bar, Kubernetes - Azure Arc resource and Apply option are highlighted.":::
+
+1. Copy and paste the following query into the query window, and then select **Run**.
+
+ ```kusto
+ InsightsMetrics
+ | where Namespace == "prometheus"
+ | where Name == "amf_connected_gnb"
+ | extend Time=TimeGenerated
+ | extend GnBs=Val
+ | project GnBs, Time
+ ```
+
+ :::image type="content" source="media/enable-log-analytics-for-private-5g-core/run-query.png" alt-text="Screenshot of the Log Analytics interface. The Run option is highlighted." lightbox="media/enable-log-analytics-for-private-5g-core/run-query.png":::
+
+1. Verify that the results window displays the results of the query, showing how many gNodeBs have been connected to the packet core instance in the last 24 hours.
+
+ :::image type="content" source="media/enable-log-analytics-for-private-5g-core/query-results.png" alt-text="Screenshot of the results window displaying results from a query.":::
+
+## Next steps
+
+- [Learn more about monitoring Azure Private 5G Core using Log Analytics](monitor-private-5g-core-with-log-analytics.md)
+- [Learn more about Log Analytics in Azure Monitor](../azure-monitor/logs/log-analytics-overview.md)
private-5g-core How To Guide Deploy A Private Mobile Network Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/how-to-guide-deploy-a-private-mobile-network-azure-portal.md
Title: Deploy a private mobile network
+ Title: Deploy a private mobile network - Azure portal
description: This how-to guide shows how to deploy a private mobile network through Azure Private 5G Core Preview using the Azure portal
Private mobile networks provide high performance, low latency, and secure connectivity for 5G Internet of Things (IoT) devices. In this how-to guide, you'll use the Azure portal to deploy a private mobile network to match your enterprise's requirements.
-You'll create the following resources as part of this how-to guide:
--- The Mobile Network resource representing your private mobile network as a whole.-- (Optionally) SIM resources representing the physical SIMs or eSIMs that will be served by the private mobile network.- ## Prerequisites - Complete all of the steps in [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md). - Ensure you can sign in to the Azure portal using an account with access to the active subscription you identified in [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md). This account must have the built-in Contributor or Owner role at the subscription scope.-- Collect all of the information listed in [Collect the required information to deploy a private mobile network - Azure portal](collect-required-information-for-private-mobile-network.md).-- If you decided when collecting the information in [Collect the required information to deploy a private mobile network - Azure portal](collect-required-information-for-private-mobile-network.md) that you wanted to provision SIMs using a JSON file as part of deploying your private mobile network, you must have prepared this file and made it available on the machine you'll use to access the Azure portal. For more information on the file format, see [Provision SIM resources through the Azure portal using a JSON file](collect-required-information-for-private-mobile-network.md#provision-sim-resources-through-the-azure-portal-using-a-json-file).
+- Collect all of the information listed in [Collect the required information to deploy a private mobile network - Azure portal](collect-required-information-for-private-mobile-network.md). You may also need to take the following steps based on the decisions you made when collecting this information.
+
+ - If you decided you wanted to provision SIMs using a JSON file, ensure you've prepared this file and made it available on the machine you'll use to access the Azure portal. For more information on the file format, see [JSON file format for provisioning SIMs](collect-required-information-for-private-mobile-network.md#json-file-format-for-provisioning-sims).
+ - If you decided you want to use the default service and SIM policy, identify the name of the data network to which your private mobile network will connect.
-## Create the Mobile Network and (optionally) SIM resources
-In this step, you'll create the Mobile Network resource representing your private mobile network as a whole. You can also provision one or more SIMs.
+## Deploy your private mobile network
+In this step, you'll create the Mobile Network resource representing your private mobile network as a whole. You can also provision one or more SIMs and/or create the default service and SIM policy.
1. Sign in to the Azure portal at [https://aka.ms/AP5GCPortal](https://aka.ms/AP5GCPortal). 1. In the **Search** bar, type *mobile networks* and then select the **Mobile Networks** service from the results that appear.
- :::image type="content" source="media/mobile-networks-search.png" alt-text="Screenshot of the Azure portal showing a search for the Mobile Networks service." lightbox="media/mobile-networks-search.png":::
+ :::image type="content" source="media/mobile-networks-search.png" alt-text="Screenshot of the Azure portal showing a search for the Mobile Networks service.":::
1. On the **Mobile Networks** page, select **Create**.
In this step, you'll create the Mobile Network resource representing your privat
:::image type="content" source="media/how-to-guide-deploy-a-private-mobile-network-azure-portal/create-private-mobile-network-basics-tab.png" alt-text="Screenshot of the Azure portal showing the Basics configuration tab."::: 1. On the **SIMs** configuration tab, select your chosen input method by selecting the appropriate option next to **How would you like to input the SIMs information?**. You can then input the information you collected in [Collect SIM values](collect-required-information-for-private-mobile-network.md#collect-sim-values).-
- :::image type="content" source="media/how-to-guide-deploy-a-private-mobile-network-azure-portal/create-private-mobile-network-sims-tab.png" alt-text="Screenshot of the Azure portal showing the SIMs configuration tab.":::
-
+
- If you select **Upload JSON file**, the **Upload SIM profile configurations** field will appear. Use this field to upload your chosen JSON file. - If you select **Add manually**, a new set of fields will appear under **Enter SIM profile configurations**. Fill out the first row of these fields with the correct settings for the first SIM you want to provision. If you've got more SIMs you want to provision, add the settings for each of these SIMs to a new row. - If you decided that you don't want to provision any SIMs at this point, select **Add SIMs later**.
-1. Once you've selected the input method and provided information for any SIMs you want to provision, select **Review + create**.
+ :::image type="content" source="media/how-to-guide-deploy-a-private-mobile-network-azure-portal/create-private-mobile-network-sims-tab.png" alt-text="Screenshot of the Azure portal showing the SIMs configuration tab.":::
+
+1. If you want to use the default service and SIM policy, set **Do you wish to create a basic, default SIM policy and assign it these SIMs?** to **Yes**, and then enter the name of the data network into the **Data network name** field that appears.
+1. Select **Review + create**.
1. Azure will now validate the configuration values you've entered. You should see a message indicating that your values have passed validation. :::image type="content" source="media/how-to-guide-deploy-a-private-mobile-network-azure-portal/create-private-mobile-network-review-create-tab.png" alt-text="Screenshot of the Azure portal showing validated configuration for a private mobile network.":::
In this step, you'll create the Mobile Network resource representing your privat
:::image type="content" source="media/pmn-deployment-complete.png" alt-text="Screenshot of the Azure portal. It shows confirmation of the successful creation of a private mobile network.":::
- Select **Go to resource group**, and then check that your new resource group contains the correct **Mobile Network** resource, any **SIM** resources, and a default **Service** resource named **Allow-all-traffic**.
+ Select **Go to resource group**, and then check that your new resource group contains the correct **Mobile Network** resource. It may also contain the following, depending on the choices you made during the procedure.
+
+ - One or more **SIM** resources (if you provisioned any).
+ - **Service**, **SIM Policy**, **Data Network**, and **Slice** resources (if you decided to use the default service and SIM policy).
- :::image type="content" source="media/pmn-deployment-resource-group.png" alt-text="Screenshot of the Azure portal showing a resource group containing Mobile Network and Service resources.":::
+ :::image type="content" source="media/pmn-deployment-resource-group.png" alt-text="Screenshot of the Azure portal showing a resource group containing Mobile Network, SIM, Service, SIM policy, Data Network, and Slice resources.":::
## Next steps
-You can either begin designing policy control to determine how your private mobile network will handle traffic, or you can start adding sites to your private mobile network.
+You can either begin designing policy control to determine how your private mobile network will handle traffic, or you can start adding sites to your private mobile network.
- [Learn more about designing the policy control configuration for your private mobile network](policy-control.md) - [Collect the required information for a site](collect-required-information-for-a-site.md)
private-5g-core Monitor Private 5G Core With Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/monitor-private-5g-core-with-log-analytics.md
+
+ Title: Monitor Azure Private 5G Core Preview with Log Analytics
+description: Information on using Log Analytics to monitor and analyze activity in your private mobile network.
++++ Last updated : 03/08/2022+++
+# Monitor Azure Private 5G Core Preview with Log Analytics
+
+Log Analytics is a tool in the Azure portal used to edit and run log queries with data in Azure Monitor Logs. You can write queries to retrieve records or visualize data in charts, allowing you to monitor and analyze activity in your private mobile network.
+
+## Enable Log Analytics
+
+You'll need to carry out the steps in [Enabling Log Analytics for Azure Private 5G Core](enable-log-analytics-for-private-5g-core.md) before you can use Log Analytics with Azure Private 5G Core.
+
+> [!IMPORTANT]
+> Log Analytics is part of Azure Monitor and is chargeable. [Estimate costs](#estimate-costs) provides information on estimating the cost of using Log Analytics to monitor your private mobile network. You shouldn't enable Log Analytics if you don't want to incur any costs. If you don't enable Log Analytics, you can still monitor your packet core instances from the local network using the [packet core dashboards](packet-core-dashboards.md).
+
+## Access Log Analytics for a packet core instance
+
+Once you've enabled Log Analytics, you can begin working with it in the Azure portal. Navigate to the Log Analytics workspace you assigned to the Kubernetes cluster on which a packet core instance is running. Select **Logs** from the left-hand menu.
++
+You'll then be shown the Log Analytics tool where you can enter your queries.
++
+For detailed information on using the Log Analytics tool, see [Overview of Log Analytics in Azure Monitor](../azure-monitor/logs/log-analytics-overview.md).
+
+## Construct queries
+
+You can find a tutorial for writing queries using the Log Analytics tool at [Get started with log queries in Azure Monitor](../azure-monitor/logs/get-started-queries.md). Each packet core instance streams the following logs to the Log Analytics tool. You can use these logs to construct queries that will allow you to monitor your private mobile network. You'll need to run all queries at the scope of the **Kubernetes - Azure Arc** resource that represents the Kubernetes cluster on which your packet core instance is running.
+
+| Log name | Description |
+|--|--|
+| subscribers_count | Number of successfully provisioned SIMs. |
+| amf_registered_subscribers | Number of currently registered SIMs. |
+| amf_connected_gnb | Number of gNodeBs that are currently connected to the Access and Mobility Management Function (AMF). |
+| subgraph_counts | Number of active PDU sessions being handled by the User Plane Function (UPF). |
+| cppe_bytes_total | Total number of bytes received or transmitted by the UPF at each interface since the UPF last restarted. The value is given as a 64-bit unsigned integer. |
+| amfcc_mm_initial_registration_failure | Total number of failed initial registration attempts handled by the AMF. |
+| amfcc_n1_auth_failure | Counter of Authentication Failure Non-Access Stratum (NAS) messages. The Authentication Failure NAS message is sent by the user equipment (UE) to the AMF to indicate that authentication of the network has failed. |
+| amfcc_n1_auth_reject | Counter of Authentication Reject NAS messages. The Authentication Reject NAS message is sent by the AMF to the UE to indicate that the authentication procedure has failed and that the UE shall abort all activities. |
+| amfn2_n2_pdu_session_resource_setup_request | Total number of PDU SESSION RESOURCE SETUP REQUEST Next Generation Application Protocol (NGAP) messages received by the AMF. |
+| amfn2_n2_pdu_session_resource_setup_response | Total number of PDU SESSION RESOURCE SETUP RESPONSE NGAP messages received by the AMF. |
+| amfn2_n2_pdu_session_resource_modify_request | Total number of PDU SESSION RESOURCE MODIFY REQUEST NGAP messages received by the AMF. |
+| amfn2_n2_pdu_session_resource_modify_response | Total number of PDU SESSION RESOURCE MODIFY RESPONSE NGAP messages received by the AMF. |
+| amfn2_n2_pdu_session_resource_release_command | Total number of PDU SESSION RESOURCE RELEASE COMMAND NGAP messages received by the AMF. |
+| amfn2_n2_pdu_session_resource_release_response | Total number of PDU SESSION RESOURCE RELEASE RESPONSE NGAP messages received by the AMF. |
+| amfcc_n1_service_reject | Total number of Service reject NAS messages received by the AMF. |
+| amfn2_n2_pathswitch_request_failure | Total number of PATH SWITCH REQUEST FAILURE NGAP messages received by the AMF. |
+| amfn2_n2_handover_failure | Total number of HANDOVER FAILURE NGAP messages received by the AMF. |
+
+
+## Example queries
+
+The following are some example queries you can run to retrieve logs relating to Key Performance Indicators (KPIs) for your private mobile network. You should run all of these queries at the scope of the **Kubernetes - Azure Arc** resource that represents the Kubernetes cluster on which your packet core instance is running.
+
+### PDU sessions
+
+```kusto
+InsightsMetrics
+ | where Namespace == "prometheus"
+ | where Name == "subgraph_counts"
+ | summarize PduSessions=max(Val) by Time=TimeGenerated
+```
+
+### Registered UEs
+
+```kusto
+let Time = InsightsMetrics
+ | where Namespace == "prometheus"
+ | summarize by Time=TimeGenerated;
+let RegisteredDevices = InsightsMetrics
+    | where Namespace == "prometheus"
+ | where Name == "amf_registered_subscribers"
+ | summarize by RegisteredDevices=Val, Time=TimeGenerated;
+Time
+ | join kind=leftouter (RegisteredDevices) on Time
+ | project Time, RegisteredDevices
+```
+
+### Connected gNodeBs
+
+```kusto
+InsightsMetrics
+ | where Namespace == "prometheus"
+ | where Name == "amf_connected_gnb"
+ | extend Time=TimeGenerated
+ | extend GnBs=Val
+ | project GnBs, Time
+```
+
+## Log Analytics dashboards
+
+Log Analytics dashboards can visualize all of your saved log queries, giving you the ability to find, correlate, and share data about your private mobile network.
+
+You can find information on how to create a Log Analytics dashboard in [Create and share dashboards of Log Analytics data](../azure-monitor/visualize/tutorial-logs-dashboards.md).
+
+## Estimate costs
+
+Log Analytics will ingest an average of 8 GB of data a day for each log streamed to it by a single packet core instance. [Monitor usage and estimated costs in Azure Monitor](../azure-monitor/usage-estimated-costs.md) provides information on how to estimate the cost of using Log Analytics to monitor Azure Private 5G Core.
+
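+Once data is flowing, you can check the volume actually being ingested with a query against the **Usage** table, run at the scope of your Log Analytics workspace. This is a minimal sketch; the `Quantity` column is reported in megabytes, so the query divides by 1,000 to show gigabytes.
+
+```kusto
+Usage
+| where TimeGenerated > ago(30d)
+| where IsBillable == true
+| summarize BillableDataGB = sum(Quantity) / 1000 by DataType
+| sort by BillableDataGB desc
+```
+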
+## Next steps
+- [Enable Log Analytics for Azure Private 5G Core](enable-log-analytics-for-private-5g-core.md)
+- [Learn more about Log Analytics in Azure Monitor](../azure-monitor/logs/log-analytics-overview.md)
private-5g-core Packet Core Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/packet-core-dashboards.md
The *packet core dashboards* provide a flexible way to monitor key statistics re
The packet core dashboards are powered by *Grafana*, an open-source, metric analytics and visualization suite. For more information, see the [Grafana documentation](https://grafana.com/docs/grafana/v6.1/).
+## Access the packet core dashboards
+
+To sign in to the packet core dashboards:
+
+1. In your browser, enter https://*\<LocalMonitoringIP\>*/grafana, where *\<LocalMonitoringIP\>* is the IP address for accessing the local monitoring tools that you set up in [Management network](complete-private-mobile-network-prerequisites.md#management-network).
+
+ :::image type="content" source="media\packet-core-dashboards\grafana-sign-in.png" alt-text="Screenshot of the Grafana sign in page, with fields for the username and password.":::
+
+1. Sign in using your credentials.
+
+   If you're accessing the packet core dashboards for the first time after installing the packet core instance, fill in the fields with the default username and password shown below. Afterwards, follow the prompts to set up a new password that you'll use for subsequent sign-ins.
+
+ - **Email or username**: *admin*
+ - **Password**: *admin*
+
+Once you're signed in to the packet core dashboards, you can hover over your user icon in the left pane to access the options to sign out or change your password.
+ ## Use the packet core dashboards We'll go through the common concepts and operations you'll need to understand before you can use the packet core dashboards. If you need more information on using Grafana, see the [Grafana documentation](https://grafana.com/docs/grafana/v6.1/).
The packet core dashboards use the following types of panel. For all panels, you
:::image type="content" source="media/packet-core-dashboards/packet-core-table-panel.png" alt-text="Screenshot of a table panel in the packet core dashboards. The table displays information on currently active alerts.":::
-## Switching between dashboards
+## Switch between dashboards
You can access the lists of available dashboards and switch between them using the drop-down **dashboard links** on the upper right of each dashboard. Dashboards are grouped by the level of information that they provide.
You can choose to use the search bar to find a dashboard by name or select from
:::image type="content" source="media/packet-core-dashboards/packet-core-dashboard-picker-drop-down.png" alt-text="Screenshot showing the drop-down menu of the dashboard picker. A search bar is displayed, along with a list of available dashboards.":::
-## Adjusting the time range
+## Adjust the time range
The **Time picker** in the top right-hand corner of each packet core dashboard allows you to adjust the time range for which the dashboard will display statistics. You can use the time picker to retrieve diagnostics for historical problems. You can choose a relative time range (such as the last 15 minutes), or an absolute time range (such as statistics for a particular month). You can also use the **Refresh dashboard** icon to configure how regularly the statistics displayed on the dashboard will be updated. For detailed information on using the time range controls, see [Time range controls](https://grafana.com/docs/grafana/v6.1/reference/timerange/) in the Grafana documentation.
private-5g-core Policy Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/policy-control.md
When you first come to design the policy control configuration for your own priv
1. [Collect the required information for a SIM policy](collect-required-information-for-sim-policy.md) 1. [Configure a SIM policy - Azure portal](configure-sim-policy-azure-portal.md)
-1. Optionally, activate the SIMs to allow them to use the private mobile network.
- ## Next steps - [Learn how to create an example set of policy control configuration](tutorial-create-example-set-of-policy-control-configuration.md)
private-5g-core Private 5G Core Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/private-5g-core-overview.md
Azure Private 5G Core is available as a native Azure service, offering the same
## Azure centralized monitoring
-Azure Private 5G Core is integrated with Log Analytics in Azure Monitor, as described in [Overview of Log Analytics in Azure Monitor](../azure-monitor/logs/log-analytics-overview.md). You can write queries to retrieve records or visualize data in charts. This lets you monitor and analyze activity in your private mobile network directly from the Azure portal.
+Azure Private 5G Core is integrated with Log Analytics in Azure Monitor, as described in [Monitor Azure Private 5G Core with Log Analytics](monitor-private-5g-core-with-log-analytics.md). You can write queries to retrieve records or visualize data in charts. This lets you monitor and analyze activity in your private mobile network directly from the Azure portal.
## Next steps
private-5g-core Provision Sims Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/provision-sims-azure-portal.md
# Provision SIMs for Azure Private 5G Core Preview - Azure portal
-*SIM resources* represent physical SIMs or eSIMs used by user equipment (UEs) served by the private mobile network. In this how-to guide, we'll provision new SIMs for an existing private mobile network.
+*SIM resources* represent physical SIMs or eSIMs used by user equipment (UEs) served by the private mobile network. In this how-to guide, we'll provision new SIMs for an existing private mobile network. You can also choose to assign static IP addresses and a SIM policy to the SIMs you provision.
## Prerequisites - Ensure you can sign in to the Azure portal using an account with access to the active subscription you identified in [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md). This account must have the built-in Contributor role at the subscription scope. - Identify the name of the Mobile Network resource corresponding to your private mobile network.-- For each SIM you want to provision, decide whether you want to assign a SIM policy to it. If you do, you must have already created the relevant SIM policies using the instructions in [Configure a SIM policy - Azure portal](configure-sim-policy-azure-portal.md). SIMs can't access your private mobile network unless they have an assigned SIM policy. - Decide on the method you'll use to provision SIMs. You can choose from the following: - Manually entering each provisioning value into fields in the Azure portal. This option is best if you're provisioning a few SIMs. - Importing a JSON file containing values for one or more SIM resources. This option is best if you're provisioning a large number of SIMs. You'll need a good JSON editor if you want to use this option.
+- For each SIM you want to provision, decide whether you want to assign a SIM policy to it. If you do, you must have already created the relevant SIM policies using the instructions in [Configure a SIM policy - Azure portal](configure-sim-policy-azure-portal.md). SIMs can't access your private mobile network unless they have an assigned SIM policy.
+- If you've configured static IP address allocation for your packet core instance(s), decide whether you want to assign a static IP address to any of the SIMs you're provisioning. If you have multiple sites in your private mobile network, you can assign a different static IP address for each site to the same SIM.
+
+ Each IP address must come from the pool you assigned for static IP address allocation when creating the relevant site, as described in [Collect data network values](collect-required-information-for-a-site.md#collect-data-network-values). For more information, see [Allocate User Equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md#allocate-user-equipment-ue-ip-address-pools).
+
+ If you're assigning a static IP address to a SIM, you'll also need the following information.
+
+ - The SIM policy to assign to the SIM. You won't be able to set a static IP address for a SIM without also assigning a SIM policy.
+ - The name of the data network the SIM will use.
+ - The site at which the SIM will use this static IP address.
## Collect the required information for your SIMs
To begin, collect the values in the following table for each SIM you want to pro
| The derived operator code (OPc). The OPc is taken from the SIM's Ki and the network's operator code (OP). The packet core instance uses it to authenticate a user using a standards-based algorithm. The OPc must be a 32-character string, containing hexadecimal characters only. | **Opc** | `operatorKeyCode` | | The type of device using this SIM. This value is an optional free-form string. You can use it as required to easily identify device types using the enterprise's private mobile network. | **Device type** | `deviceType` |
-## If applicable, create the JSON file
+## Create the JSON file
Only carry out this step if you decided in [Prerequisites](#prerequisites) to use a JSON file to provision your SIMs. Otherwise, you can skip to [Begin provisioning the SIMs in the Azure portal](#begin-provisioning-the-sims-in-the-azure-portal).
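For illustration, a file that provisions a single SIM might look like the following sketch. Only the `operatorKeyCode` and `deviceType` property names are confirmed by the table above; the other property names, and all of the values shown, are assumptions for illustration, so follow the linked file format reference for the authoritative schema.

```json
[
  {
    "simName": "SIM1",
    "integratedCircuitCardIdentifier": "8912345678901234566",
    "internationalMobileSubscriberIdentity": "001019990010001",
    "authenticationKey": "00112233445566778899AABBCCDDEEFF",
    "operatorKeyCode": "63bfa50ee6523365ff14c1f45f88737d",
    "deviceType": "Cellphone"
  }
]
```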
In this step, you'll enter provisioning values for your SIMs directly into the A
:::image type="content" source="media/provision-sims-azure-portal/new-sim-resource.png" alt-text="Screenshot of the Azure portal showing the configuration a new SIM resource." lightbox="media/provision-sims-azure-portal/new-sim-resource.png"::: 1. Repeat this entire step for any other SIMs that you want to provision.
-1. If you decided in [Prerequisites](#prerequisites) that you wanted to assign a SIM policy to any of your provisioned SIMs, move to [Assign a SIM policy](#assign-a-sim-policy). Otherwise, you've finished your provisioning.
## Provision SIMs using a JSON file In this step, you'll provision SIMs using a JSON file.
-1. In **Add SIMs** on the right, select **Browse** and then select the JSON file you created in [If applicable, create the JSON file](#if-applicable-create-the-json-file).
+1. In **Add SIMs** on the right, select **Browse** and then select the JSON file you created in [Create the JSON file](#create-the-json-file).
1. Select **Add**. If the **Add** button is greyed out, check your JSON file to confirm that it's correctly formatted. 1. The Azure portal will now begin deploying the SIMs. When the deployment is complete, select **Go to resource group**.
In this step, you'll provision SIMs using a JSON file.
:::image type="content" source="media/provision-sims-azure-portal/sims-list.png" alt-text="Screenshot of the Azure portal. It shows a list of currently provisioned SIMs for a private mobile network." lightbox="media/provision-sims-azure-portal/sims-list.png":::
-1. If you decided in [Prerequisites](#prerequisites) that you wanted to assign a SIM policy to any of your provisioned SIMs, move to [Assign a SIM policy](#assign-a-sim-policy). Otherwise, you've finished your provisioning.
+## Assign static IP addresses
-## Assign a SIM policy
+In this step, you'll assign static IP addresses to your SIMs. You can skip this step if you don't want to assign any static IP addresses.
-In this step, you'll assign a SIM policy to your SIMs. SIMs need an assigned SIM policy before they can use your private mobile network. You can skip this step and come back to it later if you don't want the SIMs to be able to access the private mobile network straight away.
+1. Search for and select the **Mobile Network** resource representing the private mobile network containing your SIMs.
-1. Search for and select the **Mobile Network** resource representing the private mobile network for which you want to provision SIMs.
+ :::image type="content" source="media/mobile-network-search.png" alt-text="Screenshot of the Azure portal. It shows the results of a search for a Mobile Network resource.":::
+
+1. In the resource menu, select **SIMs**.
+1. You'll see a list of provisioned SIMs in the private mobile network. Select each SIM to which you want to assign a static IP address, and then select **Assign Static IPs**.
+
+ :::image type="content" source="media/provision-sims-azure-portal/assign-static-ips.png" alt-text="Screenshot of the Azure portal showing a list of provisioned SIMs. Selected SIMs and the Assign Static I Ps button are highlighted.":::
+
+1. In **Assign static IP configurations** on the right, run the following steps for each SIM in turn. If your private mobile network has multiple sites and you want to assign a different static IP address for each site to the same SIM, you'll need to repeat these steps on the same SIM for each IP address.
+
+    1. Set **SIM name** to your chosen SIM.
+ 1. Set **SIM policy** to the SIM policy you want to assign to this SIM.
+ 1. Set **Slice** to **slice-1**.
+ 1. Set **Data network name** to the name of the data network this SIM will use.
+ 1. Set **Site** to the site at which the SIM will use this static IP address.
+ 1. Set **Static IP** to your chosen IP address.
+ 1. Select **Save static IP configuration**. The SIM will then appear in the list under **Number of pending changes**.
+
+ :::image type="content" source="media/provision-sims-azure-portal/assign-static-ip-configurations.png" alt-text="Screenshot of the Azure portal showing the Assign static I P configurations screen.":::
+
+1. Once you have assigned static IP addresses to all of your chosen SIMs, select **Assign static IP configurations**.
+1. The Azure portal will now begin deploying the configuration change. When the deployment is complete, select **Go to resource** (if you have assigned a static IP address to a single SIM) or **Go to resource group** (if you have assigned static IP addresses to multiple SIMs).
+
+ - If you assigned a static IP address to a single SIM, you'll be taken to that SIM resource. Check the **SIM policy** field in the **Management** section and the list under the **Static IP Configuration** section to confirm that the correct SIM policy and static IP address have been assigned successfully.
+    - If you assigned static IP addresses to multiple SIMs, you'll be taken to the resource group containing your private mobile network. Select the **Mobile Network** resource, and then select **SIMs** in the resource menu. Check the **SIM policy** column in the SIMs list to confirm that the correct SIM policy has been assigned to your chosen SIMs. You can then select an individual SIM and check the **Static IP Configuration** section to confirm that the correct static IP address has been assigned to that SIM.
+
+## Assign SIM policies
+
+In this step, you'll assign SIM policies to your SIMs. SIMs need an assigned SIM policy before they can use your private mobile network. You can skip this step and come back to it later if you don't want the SIMs to be able to access the private mobile network straight away. You can also skip this step for any SIMs to which you've assigned a static IP address, as these SIMs will already have an assigned SIM policy.
+
+1. Search for and select the **Mobile Network** resource representing the private mobile network containing your SIMs.
:::image type="content" source="media/mobile-network-search.png" alt-text="Screenshot of the Azure portal. It shows the results of a search for a Mobile Network resource."::: 1. In the resource menu, select **SIMs**. 1. You'll see a list of provisioned SIMs in the private mobile network. For each SIM policy you want to assign to one or more SIMs, do the following:
- 1. Tick the checkbox next to the name of each SIM to which you assign the SIM policy.
+
+ 1. Tick the checkbox next to the name of each SIM to which you want to assign the SIM policy.
1. Select **Assign SIM policy**. 1. In **Assign SIM policy** on the right, select your chosen SIM policy from the **SIM policy** drop-down menu. 1. Select **Assign SIM policy**.
In this step, you'll assign a SIM policy to your SIMs. SIMs need an assigned SIM
## Next steps -- [Activate your SIMs to allow them to use your private mobile network](activate-sims.md)
+- [Learn more about policy control](policy-control.md)
purview Reference Azure Purview Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/reference-azure-purview-glossary.md
A type of annotation used to identify an attribute of an asset or a column such
A classification rule is a set of conditions that determine how scanned data should be classified when content matches the specified pattern. ## Classified asset An asset where Azure Purview extracts schema and applies classifications during an automated scan. The scan rule set determines which assets get classified. If the asset is considered a candidate for classification and no classifications are applied during scan time, it's still considered a classified asset.
+## Collection
+An organization-defined grouping of assets, terms, annotations, and sources. Collections allow for easier fine-grained access control and discoverability of assets within a data catalog.
## Column pattern A regular expression included in a classification rule that represents the column names that you want to match. ## Contact
Glossary terms that are linked to other terms within the organization.
A single asset that represents many partitioned files or objects in storage. For example, Azure Purview stores partitioned Apache Spark output as a single resource set instead of unique assets for each individual file. ## Role Permissions assigned to a user within an Azure Purview instance. Roles, such as Azure Purview Data Curator or Azure Purview Data Reader, determine what can be done within the product.
+## Root collection
+A system-generated collection that has the same friendly name as the Azure Purview account. All assets belong to the root collection by default.
## Scan An Azure Purview process that examines a source or set of sources and ingests its metadata into the data catalog. Scans can be run manually or on a schedule using a scan trigger. ## Scan ruleset
An individual who defines the standards for a glossary term. They are responsibl
A definition of attributes included in a glossary term. Users can either use the system-defined term template or create their own to include custom attributes. ## Next steps
-To get started with Azure Purview, see [Quickstart: Create an Azure Purview account](create-catalog-portal.md).
+To get started with Azure Purview, see [Quickstart: Create an Azure Purview account](create-catalog-portal.md).
purview Register Scan Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-sql-database.md
This article outlines the process to register an Azure SQL data source in Azure
* Data lineage extraction is currently supported only for Stored procedure runs
+When scanning Azure SQL Database, Azure Purview supports:
+
+- Extracting technical metadata including:
+
+ - Server
+ - Database
+ - Schemas
+ - Tables including the columns
+ - Views including the columns
+   - Stored procedures (with lineage extraction enabled)
+   - Stored procedure runs (with lineage extraction enabled)
+
+When setting up a scan, you can further scope it after providing the database name by selecting the tables and views you need.
+ ### Known limitations * Azure Purview doesn't support over 300 columns in the Schema tab and it will show "Additional-Columns-Truncated" if there are more than 300 columns.
purview Register Scan Power Bi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant.md
To create and run a new scan using Azure runtime, perform the following steps:
:::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-create-service-principle.png" alt-text="Screenshot how to create a Service Principle.":::
-8. From Azure Active Directory dashboard, select newly created application and then select App registration. Assign the application the following delegated permissions and grant admin consent for the tenant:
+8. From the Azure Active Directory dashboard, select the newly created application, and then select **App permissions**. Assign the application the following delegated permissions and grant admin consent for the tenant:
    - Power BI Service Tenant.Read.All
    - Microsoft Graph openid
role-based-access-control Conditions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-overview.md
Attribute-based access control (ABAC) is an authorization system that defines ac
## What are role assignment conditions?
-Azure role-based access control (Azure RBAC) is an authorization system that helps you manage who has access to Azure resources, what they can do with those resources, and what areas they have access to. In most cases, Azure RBAC will provide the access management you need by using role definitions and role assignments. However, in some cases you might want to provide more fined-grained access management or simplify the management of hundreds of role assignments.
+[Azure role-based access control (Azure RBAC)](overview.md) is an authorization system that helps you manage who has access to Azure resources, what they can do with those resources, and what areas they have access to. In most cases, Azure RBAC will provide the access management you need by using role definitions and role assignments. However, in some cases you might want to provide more fine-grained access management or simplify the management of hundreds of role assignments.
Azure ABAC builds on Azure RBAC by adding role assignment conditions based on attributes in the context of specific actions. A *role assignment condition* is an additional check that you can optionally add to your role assignment to provide more fine-grained access control. A condition filters down permissions granted as a part of the role definition and role assignment. For example, you can add a condition that requires an object to have a specific tag to read the object. You cannot explicitly deny access to specific resources using conditions.
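For example, a role assignment condition can be attached when the assignment is created from the Azure CLI. The following is a minimal sketch, assuming the **Storage Blob Data Reader** role and a blob index tag named `Project`; the assignee, scope, and tag value are placeholders:

```azurecli
# Sketch: grant blob read access only when the blob carries the
# index tag Project=Cascade (all names and values are examples).
az role assignment create \
    --assignee "user@contoso.com" \
    --role "Storage Blob Data Reader" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>" \
    --condition "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<\$key_case_sensitive\$>] StringEquals 'Cascade'))" \
    --condition-version "2.0"
```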
search Search Howto Managed Identities Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-managed-identities-cosmos-db.md
Last updated 02/11/2022
# Set up an indexer connection to a Cosmos DB database using a managed identity
-This article describes how to set up an indexer connection to an Azure Cosmos DB database using a managed identity instead of providing credentials in the data source object connection string.
+This article describes how to set up an Azure Cognitive Search indexer connection to an Azure Cosmos DB database using a managed identity instead of providing credentials in the connection string.
-You can use a system-assigned managed identity or a user-assigned managed identity (preview).
+You can use a system-assigned managed identity or a user-assigned managed identity (preview). Managed identities are Azure AD logins and require Azure role assignments to access data in Cosmos DB.
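For example, the role assignment might look like the following sketch, which assumes a system-assigned identity on the search service and uses the **Cosmos DB Account Reader Role**; all resource names are placeholders:

```azurecli
# Look up the search service's system-assigned identity.
principalId=$(az search service show \
    --name my-search-service \
    --resource-group myResourceGroup \
    --query identity.principalId --output tsv)

# Grant that identity read access on the Cosmos DB account.
az role assignment create \
    --assignee "$principalId" \
    --role "Cosmos DB Account Reader Role" \
    --scope "$(az cosmosdb show --name my-cosmos-account --resource-group myResourceGroup --query id --output tsv)"
```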
Before learning more about this feature, we recommend that you understand what an indexer is and how to set up an indexer for your data source. For more information, see the following links:
search Search Howto Managed Identities Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-managed-identities-sql.md
Last updated 02/11/2022
# Set up an indexer connection to Azure SQL Database using a managed identity
-This article describes how to set up an indexer connection to Azure SQL Database using a managed identity instead of providing credentials in the data source object connection string.
+This article describes how to set up an Azure Cognitive Search indexer connection to Azure SQL Database using a managed identity instead of providing credentials in the connection string.
-You can use a system-assigned managed identity or a user-assigned managed identity (preview).
+You can use a system-assigned managed identity or a user-assigned managed identity (preview). Managed identities are Azure AD logins and require Azure role assignments to access data in Azure SQL.
Before learning more about this feature, we recommend that you understand what an indexer is and how to set up an indexer for your data source. For more information, see the following links:
search Search Howto Managed Identities Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-managed-identities-storage.md
Last updated 03/10/2022
# Set up a connection to an Azure Storage account using a managed identity
-This article describes how to set up an indexer connection to an Azure Storage account using a managed identity instead of providing credentials in the data source object connection string.
+This article describes how to set up an Azure Cognitive Search indexer connection to an Azure Storage account using a managed identity instead of providing credentials in the connection string.
-You can use a system-assigned managed identity or a user-assigned managed identity (preview).
+You can use a system-assigned managed identity or a user-assigned managed identity (preview). Managed identities are Azure AD logins and require Azure role assignments to access data in Azure Storage.
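For example, a role assignment for blob indexing might look like the following sketch, assuming a system-assigned identity and the **Storage Blob Data Reader** role; resource names are placeholders:

```azurecli
# Look up the search service's system-assigned identity.
principalId=$(az search service show \
    --name my-search-service \
    --resource-group myResourceGroup \
    --query identity.principalId --output tsv)

# Grant that identity read access to blob data in the storage account.
az role assignment create \
    --assignee "$principalId" \
    --role "Storage Blob Data Reader" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount"
```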
This article assumes familiarity with indexer concepts and configuration. If you're new to indexers, start with these links:
search Search Indexer Howto Access Ip Restricted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-howto-access-ip-restricted.md
On behalf of an indexer, a search service will issue outbound calls to an extern
This article explains how to find the IP address of your search service and configure an inbound IP rule on an Azure Storage account. While specific to Azure Storage, this approach also works for other Azure resources that use IP firewall rules for data access, such as Cosmos DB and Azure SQL.
-> [!NOTE]
-> IP firewall rules for a storage account are only effective if the storage account and the search service are in different regions. If your setup does not permit this, we recommend utilizing the [trusted service exception option](search-indexer-howto-access-trusted-service-exception.md) as an alternative.
+## Prerequisites
+
+The storage account and the search service must be in different regions. If your setup doesn't permit this, try the [trusted service exception](search-indexer-howto-access-trusted-service-exception.md) or [resource instance rule](../storage/common/storage-network-security.md#grant-access-from-azure-resource-instances-preview).
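As a preview of the steps that follow, the end result can also be scripted. This is a rough sketch with placeholder names and addresses:

```azurecli
# Find the search service's public IP address.
nslookup my-search-service.search.windows.net

# Add that IP address as an inbound rule on the storage account firewall.
az storage account network-rule add \
    --account-name mystorageaccount \
    --resource-group myResourceGroup \
    --ip-address 203.0.113.10
```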
## Get a search service IP address
search Search Limits Quotas Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-limits-quotas-capacity.md
Previously updated : 03/30/2022 Last updated : 03/31/2022 # Service limits in Azure Cognitive Search
Maximum limits on storage, workloads, and quantities of indexes and other object
| Resource | Free | Basic&nbsp;<sup>1</sup> | S1 | S2 | S3 | S3&nbsp;HD | L1 | L2 |
| -- | - | - | - | - | - | - | - | - |
| Maximum indexes |3 |5 or 15 |50 |200 |200 |1000 per partition or 3000 per service |10 |10 |
-| Maximum simple fields per index&nbsp;<sup>2</sup> |1000 |100 |1000 |1000 |1000 |1000 |1000 |1000 |
+| Maximum simple fields per index&nbsp;<sup>2</sup> |1000 |100 |3000 |3000 |3000 |1000 |1000 |1000 |
| Maximum complex collections per index |40 |40 |40 |40 |40 |40 |40 |40 |
| Maximum elements across all complex collections per document&nbsp;<sup>3</sup> |3000 |3000 |3000 |3000 |3000 |3000 |3000 |3000 |
| Maximum depth of complex fields |10 |10 |10 |10 |10 |10 |10 |10 |
Maximum limits on storage, workloads, and quantities of indexes and other object
| Maximum [scoring profiles](/rest/api/searchservice/add-scoring-profiles-to-a-search-index) per index |100 |100 |100 |100 |100 |100 |100 |100 |
| Maximum functions per profile |8 |8 |8 |8 |8 |8 |8 |8 |
-<sup>1</sup> Basic services created before December 2017 have lower limits (5 instead of 15) on indexes. Basic tier is the only SKU with a lower limit of 100 fields per index.
+<sup>1</sup> Basic services created before December 2017 have lower limits (5 instead of 15) on indexes. Basic tier is the only SKU with a lower limit of 100 fields per index. You might find some variation in maximum limits for Basic if your service is provisioned on a more powerful cluster. The limits here represent the common denominator. Indexes built to the above specifications will be portable across service tiers in any region.
<sup>2</sup> The upper limit on fields includes both first-level fields and nested subfields in a complex collection. For example, if an index contains 15 fields and has two complex collections with 5 subfields each, the field count of your index is 25.
search Search Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-overview.md
Cognitive Search has three basic network traffic patterns:
Inbound requests that target a search service endpoint consist of:
-+ Creating and managing indexes, indexers, and other objects
-+ Sending requests for indexing, running indexer jobs, executing skills
++ Creating or managing indexes, indexers, data sources, skillsets, or synonym lists
++ Running indexers and skillsets
+ Querying an index

For inbound access to data and operations on your search service, you can implement a progression of security measures, starting with [network security features](#service-access-and-authentication). You can create either inbound rules in an IP firewall, or private endpoints that fully shield your search service from the public internet.
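For example, inbound IP rules can be set directly on the service. A minimal sketch, assuming a recent Azure CLI version that supports the `--ip-rules` parameter (the service name and addresses are placeholders):

```azurecli
# Allow inbound requests only from the listed address and range.
az search service update \
    --name my-search-service \
    --resource-group myResourceGroup \
    --ip-rules "203.0.113.10,198.51.100.0/24"
```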
Independent of network security, all inbound requests must be authenticated. Key
### Outbound traffic
-Outbound requests from a search service to other applications are typically made by indexers for text-based indexing and some aspects of AI enrichment. Outbound requests include both read and write operations. Outbound requests are made by the search service on its own behalf, and on the behalf of an indexer or skillset.
+Outbound requests from a search service to other applications are typically made by indexers for text-based indexing and some aspects of AI enrichment. Outbound requests include both read and write operations.
+
+Outbound requests are made by the search service on its own behalf, and on the behalf of an indexer or skillset:
-+ Indexer connects to external data sources to read in data for indexing.
-+ Indexer writes to Azure Storage when creating knowledge stores, persisting cached enrichments, and persisting debug sessions.
-+ A custom skill connects to an Azure function or app to run external code that's hosted off-service. The request for external processing is sent during skillset execution.
++ Search connects to Azure Key Vault for a customer-managed key used to encrypt and decrypt sensitive data.
++ Indexers [connect to external data sources](search-indexer-securing-resources.md) to read in data for indexing.
++ Indexers write to Azure Storage when creating knowledge stores, persisting cached enrichments, and persisting debug sessions.
++ Custom skills connect to an Azure function or app to run external code that's hosted off-service. The request for external processing is sent during skillset execution.

Outbound connections can be made using a resource's full access connection string that includes a key or a database login, or an Azure AD login ([a managed identity](search-howto-managed-identities-data-sources.md)) if you're using Azure Active Directory.
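For the managed identity option, the search service first needs an Azure AD identity of its own. A minimal sketch, assuming a recent Azure CLI version that supports the `--identity-type` parameter (names are placeholders):

```azurecli
# Enable a system-assigned managed identity so outbound connections
# can authenticate with Azure AD instead of keys or connection strings.
az search service update \
    --name my-search-service \
    --resource-group myResourceGroup \
    --identity-type SystemAssigned
```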
If your Azure resource is behind a firewall, you'll need to [create rules that a
### Internal traffic
-Internal requests are secured and managed by Microsoft. Internal traffic consists of service-to-service calls for tasks like authentication and authorization through Azure Active Directory, diagnostic logging in Azure Monitor, private endpoint connections, and requests made to Cognitive Services for built-in skills.
+Internal requests are secured and managed by Microsoft. Internal traffic consists of:
+
++ Service-to-service calls for tasks like authentication and authorization through Azure Active Directory, diagnostic logging in Azure Monitor, and private endpoint connections.
++ Requests made to Cognitive Services APIs for built-in skills.

<a name="service-access-and-authentication"></a>
sentinel Ama Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/ama-migrate.md
Each organization will have different metrics of success and internal migration
**Include the following steps in your migration process**:
-1. Make sure that you've considered your environmental requirements and understand the gaps between the different agents. For more information, see [Plan your migration](../azure-monitor/agents/azure-monitor-agent-migration.md#plan-your-migration) in the Azure Monitor documentation.
+1. Make sure that you've considered your environmental requirements and understand the gaps between the different agents. For more information, see [When should I migrate](../azure-monitor/agents/azure-monitor-agent-migration.md#when-should-i-migrate-to-the-azure-monitor-agent) in the Azure Monitor documentation.
1. Run a proof of concept to test how the AMA sends data to Microsoft Sentinel, ideally in a development or sandbox environment.
sentinel Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/ci-cd.md
Before connecting your Microsoft Sentinel workspace to an external source contro
Microsoft Sentinel currently supports connections only with GitHub and Azure DevOps repositories.

-- An **Owner** role in the resource group that contains your Microsoft Sentinel workspace. The **Owner** role is required to create the connection between Microsoft Sentinel and your source control repository. If you are using Azure Lighthouse in your environment, you can instead have the combination of **User Access Administrator** and **Sentinel Contributor** roles to create the connection.
+- An **Owner** role in the resource group that contains your Microsoft Sentinel workspace. This role is required to create the connection between Microsoft Sentinel and your source control repository. If you are unable to use the Owner role in your environment, you can instead use the combination of **User Access Administrator** and **Sentinel Contributor** roles to create the connection.
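As an illustration of the alternative role pair, both roles could be granted at resource-group scope roughly as follows; the user and resource group are placeholders, and the built-in role appears in Azure as **Microsoft Sentinel Contributor**:

```azurecli
# Grant the two roles that together substitute for Owner.
az role assignment create \
    --assignee "devops-user@contoso.com" \
    --role "User Access Administrator" \
    --resource-group mySentinelResourceGroup

az role assignment create \
    --assignee "devops-user@contoso.com" \
    --role "Microsoft Sentinel Contributor" \
    --resource-group mySentinelResourceGroup
```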
### Maximum connections and deployments
service-fabric Service Fabric Technical Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-technical-overview.md
Azure Service Fabric is a distributed systems platform that makes it easy to package, deploy, and manage scalable and reliable microservices. Service Fabric is a container and process orchestrator that allows you to [host your clusters anywhere](service-fabric-deploy-anywhere.md): on Azure, in an on-premises datacenter, or on any cloud provider. You can use any framework to write your services and choose where to run the application from multiple environment choices. This article explains the terminology used by Service Fabric, so that you can understand the terms used in the documentation.
+The following related training videos detail the application, packaging, deployment, abstractions, and terminology used by Service Fabric:
+* [<b>Service Fabric concepts</b>](/shows/building-microservices-applications-on-azure-service-fabric/what-is-a-service-fabric-cluster)
+* [<b>Design time concepts</b>](/shows/building-microservices-applications-on-azure-service-fabric/design-time-concepts)
+* [<b>Runtime concepts</b>](/shows/building-microservices-applications-on-azure-service-fabric/run-time-concepts)
## Infrastructure concepts

**Cluster**: A network-connected set of virtual or physical machines into which your microservices are deployed and managed. Clusters can scale to thousands of machines.
storage Storage Blob Index How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-index-how-to.md
description: See examples of how to use blob index tags to categorize, manage, a
Previously updated : 06/14/2021 Last updated : 03/30/2022 - ms.devlang: csharp
Blob index tags categorize data in your storage account using key-value tag attr
To learn more about this feature along with known issues and limitations, see [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md).
-## Prerequisites
+## Upload a new blob with index tags
-# [Portal](#tab/azure-portal)
+This task can be performed by a [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner) or a security principal that has been given permission to the `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write` [Azure resource provider operation](../../role-based-access-control/resource-provider-operations.md#microsoftstorage) via a custom Azure role.
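If you'd rather not grant the full Storage Blob Data Owner role, a custom role can carry just the tag operation. A minimal sketch (the role name and scope are placeholders; note that the blob tag operations are data actions):

```azurecli
# Define a hypothetical custom role that can only write blob index tags.
az role definition create --role-definition '{
    "Name": "Blob Index Tag Writer",
    "Description": "Can write blob index tags.",
    "Actions": [],
    "DataActions": [
        "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write"
    ],
    "AssignableScopes": [ "/subscriptions/<subscription-id>" ]
}'
```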
-- An Azure subscription registered and approved for access
-- Access to the [Azure portal](https://portal.azure.com/)
+### [Portal](#tab/azure-portal)
-# [.NET v12 SDK](#tab/net)
+1. In the [Azure portal](https://portal.azure.com/), select your storage account.
-1. Set up your Visual Studio project to get started with the Azure Blob Storage client library v12 for .NET. To learn more, see [.NET Quickstart](storage-quickstart-blobs-dotnet.md)
+2. Navigate to the **Containers** option under **Data storage**, and select your container.
-2. In the NuGet Package Manager, Find the **Azure.Storage.Blobs** package, and install version **12.7.0** or newer to your project. You can also run the PowerShell command: `Install-Package Azure.Storage.Blobs -Version 12.7.0`
+3. Select the **Upload** button and browse your local file system to find a file to upload as a block blob.
- To learn how, see [Find and install a package](/nuget/consume-packages/install-use-packages-visual-studio#find-and-install-a-package).
+4. Expand the **Advanced** dropdown and go to the **Blob Index Tags** section.
-3. Add the following using statements to the top of your code file.
+5. Input the key/value blob index tags that you want applied to your data.
- ```csharp
- using Azure;
- using Azure.Storage.Blobs;
- using Azure.Storage.Blobs.Models;
- using Azure.Storage.Blobs.Specialized;
- using System;
- using System.Collections.Generic;
- using System.Threading.Tasks;
- ```
+6. Select the **Upload** button to upload the blob.
-
+ :::image type="content" source="media/storage-blob-index-concepts/blob-index-upload-data-with-tags.png" alt-text="Screenshot of the Azure portal showing how to upload a blob with index tags.":::
-## Upload a new blob with index tags
+### [PowerShell](#tab/azure-powershell)
-This task can be performed by a [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner) or a security principal that has been given permission to the `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write` [Azure resource provider operation](../../role-based-access-control/resource-provider-operations.md#microsoftstorage) via a custom Azure role.
+1. Sign in to your Azure subscription with the `Connect-AzAccount` command and follow the on-screen directions.
-# [Portal](#tab/azure-portal)
+ ```powershell
+ Connect-AzAccount
+ ```
-1. In the [Azure portal](https://portal.azure.com/), select your storage account.
+2. If your identity is associated with more than one subscription, set your active subscription. Then get the storage account context.
-2. Navigate to the **Containers** option under **Data storage**, and select your container.
+ ```powershell
+ $context = Get-AzSubscription -SubscriptionId <subscription-id>
+ Set-AzContext $context
+ $storageAccount = Get-AzStorageAccount -ResourceGroupName "<resource-group-name>" -AccountName "<storage-account-name>"
+ $ctx = $storageAccount.Context
+ ```
-3. Select the **Upload** button and browse your local file system to find a file to upload as a block blob.
+3. Upload a blob by using the `Set-AzStorageBlobContent` command. Set tags by using the `-Tag` parameter.
-4. Expand the **Advanced** dropdown and go to the **Blob Index Tags** section.
+ ```powershell
+ $containerName = "myContainer"
+ $file = "C:\demo-file.txt"
-5. Input the key/value blob index tags that you want applied to your data.
+ Set-AzStorageBlobContent -File $file -Container $containerName -Context $ctx -Tag @{"tag1" = "value1"; "tag2" = "value2" }
+ ```
-6. Select the **Upload** button to upload the blob.
+### [Azure CLI](#tab/azure-cli)
+1. Open the [Azure Cloud Shell](../../cloud-shell/overview.md), or if you've [installed](/cli/azure/install-azure-cli) the Azure CLI locally, open a command console application such as Windows PowerShell.
-# [.NET v12 SDK](#tab/net)
+2. Install the `storage-preview` extension.
-The following example shows how to create an append blob with tags set during creation.
+ ```azurecli
+ az extension add -n storage-preview
+ ```
-```csharp
-static async Task BlobIndexTagsOnCreateAsync()
-{
- var serviceClient = new BlobServiceClient(ConnectionString);
- var container = serviceClient.GetBlobContainerClient("mycontainer");
+3. If you're using Azure CLI locally, run the login command.
- // Create a container
- await container.CreateIfNotExistsAsync();
+ ```azurecli
+ az login
+ ```
- // Create an append blob
- AppendBlobClient appendBlobWithTags = container.GetAppendBlobClient("myAppendBlob0.logs");
+4. If your identity is associated with more than one subscription, then set your active subscription to the subscription of the storage account.
- // Blob index tags to upload
- AppendBlobCreateOptions appendOptions = new AppendBlobCreateOptions();
- appendOptions.Tags = new Dictionary<string, string>
- {
- { "Sealed", "false" },
- { "Content", "logs" },
- { "Date", "2020-04-20" }
- };
+ ```azurecli
+ az account set --subscription <subscription-id>
+ ```
- // Upload data with tags set on creation
- await appendBlobWithTags.CreateAsync(appendOptions);
-}
-```
+ Replace the `<subscription-id>` placeholder value with the ID of your subscription.
+5. Upload a blob by using the `az storage blob upload` command. Set tags by using the `--tags` parameter.
+
+ ```azurecli
+ az storage blob upload --account-name mystorageaccount --container-name myContainer --name demo-file.txt --file C:\demo-file.txt --tags tag1=value1 tag2=value2 --auth-mode login
+ ```
## Get, set, and update blob index tags
Getting blob index tags can be performed by a [Storage Blob Data Owner](../../ro
Setting and updating blob index tags can be performed by a [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner) or a security principal that has been given permission to the `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write` [Azure resource provider operation](../../role-based-access-control/resource-provider-operations.md#microsoftstorage) via a custom Azure role.
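Once a custom role such as the hypothetical "Blob Index Tag Writer" sketched earlier exists, it can be assigned like any built-in role; a placeholder example:

```azurecli
# Assign the hypothetical custom role at storage-account scope.
az role assignment create \
    --assignee "user@contoso.com" \
    --role "Blob Index Tag Writer" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount"
```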
-# [Portal](#tab/azure-portal)
+### [Portal](#tab/azure-portal)
1. In the [Azure portal](https://portal.azure.com/), select your storage account.
Setting and updating blob index tags can be performed by a [Storage Blob Data Ow
6. Select the **Save** button to confirm any updates to your blob.
-
-# [.NET v12 SDK](#tab/net)
-
-```csharp
-static async Task BlobIndexTagsExample()
-{
- var serviceClient = new BlobServiceClient(ConnectionString);
- var container = serviceClient.GetBlobContainerClient("mycontainer");
-
- // Create a container
- await container.CreateIfNotExistsAsync();
-
- // Create a new append blob
- AppendBlobClient appendBlob = container.GetAppendBlobClient("myAppendBlob1.logs");
- await appendBlob.CreateAsync();
-
- // Set or update blob index tags on existing blob
- Dictionary<string, string> tags = new Dictionary<string, string>
- {
- { "Project", "Contoso" },
- { "Status", "Unprocessed" },
- { "Sealed", "true" }
- };
- await appendBlob.SetTagsAsync(tags);
-
- // Get blob index tags
- Response<IDictionary<string, string>> tagsResponse = await appendBlob.GetTagsAsync();
- Console.WriteLine(appendBlob.Name);
- foreach (KeyValuePair<string, string> tag in tagsResponse.Value)
- {
- Console.WriteLine($"{tag.Key} = {tag.Value}");
- }
-
- // List blobs with all options returned including blob index tags
- await foreach (BlobItem blobItem in container.GetBlobsAsync(BlobTraits.All))
- {
- Console.WriteLine(Environment.NewLine + blobItem.Name);
- foreach (KeyValuePair<string, string> tag in blobItem.Tags)
- {
- Console.WriteLine($"{tag.Key} = {tag.Value}");
- }
- }
-
- // Delete existing blob index tags by replacing all tags
- var noTags = new Dictionary<string, string>();
- await appendBlob.SetTagsAsync(noTags);
-}
-```
+ :::image type="content" source="media/storage-blob-index-concepts/blob-index-get-set-tags.png" alt-text="Screenshot of the Azure portal showing how to get, set, update, and delete index tags on blobs.":::
+
+### [PowerShell](#tab/azure-powershell)
+
+1. Sign in to your Azure subscription with the `Connect-AzAccount` command and follow the on-screen directions.
+
+ ```powershell
+ Connect-AzAccount
+ ```
+
+2. If your identity is associated with more than one subscription, set your active subscription. Then get the storage account context.
+
+ ```powershell
+ $context = Get-AzSubscription -SubscriptionId <subscription-id>
+ Set-AzContext $context
+ $storageAccount = Get-AzStorageAccount -ResourceGroupName "<resource-group-name>" -AccountName "<storage-account-name>"
+ $ctx = $storageAccount.Context
+ ```
+
+3. To get the tags of a blob, use the `Get-AzStorageBlobTag` command and set the `-Blob` parameter to the name of the blob.
+
+ ```powershell
+ $containerName = "myContainer"
+ $blobName = "myBlob"
+ Get-AzStorageBlobTag -Context $ctx -Container $containerName -Blob $blobName
+ ```
+
+4. To set the tags of a blob, use the `Set-AzStorageBlobTag` command. Set the `-Blob` parameter to the name of the blob, and set the `-Tag` parameter to a collection of name and value pairs.
+
+ ```powershell
+ $containerName = "myContainer"
+ $blobName = "myBlob"
+ $tags = @{"tag1" = "value1"; "tag2" = "value2" }
+ Set-AzStorageBlobTag -Context $ctx -Container $containerName -Blob $blobName -Tag $tags
+ ```
+
+### [Azure CLI](#tab/azure-cli)
+
+1. Open the [Azure Cloud Shell](../../cloud-shell/overview.md), or if you've [installed](/cli/azure/install-azure-cli) the Azure CLI locally, open a command console application such as Windows PowerShell.
+
+2. Install the `storage-preview` extension.
+
+ ```azurecli
+ az extension add -n storage-preview
+ ```
+
+3. If you're using Azure CLI locally, run the login command.
+
+ ```azurecli
+ az login
+ ```
+
+4. If your identity is associated with more than one subscription, then set your active subscription to the subscription of the storage account.
+
+ ```azurecli
+ az account set --subscription <subscription-id>
+ ```
+
+ Replace the `<subscription-id>` placeholder value with the ID of your subscription.
++
+5. To get the tags of a blob, use the `az storage blob tag list` command and set the `--name` parameter to the name of the blob.
+
+ ```azurecli
+ az storage blob tag list --account-name mystorageaccount --container-name myContainer --name demo-file.txt --auth-mode login
+ ```
+
+6. To set the tags of a blob, use the `az storage blob tag set` command. Set the `--name` parameter to the name of the blob, and set the `--tags` parameter to a collection of name and value pairs.
+
+ ```azurecli
+ az storage blob tag set --account-name mystorageaccount --container-name myContainer --name demo-file.txt --tags tag1=value1 tag2=value2 --auth-mode login
+ ```
This task can be performed by a [Storage Blob Data Owner](../../role-based-acces
> [!NOTE] > You can't use index tags to retrieve previous versions. Tags for previous versions aren't passed to the blob index engine. For more information, see [Conditions and known issues](storage-manage-find-blobs.md#conditions-and-known-issues).
-# [Portal](#tab/azure-portal)
+### [Portal](#tab/azure-portal)
Within the Azure portal, the blob index tags filter automatically applies the `@container` parameter to scope your selected container. If you wish to filter and find tagged data across your entire storage account, use our REST API, SDKs, or tools.
Within the Azure portal, the blob index tags filter automatically applies the `@
5. Select the **Blob Index tags filter** button to add additional tag filters (up to 10).
-
-# [.NET v12 SDK](#tab/net)
-
-```csharp
-static async Task FindBlobsByTagsExample()
-{
- var serviceClient = new BlobServiceClient(ConnectionString);
- var container1 = serviceClient.GetBlobContainerClient("mycontainer");
- var container2 = serviceClient.GetBlobContainerClient("mycontainer2");
-
- // Blob index queries and selection
- var singleEqualityQuery = @"""Archive"" = 'false'";
- var andQuery = @"""Archive"" = 'false' AND ""Priority"" = '01'";
- var rangeQuery = @"""Date"" >= '2020-04-20' AND ""Date"" <= '2020-04-30'";
- var containerScopedQuery = @"@container = 'mycontainer' AND ""Archive"" = 'false'";
-
- var queryToUse = containerScopedQuery;
-
- // Create a container
- await container1.CreateIfNotExistsAsync();
- await container2.CreateIfNotExistsAsync();
-
- // Create append blobs
- var appendBlobWithTags0 = container1.GetAppendBlobClient("myAppendBlob00.logs");
- var appendBlobWithTags1 = container1.GetAppendBlobClient("myAppendBlob01.logs");
- var appendBlobWithTags2 = container1.GetAppendBlobClient("myAppendBlob02.logs");
- var appendBlobWithTags3 = container2.GetAppendBlobClient("myAppendBlob03.logs");
- var appendBlobWithTags4 = container2.GetAppendBlobClient("myAppendBlob04.logs");
- var appendBlobWithTags5 = container2.GetAppendBlobClient("myAppendBlob05.logs");
-
- // Blob index tags to upload
- CreateAppendBlobOptions appendOptions = new CreateAppendBlobOptions();
- appendOptions.Tags = new Dictionary<string, string>
- {
- { "Archive", "false" },
- { "Priority", "01" },
- { "Date", "2020-04-20" }
- };
-
- CreateAppendBlobOptions appendOptions2 = new CreateAppendBlobOptions();
- appendOptions2.Tags = new Dictionary<string, string>
- {
- { "Archive", "true" },
- { "Priority", "02" },
- { "Date", "2020-04-24" }
- };
-
- // Upload data with tags set on creation
- await appendBlobWithTags0.CreateAsync(appendOptions);
- await appendBlobWithTags1.CreateAsync(appendOptions);
- await appendBlobWithTags2.CreateAsync(appendOptions2);
- await appendBlobWithTags3.CreateAsync(appendOptions);
- await appendBlobWithTags4.CreateAsync(appendOptions2);
- await appendBlobWithTags5.CreateAsync(appendOptions2);
-
- // Find Blobs given a tags query
- Console.WriteLine($"Find Blob by Tags query: {queryToUse}");
-
- var blobs = new List<TaggedBlobItem>();
- await foreach (TaggedBlobItem taggedBlobItem in serviceClient.FindBlobsByTagsAsync(queryToUse))
- {
- blobs.Add(taggedBlobItem);
- }
-
- foreach (var filteredBlob in blobs)
- {
- Console.WriteLine($"BlobIndex result: ContainerName= {filteredBlob.ContainerName}, " +
- $"BlobName= {filteredBlob.Name}");
- }
-}
-```
+ :::image type="content" source="media/storage-blob-index-concepts/blob-index-tag-filter-within-container.png" alt-text="Screenshot of the Azure portal showing how to filter and find tagged blobs using index tags.":::
-
+### [PowerShell](#tab/azure-powershell)
-## Lifecycle management with blob index tag filters
+1. Sign in to your Azure subscription with the `Connect-AzAccount` command and follow the on-screen directions.
-# [Portal](#tab/azure-portal)
+ ```powershell
+ Connect-AzAccount
+ ```
-1. In the [Azure portal](https://portal.azure.com/), select your storage account.
+2. If your identity is associated with more than one subscription, set your active subscription. Then get the storage account context.
+
+ ```powershell
+ $context = Get-AzSubscription -SubscriptionId <subscription-id>
+ Set-AzContext $context
+ $storageAccount = Get-AzStorageAccount -ResourceGroupName "<resource-group-name>" -AccountName "<storage-account-name>"
+ $ctx = $storageAccount.Context
+ ```
+
+3. To find all blobs that match a specific blob tag, use the `Get-AzStorageBlobByTag` command.
+
+ ```powershell
+ $filterExpression = """tag1""='value1'"
+ Get-AzStorageBlobByTag -TagFilterSqlExpression $filterExpression -Context $ctx
+ ```
+
+4. To find blobs only in a specific container, include the container name in the `-TagFilterSqlExpression`.
+
+ ```powershell
+ $filterExpression = "@container='myContainer' AND ""tag1""='value1'"
+ Get-AzStorageBlobByTag -TagFilterSqlExpression $filterExpression -Context $ctx
+ ```
+
+### [Azure CLI](#tab/azure-cli)
+
+1. Open the [Azure Cloud Shell](../../cloud-shell/overview.md), or if you've [installed](/cli/azure/install-azure-cli) the Azure CLI locally, open a command console application such as Windows PowerShell.
+
+2. Install the `storage-preview` extension.
+
+ ```azurecli
+ az extension add -n storage-preview
+ ```
+
+3. If you're using Azure CLI locally, run the login command.
-2. Navigate to the **Lifecycle Management** option under **Blob Service**
+ ```azurecli
+ az login
+ ```
-3. Select *Add rule* and then fill out the Action set form fields
+4. If your identity is associated with more than one subscription, then set your active subscription to the subscription of the storage account.
-4. Select **Filter** set to add optional filter for prefix match and blob index match
+ ```azurecli
+ az account set --subscription <subscription-id>
+ ```
- :::image type="content" source="media/storage-blob-index-concepts/blob-index-match-lifecycle-filter-set.png" alt-text="Screenshot of the Azure portal showing how to add index tags for lifecycle management.":::
+ Replace the `<subscription-id>` placeholder value with the ID of your subscription.
-5. Select **Review + add** to review the rule settings
- :::image type="content" source="media/storage-blob-index-concepts/blob-index-lifecycle-management-example.png" alt-text="Screenshot of the Azure portal showing a lifecycle management rule with blob index tags filter example":::
+5. To find all blobs that match a specific blob tag, use the `az storage blob filter` command.
-6. Select **Add** to apply the new rule to the lifecycle management policy
+ ```azurecli
    az storage blob filter --account-name mystorageaccount --tag-filter "\"tag1\"='value1' and \"tag2\"='value2'" --auth-mode login
+ ```
-# [.NET v12 SDK](#tab/net)
+6. To find blobs only in a specific container, include the container name in the `--tag-filter` parameter.
-[Lifecycle management](./lifecycle-management-overview.md) policies are applied for each storage account at the control plane level. For .NET, install the [Microsoft Azure Management Storage Library](https://www.nuget.org/packages/Microsoft.Azure.Management.Storage/) version 16.0.0 or higher.
+ ```azurecli
    az storage blob filter --account-name mystorageaccount --tag-filter "\"@container\"='myContainer' and \"tag1\"='value1' and \"tag2\"='value2'" --auth-mode login
+ ```
stream-analytics Cicd Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/cicd-overview.md
Follow the steps in this guide to create a CI/CD pipeline for Stream Analytics.
Use Azure Stream Analytics tools for [Visual Studio Code](./quick-create-visual-studio-code.md) or [Visual Studio](stream-analytics-quick-create-vs.md) to [develop and test queries locally](develop-locally.md). You can also [export an existing job](visual-studio-code-explore-jobs.md#export-a-job-to-a-local-project) to a local project.
+ > [!NOTE]
+ > We strongly recommend using [**Stream Analytics tools for Visual Studio Code**](./quick-create-visual-studio-code.md) for the best local development experience. There are known feature gaps in Stream Analytics tools for Visual Studio 2019 (version 2.6.3000.0), and the Visual Studio tools won't be improved going forward.
+
2. Commit your Azure Stream Analytics projects to your source control system, like a Git repository.

3. Use [Azure Stream Analytics CI/CD tools](cicd-tools.md) to build the projects and generate Azure Resource Manager templates for the deployment.
stream-analytics Custom Deserializer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/custom-deserializer.md
Title: Tutorial - Custom .NET deserializers for Azure Stream Analytics cloud jobs
-description: This tutorial demonstrates how to create a custom .NET deserializer for an Azure Stream Analytics cloud job using Visual Studio.
+ Title: Custom .NET deserializers for Azure Stream Analytics cloud jobs
+description: This doc demonstrates how to create a custom .NET deserializer for an Azure Stream Analytics cloud job using Visual Studio.
Last updated 12/17/2020
-# Tutorial: Custom .NET deserializers for Azure Stream Analytics
+# Custom .NET deserializers for Azure Stream Analytics in Visual Studio
Azure Stream Analytics has [built-in support for three data formats](stream-analytics-parsing-json.md): JSON, CSV, and Avro. With custom .NET deserializers, you can read data from other formats such as [Protocol Buffer](https://developers.google.com/protocol-buffers/), [Bond](https://github.com/Microsoft/bond) and other user defined formats for both cloud and edge jobs.
stream-analytics Develop Locally https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/develop-locally.md
The environments in the following table support local development:
|[Visual Studio 2019](stream-analytics-tools-for-visual-studio-install.md) |Stream Analytics Tools is part of the Azure development and Data storage and processing workloads in Visual Studio. You can use Visual Studio to write custom C# user-defined functions and deserializers. To learn more, see [Create an Azure Stream Analytics job by using Visual Studio](stream-analytics-quick-create-vs.md).|
|[Command prompt or terminal](stream-analytics-tools-for-visual-studio-cicd.md)|The Azure Stream Analytics CI/CD NuGet package provides tools for Visual Studio project builds and local testing on an arbitrary machine. The Azure Stream Analytics CI/CD npm package provides tools for Visual Studio Code project builds (which generate an Azure Resource Manager template) on an arbitrary machine.|
+> [!NOTE]
> We strongly recommend using [**Stream Analytics tools for Visual Studio Code**](./quick-create-visual-studio-code.md) for the best local development experience. There are known feature gaps in Stream Analytics tools for Visual Studio 2019 (version 2.6.3000.0), and the Visual Studio tools won't be improved going forward.
+ ## Next steps * [Test Stream Analytics queries locally with sample data using Visual Studio Code](visual-studio-code-local-run.md)
stream-analytics Stream Analytics Add Inputs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-add-inputs.md
Stream Analytics has first-class integration with four kinds of resources as inp
These input resources can live in the same Azure subscription as your Stream Analytics job, or from a different subscription.
-You can use the [Azure portal](stream-analytics-quick-create-portal.md#configure-job-input), [Azure PowerShell](/powershell/module/az.streamanalytics/New-azStreamAnalyticsInput), [.NET API](/dotnet/api/microsoft.azure.management.streamanalytics.inputsoperationsextensions), [REST API](/rest/api/streamanalytics/2020-03-01/inputs), and [Visual Studio](stream-analytics-tools-for-visual-studio-install.md) to create, edit, and test Stream Analytics job inputs.
+You can use the [Azure portal](stream-analytics-quick-create-portal.md#configure-job-input), [Azure PowerShell](/powershell/module/az.streamanalytics/New-azStreamAnalyticsInput), [.NET API](/dotnet/api/microsoft.azure.management.streamanalytics.inputsoperationsextensions), [REST API](/rest/api/streamanalytics/2020-03-01/inputs), [Visual Studio](stream-analytics-tools-for-visual-studio-install.md), and [Visual Studio Code](./quick-create-visual-studio-code.md) to create, edit, and test Stream Analytics job inputs.
+
+> [!NOTE]
+> We strongly recommend using [**Stream Analytics tools for Visual Studio Code**](./quick-create-visual-studio-code.md) for the best local development experience. There are known feature gaps in Stream Analytics tools for Visual Studio 2019 (version 2.6.3000.0), and the Visual Studio tools won't be improved going forward.
## Stream and reference inputs As data is pushed to a data source, it's consumed by the Stream Analytics job and processed in real time. Inputs are divided into two types: data stream inputs and reference data inputs.
stream-analytics Stream Analytics Define Inputs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-define-inputs.md
Stream Analytics supports compression across all data stream input sources. Supp
You can use the [Azure portal](stream-analytics-quick-create-portal.md), [Visual Studio](stream-analytics-quick-create-vs.md), and [Visual Studio Code](quick-create-visual-studio-code.md) to add and view or edit existing inputs on your streaming job. You can also test input connections and test queries from sample data from the Azure portal, [Visual Studio](stream-analytics-vs-tools-local-run.md), and [Visual Studio Code](visual-studio-code-local-run.md). When you write a query, you list the input in the FROM clause. You can get the list of available inputs from the **Query** page in the portal. If you wish to use multiple inputs, you can `JOIN` them or write multiple `SELECT` queries.
+> [!NOTE]
> We strongly recommend using [**Stream Analytics tools for Visual Studio Code**](./quick-create-visual-studio-code.md) for the best local development experience. There are known feature gaps in Stream Analytics tools for Visual Studio 2019 (version 2.6.3000.0), and the Visual Studio tools won't be improved going forward.
## Stream data from Event Hubs
stream-analytics Stream Analytics Define Outputs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-define-outputs.md
Last updated 01/14/2022
An Azure Stream Analytics job consists of an input, query, and an output. There are several output types to which you can send transformed data. This article lists the supported Stream Analytics outputs. When you design your Stream Analytics query, refer to the name of the output by using the [INTO clause](/stream-analytics-query/into-azure-stream-analytics). You can use a single output per job, or multiple outputs per streaming job (if you need them) by adding multiple INTO clauses to the query.
-To create, edit, and test Stream Analytics job outputs, you can use the [Azure portal](stream-analytics-quick-create-portal.md#configure-job-output), [Azure PowerShell](stream-analytics-quick-create-powershell.md#configure-output-to-the-job), [.NET API](/dotnet/api/microsoft.azure.management.streamanalytics.ioutputsoperations), [REST API](/rest/api/streamanalytics/), and [Visual Studio](stream-analytics-quick-create-vs.md).
+To create, edit, and test Stream Analytics job outputs, you can use the [Azure portal](stream-analytics-quick-create-portal.md#configure-job-output), [Azure PowerShell](stream-analytics-quick-create-powershell.md#configure-output-to-the-job), [.NET API](/dotnet/api/microsoft.azure.management.streamanalytics.ioutputsoperations), [REST API](/rest/api/streamanalytics/), [Visual Studio](stream-analytics-quick-create-vs.md), and [Visual Studio Code](./quick-create-visual-studio-code.md).
+
+> [!NOTE]
+> We strongly recommend using [**Stream Analytics tools for Visual Studio Code**](./quick-create-visual-studio-code.md) for the best local development experience. There are known feature gaps in Stream Analytics tools for Visual Studio 2019 (version 2.6.3000.0), and the Visual Studio tools won't be improved going forward.
Some outputs types support [partitioning](#partitioning), and [output batch sizes](#output-batch-size) vary to optimize throughput. The following table shows features that are supported for each output type:
stream-analytics Stream Analytics Previews https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-previews.md
You can use the job diagram while testing your query locally to examine the inte
## Explore jobs in Visual Studio Code

Stream Analytics Explorer in the Visual Studio Code extension gives developers a lightweight experience for managing their Stream Analytics jobs. In Stream Analytics Explorer, you can easily manage your jobs, view job diagrams, and debug in Job Monitor.
-
-## Debug query steps in Visual Studio
-
-You can easily preview the intermediate row set on a data diagram when doing local testing in Azure Stream Analytics tools for Visual Studio.
--
-## Live data testing in Visual Studio
-
-Visual Studio tools for Azure Stream Analytics enhance the local testing feature that allows you to test you queries against live event streams from cloud sources such as Event Hub or IoT hub. Learn how to [Test live data locally using Azure Stream Analytics tools for Visual Studio](stream-analytics-live-data-local-testing.md).
--
stream-analytics Stream Analytics Tools For Visual Studio Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-tools-for-visual-studio-install.md
Visual Studio 2019 and Visual Studio 2017 support Azure Data Lake and Stream Ana
For more information on using the tools, see [Quickstart: Create an Azure Stream Analytics job by using Visual Studio](stream-analytics-quick-create-vs.md).
+> [!NOTE]
+> We strongly recommend using [**Stream Analytics tools for Visual Studio Code**](./quick-create-visual-studio-code.md) for the best local development experience. There are known feature gaps in Stream Analytics tools for Visual Studio 2019 (version 2.6.3000.0), and the Visual Studio tools won't be improved going forward.
+
## Install

Visual Studio Enterprise (Ultimate/Premium), Professional, and Community editions support the tools. Express edition and Visual Studio for Mac don't support them.
stream-analytics Visual Studio Code Custom Deserializer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/visual-studio-code-custom-deserializer.md
Title: Create custom .NET deserializers for Azure Stream Analytics cloud jobs using Visual Studio Code
+ Title: Tutorial - Create custom .NET deserializers for Azure Stream Analytics cloud jobs using Visual Studio Code
description: This tutorial demonstrates how to create a custom .NET deserializer for an Azure Stream Analytics cloud job using Visual Studio Code.
Last updated 12/22/2020
-# Create custom .NET deserializers for Azure Stream Analytics in Visual Studio Code
+# Tutorial: Custom .NET deserializers for Azure Stream Analytics in Visual Studio Code
Azure Stream Analytics has [built-in support for three data formats](stream-analytics-parsing-json.md): JSON, CSV, and Avro. With custom .NET deserializers, you can read data from other formats such as [Protocol Buffer](https://developers.google.com/protocol-buffers/), [Bond](https://github.com/Microsoft/bond) and other user defined formats for cloud jobs.
-## Custom .NET deserializers in Visual Studio Code
+This tutorial demonstrates how to create, test, and debug a custom .NET deserializer for an Azure Stream Analytics cloud job using Visual Studio Code. To learn how to create .NET deserializers in Visual Studio, see [Create .NET deserializers for Azure Stream Analytics jobs in Visual Studio](custom-deserializer.md).
-You can create, test and debug a custom .NET deserializer for an Azure Stream Analytics cloud job using Visual Studio Code.
+In this tutorial, you learn how to:
-### Prerequisites
+> [!div class="checklist"]
+> * Create a custom deserializer for protocol buffer.
+> * Create an Azure Stream Analytics job in Visual Studio Code.
+> * Configure your Stream Analytics job to use the custom deserializer.
+> * Run your Stream Analytics job locally to test and debug the custom deserializer.
+
+## Prerequisites
* Install the [.NET Core SDK](https://dotnet.microsoft.com/download) and restart Visual Studio Code.
* Use this [quickstart](quick-create-visual-studio-code.md) to learn how to create a Stream Analytics job using Visual Studio Code.
-### Create a custom deserializer
+## Create a custom deserializer
1. Open a terminal and run the following command to create a .NET class library in Visual Studio Code for your custom deserializer called **ProtobufDeserializer**.
You can create, test and debug a custom .NET deserializer for an Azure Stream An
4. Build the **ProtobufDeserializer** project.
-### Add an Azure Stream Analytics project
+## Add an Azure Stream Analytics project
-1. Open Visual Studio Code and select **Ctrl+Shift+P** to open the command palette. Then enter ASA and select **ASA: Create New Project**. Name it **ProtobufCloudDeserializer**.
+Open Visual Studio Code and select **Ctrl+Shift+P** to open the command palette. Then enter ASA and select **ASA: Create New Project**. Name it **ProtobufCloudDeserializer**.
-### Configure a Stream Analytics job
+## Configure a Stream Analytics job
1. Double-click **JobConfig.json**. Use the default configurations, except for the following settings:
You can create, test and debug a custom .NET deserializer for an Azure Stream An
|-|-|
|Select local file path|Click CodeLens to select < The file path for the downloaded sample protobuf input file>|
-### Execute the Stream Analytics job
+## Execute the Stream Analytics job
1. Open **ProtobufCloudDeserializer.asaql** and select **Run Locally** from CodeLens then choose **Use Local Input** from the dropdown list.
You can create, test and debug a custom .NET deserializer for an Azure Stream An
You have successfully implemented a custom deserializer for your Stream Analytics job! In this tutorial, you tested the custom deserializer locally with local input data. You can also test it [using live data input in the cloud](visual-studio-code-local-run-live-input.md). To run the job in the cloud, configure the input and output properly, and then submit the job to Azure from Visual Studio Code to run it using the custom deserializer you just implemented.
-### Debug your deserializer
+## Debug your deserializer
You can debug your .NET deserializer locally the same way you debug standard .NET code.
synapse-analytics Synapse Workspace Synapse Rbac Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/synapse-workspace-synapse-rbac-roles.md
Previously updated : 3/07/2022 Last updated : 03/31/2022
The following table describes the built-in roles and the scopes at which they ca
|Synapse Artifact Publisher|Create, read, update, and delete access to published code artifacts and their outputs. Doesn't include permission to run code or pipelines, or to grant access. </br></br>_Can read published artifacts and publish artifacts</br>Can view saved notebook, Spark job, and pipeline output_|Workspace|
|Synapse Artifact User|Read access to published code artifacts and their outputs. Can create new artifacts but can't publish changes or run code without additional permissions.|Workspace|
|Synapse Compute Operator |Submit Spark jobs and notebooks and view logs. Includes canceling Spark jobs submitted by any user. Requires additional use credential permissions on the workspace system identity to run pipelines, view pipeline runs and outputs. </br></br>_Can submit and cancel jobs, including jobs submitted by others</br>Can view Spark pool logs_|Workspace</br>Spark pool</br>Integration runtime|
-|Synapse Monitoring Operator |Read published code artifacts, including logs and outputs for notebooks and pipeline runs. Includes ability to list and view details of serverless SQL pools, Apache Spark pools, Data Explorer pools, and Integration runtimes. Requires additional permissions to run/cancel pipelines, Spark notebooks, and Spark jobs.|Workspace
|Synapse Credential User|Runtime and configuration-time use of secrets within credentials and linked services in activities like pipeline runs. To run pipelines, this role is required, scoped to the workspace system identity. </br></br>_Scoped to a credential, permits access to data via a linked service that is protected by the credential (also requires compute use permission)</br>Allows execution of pipelines protected by the workspace system identity credential (with additional compute use permission)_|Workspace </br>Linked Service</br>Credential|
|Synapse Linked Data Manager|Creation and management of managed private endpoints, linked services, and credentials. Can create managed private endpoints that use linked services protected by credentials.|Workspace|
|Synapse User|List and view details of SQL pools, Apache Spark pools, Integration runtimes, and published linked services and credentials. Doesn't include other published code artifacts. Can create new artifacts but can't run or publish without additional permissions. </br></br>_Can list and read Spark pools, Integration runtimes._|Workspace, Spark pool</br>Linked service</br>Credential|
Synapse Administrator|workspaces/read</br>workspaces/roleAssignments/write, dele
|Synapse Artifact Publisher|workspaces/read</br>workspaces/artifacts/read</br>workspaces/notebooks/write, delete</br>workspaces/sparkJobDefinitions/write, delete</br>workspaces/sqlScripts/write, delete</br>workspaces/kqlScripts/write, delete</br>workspaces/dataFlows/write, delete</br>workspaces/pipelines/write, delete</br>workspaces/triggers/write, delete</br>workspaces/datasets/write, delete</br>workspaces/libraries/write, delete</br>workspaces/linkedServices/write, delete</br>workspaces/credentials/write, delete</br>workspaces/notebooks/viewOutputs/action</br>workspaces/pipelines/viewOutputs/action|
|Synapse Artifact User|workspaces/read</br>workspaces/artifacts/read</br>workspaces/notebooks/viewOutputs/action</br>workspaces/pipelines/viewOutputs/action|
|Synapse Compute Operator |workspaces/read</br>workspaces/bigDataPools/useCompute/action</br>workspaces/bigDataPools/viewLogs/action</br>workspaces/integrationRuntimes/useCompute/action</br>workspaces/integrationRuntimes/viewLogs/action|
-|Synapse Monitoring Operator |workspaces/read</br>workspaces/artifacts/read</br>workspaces/notebooks/viewOutputs/action</br>workspaces/pipelines/viewOutputs/action</br>workspaces/integrationRuntimes/viewLogs/action</br>workspaces/bigDataPools/viewLogs/action|
|Synapse Credential User|workspaces/read</br>workspaces/linkedServices/useSecret/action</br>workspaces/credentials/useSecret/action|
|Synapse Linked Data Manager|workspaces/read</br>workspaces/managedPrivateEndpoint/write, delete</br>workspaces/linkedServices/write, delete</br>workspaces/credentials/write, delete|
|Synapse User|workspaces/read|
The following table lists Synapse actions and the built-in roles that permit the
Action|Role
--|--
-workspaces/read|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse SQL Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher</br>Synapse Artifact User</br>Synapse Compute Operator</br>Synapse Monitoring Operator </br>Synapse Credential User</br>Synapse Linked Data Manager</br>Synapse User
+workspaces/read|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse SQL Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher</br>Synapse Artifact User</br>Synapse Compute Operator </br>Synapse Credential User</br>Synapse Linked Data Manager</br>Synapse User
workspaces/roleAssignments/write, delete|Synapse Administrator
workspaces/managedPrivateEndpoint/write, delete|Synapse Administrator</br>Synapse Linked Data Manager
workspaces/bigDataPools/useCompute/action|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse Contributor</br>Synapse Compute Operator
-workspaces/bigDataPools/viewLogs/action|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse Contributor</br>Synapse Compute Operator</br>Synapse Monitoring Operator
+workspaces/bigDataPools/viewLogs/action|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse Contributor</br>Synapse Compute Operator
workspaces/integrationRuntimes/useCompute/action|Synapse Administrator</br>Synapse Contributor</br>Synapse Compute Operator
-workspaces/integrationRuntimes/viewLogs/action|Synapse Administrator</br>Synapse Contributor</br>Synapse Compute Operator</br>Synapse Monitoring Operator
-workspaces/artifacts/read|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse SQL Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher</br>Synapse Artifact User</br>Synapse Monitoring Operator
+workspaces/integrationRuntimes/viewLogs/action|Synapse Administrator</br>Synapse Contributor</br>Synapse Compute Operator
+workspaces/artifacts/read|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse SQL Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher</br>Synapse Artifact User
workspaces/notebooks/write, delete|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher
workspaces/sparkJobDefinitions/write, delete|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher
workspaces/sqlScripts/write, delete|Synapse Administrator</br>Synapse SQL Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher
workspaces/datasets/write, delete|Synapse Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher
workspaces/libraries/write, delete|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher
workspaces/linkedServices/write, delete|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse SQL Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher</br>Synapse Linked Data Manager
workspaces/credentials/write, delete|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse SQL Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher</br>Synapse Linked Data Manager
-workspaces/notebooks/viewOutputs/action|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher</br>Synapse Artifact User</br>Synapse Monitoring Operator
-workspaces/pipelines/viewOutputs/action|Synapse Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher</br>Synapse Artifact User</br>Synapse Monitoring Operator
+workspaces/notebooks/viewOutputs/action|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher</br>Synapse Artifact User
+workspaces/pipelines/viewOutputs/action|Synapse Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher</br>Synapse Artifact User
workspaces/linkedServices/useSecret/action|Synapse Administrator</br>Synapse Credential User
workspaces/credentials/useSecret/action|Synapse Administrator</br>Synapse Credential User
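You can inspect these mappings in your own workspace by listing the built-in Synapse RBAC role definitions and the actions each one contains. A minimal sketch using the Azure CLI, where the workspace name is a placeholder:

```azurecli-interactive
# List the built-in Synapse RBAC roles and the actions each permits
# (the workspace name is a placeholder)
az synapse role definition list \
    --workspace-name myworkspace \
    --output json
```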
The table below lists Synapse RBAC scopes and the roles that can be assigned at each scope:
Scope|Roles
--|--
-Workspace |Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse SQL Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher</br>Synapse Artifact User</br>Synapse Compute Operator </br>Synapse Monitoring Operator </br>Synapse Credential User</br>Synapse Linked Data Manager</br>Synapse User
+Workspace |Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse SQL Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher</br>Synapse Artifact User</br>Synapse Compute Operator </br>Synapse Credential User</br>Synapse Linked Data Manager</br>Synapse User
Apache Spark pool | Synapse Administrator </br>Synapse Contributor </br> Synapse Compute Operator
Integration runtime | Synapse Administrator </br>Synapse Contributor </br> Synapse Compute Operator
Linked service |Synapse Administrator </br>Synapse Credential User
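To grant one of these roles at workspace scope, you can create a role assignment with the Azure CLI. A sketch, where the workspace name and assignee are placeholders:

```azurecli-interactive
# Assign the Synapse Compute Operator role at workspace scope
# (workspace name and assignee are placeholders)
az synapse role assignment create \
    --workspace-name myworkspace \
    --role "Synapse Compute Operator" \
    --assignee user@contoso.com
```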
synapse-analytics Synapse Workspace Understand What Role You Need https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/synapse-workspace-understand-what-role-you-need.md
Previously updated : 03/23/2022 Last updated : 03/31/2022
Commit changes to a KQL script to the Git repo|Requires Git permissions on the repo|none
APACHE SPARK POOLS|
Create an Apache Spark pool|Azure Owner or Contributor on the workspace|
Monitor Apache Spark applications| Synapse User|read
-View the logs for notebook and job execution |Synapse Monitoring Operator|
+View the logs for notebook and job execution |Synapse Compute Operator|
Cancel any notebook or Spark job running on an Apache Spark pool|Synapse Compute Operator on the Apache Spark pool.|bigDataPools/useCompute
Create a notebook or job definition|Synapse User, or </br>Azure Owner, Contributor, or Reader on the workspace</br> *Additional permissions are required to run, publish, or commit changes*|read</br></br></br></br></br>
-List and open a published notebook or job definition, including reviewing saved outputs|Synapse Artifact User, Synapse Monitoring Operator on the workspace|artifacts/read
+List and open a published notebook or job definition, including reviewing saved outputs|Synapse Artifact User, Synapse Artifact Publisher, Synapse Contributor on the workspace|artifacts/read
Run a notebook and review its output, or submit a Spark job|Synapse Apache Spark Administrator, Synapse Compute Operator on the selected Apache Spark pool|bigDataPools/useCompute
Publish or delete a notebook or job definition (including output) to the service|Artifact Publisher on the workspace, Synapse Apache Spark Administrator|notebooks/write, delete
Commit changes to a notebook or job definition to the Git repo|Git permissions|none
PIPELINES, INTEGRATION RUNTIMES, DATAFLOWS, DATASETS & TRIGGERS|
Create, update, or delete an Integration runtime|Azure Owner or Contributor on the workspace|
-Monitor Integration runtime status|Synapse Monitoring Operator|read, integrationRuntimes/viewLogs
-Review pipeline runs|Synapse Monitoring Operator|read, pipelines/viewOutputs
+Monitor Integration runtime status|Synapse Compute Operator|read, integrationRuntimes/viewLogs
+Review pipeline runs|Synapse Artifact Publisher/Synapse Contributor|read, pipelines/viewOutputs
Create a pipeline |Synapse User</br>*Additional Synapse permissions are required to debug, add triggers, publish, or commit changes*|read
Create a dataflow or dataset |Synapse User</br>*Additional Synapse permissions are required to publish, or commit changes*|read
-List and open a published pipeline |Synapse Artifact User, Synapse Monitoring Operator | artifacts/read
+List and open a published pipeline |Synapse Artifact User | artifacts/read
Preview dataset data|Synapse User + Synapse Credential User on the WorkspaceSystemIdentity|
Debug a pipeline using the default Integration runtime|Synapse User + Synapse Credential User on the WorkspaceSystemIdentity credential|read, </br>credentials/useSecret
Create a trigger, including trigger now (requires permission to execute the pipeline)|Synapse User + Synapse Credential User on the WorkspaceSystemIdentity|read, credentials/useSecret/action
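To check which Synapse RBAC roles a user already holds before granting more, you can list their role assignments. A sketch with placeholder names:

```azurecli-interactive
# List existing Synapse RBAC role assignments for a user
# (workspace name and assignee are placeholders)
az synapse role assignment list \
    --workspace-name myworkspace \
    --assignee user@contoso.com \
    --output table
```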
synapse-analytics Develop Openrowset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-openrowset.md
Last updated 03/23/2022 -+ # How to use OPENROWSET using serverless SQL pool in Azure Synapse Analytics
synapse-analytics Sql Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/sql-authentication.md
CREATE LOGIN Mary WITH PASSWORD = '<strong_password>';
CREATE LOGIN [Mary@domainname.net] FROM EXTERNAL PROVIDER;
```
-Once the login exists, you can create users in the individual databases within the serverless SQL pool endpoint and grant required permissions to these users. To create a use, you can use the following syntax:
+Once the login exists, you can create users in the individual databases within the serverless SQL pool endpoint and grant required permissions to these users. To create a user, you can use the following syntax:
```sql
CREATE USER Mary FROM LOGIN Mary;
```
virtual-machines Capacity Reservation Associate Virtual Machine Scale Set Flex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-associate-virtual-machine-scale-set-flex.md
Virtual Machine Scale Sets have two modes:
To learn more about these modes, go to [Virtual Machine Scale Sets Orchestration Modes](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md).
-This content applies to the flexible orchestration mode. For uniform orchestration mode, go to [Associate a virtual machine scale set with flexible orchestration to a Capacity Reservation group](capacity-reservation-associate-virtual-machine-scale-set.md)
+This content applies to the flexible orchestration mode. For uniform orchestration mode, go to [Associate a virtual machine scale set with uniform orchestration to a Capacity Reservation group](capacity-reservation-associate-virtual-machine-scale-set.md).
> [!IMPORTANT]
virtual-machines Generation 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/generation-2.md
Generation 2 VMs use the new UEFI-based boot architecture rather than the BIOS-based architecture.
## Generation 2 VM sizes
Azure now offers generation 2 support for the following selected VM series:
+
| VM Series | Generation 1 | Generation 2 |
|--|--|--|
|[Av2-series](av2-series.md) | :heavy_check_mark: | :x: |
Azure now offers generation 2 support for the following selected VM series:
|[DCsv2-series](dcv2-series.md) | :x: | :heavy_check_mark: |
|[Dv2-series](dv2-dsv2-series.md) | :heavy_check_mark: | :x: |
|[DSv2-series](dv2-dsv2-series.md) | :heavy_check_mark: | :heavy_check_mark: |
-[Dv3-series](dv3-dsv3-series.md) | :heavy_check_mark: | :x: |
+|[Dv3-series](dv3-dsv3-series.md) | :heavy_check_mark: | :x: |
|[Dsv3-series](dv3-dsv3-series.md) | :heavy_check_mark: | :heavy_check_mark: |
-[Dv4-series](dv4-dsv4-series.md) | :heavy_check_mark: | :heavy_check_mark: |
+|[Dv4-series](dv4-dsv4-series.md) | :heavy_check_mark: | :heavy_check_mark: |
|[Dsv4-series](dv4-dsv4-series.md) | :heavy_check_mark: | :heavy_check_mark: |
-[Dav4-series](dav4-dasv4-series.md) | :heavy_check_mark: | :x: |
+|[Dav4-series](dav4-dasv4-series.md) | :heavy_check_mark: | :x: |
|[Dasv4-series](dav4-dasv4-series.md) | :heavy_check_mark: | :heavy_check_mark: |
-|[Ddv4-series](ddv4-ddsv4-series.md) |:heavy_check_mark: | :heavy_check_mark: |
-|[Ddsv4-series](ddv4-ddsv4-series.md) |:heavy_check_mark: | :heavy_check_mark: |
+|[Ddv4-series](ddv4-ddsv4-series.md) | :heavy_check_mark: | :heavy_check_mark: |
+|[Ddsv4-series](ddv4-ddsv4-series.md) | :heavy_check_mark: | :heavy_check_mark: |
|[Dasv5-series](dasv5-dadsv5-series.md) | :heavy_check_mark: | :heavy_check_mark: |
|[Dadsv5-series](dasv5-dadsv5-series.md) | :heavy_check_mark: | :heavy_check_mark: |
|[DCasv5-series](dcasv5-dcadsv5-series.md) | :x: | :heavy_check_mark: |
Azure now offers generation 2 support for the following selected VM series:
|[Dsv5-series](dv5-dsv5-series.md) | :heavy_check_mark: | :heavy_check_mark: |
|[Ddv5-series](ddv5-ddsv5-series.md) | :heavy_check_mark: | :heavy_check_mark: |
|[Ddsv5-series](ddv5-ddsv5-series.md) | :heavy_check_mark: | :heavy_check_mark: |
-[Ev3-series](ev3-esv3-series.md) | :heavy_check_mark: | :x: |
+|[Ev3-series](ev3-esv3-series.md) | :heavy_check_mark: | :x: |
|[Esv3-series](ev3-esv3-series.md) | :heavy_check_mark: | :heavy_check_mark: |
-[Ev4-series](ev4-esv4-series.md) | :heavy_check_mark:| :x: |
+|[Ev4-series](ev4-esv4-series.md) | :heavy_check_mark:| :x: |
|[Esv4-series](ev4-esv4-series.md) | :heavy_check_mark:| :heavy_check_mark: |
|[Eav4-series](eav4-easv4-series.md) | :heavy_check_mark: | :heavy_check_mark: |
|[Easv4-series](eav4-easv4-series.md) | :heavy_check_mark: | :heavy_check_mark: |
virtual-machines Nv Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nv-series.md
Previously updated : 02/03/2020 Last updated : 03/29/2022
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
+
+> [!IMPORTANT]
+> NV and NV_Promo series Azure virtual machines (VMs) will be retired on August 31st, 2023. For more information, see the [NV and NV_Promo retirement information](nv-series-retirement.md). For how to migrate your workloads to other VM sizes, see the [NV and NV_Promo series migration guide](nv-series-migration-guide.md).
+>
+> This retirement announcement doesn't apply to NVv3 and NVv4 series VMs.
+ The NV-series virtual machines are powered by [NVIDIA Tesla M60](https://images.nvidia.com/content/tesla/pdf/188417-Tesla-M60-DS-A4-fnl-Web.pdf) GPUs and NVIDIA GRID technology for desktop accelerated applications and virtual desktops where customers are able to visualize their data or simulations. Users are able to visualize their graphics intensive workflows on the NV instances to get superior graphics capability and additionally run single precision workloads such as encoding and rendering. NV-series VMs are also powered by Intel Xeon E5-2690 v3 (Haswell) CPUs. Each GPU in NV instances comes with a GRID license. This license gives you the flexibility to use an NV instance as a virtual workstation for a single user, or 25 concurrent users can connect to the VM for a virtual application scenario.
virtual-machines Nvv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nvv4-series.md
# NVv4-series
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
+**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
The NVv4-series virtual machines are powered by [AMD Radeon Instinct MI25](https://www.amd.com/en/products/professional-graphics/instinct-mi25) GPUs and AMD EPYC 7V12 (Rome) CPUs with a base frequency of 2.45GHz, all-cores peak frequency of 3.1GHz and single-core peak frequency of 3.3GHz. With the NVv4-series, Azure is introducing virtual machines with partial GPUs. Pick the right-sized virtual machine for GPU-accelerated graphics applications and virtual desktops, starting at 1/8th of a GPU with a 2 GiB frame buffer up to a full GPU with a 16 GiB frame buffer. NVv4 virtual machines currently support only the Windows guest operating system.
For more information on disk types, see [What disk types are available in Azure?
## Next steps
-Learn more about how [Azure compute units (ACU)](acu.md) can help you compare compute performance across Azure SKUs.
+Learn more about how [Azure compute units (ACU)](acu.md) can help you compare compute performance across Azure SKUs.
virtual-machines Image Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/image-builder.md
We will be using some pieces of information repeatedly, so we will create some variables to store that information.
```azurecli-interactive
# Resource group name - we are using myWinImgBuilderRG in this example
-$imageResourceGroup=myWinImgBuilderRG
+$imageResourceGroup='myWinImgBuilderRG'
# Region location
-$location=WestUS2
+$location='WestUS2'
# Run output name
-$runOutputName=aibWindows
+$runOutputName='aibWindows'
# name of the image to be created
-$imageName=aibWinImage
+$imageName='aibWinImage'
```
Create a variable for your subscription ID.
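One common way to capture it, shown here as a sketch (the exact snippet the article uses may differ), is to query the signed-in account with the CLI:

```azurecli-interactive
# Store the subscription ID of the signed-in account (one common approach)
$subscriptionID=(az account show --query id --output tsv)
```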
virtual-machines Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/get-started.md
In this section, you find documents about Microsoft Power BI integration into SAP data scenarios.
## Change Log
+- March 30, 2022: Adding information that Red Hat Gluster Storage is being phased out [GlusterFS on Azure VMs on RHEL](./high-availability-guide-rhel-glusterfs.md)
- March 30, 2022: Correcting DNN support for older releases of SQL Server in [SQL Server Azure Virtual Machines DBMS deployment for SAP NetWeaver](./dbms_guide_sqlserver.md)
- March 28, 2022: Formatting changes and reorganizing ILB configuration instructions in: [HA for SAP HANA on Azure VMs on SLES](./sap-hana-high-availability.md), [HA for SAP HANA Scale-up with Azure NetApp Files on SLES](./sap-hana-high-availability-netapp-files-suse.md), [HA for SAP HANA on Azure VMs on RHEL](./sap-hana-high-availability-rhel.md), [HA for SAP HANA scale-up with ANF on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md), [HA for SAP NW on SLES with NFS on Azure Files](./high-availability-guide-suse-nfs-azure-files.md), [HA for SAP NW on Azure VMs on SLES with ANF](./high-availability-guide-suse-netapp-files.md), [HA for SAP NW on Azure VMs on SLES for SAP applications](./high-availability-guide-suse.md), [HA for NFS on Azure VMs on SLES](./high-availability-guide-suse-nfs.md), [HA for SAP NW on Azure VMs on SLES multi-SID guide](./high-availability-guide-suse-multi-sid.md), [HA for SAP NW on RHEL with NFS on Azure Files](./high-availability-guide-rhel-nfs-azure-files.md), [HA for SAP NW on Azure VMs on RHEL with ANF](./high-availability-guide-rhel-netapp-files.md), [HA for SAP NW on Azure VMs on RHEL for SAP applications](./high-availability-guide-rhel.md) and [HA for SAP NW on Azure VMs on RHEL multi-SID guide](./high-availability-guide-rhel-multi-sid.md)
- March 15, 2022: Corrected rsize and wsize mount option settings for ANF in [IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload](./dbms_guide_ibm.md)
virtual-machines High Availability Guide Rhel Glusterfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-rhel-glusterfs.md
vm-windows Previously updated : 08/16/2018 Last updated : 03/30/2022
Read the following SAP Notes and papers first
* [High Availability Add-On Overview](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_overview/index)
* [High Availability Add-On Administration](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/index)
* [High Availability Add-On Reference](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/index)
+ * [Red Hat Gluster Storage Life Cycle](https://access.redhat.com/support/policy/updates/rhs)
* Azure specific RHEL documentation:
  * [Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual Machines as Cluster Members](https://access.redhat.com/articles/3131341)
  * [Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-Availability Cluster on Microsoft Azure](https://access.redhat.com/articles/3252491)
## Overview
-To achieve high availability, SAP NetWeaver requires shared storage. GlusterFS is configured in a separate cluster and can be used by multiple SAP systems.
+To achieve high availability, SAP NetWeaver requires shared storage. GlusterFS is configured in a separate cluster and can be used by multiple SAP systems. Be aware that Red Hat is phasing out Red Hat Gluster Storage. The configuration will be supported for SAP on Azure until it reaches its end-of-life stage, as defined in [Red Hat Gluster Storage Life Cycle](https://access.redhat.com/support/policy/updates/rhs).
+ ![SAP NetWeaver High Availability overview](./media/high-availability-guide-rhel-glusterfs/rhel-glusterfs.png)
Follow these steps to deploy the template:
### Deploy Linux manually via Azure portal
-You first need to create the virtual machines for this cluster. Afterwards, you create a load balancer and use the virtual machines in the backend pools. We recommend [standard load balancer](../../../load-balancer/load-balancer-overview.md).
+You first need to create the virtual machines for this cluster.
1. Create a Resource Group
1. Create a Virtual Network
virtual-network Create Custom Ip Address Prefix Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-cli.md
+
+ Title: Create a custom IP address prefix - Azure CLI
+
+description: Learn about how to create a custom IP address prefix using the Azure CLI
++++ Last updated : 03/31/2022++
+# Create a custom IP address prefix using the Azure CLI
+
+A custom IP address prefix enables you to bring your own IP ranges to Microsoft and associate them with your Azure subscription. The range would continue to be owned by you, though Microsoft would be permitted to advertise it to the Internet. A custom IP address prefix functions as a regional resource that represents a contiguous block of customer owned IP addresses.
+
+The steps in this article detail the process to:
+
+* Prepare a range to provision
+
+* Provision the range for IP allocation
+
+* Enable the range to be advertised by Microsoft
+
+## Prerequisites
+
+- This tutorial requires version 2.28 or later of the Azure CLI (run `az version` to find the installed version). If you're using Azure Cloud Shell, the latest version is already installed.
+
+- Sign in to Azure CLI and ensure you've selected the subscription you want to use with this feature using `az account set` (see the sketch after this list).
+
+- A customer owned IP range to provision in Azure
+ - A sample customer range (1.2.3.0/24) is used for this example. This range won't be validated by Azure. Replace the example range with yours.
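+For example, a minimal sketch of selecting the subscription (the subscription ID is a placeholder):
+
+```azurecli-interactive
+# Select the subscription to use for the custom IP prefix (replace the placeholder)
+az account set --subscription "<subscription-id>"
+```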
+
+> [!NOTE]
+> For problems encountered during the provisioning process, please see [Troubleshooting for custom IP prefix](manage-custom-ip-address-prefix.md#troubleshooting-and-faqs).
+
+## Pre-provisioning steps
+
+To utilize the Azure BYOIP feature, you must perform the following steps prior to the provisioning of your IP address range.
+
+### Requirements and prefix readiness
+
+* The address range must be owned by you and registered under your name with the [American Registry for Internet Numbers (ARIN)](https://www.arin.net/), the [Réseaux IP Européens Network Coordination Centre (RIPE NCC)](https://www.ripe.net/), or the [Asia Pacific Network Information Centre Regional Internet Registries (APNIC)](https://www.apnic.net/). If the range is registered under the Latin America and Caribbean Network Information Centre (LACNIC) or the African Network Information Centre (AFRINIC), contact the [Microsoft Azure BYOIP team](mailto:byoipazure@microsoft.com).
+
+* The address range must be no smaller than a /24 so it will be accepted by Internet Service Providers.
+
+* A Route Origin Authorization (ROA) document that authorizes Microsoft to advertise the address range must be filled out by the customer on the appropriate Routing Internet Registry website (ARIN, RIPE, or APNIC).
+
+ For this ROA:
+
+ * The Origin AS must be listed as 8075
+
+ * The validity end date (expiration date) needs to account for the time you intend to have the prefix advertised by Microsoft. Some RIRs don't present a validity end date as an option or may choose the date for you.
+
+ * The prefix length should exactly match the prefixes that can be advertised by Microsoft. For example, if you plan to bring 1.2.3.0/24 and 2.3.4.0/23 to Microsoft, they should both be listed.
+
+ * After the ROA is complete and submitted, allow at least 24 hours for it to become available to Microsoft.
+
+### Certificate readiness
+
+To authorize Microsoft to associate a prefix with a customer subscription, a public certificate must be compared against a signed message.
+
+The following steps show how to prepare the sample customer range (1.2.3.0/24) for provisioning.
+
+> [!NOTE]
+> Execute the following commands in PowerShell with OpenSSL installed.
+
+
+1. A [self-signed X509 certificate](https://en.wikipedia.org/wiki/Self-signed_certificate) must be created to add to the Whois/RDAP record for the prefix. For information about RDAP, see the [ARIN](https://www.arin.net/resources/registry/whois/rdap/), [RIPE](https://www.ripe.net/manage-ips-and-asns/db/registration-data-access-protocol-rdap), and [APNIC](https://www.apnic.net/about-apnic/whois_search/about/rdap/) sites.
+
+ An example utilizing the OpenSSL toolkit is shown below. The following commands generate an RSA key pair and create an X509 certificate from the key pair; the certificate expires in six months:
+
+ ```powershell
+ ./openssl genrsa -out byoipprivate.key 2048
+ Set-Content -Path byoippublickey.cer (./openssl req -new -x509 -key byoipprivate.key -days 180) -NoNewline
+ ```
+
+2. After the certificate is created, update the public comments section of the Whois/RDAP record for the prefix. To display the certificate for copying, including the BEGIN/END header/footer with dashes, use the command `cat byoippublickey.cer`. You should be able to perform this procedure via your Routing Internet Registry.
+
+ Instructions for each registry are below:
+
+ * [ARIN](https://www.arin.net/resources/registry/manage/netmod/) - edit the "Comments" of the prefix record
+
+ * [RIPE](https://www.ripe.net/manage-ips-and-asns/db/support/updating-the-ripe-database) - edit the "Remarks" of the inetnum record
+
+ * [APNIC](https://www.apnic.net/manage-ip/using-whois/updating-whois/) - in order to edit the prefix record, contact helpdesk@apnic.net
+
+ * For ranges from either LACNIC or AFRINIC registries, create a support ticket with Microsoft.
+
+ After the public comments are filled out, the Whois/RDAP record should look like the example below. Ensure there aren't spaces or carriage returns. Include all dashes:
+
+ :::image type="content" source="./media/create-custom-ip-address-prefix-portal/certificate-example.png" alt-text="Screenshot of example certificate comment":::
+
+3. To create the message that will be passed to Microsoft, create a string that contains relevant information about your prefix and subscription. Sign this message with the key pair generated in the steps above. Use the format shown below, substituting your subscription ID, prefix to be provisioned, and expiration date matching the Validity Date on the ROA. Ensure the format is in that order.
+
+ Use the following command to create a signed message that will be passed to Microsoft for verification.
+
+ > [!NOTE]
+ > If the Validity End date was not included in the original ROA, pick a date that corresponds to the time you intend to have the prefix advertised by Azure.
+
+ ```powershell
+ $byoipauth="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx|1.2.3.0/24|yyyymmdd"
+ Set-Content -Path byoipauth.txt -Value $byoipauth -NoNewline
+ ./openssl dgst -sha256 -sign byoipprivate.key -keyform PEM -out byoipauthsigned.txt byoipauth.txt
+ $byoipauthsigned=(./openssl enc -base64 -in byoipauthsigned.txt) -join ''
+ ```
+
+4. To view the contents of the signed message, enter the variable created from the signed message created previously and select **Enter** at the PowerShell prompt:
+
+ ```powershell
+ $byoipauthsigned
+ dIlwFQmbo9ar2GaiWRlSEtDSZoH00I9BAPb2ZzdAV2A/XwzrUdz/85rNkXybXw457//gHNNB977CQvqtFxqqtDaiZd9bngZKYfjd203pLYRZ4GFJnQFsMPFSeePa8jIFwGJk6JV4reFqq0bglJ3955dVz0v09aDVqjj5UJx2l3gmyJEeU7PXv4wF2Fnk64T13NESMeQk0V+IaEOt1zXgA+0dTdTLr+ab56pR0RZIvDD+UKJ7rVE7nMlergLQdpCx1FoCTm/quY3aiSxndEw7aQDW15+rSpy+yxV1iCFIrUa/4WHQqP4LtNs3FATvLKbT4dBcBLpDhiMR+j9MgiJymA==
+ ```
+
+## Provisioning steps
+
+The following steps show how to provision a sample customer range (1.2.3.0/24) in the US West 2 region.
+
+> [!NOTE]
+> Clean up or delete steps aren't shown on this page given the nature of the resource. For information on removing a provisioned custom IP prefix, see [Manage custom IP prefix](manage-custom-ip-address-prefix.md).
+
+### Create a resource group and specify the prefix and authorization messages
+
+Create a resource group in the desired location for provisioning the BYOIP range.
+
+```azurecli-interactive
+ az group create \
+ --name myResourceGroup \
+ --location westus2
+```
+### Provision a custom IP address prefix
+The following command creates a custom IP prefix in the specified region and resource group. Specify the exact prefix in CIDR notation as a string to ensure there's no syntax error. For the `--authorization-message` parameter, use the variable **$byoipauth** that contains your subscription ID, prefix to be provisioned, and expiration date matching the Validity Date on the ROA. Ensure the format is in that order. Use the variable **$byoipauthsigned** for the `--signed-message` parameter created in the certificate readiness section.
+
+```azurecli-interactive
+ byoipauth="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx|1.2.3.0/24|yyyymmdd"
+
+ az network custom-ip prefix create \
+ --name myCustomIpPrefix \
+ --resource-group myResourceGroup \
+ --location westus2 \
+ --cidr '1.2.3.0/24' \
+ --authorization-message $byoipauth \
+ --signed-message $byoipauthsigned
+```
+The range will be pushed to the Azure IP Deployment Pipeline. The deployment process is asynchronous. To determine the status, execute the following command:
+
+ ```azurecli-interactive
+ az network custom-ip prefix show \
+ --name myCustomIpPrefix \
+ --resource-group myResourceGroup
+```
+Sample output is shown below, with some fields removed for clarity:
+
+```
+{
+ "cidr": "1.2.3.0/24",
+ "commissionedState": "Provisioning",
+ "id": "/subscriptions/xxxx/resourceGroups/myResourceGroup/providers/Microsoft.Network/customIPPrefixes/myCustomIpPrefix",
+ "location": "westus2",
+ "name": myCustomIpPrefix,
+ "resourceGroup": "myResourceGroup",
+}
+```
+
+The **CommissionedState** field should show the range as **Provisioning** initially, followed in the future by **Provisioned**.
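+To poll only this field, a JMESPath query can be used. A sketch, reusing the resource names from the earlier examples:
+
+```azurecli-interactive
+# Show only the commissioned state of the custom IP prefix
+az network custom-ip prefix show \
+  --name myCustomIpPrefix \
+  --resource-group myResourceGroup \
+  --query commissionedState \
+  --output tsv
+```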
+
+> [!NOTE]
+> The estimated time to complete the provisioning process is 30 minutes.
+
+> [!IMPORTANT]
+> After the custom IP prefix is in a **Provisioned** state, a child public IP prefix can be created. These public IP prefixes and any public IP addresses can be attached to networking resources. For example, virtual machine network interfaces or load balancer front ends. The IPs won't be advertised and therefore won't be reachable. For more information on a migration of an active prefix, see [Manage a custom IP prefix](manage-custom-ip-address-prefix.md).
+
+### Commission the custom IP address prefix
+
+When the custom IP prefix is in **Provisioned** state, the following command updates the prefix to begin the process of advertising the range from Azure.
+
+```azurecli-interactive
+az network custom-ip prefix update \
+ --name myCustomIpPrefix \
+ --resource-group myResourceGroup \
+ --state commission
+```
+
+As before, the operation is asynchronous. Use [az network custom-ip prefix show](/cli/azure/network/custom-ip/prefix#az_network_custom_ip_prefix_show) to retrieve the status. The **CommissionedState** field will initially show the prefix as **Commissioning**, followed in the future by **Commissioned**. The advertisement rollout isn't binary and the range will be partially advertised while still in **Commissioning**.
+
+> [!NOTE]
+> The estimated time to fully complete the commissioning process is 3-4 hours.
+
+> [!IMPORTANT]
+> As the custom IP prefix transitions to a **Commissioned** state, the range is being advertised with Microsoft from the local Azure region and globally to the Internet by Microsoft's wide area network under Autonomous System Number (ASN) 8075. Advertising this same range to the Internet from a location other than Microsoft at the same time could potentially create BGP routing instability or traffic loss. For example, from a customer's on-premises location. Plan any migration of an active range during a maintenance period to avoid impact.
+
+## Next steps
+
+- To learn about scenarios and benefits of using a custom IP prefix, see [Custom IP address prefix (BYOIP)](custom-ip-address-prefix.md)
+
+- For more information on managing a custom IP prefix, see [Manage a custom IP address prefix (BYOIP)](manage-custom-ip-address-prefix.md)
+
+- To create a custom IP address prefix using Azure PowerShell, see [Create a custom IP address prefix using Azure PowerShell](create-custom-ip-address-prefix-powershell.md)
+
+- To create a custom IP address prefix using the Azure portal, see [Create a custom IP address prefix using the Azure portal](create-custom-ip-address-prefix-portal.md)
virtual-network Create Custom Ip Address Prefix Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-portal.md
+
+ Title: Create a custom IP address prefix - Azure portal
+
+description: Learn about how to onboard a custom IP address prefix using the Azure portal
++++ Last updated : 03/31/2022+++
+# Create a custom IP address prefix using the Azure portal
+
+A custom IP address prefix enables you to bring your own IP ranges to Microsoft and associate them with your Azure subscription. The range would continue to be owned by you, though Microsoft would be permitted to advertise it to the Internet. A custom IP address prefix functions as a regional resource that represents a contiguous block of customer owned IP addresses.
+
+The steps in this article detail the process to:
+
+* Prepare a range to provision
+
+* Provision the range for IP allocation
+
+* Enable the range to be advertised by Microsoft
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
+
+- A customer owned IP range to provision in Azure
+ - A sample customer range (1.2.3.0/24) is used for this example. This range won't be validated by Azure. Replace the example range with yours.
+
+> [!NOTE]
+> For problems encountered during the provisioning process, please see [Troubleshooting for custom IP prefix](manage-custom-ip-address-prefix.md#troubleshooting-and-faqs).
+
+## Pre-provisioning steps
+
+To utilize the Azure BYOIP feature, you must perform the following steps prior to the provisioning of your IP address range.
+
+### Requirements and prefix readiness
+
+* The address range must be owned by you and registered under your name with the [American Registry for Internet Numbers (ARIN)](https://www.arin.net/), the [Réseaux IP Européens Network Coordination Centre (RIPE NCC)](https://www.ripe.net/), or the [Asia Pacific Network Information Centre Regional Internet Registries (APNIC)](https://www.apnic.net/). If the range is registered under the Latin America and Caribbean Network Information Centre (LACNIC) or the African Network Information Centre (AFRINIC), contact the [Microsoft Azure BYOIP team](mailto:byoipazure@microsoft.com).
+
+* The address range must be no smaller than a /24 so it will be accepted by Internet Service Providers.
+
+* A Route Origin Authorization (ROA) document that authorizes Microsoft to advertise the address range must be filled out by the customer on the appropriate Routing Internet Registry website (ARIN, RIPE, or APNIC).
+
+ For this ROA:
+
+ * The Origin AS must be listed as 8075
+
+ * The validity end date (expiration date) needs to account for the time you intend to have the prefix advertised by Microsoft. Some RIRs don't present a validity end date as an option or may choose the date for you.
+
+ * The prefix length should exactly match the prefixes that can be advertised by Microsoft. For example, if you plan to bring 1.2.3.0/24 and 2.3.4.0/23 to Microsoft, they should both be listed.
+
+ * After the ROA is complete and submitted, allow at least 24 hours for it to become available to Microsoft.
+
+### Certificate readiness
+
+To authorize Microsoft to associate a prefix with a customer subscription, a public certificate must be compared against a signed message.
+
+The following steps show how to prepare the sample customer range (1.2.3.0/24) for provisioning.
+
+> [!NOTE]
+> Execute the following commands in PowerShell with OpenSSL installed.
+
+
+1. A [self-signed X509 certificate](https://en.wikipedia.org/wiki/Self-signed_certificate) must be created to add to the Whois/RDAP record for the prefix. For information about RDAP, see the [ARIN](https://www.arin.net/resources/registry/whois/rdap/), [RIPE](https://www.ripe.net/manage-ips-and-asns/db/registration-data-access-protocol-rdap), and [APNIC](https://www.apnic.net/about-apnic/whois_search/about/rdap/) sites.
+
+ An example utilizing the OpenSSL toolkit is shown below. The following commands generate an RSA key pair and create an X509 certificate from the key pair; the certificate expires in six months:
+
+ ```powershell
+ ./openssl genrsa -out byoipprivate.key 2048
+ Set-Content -Path byoippublickey.cer (./openssl req -new -x509 -key byoipprivate.key -days 180) -NoNewline
+ ```
+
+2. After the certificate is created, update the public comments section of the Whois/RDAP record for the prefix. To display the certificate for copying, including the BEGIN/END header/footer with dashes, use the command `cat byoippublickey.cer`. You should be able to perform this procedure via your Routing Internet Registry.
+
+ Instructions for each registry are below:
+
+ * [ARIN](https://www.arin.net/resources/registry/manage/netmod/) - edit the "Comments" of the prefix record
+
+ * [RIPE](https://www.ripe.net/manage-ips-and-asns/db/support/updating-the-ripe-database) - edit the "Remarks" of the inetnum record
+
+ * [APNIC](https://www.apnic.net/manage-ip/using-whois/updating-whois/) - in order to edit the prefix record, contact helpdesk@apnic.net
+
+ * For ranges from either LACNIC or AFRINIC registries, create a support ticket with Microsoft.
+
+ After the public comments are filled out, the Whois/RDAP record should look like the example below. Ensure there aren't spaces or carriage returns. Include all dashes:
+
+ :::image type="content" source="./media/create-custom-ip-address-prefix-portal/certificate-example.png" alt-text="Screenshot of example certificate comment":::
+
+3. To create the message that will be passed to Microsoft, create a string that contains relevant information about your prefix and subscription. Sign this message with the key pair generated in the steps above. Use the format shown below, substituting your subscription ID, prefix to be provisioned, and expiration date matching the Validity Date on the ROA. Ensure the format is in that order.
+
+ Use the following command to create a signed message that will be passed to Microsoft for verification.
+
+ > [!NOTE]
+ > If the Validity End date was not included in the original ROA, pick a date that corresponds to the time you intend to have the prefix advertised by Azure.
+
+ ```powershell
+ $byoipauth="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx|1.2.3.0/24|yyyymmdd"
+ Set-Content -Path byoipauth.txt -Value $byoipauth -NoNewline
+ ./openssl dgst -sha256 -sign byoipprivate.key -keyform PEM -out byoipauthsigned.txt byoipauth.txt
+ $byoipauthsigned=(./openssl enc -base64 -in byoipauthsigned.txt) -join ''
+ ```
+
+4. To view the contents of the signed message, enter the variable created from the signed message created previously and select **Enter** at the PowerShell prompt:
+
+ ```powershell
+ $byoipauthsigned
+ dIlwFQmbo9ar2GaiWRlSEtDSZoH00I9BAPb2ZzdAV2A/XwzrUdz/85rNkXybXw457//gHNNB977CQvqtFxqqtDaiZd9bngZKYfjd203pLYRZ4GFJnQFsMPFSeePa8jIFwGJk6JV4reFqq0bglJ3955dVz0v09aDVqjj5UJx2l3gmyJEeU7PXv4wF2Fnk64T13NESMeQk0V+IaEOt1zXgA+0dTdTLr+ab56pR0RZIvDD+UKJ7rVE7nMlergLQdpCx1FoCTm/quY3aiSxndEw7aQDW15+rSpy+yxV1iCFIrUa/4WHQqP4LtNs3FATvLKbT4dBcBLpDhiMR+j9MgiJymA==
+ ```
+
+## Provisioning steps
+
+The following steps show how to provision a sample customer range (1.2.3.0/24) in the US West 2 region.
+
+> [!NOTE]
+> Clean up or delete steps aren't shown on this page given the nature of the resource. For information on removing a provisioned custom IP prefix, see [Manage custom IP prefix](manage-custom-ip-address-prefix.md).
+
+## Sign in to Azure
+
+Sign in to the [Azure portal](https://portal.azure.com).
+
+## Create and provision a custom IP address prefix
+
+1. In the search box at the top of the portal, enter **Custom IP**.
+
+2. In the search results, select **Custom IP Prefixes**.
+
+3. Select **+ Create**.
+
+4. In **Create a custom IP prefix**, enter or select the following information in the **Basics** tab:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription |
+ | Resource group | Select **Create new**. </br> Enter **myResourceGroup**. </br> Select **OK**. |
+ | **Instance details** | |
+ | Name | Enter **myCustomIPPrefix**. |
+ | Region | Select **West US 2**. |
+ | Availability Zones | Select **Zone-redundant**. |
+ | IPv4 Prefix (CIDR) | Enter **1.2.3.0/24**. |
+ | ROA expiration date | Enter your ROA expiration date in the **yyyymmdd** format. |
+ | Signed message | Paste in the output of **$byoipauthsigned** from the earlier section. |
+
+ :::image type="content" source="./media/create-custom-ip-address-prefix-portal/create-custom-ip-prefix.png" alt-text="Screenshot of create custom IP prefix page in Azure portal.":::
+
+5. Select the **Review + create** tab or the blue **Review + create** button at the bottom of the page.
+
+6. Select **Create**.
+
+The range will be pushed to the Azure IP Deployment Pipeline. The deployment process is asynchronous. You can check the status by reviewing the **Commissioned state** field for the custom IP prefix.
+
+> [!NOTE]
+> The estimated time to complete the provisioning process is 30 minutes.
+
+> [!IMPORTANT]
+> After the custom IP prefix is in a **Provisioned** state, a child public IP prefix can be created. These public IP prefixes and any public IP addresses can be attached to networking resources. For example, virtual machine network interfaces or load balancer front ends. The IPs won't be advertised and therefore won't be reachable. For more information on a migration of an active prefix, see [Manage a custom IP prefix](manage-custom-ip-address-prefix.md).
+
+## Create a public IP prefix from custom IP prefix
+
+When you create a custom IP prefix, you derive public IP prefixes from it to allocate static public IP addresses. In this section, you'll create a public IP prefix from the custom IP prefix you created earlier.
+
+1. In the search box at the top of the portal, enter **Custom IP**.
+
+2. In the search results, select **Custom IP Prefixes**.
+
+3. In **Custom IP Prefixes**, select **myCustomIPPrefix**.
+
+4. In **Overview** of **myCustomIPPrefix**, select **+ Add a public IP prefix**.
+
+5. Enter or select the following information in the **Basics** tab of **Create a public IP prefix**.
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **myResourceGroup**. |
+ | **Instance details** | |
+ | Name | Enter **myPublicIPPrefix**. |
+ | Region | Select **West US 2**. The region of the public IP prefix must match the region of the custom IP prefix. |
+ | IP version | Select **IPv4**. |
+ | Prefix ownership | Select **Custom prefix**. |
+ | Custom IP prefix | Select **myCustomIPPrefix**. |
+ | Prefix size | Select a prefix size. The size can be as large as the custom IP prefix. |
+
+6. Select **Review + create**, and then **Create** on the following page.
+
+7. Repeat steps 1-3 to return to the **Overview** page for **myCustomIPPrefix**. You'll see **myPublicIPPrefix** listed under the **Associated public IP prefixes** section. You can now allocate standard SKU public IP addresses from this prefix. For more information, see [Create a static public IP address from a prefix](manage-public-ip-address-prefix.md#create-a-static-public-ip-address-from-a-prefix).
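+If you prefer the command line for this step, allocating a standard SKU public IP address from the new prefix can look like the following sketch (the resource names match the portal steps above; the IP name is a placeholder):
+
+```azurecli-interactive
+# Allocate a standard SKU public IP address from the derived public IP prefix
+az network public-ip create \
+  --name myPublicIP \
+  --resource-group myResourceGroup \
+  --public-ip-prefix myPublicIPPrefix \
+  --sku Standard
+```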
+
+## Commission the custom IP address prefix
+
+When the custom IP prefix is in **Provisioned** state, update the prefix to begin the process of advertising the range from Azure.
+
+1. In the search box at the top of the portal, enter **Custom IP**.
+
+2. In the search results, select **Custom IP Prefixes**.
+
+3. In **Custom IP Prefixes**, select **myCustomIPPrefix**.
+
+4. In **Overview** of **myCustomIPPrefix**, select **Commission**.
+
+The operation is asynchronous. You can check the status by reviewing the **Commissioned state** field for the custom IP prefix, which will initially show the prefix as **Commissioning**, followed in the future by **Commissioned**. The advertisement rollout isn't binary and the range will be partially advertised while still in the **Commissioning** status.
+
+> [!NOTE]
+> The estimated time to fully complete the commissioning process is 3-4 hours.
+
+> [!IMPORTANT]
+> As the custom IP prefix transitions to a **Commissioned** state, the range is being advertised with Microsoft from the local Azure region and globally to the Internet by Microsoft's wide area network under Autonomous System Number (ASN) 8075. Advertising this same range to the Internet from a location other than Microsoft at the same time could potentially create BGP routing instability or traffic loss. For example, from a customer's on-premises location. Plan any migration of an active range during a maintenance period to avoid impact.
+
+## Next steps
+
+- To learn about scenarios and benefits of using a custom IP prefix, see [Custom IP address prefix (BYOIP)](custom-ip-address-prefix.md)
+
+- For more information on managing a custom IP prefix, see [Manage a custom IP address prefix (BYOIP)](manage-custom-ip-address-prefix.md)
+
+- To create a custom IP address prefix using the Azure CLI, see [Create custom IP address prefix using the Azure CLI](create-custom-ip-address-prefix-cli.md)
+
+- To create a custom IP address prefix using PowerShell, see [Create a custom IP address prefix using Azure PowerShell](create-custom-ip-address-prefix-powershell.md)
virtual-network Create Custom Ip Address Prefix Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-powershell.md
+
+ Title: Create a custom IP address prefix - Azure PowerShell
+
+description: Learn about how to create a custom IP address prefix using Azure PowerShell
++++ Last updated : 03/31/2022++
+# Create a custom IP address prefix using Azure PowerShell
+
+A custom IP address prefix enables you to bring your own IP ranges to Microsoft and associate them with your Azure subscription. The range would continue to be owned by you, though Microsoft would be permitted to advertise it to the Internet. A custom IP address prefix functions as a regional resource that represents a contiguous block of customer owned IP addresses.
+
+The steps in this article detail the process to:
+
+* Prepare a range to provision
+
+* Provision the range for IP allocation
+
+* Enable the range to be advertised by Microsoft
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Azure PowerShell installed locally or Azure Cloud Shell.
+- Sign in to Azure PowerShell and ensure you've selected the subscription with which you want to use this feature. For more information, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
+- Ensure your Az.Network module is 4.3.0 or later. To verify the installed module, use the command `Get-InstalledModule -Name "Az.Network"`. If the module requires an update, use the command `Update-Module -Name "Az.Network"`.
+- A customer owned IP range to provision in Azure
+ - A sample customer range (1.2.3.0/24) is used for this example. This range won't be validated by Azure. Replace the example range with yours.
+
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+
+> [!NOTE]
+> For problems encountered during the provisioning process, please see [Troubleshooting for custom IP prefix](manage-custom-ip-address-prefix.md#troubleshooting-and-faqs).
+
+## Pre-provisioning steps
+
+To utilize the Azure BYOIP feature, you must perform the following steps prior to the provisioning of your IP address range.
+
+### Requirements and prefix readiness
+
+* The address range must be owned by you and registered under your name with the [American Registry for Internet Numbers (ARIN)](https://www.arin.net/), the [Réseaux IP Européens Network Coordination Centre (RIPE NCC)](https://www.ripe.net/), or the [Asia Pacific Network Information Centre Regional Internet Registries (APNIC)](https://www.apnic.net/). If the range is registered under the Latin America and Caribbean Network Information Centre (LACNIC) or the African Network Information Centre (AFRINIC), contact the [Microsoft Azure BYOIP team](mailto:byoipazure@microsoft.com).
+
+* The address range must be no smaller than a /24 so it will be accepted by Internet Service Providers.
+
+* A Route Origin Authorization (ROA) document that authorizes Microsoft to advertise the address range must be filled out by the customer on the appropriate Routing Internet Registry website (ARIN, RIPE, or APNIC).
+
+ For this ROA:
+
+ * The Origin AS must be listed as 8075
+
+ * The validity end date (expiration date) needs to account for the time you intend to have the prefix advertised by Microsoft. Some RIRs don't present a validity end date as an option or may choose the date for you.
+
+ * The prefix length should exactly match the prefixes that can be advertised by Microsoft. For example, if you plan to bring 1.2.3.0/24 and 2.3.4.0/23 to Microsoft, they should both be listed.
+
+ * After the ROA is complete and submitted, allow at least 24 hours for it to become available to Microsoft.
+
+### Certificate readiness
+
+To authorize Microsoft to associate a prefix with a customer subscription, a public certificate must be compared against a signed message.
+
+The following steps show how to prepare the sample customer range (1.2.3.0/24) for provisioning.
+
+> [!NOTE]
+> Execute the following commands in PowerShell with OpenSSL installed.
+
+
+1. A [self-signed X509 certificate](https://en.wikipedia.org/wiki/Self-signed_certificate) must be created to add to the Whois/RDAP record for the prefix. For information about RDAP, see the [ARIN](https://www.arin.net/resources/registry/whois/rdap/), [RIPE](https://www.ripe.net/manage-ips-and-asns/db/registration-data-access-protocol-rdap), and [APNIC](https://www.apnic.net/about-apnic/whois_search/about/rdap/) sites.
+
+ An example utilizing the OpenSSL toolkit is shown below. The following commands generate an RSA key pair and create an X509 certificate from the key pair; the certificate expires in six months:
+
+ ```powershell
+ ./openssl genrsa -out byoipprivate.key 2048
+ Set-Content -Path byoippublickey.cer (./openssl req -new -x509 -key byoipprivate.key -days 180) -NoNewline
+ ```
+
+2. After the certificate is created, update the public comments section of the Whois/RDAP record for the prefix. To display the certificate for copying, including the BEGIN/END header/footer with dashes, use the command `cat byoippublickey.cer`. You should be able to perform this procedure via your Routing Internet Registry.
+
+ Instructions for each registry are below:
+
+ * [ARIN](https://www.arin.net/resources/registry/manage/netmod/) - edit the "Comments" of the prefix record
+
+ * [RIPE](https://www.ripe.net/manage-ips-and-asns/db/support/updating-the-ripe-database) - edit the "Remarks" of the inetnum record
+
+ * [APNIC](https://www.apnic.net/manage-ip/using-whois/updating-whois/) - in order to edit the prefix record, contact helpdesk@apnic.net
+
+ * For ranges from either LACNIC or AFRINIC registries, create a support ticket with Microsoft.
+
+ After the public comments are filled out, the Whois/RDAP record should look like the example below. Ensure there aren't spaces or carriage returns. Include all dashes:
+
+ :::image type="content" source="./media/create-custom-ip-address-prefix-portal/certificate-example.png" alt-text="Screenshot of example certificate comment":::
+
+3. To create the message that will be passed to Microsoft, create a string that contains relevant information about your prefix and subscription. Sign this message with the key pair generated in the steps above. Use the format shown below, substituting your subscription ID, prefix to be provisioned, and expiration date matching the Validity Date on the ROA. Ensure the format is in that order.
+
+ Use the following command to create a signed message that will be passed to Microsoft for verification.
+
+ > [!NOTE]
+ > If the Validity End date was not included in the original ROA, pick a date that corresponds to the time you intend to have the prefix advertised by Azure.
+
+ ```powershell
+ $byoipauth="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx|1.2.3.0/24|yyyymmdd"
+ Set-Content -Path byoipauth.txt -Value $byoipauth -NoNewline
+ ./openssl dgst -sha256 -sign byoipprivate.key -keyform PEM -out byoipauthsigned.txt byoipauth.txt
+ $byoipauthsigned=(./openssl enc -base64 -in byoipauthsigned.txt) -join ''
+ ```
+
+4. To view the contents of the signed message, enter the variable created from the signed message created previously and select **Enter** at the PowerShell prompt:
+
+ ```powershell
+ $byoipauthsigned
+ dIlwFQmbo9ar2GaiWRlSEtDSZoH00I9BAPb2ZzdAV2A/XwzrUdz/85rNkXybXw457//gHNNB977CQvqtFxqqtDaiZd9bngZKYfjd203pLYRZ4GFJnQFsMPFSeePa8jIFwGJk6JV4reFqq0bglJ3955dVz0v09aDVqjj5UJx2l3gmyJEeU7PXv4wF2Fnk64T13NESMeQk0V+IaEOt1zXgA+0dTdTLr+ab56pR0RZIvDD+UKJ7rVE7nMlergLQdpCx1FoCTm/quY3aiSxndEw7aQDW15+rSpy+yxV1iCFIrUa/4WHQqP4LtNs3FATvLKbT4dBcBLpDhiMR+j9MgiJymA==
+ ```
+
+## Provisioning steps
+
+The following steps show how to provision a sample customer range (1.2.3.0/24) in the US West 2 region.
+
+> [!NOTE]
+> Clean up or delete steps aren't shown on this page given the nature of the resource. For information on removing a provisioned custom IP prefix, see [Manage custom IP prefix](manage-custom-ip-address-prefix.md).
+
+### Create a resource group and specify the prefix and authorization messages
+
+Create a resource group in the desired location for provisioning the BYOIP range.
+
+ ```azurepowershell-interactive
+$rg =@{
+ Name = 'myResourceGroup'
+ Location = 'WestUS2'
+}
+New-AzResourceGroup @rg
+```
+
+### Provision a custom IP address prefix
+
+The following command creates a custom IP prefix in the specified region and resource group. Specify the exact prefix in CIDR notation as a string to ensure there's no syntax error. For the `-AuthorizationMessage` parameter, substitute your subscription ID, prefix to be provisioned, and expiration date matching the Validity Date on the ROA. Ensure the format is in that order. Use the variable **$byoipauthsigned** for the `-SignedMessage` parameter created in the certificate readiness section.
+
+ ```azurepowershell-interactive
+$prefix =@{
+ Name = 'myCustomIPPrefix'
+ ResourceGroupName = 'myResourceGroup'
+ Location = 'WestUS2'
+ CIDR = '1.2.3.0/24'
+ AuthorizationMessage = 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx|1.2.3.0/24|yyyymmdd'
+ SignedMessage = $byoipauthsigned
+}
+$myCustomIpPrefix = New-AzCustomIPPrefix @prefix -Zone 1,2,3
+```
+
+The range will be pushed to the Azure IP Deployment Pipeline. The deployment process is asynchronous. To determine the status, execute the following command:
+
+ ```azurepowershell-interactive
+Get-AzCustomIpPrefix -ResourceId $myCustomIpPrefix.Id
+```
+Sample output is shown below, with some fields removed for clarity:
+
+```
+Name : myCustomIpPrefix
+ResourceGroupName : myResourceGroup
+Location : westus2
+Id : /subscriptions/xxxx/resourceGroups/myResourceGroup/providers/Microsoft.Network/customIPPrefixes/MyCustomIPPrefix
+Cidr : 1.2.3.0/24
+Zones : {1, 2, 3}
+CommissionedState : Provisioning
+```
+
+The **CommissionedState** field should show the range as **Provisioning** initially, followed in the future by **Provisioned**.
+
+> [!NOTE]
+> The estimated time to complete the provisioning process is 30 minutes.
+
+> [!IMPORTANT]
+> After the custom IP prefix is in a **Provisioned** state, a child public IP prefix can be created. These public IP prefixes and any public IP addresses can be attached to networking resources. For example, virtual machine network interfaces or load balancer front ends. The IPs won't be advertised and therefore won't be reachable. For more information on a migration of an active prefix, see [Manage a custom IP prefix](manage-custom-ip-address-prefix.md).
+
+### Commission the custom IP address prefix
+
+When the custom IP prefix is in the **Provisioned** state, the following command updates the prefix to begin the process of advertising the range from Azure.
+
+```azurepowershell-interactive
+Update-AzCustomIpPrefix -ResourceId $myCustomIPPrefix.Id -Commission
+```
+
+As before, the operation is asynchronous. Use [Get-AzCustomIpPrefix](/powershell/module/az.network/get-azcustomipprefix) to retrieve the status. The **CommissionedState** field will initially show the prefix as **Commissioning**, followed in the future by **Commissioned**. The advertisement rollout isn't binary and the range will be partially advertised while still in **Commissioning**.
+
+> [!NOTE]
+> The estimated time to fully complete the commissioning process is 3-4 hours.
+
+> [!IMPORTANT]
+> As the custom IP prefix transitions to a **Commissioned** state, the range is being advertised with Microsoft from the local Azure region and globally to the Internet by Microsoft's wide area network under Autonomous System Number (ASN) 8075. Advertising this same range to the Internet from a location other than Microsoft at the same time could potentially create BGP routing instability or traffic loss. For example, from a customer's on-premises location. Plan any migration of an active range during a maintenance period to avoid impact.
+
+## Next steps
+
+- To learn about scenarios and benefits of using a custom IP prefix, see [Custom IP address prefix (BYOIP)](custom-ip-address-prefix.md)
+
+- For more information on managing a custom IP prefix, see [Manage a custom IP address prefix (BYOIP)](manage-custom-ip-address-prefix.md)
+
+- To create a custom IP address prefix using the Azure CLI, see [Create custom IP address prefix using the Azure CLI](create-custom-ip-address-prefix-cli.md)
+
+- To create a custom IP address prefix using the Azure portal, see [Create a custom IP address prefix using the Azure portal](create-custom-ip-address-prefix-portal.md)
virtual-network Custom Ip Address Prefix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/custom-ip-address-prefix.md
+
+ Title: Custom IP address prefix (BYOIP)
+
+description: Learn what an Azure custom IP address prefix is and how it enables customers to use their own IP ranges in Azure.
+ Last updated : 03/31/2022
+# Custom IP address prefix (BYOIP)
+
+A custom IP address prefix is a contiguous range of IP addresses owned by an external customer and provisioned into a subscription. The customer retains ownership of the range, while Microsoft is permitted to advertise it. Addresses from a custom IP address prefix can be used in the same way as Azure owned public IP address prefixes: they can be associated to Azure resources, interact with internal/private IPs and virtual networks, and reach external destinations outbound from the Azure Wide Area Network.
+
+## Benefits
+
+* Customers can retain their IP ranges (BYOIP) to maintain established reputation and continue to pass through externally controlled allowlists
+
+* Public IP address prefixes and standard SKU public IPs can be derived from custom IP address prefixes. These IPs can be used in the same way as Azure owned public IPs
+
+## Bring an IP prefix to Azure
+
+Bringing an IP prefix to Azure is a three-phase process:
+
+* Validation
+
+* Provision
+
+* Commission
+
+### Validation
+
+A public IP address range that's brought to Azure must be owned by you and registered with a Regional Internet Registry such as [ARIN](https://www.arin.net/) or [RIPE](https://www.ripe.net/). When you bring an IP range to Azure, it remains under your ownership. You must authorize Microsoft to advertise the range. Your ownership of the range and its association with your Azure subscription are also verified. Some of these steps are done outside of Azure.
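+
+For illustration, the authorization message that's later supplied during provisioning is a pipe-delimited string of your subscription ID, the prefix, and the ROA expiration date. A minimal sketch with placeholder values:
+
+```azurepowershell-interactive
+# Placeholder values; substitute your own subscription ID, range, and ROA expiration date.
+$subId = 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
+$byoipauth = "$subId|1.2.3.0/24|yyyymmdd"
+```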
+
+### Provision
+
+After the previous steps are completed, the public IP range can proceed through the **Provisioning** phase. The range will be created as a custom IP prefix resource in your subscription. Public IP prefixes and public IPs can be derived from your range and associated to Azure resources. The IPs won't be advertised at this point and therefore won't be reachable.
+
+### Commission
+
+When ready, you can issue the command to have your range advertised from Azure and enter the **Commissioning** phase. The range will be advertised first from the Azure region where the custom IP prefix is located, and then by Microsoft's Wide Area Network (WAN) to the Internet. The specific region where the range was provisioned will be posted publicly on [Microsoft's IP Range GeoLocation page](https://www.microsoft.com/download/details.aspx?id=53601).
+
+## Limitations
+
+* A custom IP prefix must be associated with a single Azure region
+
+* The minimum size of an IP range is /24
+
+* IPv6 is currently not supported for custom IP prefixes
+
+* In regions with [availability zones](../../availability-zones/az-overview.md), a custom IP prefix must be specified as either zone-redundant or assigned to a specific zone. It can't be created with no zone specified in these regions. All IPs from the prefix must have the same zonal properties
+
+* The advertisements of IPs from a custom IP prefix over Azure ExpressRoute aren't currently supported
+
+* Once provisioned, custom IP prefix ranges can't be moved to another subscription or to a different resource group within the same subscription. It's possible to derive a public IP prefix from a custom IP prefix in another subscription with the proper permissions
+
+* Any IP addresses utilized from a custom IP prefix currently count against the standard public IP quota for a subscription and region. Contact Azure support to have quotas increased when required
+
+## Pricing
+
+* There's no charge to provision or use custom IP prefixes, and no charge for any public IP prefixes or public IP addresses derived from custom IP prefixes
+
+* All traffic destined to a custom IP prefix range is charged the [internet egress rate](https://azure.microsoft.com/pricing/details/bandwidth/). Customer traffic to a custom IP prefix address from within Azure is charged internet egress for the source region of the traffic. Egress traffic from a custom IP address prefix range is charged at the same rate as an Azure public IP from the same region
+
+## Next steps
+
+- To create a custom IP address prefix using the Azure portal, see [Create custom IP address prefix using the Azure portal](create-custom-ip-address-prefix-portal.md)
+
+- To create a custom IP address prefix using PowerShell, see [Create a custom IP address prefix using Azure PowerShell](create-custom-ip-address-prefix-powershell.md)
+
+- For more information about the management of a custom IP address prefix, see [Manage a custom IP address prefix](manage-custom-ip-address-prefix.md)
virtual-network Manage Custom Ip Address Prefix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/manage-custom-ip-address-prefix.md
+
+ Title: Manage a custom IP address prefix
+
+description: Learn about custom IP address prefixes and how to manage and delete them.
+ Last updated : 03/31/2022
+# Manage a custom IP address prefix
+
+A custom IP address prefix is a contiguous range of IP addresses owned by an external customer and provisioned into a subscription. The range is owned by the customer and Microsoft is permitted to advertise the range. For more information, see [Custom IP address prefix overview](custom-ip-address-prefix.md).
+
+This article explains how to:
+
+* Create public IP prefixes from provisioned custom IP prefixes
+
+* Migrate active IP prefixes from outside Microsoft
+
+* View information about a custom IP prefix
+
+* Decommission a custom IP prefix
+
+* Deprovision/delete a custom IP prefix
+
+For information on provisioning an IP address, see [Create a custom IP address prefix - Azure portal](create-custom-ip-address-prefix-portal.md), [Create a custom IP address prefix - Azure PowerShell](create-custom-ip-address-prefix-powershell.md), or [Create a custom IP address prefix - Azure CLI](create-custom-ip-address-prefix-cli.md).
+
+## Create a public IP prefix from a custom IP prefix
+
+When a custom IP prefix is in the **Provisioned**, **Commissioning**, or **Commissioned** state, a linked public IP prefix can be created, either from a subset of the custom IP prefix range or from the entire range.
+
+> [!NOTE]
+> A public IP prefix can be derived from a custom IP prefix in another subscription with the appropriate permissions.
+
+Use the following CLI and PowerShell commands to create public IP prefixes with the `--custom-ip-prefix-name` (CLI) and `-CustomIpPrefix` (PowerShell) parameters that point to an existing custom IP prefix.
+
+|Tool|Command|
+|||
+|CLI|[az network public-ip prefix create](/cli/azure/network/public-ip/prefix#az_network_public_ip_prefix_create)|
+|PowerShell|[New-AzPublicIpPrefix](/powershell/module/az.network/new-azpublicipprefix)|
+
+Once created, the IPs in the child public IP prefix can be associated with resources like any other standard SKU static public IPs. To learn more about using IPs from a public IP prefix, including selection of a specific IP from the range, see [Create a static public IP address from a prefix](manage-public-ip-address-prefix.md#create-a-static-public-ip-address-from-a-prefix).
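+
+For example, a minimal PowerShell sketch of deriving a child prefix (the resource names are hypothetical, and the `-Zone` value assumes a zone-redundant parent prefix like the one in the provisioning example):
+
+```azurepowershell-interactive
+# Get the existing custom IP prefix object.
+$customPrefix = Get-AzCustomIpPrefix -Name 'myCustomIPPrefix' -ResourceGroupName 'myResourceGroup'
+
+# Derive a /28 child public IP prefix from the custom range.
+$childPrefix = New-AzPublicIpPrefix -Name 'myPublicIpPrefix' `
+    -ResourceGroupName 'myResourceGroup' `
+    -Location 'WestUS2' `
+    -PrefixLength 28 `
+    -CustomIpPrefix $customPrefix `
+    -Zone 1,2,3
+```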
+
+## Migration of active prefixes from outside Microsoft
+
+If the provisioned range is being advertised to the Internet by another network, it's important to plan the migration to Azure to avoid unplanned downtime. Regardless of the method used, make the transition during a maintenance window.
+
+**Method 1: Create public IP prefixes and public IP addresses from the prefixes when the custom IP prefix is in a "Provisioned" state**.
+
+* The public IPs can be associated to networking resources but won't be advertised and won't be reachable. When the command to update the custom IP prefix to the **Commissioned** state is executed, the IPs are advertised from Microsoft's network. Any advertisement of this same range from a location other than Microsoft, such as a customer's on-premises network, could create BGP routing instability or traffic loss. Disable the external advertisement once the Azure infrastructure has been verified as operational.
+
+**Method 2: Create public IP prefixes and public IP addresses from the prefixes using Microsoft ranges. Deploy an infrastructure in your subscription and verify it's operational**.
+
+* Create a second set of mirrored public IP prefixes and public IP addresses from the prefixes while the custom IP prefix is in a **Provisioned** state. Add the provisioned IPs to the existing infrastructure, for example by adding another network interface to a virtual machine or another frontend to a load balancer (see the sketch after this list). Cut over to the desired IPs before issuing the command to move the custom IP prefix to the **Commissioned** state.
+
+* Alternatively, the ranges can be commissioned first and then changed. This process won't work for all resource types with public IPs. In those cases, a new resource with the provisioned public IP must be created.
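+
+As an illustration of the load balancer variant of this method, here's a hedged sketch (all resource names are hypothetical; it assumes a public IP address was already created from the provisioned range):
+
+```azurepowershell-interactive
+# Retrieve the existing load balancer and the BYOIP-based public IP.
+$lb = Get-AzLoadBalancer -Name 'myLoadBalancer' -ResourceGroupName 'myResourceGroup'
+$byoipIp = Get-AzPublicIpAddress -Name 'myByoipPublicIp' -ResourceGroupName 'myResourceGroup'
+
+# Add a second frontend that uses the provisioned IP, then persist the change.
+$lb | Add-AzLoadBalancerFrontendIpConfig -Name 'byoipFrontend' -PublicIpAddress $byoipIp
+$lb | Set-AzLoadBalancer
+```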
+
+## View a custom IP prefix
+
+To view a custom IP prefix, the following commands can be used in Azure CLI and Azure PowerShell. All public IP prefixes created under the custom IP prefix will be displayed.
+
+**Commands**
+
+|Tool|Command|
+|||
+|CLI|[az network custom-ip prefix list](/cli/azure/network/custom-ip/prefix#az_network_custom_ip_prefix_list) to list custom IP prefixes<br>[az network custom-ip prefix show](/cli/azure/network/custom-ip/prefix#az_network_custom_ip_prefix_show) to show settings and any derived public IP prefixes<br>
+|PowerShell|[Get-AzCustomIpPrefix](/powershell/module/az.network/get-azcustomipprefix) to retrieve a custom IP prefix object and view its settings and any derived public IP prefixes|
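+
+For example, a quick PowerShell sketch (the prefix name is hypothetical):
+
+```azurepowershell-interactive
+# List every custom IP prefix in the subscription.
+Get-AzCustomIpPrefix
+
+# Show a single prefix, including its settings and derived public IP prefixes.
+Get-AzCustomIpPrefix -Name 'myCustomIPPrefix' -ResourceGroupName 'myResourceGroup'
+```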
+
+## Decommission a custom IP prefix
+
+A custom IP prefix must be decommissioned to turn off advertisements.
+
+> [!NOTE]
+> All public IP prefixes created from a provisioned custom IP prefix must be deleted before the custom IP prefix can be decommissioned.
+>
+> The estimated time to fully complete the decommissioning process is 3-4 hours.
+
+Use the following commands in Azure CLI or Azure PowerShell to begin the process of stopping the advertisement of the range from Azure. The operation is asynchronous; use the view commands above to retrieve the status. The **CommissionedState** field initially shows the prefix as **Decommissioning**, and then as **Provisioned** once it returns to the earlier state. Advertisement removal is gradual, and the range is partially advertised while still in **Decommissioning**.
+
+**Commands**
+
+|Tool|Command|
+|||
+|Azure portal|Use the **Decommission** option in the Overview section of a Custom IP Prefix |
+|CLI|[az network custom-ip prefix update](/cli/azure/network/custom-ip/prefix#az-network-custom-ip-prefix-update) with the `--state` flag set to `decommission` |
+|PowerShell|[Update-AzCustomIpPrefix](/powershell/module/az.network/update-azcustomipprefix) with the `-Decommission` switch |
+
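+For example, a minimal PowerShell sketch that reuses the **$myCustomIpPrefix** variable from the provisioning steps:
+
+```azurepowershell-interactive
+# Begin withdrawing the advertisement; poll CommissionedState afterward.
+Update-AzCustomIpPrefix -ResourceId $myCustomIpPrefix.Id -Decommission
+```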
+
+## Deprovision/Delete a custom IP prefix
+
+To fully remove a custom IP prefix, it must be deprovisioned and then deleted.
+
+> [!NOTE]
+> If there's a requirement to migrate a provisioned range from one region to another, the original custom IP prefix must be fully removed from the first region before a new custom IP prefix with the same address range can be created in the other region.
+>
+> The estimated time to complete the deprovisioning process can range from 30 minutes to 13 hours.
+
+The following commands can be used in Azure CLI and Azure PowerShell to deprovision and remove the range from Microsoft. The deprovisioning operation is asynchronous. You can use the view commands to retrieve the status. The **CommissionedState** field will initially show the prefix as **Deprovisioning**, followed by **Deprovisioned** as it transitions to the earlier state. When the range is in the **Deprovisioned** state, it can be deleted by using the commands to remove.
+
+**Commands**
+
+|Tool|Command|
+|||
+|Azure portal|Use the **Deprovision** option in the Overview section of a Custom IP Prefix |
+|CLI|[az network custom-ip prefix update](/cli/azure/network/custom-ip/prefix#az-network-custom-ip-prefix-update) with the `--state` flag set to `deprovision` <br>[az network custom-ip prefix delete](/cli/azure/network/custom-ip/prefix#az-network-custom-ip-prefix-delete) to remove|
+|PowerShell|[Update-AzCustomIpPrefix](/powershell/module/az.network/update-azcustomipprefix) with the `-Deprovision` switch<br>[Remove-AzCustomIpPrefix](/powershell/module/az.network/remove-azcustomipprefix) to remove|
+
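+For example, a minimal PowerShell sketch (the names are hypothetical; the prefix must reach the **Deprovisioned** state before the delete succeeds):
+
+```azurepowershell-interactive
+# Begin deprovisioning the range.
+Update-AzCustomIpPrefix -ResourceId $myCustomIpPrefix.Id -Deprovision
+
+# After CommissionedState shows Deprovisioned, delete the resource.
+Remove-AzCustomIpPrefix -Name 'myCustomIPPrefix' -ResourceGroupName 'myResourceGroup'
+```
+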
+Alternatively, a custom IP prefix can be deprovisioned via the Azure portal using the **Deprovision** button in the **Overview** section of the custom IP prefix, and then deleted using the **Delete** button in the same section.
+
+## Permissions
+
+For permissions to manage public IP address prefixes, your account must be assigned to the [network contributor](../../role-based-access-control/built-in-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json#network-contributor) role or to a [custom](../../role-based-access-control/custom-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json) role.
+
+| Action | Name |
+| | - |
+| Microsoft.Network/customIPPrefixes/read | Read a custom IP address prefix |
+| Microsoft.Network/customIPPrefixes/write | Create or update a custom IP address prefix |
+| Microsoft.Network/customIPPrefixes/delete | Delete a custom IP address prefix |
+| Microsoft.Network/customIPPrefixes/join/action | Create a public IP prefix from a custom IP address prefix |
+
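+As an illustration, here's a sketch of assigning the built-in role at resource-group scope (the sign-in name and resource group are hypothetical):
+
+```azurepowershell-interactive
+# Assign the Network Contributor role to a user for a single resource group.
+New-AzRoleAssignment -SignInName 'user@contoso.com' `
+    -RoleDefinitionName 'Network Contributor' `
+    -ResourceGroupName 'myResourceGroup'
+```
+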
+## Troubleshooting and FAQs
+
+This section provides answers for common questions about custom IP prefix resources and the provisioning and removal processes.
+
+### A "ValidationFailed" error is returned after a new custom IP prefix creation
+
+A quick provisioning failure is most likely a prefix validation error, which indicates that we're unable to verify your ownership of the range, Microsoft's permission to advertise it, or the association of the range with the given subscription. To view the specific error, use the **JSON view** of the custom IP prefix resource in the **Overview** section and check the **failedReason** field. The JSON view also displays the Route Origin Authorization, the signed message on the prefix records, and other details of the submission. Delete the custom IP prefix resource and create a new one with the correct information.
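+
+To check the same detail from PowerShell instead of the portal, here's a sketch that assumes the **failedReason** field surfaces on the PowerShell object as a **FailedReason** property:
+
+```azurepowershell-interactive
+# Inspect why validation failed on the prefix (names are hypothetical).
+$prefix = Get-AzCustomIpPrefix -Name 'myCustomIPPrefix' -ResourceGroupName 'myResourceGroup'
+$prefix.FailedReason
+```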
+
+### After updating a custom IP prefix to advertise, it transitions to a "CommissioningFailed" status
+
+If a custom IP prefix can't be fully advertised, it moves to a **CommissioningFailed** status. In that case, rerun the command that updates the range to a commissioned status.
+
+### I'm unable to decommission a custom IP prefix
+
+Before you decommission a custom IP prefix, ensure it has no public IP prefixes or public IP addresses.
+
+### How can I migrate a range from one region to another?
+
+To migrate a custom IP prefix, it must first be deprovisioned from one region. A new custom IP prefix with the same CIDR can then be created in another region.
+
+## Next steps
+
+- To learn about scenarios and benefits of using a custom IP prefix, see [Custom IP address prefix (BYOIP)](custom-ip-address-prefix.md)
+
+- To create a custom IP address prefix using the Azure portal, see [Create custom IP address prefix using the Azure portal](create-custom-ip-address-prefix-portal.md)
+
+- To create a custom IP address prefix using PowerShell, see [Create a custom IP address prefix using Azure PowerShell](create-custom-ip-address-prefix-powershell.md)