Updates from: 08/27/2022 01:15:18
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Configure Authentication Sample React Spa App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-react-spa-app.md
In the sample folder, open the *config.json* file. This file contains informatio
|Section |Key |Value |
|---|---|---|
-|credentials|tenantName| The first part of your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name). For example: `contoso`.|
+|credentials|tenantName| Your Azure AD B2C [domain/tenant name](tenant-management.md#get-your-tenant-name). For example: `contoso.onmicrosoft.com`.|
|credentials|clientID| The web API application ID from step [2.1](#21-register-the-web-api-application). In the [earlier diagram](#app-registration), it's the application with **App ID: 2**.|
|credentials| issuer| (Optional) The token issuer `iss` claim value. Azure AD B2C by default returns the token in the following format: `https://<your-tenant-name>.b2clogin.com/<your-tenant-ID>/v2.0/`. Replace `<your-tenant-name>` with the first part of your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name). Replace `<your-tenant-ID>` with your [Azure AD B2C tenant ID](tenant-management.md#get-your-tenant-id). |
|policies|policyName|The user flow or custom policy that you created in [step 1](#step-1-configure-your-user-flow). If your application uses multiple user flows or custom policies, specify only one. For example, use the sign-up or sign-in user flow.|
Your final configuration file should look like the following JSON:
```json
{
  "credentials": {
- "tenantName": "<your-tenant-name>",
+ "tenantName": "<your-tenant-name>.ommicrosoft.com",
"clientID": "<your-webapi-application-ID>", "issuer": "https://<your-tenant-name>.b2clogin.com/<your-tenant-ID>/v2.0/" },
active-directory-b2c Microsoft Graph Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/microsoft-graph-operations.md
Note, the [list](/graph/api/authentication-list-phonemethods) operation returns
An email address that can be used by a [username sign-in account](sign-in-options.md#username-sign-in) to reset the password. For more information, see [Azure AD authentication methods API](/graph/api/resources/emailauthenticationmethod).
-- [Add](/graph/api/emailauthenticationmethod-post)
-- [List](/graph/api/emailauthenticationmethod-list)
+- [Add](/graph/api/authentication-post-emailmethods)
+- [List](/graph/api/authentication-list-emailmethods)
- [Get](/graph/api/emailauthenticationmethod-get)
- [Update](/graph/api/emailauthenticationmethod-update)
- [Delete](/graph/api/emailauthenticationmethod-delete)
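As a rough illustration, the List operation linked above corresponds to a Graph request against a user's `authentication/emailMethods` collection. A minimal sketch, assuming you already hold an access token with suitable permissions (obtaining it is not shown):

```typescript
// Sketch: list a user's email authentication methods via Microsoft Graph.
// `accessToken` is assumed to carry permissions such as
// UserAuthenticationMethod.Read.All; acquiring it is out of scope here.
async function listEmailMethods(userId: string, accessToken: string) {
  const res = await fetch(
    `https://graph.microsoft.com/v1.0/users/${userId}/authentication/emailMethods`,
    { headers: { Authorization: `Bearer ${accessToken}` } }
  );
  if (!res.ok) throw new Error(`Graph request failed: ${res.status}`);
  return (await res.json()).value; // array of emailAuthenticationMethod objects
}
```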
active-directory On Premises Scim Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-scim-provisioning.md
Previously updated : 07/05/2022 Last updated : 08/25/2022
The Azure Active Directory (Azure AD) provisioning service supports a [SCIM 2.0]
- An Azure AD tenant with Azure AD Premium P1 or Premium P2 (or EMS E3 or E5). [!INCLUDE [active-directory-p1-license.md](../../../includes/active-directory-p1-license.md)]
- Administrator role for installing the agent. This task is a one-time effort and should be an Azure account that's either a hybrid administrator or a global administrator.
- Administrator role for configuring the application in the cloud (application administrator, cloud application administrator, global administrator, or a custom role with permissions).
+- A computer with at least 3 GB of RAM, to host a provisioning agent. The computer should have Windows Server 2016 or a later version of Windows Server, with connectivity to the target application, and with outbound connectivity to login.microsoftonline.com, other Microsoft Online Services and Azure domains. An example is a Windows Server 2016 virtual machine hosted in Azure IaaS or behind a proxy.
## Deploying Azure AD provisioning agent
The Azure AD provisioning agent can be deployed on the same server that hosts a SCIM-enabled application, or on a separate server, provided it has line of sight to the application's SCIM endpoint. A single agent also supports provisioning to multiple applications hosted locally on the same server or on separate hosts, again as long as each SCIM endpoint is reachable by the agent.
Once the agent is installed, no further configuration is necessary on-prem, and a
12. Go to the **Provisioning** pane, and select **Start provisioning**.
13. Monitor using the [provisioning logs](../../active-directory/reports-monitoring/concept-provisioning-logs.md).
+The following video provides an overview of on-premises provisioning.
+> [!VIDEO https://www.youtube.com/embed/QdfdpaFolys]
+
## Additional requirements
* Ensure your [SCIM](https://techcommunity.microsoft.com/t5/identity-standards-blog/provisioning-with-scim-getting-started/ba-p/880010) implementation meets the [Azure AD SCIM requirements](use-scim-to-provision-users-and-groups.md).
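Before starting provisioning, it can help to confirm that the agent host actually has line of sight to the application's SCIM endpoint. A minimal sketch, assuming a hypothetical local endpoint URL and bearer token:

```typescript
// Hypothetical connectivity check against a SCIM 2.0 endpoint.
// The base URL and token are placeholders - substitute your application's values.
async function checkScimEndpoint(baseUrl: string, token: string): Promise<boolean> {
  // /ServiceProviderConfig is a standard SCIM 2.0 discovery resource (RFC 7644).
  const res = await fetch(`${baseUrl}/ServiceProviderConfig`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  console.log(`SCIM endpoint responded with HTTP ${res.status}`);
  return res.ok;
}

checkScimEndpoint("https://localhost:8443/scim", "<your-token>");
```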
active-directory Concept Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/concept-attributes.md
description: This article describes the Azure AD schema, the attributes that the
documentationcenter: '' -+ editor: ''
To view the schema and verify it, follow these steps.
## Next steps
- [What is provisioning?](what-is-provisioning.md)
-- [What is Azure AD Connect cloud sync?](what-is-cloud-sync.md)
+- [What is Azure AD Connect cloud sync?](what-is-cloud-sync.md)
active-directory Concept How It Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/concept-how-it-works.md
Title: 'Azure AD Connect cloud sync deep dive - how it works'
description: This topic provides deep dive information on how cloud sync works. -+
active-directory How To Accidental Deletes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-accidental-deletes.md
Title: 'Azure AD Connect cloud sync accidental deletes'
description: This topic describes how to use the accidental delete feature to prevent deletions. -+
active-directory How To Attribute Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-attribute-mapping.md
Title: 'Attribute mapping in Azure AD Connect cloud sync'
description: This article describes how to use the cloud sync feature of Azure AD Connect to map attributes. -+
To test your attribute mapping, you can use [on-demand provisioning](how-to-on-d
- [What is Azure AD Connect cloud sync?](what-is-cloud-sync.md)
- [Writing expressions for attribute mappings](reference-expressions.md)
- [How to use expression builder with cloud sync](how-to-expression-builder.md)
-- [Attributes synchronized to Azure Active Directory](../hybrid/reference-connect-sync-attributes-synchronized.md?context=azure%2factive-directory%2fcloud-provisioning%2fcontext%2fcp-context/hybrid/reference-connect-sync-attributes-synchronized.md)
+- [Attributes synchronized to Azure Active Directory](../hybrid/reference-connect-sync-attributes-synchronized.md?context=azure%2factive-directory%2fcloud-provisioning%2fcontext%2fcp-context/hybrid/reference-connect-sync-attributes-synchronized.md)
active-directory How To Automatic Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-automatic-upgrade.md
description: This article describes the built-in automatic upgrade feature in th
documentationcenter: '' -+ editor: ''
active-directory How To Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-configure.md
Title: 'Azure AD Connect cloud sync new agent configuration'
description: This article describes how to install cloud sync. -+
active-directory How To Expression Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-expression-builder.md
Title: 'Use the expression builder with Azure AD Connect cloud sync'
description: This article describes how to use the expression builder with cloud sync. -+
active-directory How To Gmsa Cmdlets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-gmsa-cmdlets.md
Title: 'Azure AD Connect cloud provisioning agent gMSA PowerShell cmdlets'
description: Learn how to use the Azure AD Connect cloud provisioning agent gMSA PowerShell cmdlets. -+
active-directory How To Inbound Synch Ms Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-inbound-synch-ms-graph.md
Title: 'How to programmatically configure cloud sync using MS Graph API'
description: This topic describes how to enable inbound synchronization using just the Graph API -+
active-directory How To Install Pshell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-install-pshell.md
Title: 'Install the Azure AD Connect cloud provisioning agent using a command-li
description: Learn how to install the Azure AD Connect cloud provisioning agent by using PowerShell cmdlets. -+
active-directory How To Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-install.md
Title: 'Install the Azure AD Connect provisioning agent'
description: Learn how to install the Azure AD Connect provisioning agent and how to configure it in the Azure portal. -+
active-directory How To Manage Registry Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-manage-registry-options.md
description: This article describes how to manage registry options in the Azure
documentationcenter: '' -+ editor: ''
active-directory How To Map Usertype https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-map-usertype.md
Title: 'Use map UserType with Azure AD Connect cloud sync'
description: This article describes how to map the UserType attribute with cloud sync. -+
active-directory How To On Demand Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-on-demand-provision.md
Title: 'On-demand provisioning in Azure AD Connect cloud sync'
description: This article describes how to use the cloud sync feature of Azure AD Connect to test configuration changes. -+
This process enables you to trace the attribute transformation as it moves throu
- [What is Azure AD Connect cloud sync?](what-is-cloud-sync.md)
- [Install Azure AD Connect cloud sync](how-to-install.md)
-
+
active-directory How To Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-prerequisites.md
Title: 'Prerequisites for Azure AD Connect cloud sync in Azure AD'
description: This article describes the prerequisites and hardware requirements you need for cloud sync. -+
active-directory How To Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-sso.md
Title: 'How to use Single Sign-on with cloud sync'
description: This article describes how to install and use SSO with cloud sync. -+
active-directory How To Transformation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-transformation.md
Title: Azure AD Connect cloud sync transformations
description: This article describes how to use transformations to alter the default attribute mappings. -+ Last updated 12/02/2019 ms.prod: windows-server-threshold
active-directory How To Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-troubleshoot.md
Title: Azure AD Connect cloud sync troubleshooting
description: This article describes how to troubleshoot problems that might arise with the cloud provisioning agent. -+ Last updated 10/13/2021 ms.prod: windows-server-threshold
active-directory Plan Cloud Sync Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/plan-cloud-sync-topologies.md
Title: Azure AD Connect cloud sync supported topologies and scenarios
description: Learn about various on-premises and Azure Active Directory (Azure AD) topologies that use Azure AD Connect cloud sync. -+
active-directory Reference Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/reference-error-codes.md
Title: Azure AD Connect cloud sync error codes and descriptions
description: Reference article for cloud sync error codes -+
The following is a list of error codes and their descriptions
## Next steps
- [What is provisioning?](what-is-provisioning.md)
-- [What is Azure AD Connect cloud sync?](what-is-cloud-sync.md)
+- [What is Azure AD Connect cloud sync?](what-is-cloud-sync.md)
active-directory Reference Expressions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/reference-expressions.md
Title: Azure AD Connect cloud sync expressions and function reference
description: reference -+
active-directory Reference Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/reference-powershell.md
Title: 'AADCloudSyncTools PowerShell module for Azure AD Connect cloud sync'
description: This article describes how to install the Azure AD Connect cloud provisioning agent. -+
active-directory Reference Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/reference-version-history.md
Title: 'Azure AD Connect cloud provisioning agent: Version release history | Mic
description: This article lists all releases of Azure AD Connect cloud provisioning agent and describes new features and fixed issues -+
# Azure AD Connect cloud provisioning agent: Version release history
active-directory Tutorial Basic Ad Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/tutorial-basic-ad-azure.md
Title: Tutorial - Basic Active Directory on-premises and Azure AD environment.
description: Learn how to create a basic AD and Azure AD environment. -+
Now you have an environment that can be used for existing tutorials and to test
## Next steps
- [What is provisioning?](what-is-provisioning.md)
-- [What is Azure AD Connect cloud sync?](what-is-cloud-sync.md)
+- [What is Azure AD Connect cloud sync?](what-is-cloud-sync.md)
active-directory Tutorial Existing Forest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/tutorial-existing-forest.md
Title: Tutorial - Integrate an existing forest and a new forest with a single Az
description: Learn how to add cloud sync to an existing hybrid identity environment. -+
active-directory Tutorial Pilot Aadc Aadccp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/tutorial-pilot-aadc-aadccp.md
Title: Tutorial - Pilot Azure AD Connect cloud sync for an existing synced AD fo
description: Learn how to pilot cloud sync for a test Active Directory forest that is already synced using Azure Active Directory (Azure AD) Connect sync. -+
active-directory Tutorial Single Forest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/tutorial-single-forest.md
Title: Tutorial - Integrate a single forest with a single Azure AD tenant
description: This topic describes the prerequisites and the hardware requirements for cloud sync. -+
active-directory What Is Cloud Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/what-is-cloud-sync.md
Title: 'What is Azure AD Connect cloud sync? | Microsoft Docs'
description: Describes Azure AD Connect cloud sync. -+
active-directory What Is Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/what-is-provisioning.md
Title: 'What is identity provisioning with Azure AD? | Microsoft Docs'
description: Describes overview of identity provisioning. -+
This has been accomplished by Azure AD Connect sync, Azure AD Connect cloud prov
## Next steps
- [What is Azure AD Connect cloud sync?](what-is-cloud-sync.md)
-- [Install cloud provisioning](how-to-install.md)
+- [Install cloud provisioning](how-to-install.md)
active-directory Reply Url https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reply-url.md
Title: Redirect URI (reply URL) restrictions | Azure AD
+ Title: Redirect URI (reply URL) restrictions
description: A description of the restrictions and limitations on redirect URI (reply URL) format enforced by the Microsoft identity platform.
Previously updated : 09/03/2021 Last updated : 08/25/2022
# Redirect URI (reply URL) restrictions and limitations
You can use a maximum of 256 characters for each redirect URI you add to an app
* Always add redirect URIs to the application object only.
* Do not add redirect URI values to a service principal because these values could be removed when the service principal object syncs with the application object. This could happen due to any update operation that triggers a sync between the two objects.
+## Query parameter support in redirect URIs
+
+Query parameters are **allowed** in redirect URIs for applications that *only* sign in users with work or school accounts.
+
+Query parameters are **not allowed** in redirect URIs for any app registration configured to sign in users with personal Microsoft accounts like Outlook.com (Hotmail), Messenger, OneDrive, MSN, Xbox Live, or Microsoft 365.
+
+| App registration sign-in audience | Supports query parameters in redirect URI |
+||-|
+| Accounts in this organizational directory only (Contoso only - Single tenant) | :::image type="icon" source="media/common/yes.png" border="false"::: |
+| Accounts in any organizational directory (Any Azure AD directory - Multitenant) | :::image type="icon" source="media/common/yes.png" border="false"::: |
+| Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox) | :::image type="icon" source="media/common/no.png" border="false"::: |
+| Personal Microsoft accounts only | :::image type="icon" source="media/common/no.png" border="false"::: |
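To make the rule concrete, here's a small illustrative helper (not part of the article) that mirrors the table above; the audience values follow the app manifest's `signInAudience` field:

```typescript
// Illustrative only: maps an app registration's signInAudience to whether
// query parameters are allowed in its redirect URIs, per the table above.
type SignInAudience =
  | "AzureADMyOrg"                       // single tenant
  | "AzureADMultipleOrgs"                // multitenant
  | "AzureADandPersonalMicrosoftAccount" // multitenant + personal accounts
  | "PersonalMicrosoftAccount";          // personal accounts only

function allowsQueryParamsInRedirectUri(audience: SignInAudience): boolean {
  return audience === "AzureADMyOrg" || audience === "AzureADMultipleOrgs";
}
```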
+
## Supported schemes
**HTTPS**: The HTTPS scheme (`https://`) is supported for all HTTP-based redirect URIs.
active-directory V2 Oauth2 Implicit Grant Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth2-implicit-grant-flow.md
Previously updated : 07/19/2021 Last updated : 08/18/2022
# Microsoft identity platform and implicit grant flow
-The Microsoft identity platform supports the OAuth 2.0 Implicit Grant flow as described in the [OAuth 2.0 Specification](https://tools.ietf.org/html/rfc6749#section-4.2). The defining characteristic of the implicit grant is that tokens (ID tokens or access tokens) are returned directly from the /authorize endpoint instead of the /token endpoint. This is often used as part of the [authorization code flow](v2-oauth2-auth-code-flow.md), in what is called the "hybrid flow" - retrieving the ID token on the /authorize request along with an authorization code.
+The Microsoft identity platform supports the OAuth 2.0 implicit grant flow as described in the [OAuth 2.0 Specification](https://tools.ietf.org/html/rfc6749#section-4.2). The defining characteristic of the implicit grant is that tokens (ID tokens or access tokens) are returned directly from the /authorize endpoint instead of the /token endpoint. This is often used as part of the [authorization code flow](v2-oauth2-auth-code-flow.md), in what is called the "hybrid flow" - retrieving the ID token on the /authorize request along with an authorization code.
[!INCLUDE [suggest-msal-from-protocols](includes/suggest-msal-from-protocols.md)]
The Microsoft identity platform supports the OAuth 2.0 Implicit Grant flow as de
## Prefer the auth code flow
-With the plans for [third party cookies to be removed from browsers](reference-third-party-cookies-spas.md), the **implicit grant flow is no longer a suitable authentication method**. The [silent SSO features](#getting-access-tokens-silently-in-the-background) of the implicit flow do not work without third party cookies, causing applications to break when they attempt to get a new token. We strongly recommend that all new applications use the [authorization code flow](v2-oauth2-auth-code-flow.md) that now supports single page apps in place of the implicit flow, and that [existing single page apps begin migrating to the authorization code flow](migrate-spa-implicit-to-auth-code.md) as well.
+With the plans for [removing third party cookies from browsers](reference-third-party-cookies-spas.md), the **implicit grant flow is no longer a suitable authentication method**. The [silent single sign-on (SSO) features](#acquire-access-tokens-silently) of the implicit flow do not work without third party cookies, causing applications to break when they attempt to get a new token. We strongly recommend that all new applications use the [authorization code flow](v2-oauth2-auth-code-flow.md) that now supports single-page apps in place of the implicit flow. Existing single-page apps should also [migrate to the authorization code flow](migrate-spa-implicit-to-auth-code.md).
## Suitable scenarios for the OAuth2 implicit grant
-The implicit grant is only reliable for the initial, interactive portion of your sign in flow, where the lack of [third party cookies](reference-third-party-cookies-spas.md) cannot impact your application. This limitation means you should use it exclusively as part of the hybrid flow, where your application requests a code as well as a token from the authorization endpoint. This ensures that your application receives a code that can be redeemed for a refresh token, thus ensuring your app's login session remains valid over time.
+The implicit grant is only reliable for the initial, interactive portion of your sign-in flow, where the lack of [third party cookies](reference-third-party-cookies-spas.md) doesn't impact your application. This limitation means you should use it exclusively as part of the hybrid flow, where your application requests a code as well as a token from the authorization endpoint. In a hybrid flow, your application receives a code that can be redeemed for a refresh token, thus ensuring your app's login session remains valid over time.
## Protocol diagram
-The following diagram shows what the entire implicit sign-in flow looks like and the sections that follow describe each step in more detail.
+The following diagram shows what the entire implicit sign-in flow looks like and the sections that follow describe each step in detail.
![Diagram showing the implicit sign-in flow](./media/v2-oauth2-implicit-grant-flow/convergence-scenarios-implicit.svg)
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
&nonce=678910 ```
-> [!TIP]
-> To test signing in using the implicit flow, click <a href="https://login.microsoftonline.com/common/oauth2/v2.0/authorize?client_id=6731de76-14a6-49ae-97bc-6eba6914391e&response_type=id_token&redirect_uri=http%3A%2F%2Flocalhost%2Fmyapp%2F&scope=openid&response_mode=fragment&state=12345&nonce=678910" target="_blank">https://login.microsoftonline.com/common/oauth2/v2.0/authorize...</a> After signing in, your browser should be redirected to `https://localhost/myapp/` with an `id_token` in the address bar.
->
-
| Parameter | Type | Description |
| --- | --- | --- |
| `tenant` | required |The `{tenant}` value in the path of the request can be used to control who can sign into the application. The allowed values are `common`, `organizations`, `consumers`, and tenant identifiers. For more detail, see [protocol basics](active-directory-v2-protocols.md#endpoints). Critically, for guest scenarios where you sign a user from one tenant into another tenant, you *must* provide the tenant identifier to correctly sign them into the resource tenant.|
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
| `nonce` | required |A value included in the request, generated by the app, that will be included in the resulting id_token as a claim. The app can then verify this value to mitigate token replay attacks. The value is typically a randomized, unique string that can be used to identify the origin of the request. Only required when an id_token is requested. |
| `prompt` | optional |Indicates the type of user interaction that is required. The only valid values at this time are 'login', 'none', 'select_account', and 'consent'. `prompt=login` will force the user to enter their credentials on that request, negating single sign-on. `prompt=none` is the opposite - it will ensure that the user isn't presented with any interactive prompt whatsoever. If the request can't be completed silently via single sign-on, the Microsoft identity platform will return an error. `prompt=select_account` sends the user to an account picker where all of the accounts remembered in the session will appear. `prompt=consent` will trigger the OAuth consent dialog after the user signs in, asking the user to grant permissions to the app. |
| `login_hint` | optional | You can use this parameter to pre-fill the username and email address field of the sign-in page for the user, if you know the username ahead of time. Often, apps use this parameter during reauthentication, after already extracting the `login_hint` [optional claim](active-directory-optional-claims.md) from an earlier sign-in. |
-| `domain_hint` | optional |If included, it will skip the email-based discovery process that user goes through on the sign in page, leading to a slightly more streamlined user experience. This parameter is commonly used for Line of Business apps that operate in a single tenant, where they will provide a domain name within a given tenant, forwarding the user to the federation provider for that tenant. Note that this hint prevents guests from signing into this application, and limits the use of cloud credentials like FIDO. |
+| `domain_hint` | optional |If included, it will skip the email-based discovery process that user goes through on the sign-in page, leading to a slightly more streamlined user experience. This parameter is commonly used for Line of Business apps that operate in a single tenant, where they'll provide a domain name within a given tenant, forwarding the user to the federation provider for that tenant. This hint prevents guests from signing into this application, and limits the use of cloud credentials like FIDO. |
At this point, the user will be asked to enter their credentials and complete the authentication. The Microsoft identity platform will also ensure that the user has consented to the permissions indicated in the `scope` query parameter. If the user has consented to **none** of those permissions, it will ask the user to consent to the required permissions. For more info, see [permissions, consent, and multi-tenant apps](v2-permissions-and-consent.md).
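For orientation, the kind of request these parameters produce can be assembled like this. A sketch reusing the article's sample client ID and placeholder values; substitute your own app's registration:

```typescript
// Sketch: build an implicit-flow sign-in request URL from the parameters above.
// Values mirror the article's example request; replace them with your own.
const params = new URLSearchParams({
  client_id: "6731de76-14a6-49ae-97bc-6eba6914391e",
  response_type: "id_token",
  redirect_uri: "http://localhost/myapp/",
  scope: "openid",
  response_mode: "fragment",
  state: "12345",  // round-tripped and verified by the app
  nonce: "678910", // echoed back inside the id_token; verify it too
});

const authorizeUrl =
  `https://login.microsoftonline.com/common/oauth2/v2.0/authorize?${params}`;
window.location.assign(authorizeUrl); // browser context assumed
```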
code=0.AgAAktYV-sfpYESnQynylW_UKZmH-C9y_G1A
| Parameter | Description |
| --- | --- |
-| `code` | Included if `response_type` includes `code`. This is an authorization code suitable for use in the [authorization code flow](v2-oauth2-auth-code-flow.md). |
+| `code` | Included if `response_type` includes `code`. It's an authorization code suitable for use in the [authorization code flow](v2-oauth2-auth-code-flow.md). |
| `access_token` |Included if `response_type` includes `token`. The access token that the app requested. The access token shouldn't be decoded or otherwise inspected, it should be treated as an opaque string. |
| `token_type` |Included if `response_type` includes `token`. Will always be `Bearer`. |
| `expires_in`|Included if `response_type` includes `token`. Indicates the number of seconds the token is valid, for caching purposes. |
-| `scope` |Included if `response_type` includes `token`. Indicates the scope(s) for which the access_token will be valid. May not include all of the scopes requested, if they were not applicable to the user (in the case of Azure AD-only scopes being requested when a personal account is used to log in). |
+| `scope` |Included if `response_type` includes `token`. Indicates the scope(s) for which the access_token will be valid. May not include all the requested scopes if they weren't applicable to the user. For example, Azure AD-only scopes requested when logging in using a personal account. |
| `id_token` | A signed JSON Web Token (JWT). The app can decode the segments of this token to request information about the user who signed in. The app can cache the values and display them, but it shouldn't rely on them for any authorization or security boundaries. For more information about id_tokens, see the [`id_token` reference](id-tokens.md). <br> **Note:** Only provided if `openid` scope was requested and `response_type` included `id_token`. |
| `state` |If a state parameter is included in the request, the same value should appear in the response. The app should verify that the state values in the request and response are identical. |
error=access_denied
| `error` |An error code string that can be used to classify types of errors that occur, and can be used to react to errors. |
| `error_description` |A specific error message that can help a developer identify the root cause of an authentication error. |
-## Getting access tokens silently in the background
+## Acquire access tokens silently
> [!Important]
-> This part of the implicit flow is unlikely to work for your application as it's used across different browsers due to the [removal of third party cookies by default](reference-third-party-cookies-spas.md). While this still currently works in Chromium-based browsers that are not in Incognito, developers should reconsider using this part of the flow. In browsers that do not support third party cookies, you will recieve an error indicating that no users are signed in, as the login page's session cookies were removed by the browser.
+> This part of the implicit flow is unlikely to work for your application as it's used across different browsers due to the [removal of third party cookies by default](reference-third-party-cookies-spas.md). While this still currently works in Chromium-based browsers that are not in Incognito, developers should reconsider using this part of the flow. In browsers that do not support third party cookies, you will receive an error indicating that no users are signed in, as the login page's session cookies were removed by the browser.
-Now that you've signed the user into your single-page app, you can silently get access tokens for calling web APIs secured by Microsoft identity platform, such as the [Microsoft Graph](https://developer.microsoft.com/graph). Even if you already received a token using the `token` response_type, you can use this method to acquire tokens to additional resources without having to redirect the user to sign in again.
+Now that you've signed the user into your single-page app, you can silently get access tokens for calling web APIs secured by Microsoft identity platform, such as the [Microsoft Graph](https://developer.microsoft.com/graph). Even if you already received a token using the `token` response_type, you can use this method to acquire tokens to additional resources without redirecting the user to sign in again.
In the normal OpenID Connect/OAuth flow, you would do this by making a request to the Microsoft identity platform `/token` endpoint. You can make the request in a hidden iframe to get new tokens for other web APIs:
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
For details on the query parameters in the URL, see [send the sign in request](#send-the-sign-in-request).

> [!TIP]
-> Try copy & pasting the below request into a browser tab! (Don't forget to replace the `login_hint` values with the correct value for your user)
+> Try copy & pasting the request below into a browser tab! (Don't forget to replace the `login_hint` values with the correct value for your user)
> >`https://login.microsoftonline.com/common/oauth2/v2.0/authorize?client_id=6731de76-14a6-49ae-97bc-6eba6914391e&response_type=token&redirect_uri=http%3A%2F%2Flocalhost%2Fmyapp%2F&scope=https%3A%2F%2Fgraph.microsoft.com%2Fuser.read&response_mode=fragment&state=12345&nonce=678910&prompt=none&login_hint={your-username}` >
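In application code, the hidden-iframe technique described here might look roughly like the following. A sketch, not the article's implementation; libraries such as MSAL.js handle this for you and should be preferred:

```typescript
// Sketch: silently renew a token by loading a prompt=none request in a hidden iframe.
// Real apps should use a library such as MSAL.js rather than hand-rolling this.
function renewTokenSilently(authorizeUrlWithPromptNone: string): void {
  const iframe = document.createElement("iframe");
  iframe.style.display = "none";
  iframe.src = authorizeUrlWithPromptNone; // must include prompt=none
  document.body.appendChild(iframe);
  // The response arrives in the iframe's URL fragment on your redirect URI;
  // the page loaded there should post the fragment back to the parent window.
}
```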
access_token=eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Ik5HVEZ2ZEstZnl0aEV1Q..
| `access_token` |Included if `response_type` includes `token`. The access token that the app requested, in this case for the Microsoft Graph. The access token shouldn't be decoded or otherwise inspected, it should be treated as an opaque string. |
| `token_type` | Will always be `Bearer`. |
| `expires_in` | Indicates the number of seconds the token is valid, for caching purposes. |
-| `scope` | Indicates the scope(s) for which the access_token will be valid. May not include all of the scopes requested, if they were not applicable to the user (in the case of Azure AD-only scopes being requested when a personal account is used to log in). |
+| `scope` | Indicates the scope(s) for which the access_token will be valid. May not include all of the scopes requested, if they weren't applicable to the user (in the case of Azure AD-only scopes being requested when a personal account is used to log in). |
| `id_token` | A signed JSON Web Token (JWT). Included if `response_type` includes `id_token`. The app can decode the segments of this token to request information about the user who signed in. The app can cache the values and display them, but it shouldn't rely on them for any authorization or security boundaries. For more information about id_tokens, see the [`id_token` reference](id-tokens.md). <br> **Note:** Only provided if `openid` scope was requested. |
| `state` |If a state parameter is included in the request, the same value should appear in the response. The app should verify that the state values in the request and response are identical. |
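Because these values arrive in the URL fragment, the app has to parse them out of `location.hash`. A minimal sketch:

```typescript
// Sketch: parse the implicit-flow response parameters out of the URL fragment.
function parseFragmentResponse(hash: string): Record<string, string> {
  const params = new URLSearchParams(hash.replace(/^#/, ""));
  return Object.fromEntries(params.entries());
}

const response = parseFragmentResponse(window.location.hash);
// Compare against the state value your app generated before redirecting,
// e.g. one stashed in sessionStorage (key name is a placeholder).
if (response.state !== sessionStorage.getItem("auth_state")) {
  throw new Error("State mismatch - discard this response");
}
```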
The implicit grant does not provide refresh tokens. Both `id_token`s and `access
In browsers that do not support third party cookies, this will result in an error indicating that no user is signed in.
-## Send a sign out request
+## Send a sign-out request
-The OpenID Connect `end_session_endpoint` allows your app to send a request to the Microsoft identity platform to end a user's session and clear cookies set by the Microsoft identity platform . To fully sign a user out of a web application, your app should end its own session with the user (usually by clearing a token cache or dropping cookies), and then redirect the browser to:
+The OpenID Connect `end_session_endpoint` allows your app to send a request to the Microsoft identity platform to end a user's session and clear cookies set by the Microsoft identity platform. To fully sign a user out of a web application, your app should end its own session with the user (usually by clearing a token cache or dropping cookies), and then redirect the browser to:
```
https://login.microsoftonline.com/{tenant}/oauth2/v2.0/logout?post_logout_redirect_uri=https://localhost/myapp/
```
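In a single-page app, that redirect might be issued like this. A sketch; the tenant and post-logout URI are placeholders:

```typescript
// Sketch: clear local session state, then redirect to the logout endpoint.
function signOut(tenant: string, postLogoutRedirectUri: string): void {
  sessionStorage.clear(); // drop the app's own token cache first
  const url =
    `https://login.microsoftonline.com/${tenant}/oauth2/v2.0/logout` +
    `?post_logout_redirect_uri=${encodeURIComponent(postLogoutRedirectUri)}`;
  window.location.assign(url);
}

signOut("common", "https://localhost/myapp/");
```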
active-directory 7 Secure Access Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/7-secure-access-conditional-access.md
Previously updated : 01/25/2022 Last updated : 08/26/2022
active-directory Multi Tenant Common Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/multi-tenant-common-considerations.md
Previously updated : 10/19/2021 Last updated : 08/26/2022
active-directory Multi Tenant Common Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/multi-tenant-common-solutions.md
Previously updated : 09/25/2021 Last updated : 08/26/2022
active-directory Multi Tenant User Management Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/multi-tenant-user-management-scenarios.md
Previously updated : 09/25/2021 Last updated : 08/26/2022
active-directory Protect M365 From On Premises Attacks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/protect-m365-from-on-premises-attacks.md
Previously updated : 04/29/2022 Last updated : 08/26/2022
active-directory Recover From Deletions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/recover-from-deletions.md
Previously updated : 04/20/2022 Last updated : 08/26/2022 -+
For more information on how to avoid unwanted deletions, see the following artic
* Business continuity and disaster planning * Document known good states
-* Monitoring and data retention
+* Monitoring and data retention
active-directory Recover From Misconfigurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/recover-from-misconfigurations.md
Previously updated : 04/20/2022 Last updated : 08/26/2022 -+
active-directory Recoverability Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/recoverability-overview.md
Previously updated : 04/20/2022 Last updated : 08/26/2022
active-directory Resilience Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/resilience-overview.md
Previously updated : 04/29/2022 Last updated : 08/26/2022
active-directory Service Accounts Introduction Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/service-accounts-introduction-azure.md
Previously updated : 04/21/2022 Last updated : 08/26/2022
active-directory Service Accounts On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/service-accounts-on-premises.md
Previously updated : 04/21/2022 Last updated : 08/26/2022
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
Title: Archive for What's new in Azure Active Directory? | Microsoft Docs
description: The What's new release notes in the Overview section of this content set contain six months of activity. After six months, the items are removed from the main article and put into this archive article. --++
active-directory Access Reviews Downloadable Review History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/access-reviews-downloadable-review-history.md
description: Using Azure Active Directory access reviews, you can download a rev
documentationcenter: '' -+ na
active-directory Access Reviews External Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/access-reviews-external-users.md
description: Use Access Reviews to extend or remove access from members of partn
documentationcenter: '' -+ na
active-directory Access Reviews Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/access-reviews-overview.md
description: Using Azure Active Directory access reviews, you can control group
documentationcenter: '' -+ editor: markwahl-msft
active-directory Complete Access Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/complete-access-review.md
description: Learn how to complete an access review of group members or applicat
documentationcenter: '' -+ editor: markwahl-msft
active-directory Conditional Access Exclusion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/conditional-access-exclusion.md
description: Learn how to use Azure Active Directory (Azure AD) access reviews t
documentationcenter: '' -+ editor: markwahl-msft
active-directory Deploy Access Reviews https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/deploy-access-reviews.md
description: Planning guide for a successful access reviews deployment.
documentationCenter: '' -+ editor:
active-directory Entitlement Management Access Package Approval Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-approval-policy.md
description: Learn how to change approval and requestor information settings for
documentationCenter: '' -+ editor:
active-directory Entitlement Management Access Package Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-assignments.md
description: Learn how to view, add, and remove assignments for an access packag
documentationCenter: '' -+ editor:
active-directory Entitlement Management Access Package Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-create.md
description: Learn how to create a new access package of resources you want to s
documentationCenter: '' -+ editor:
active-directory Entitlement Management Access Package Edit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-edit.md
description: Learn how to hide or delete an access package in Azure Active Direc
documentationCenter: '' -+ editor:
active-directory Entitlement Management Access Package First https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-first.md
description: Step-by-step tutorial for how to create your first access package u
documentationCenter: '' -+ editor: markwahl-msft
active-directory Entitlement Management Access Package Incompatible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-incompatible.md
description: Learn how to configure separation of duties enforcement for request
documentationCenter: '' -+ editor:
active-directory Entitlement Management Access Package Lifecycle Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-lifecycle-policy.md
description: Learn how to change requestor information & lifecycle settings for
documentationCenter: '' -+ editor:
active-directory Entitlement Management Access Package Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-request-policy.md
description: Learn how to change request settings for an access package in Azure
documentationCenter: '' -+ editor:
active-directory Entitlement Management Access Package Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-requests.md
description: Learn how to view and remove requests for an access package in Azur
documentationCenter: '' -+ editor:
active-directory Entitlement Management Access Package Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-resources.md
description: Learn how to change the resource roles for an existing access packa
documentationCenter: '' -+ editor:
active-directory Entitlement Management Access Package Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-settings.md
description: Learn how to share link to request an access package in Azure Activ
documentationCenter: '' -+ editor:
active-directory Entitlement Management Access Reviews Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-reviews-create.md
description: Learn how to set up an access review in a policy for entitlement ma
documentationCenter: '' -+ editor:
active-directory Entitlement Management Catalog Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-catalog-create.md
description: Learn how to create a new container of resources and access package
documentationCenter: '' -+ editor: HANKI
active-directory Entitlement Management Delegate Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-delegate-catalog.md
description: Learn how to delegate access governance from IT administrators to c
documentationCenter: '' -+ editor: markwahl-msft
active-directory Entitlement Management Delegate Managers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-delegate-managers.md
description: Learn how to delegate access governance from IT administrators to a
documentationCenter: '' -+ editor: markwahl-msft
active-directory Entitlement Management Delegate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-delegate.md
description: Learn how to delegate access governance from IT administrators to d
documentationCenter: '' -+ editor: markwahl-msft
active-directory Entitlement Management External Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-external-users.md
description: Learn about the settings you can specify to govern access for exter
documentationCenter: '' -+ editor: markwahl-msft
active-directory Entitlement Management Logic Apps Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-logic-apps-integration.md
description: Learn how to configure and use custom Logic Apps in Azure Active Di
documentationCenter: '' -+ editor:
active-directory Entitlement Management Logs And Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-logs-and-reporting.md
description: Learn how to archive logs and create reports with Azure Monitor in
documentationCenter: '' -+ editor:
active-directory Entitlement Management Organization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-organization.md
description: Learn how to allow people outside your organization to request acce
documentationCenter: '' -+ editor: markwahl-msft
active-directory Entitlement Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-overview.md
description: Get an overview of Azure Active Directory entitlement management an
documentationCenter: '' -+ editor: markwahl-msft
active-directory Entitlement Management Process https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-process.md
description: Learn about the request process for an access package and when emai
documentationCenter: '' -+ editor: mamtakumar
active-directory Entitlement Management Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-reports.md
description: Learn how to view the user assignments report and audit logs in Azu
documentationCenter: '' -+ editor: jocastel-MSFT
active-directory Entitlement Management Reprocess Access Package Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-reprocess-access-package-assignments.md
description: Learn how to reprocess assignments for an access package in Azure A
documentationCenter: '' -+ editor:
active-directory Entitlement Management Reprocess Access Package Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-reprocess-access-package-requests.md
description: Learn how to reprocess a request for an access package in Azure Act
documentationCenter: '' -+ editor:
active-directory Entitlement Management Request Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-request-access.md
description: Learn how to use the My Access portal to request access to an acces
documentationCenter: '' -+ editor: mamtakumar
active-directory Entitlement Management Request Approve https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-request-approve.md
description: Learn how to use the My Access portal to approve or deny requests t
documentationCenter: '' -+ editor: mamtakumar
active-directory Entitlement Management Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-scenarios.md
description: Learn the high-level steps you should follow for common scenarios i
documentationCenter: '' -+ editor: markwahl-msft
active-directory Entitlement Management Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-troubleshoot.md
description: Learn about some items you should check to help you troubleshoot Az
documentationCenter: '' -+ editor: markwahl-msft
active-directory Identity Governance Applications Define https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-applications-define.md
description: Azure Active Directory Identity Governance allows you to balance yo
documentationcenter: '' -+ editor: markwahl-msft
active-directory Identity Governance Applications Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-applications-deploy.md
description: Azure Active Directory Identity Governance allows you to balance yo
documentationcenter: '' -+ editor: markwahl-msft
active-directory Identity Governance Applications Existing Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-applications-existing-users.md
description: Planning for a successful access reviews campaign for a particular
documentationCenter: '' -+ editor:
active-directory Identity Governance Applications Integrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-applications-integrate.md
description: Azure Active Directory Identity Governance allows you to balance yo
documentationcenter: '' -+ editor: markwahl-msft
active-directory Identity Governance Applications Prepare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-applications-prepare.md
description: Azure Active Directory Identity Governance allows you to balance yo
documentationcenter: '' -+ editor: markwahl-msft
active-directory Identity Governance Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-automation.md
description: Learn how to write PowerShell scripts in Azure Automation to intera
documentationCenter: '' -+ editor:
active-directory Identity Governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-overview.md
description: Azure Active Directory Identity Governance allows you to balance yo
documentationcenter: '' -+ editor: markwahl-msft
active-directory Manage Access Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/manage-access-review.md
description: Learn how to manage user and guest access as membership of a group
documentationcenter: '' -+ editor: markwahl-msft
active-directory Manage User Access With Access Reviews https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/manage-user-access-with-access-reviews.md
description: Learn how to manage users' access as membership of a group or assig
documentationcenter: '' -+ editor: markwahl-msft
active-directory Perform Access Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/perform-access-review.md
Title: Review access to groups & applications in access reviews - Azure AD
description: Learn how to review access of group members or application access in Azure Active Directory access reviews. -+ editor: markwahl-msft
active-directory Review Recommendations Access Reviews https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/review-recommendations-access-reviews.md
Title: Review recommendations for Access reviews - Azure AD
description: Learn how to review access of group members with review recommendations in Azure Active Directory access reviews. -+ editor: markwahl-msft
active-directory Review Your Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/review-your-access.md
Title: Review your access to groups & apps in access reviews - Azure AD
description: Learn how to review your own access to groups or applications in Azure Active Directory access reviews. -+ editor: markwahl-msft
active-directory Self Access Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/self-access-review.md
Title: Review your access to resources in access reviews - Azure AD
description: Learn how to review your own access to resources in Azure Active Directory access reviews. -+ editor: markwahl-msft
active-directory Choose Ad Authn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/choose-ad-authn.md
keywords:
Last updated 01/05/2022+
active-directory Cloud Governed Management For On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-governed-management-for-on-premises.md
Title: 'Azure AD Cloud Governed Management for On-Premises Workloads - Azure'
description: This topic describes cloud governed management for on-premises workloads. -+ na
active-directory Concept Adsync Service Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/concept-adsync-service-account.md
description: This topic describes the ADSync service account and provides best p
documentationcenter: '' -+ editor: ''
active-directory Concept Azure Ad Connect Sync Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/concept-azure-ad-connect-sync-architecture.md
description: This topic describes the architecture of Azure AD Connect sync and
documentationcenter: '' -+ editor: '' ms.assetid: 465bcbe9-3bdd-4769-a8ca-f8905abf426d
active-directory Concept Azure Ad Connect Sync Declarative Provisioning Expressions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/concept-azure-ad-connect-sync-declarative-provisioning-expressions.md
description: Explains the declarative provisioning expressions.
documentationcenter: '' -+ editor: '' ms.assetid: e3ea53c8-3801-4acf-a297-0fb9bb1bf11d
active-directory Concept Azure Ad Connect Sync Declarative Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/concept-azure-ad-connect-sync-declarative-provisioning.md
description: Explains the declarative provisioning configuration model in Azure
documentationcenter: '' -+ editor: '' ms.assetid: cfbb870d-be7d-47b3-ba01-9e78121f0067
active-directory Concept Azure Ad Connect Sync Default Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/concept-azure-ad-connect-sync-default-configuration.md
description: This article describes the default configuration in Azure AD Connec
documentationcenter: '' -+ editor: '' ms.assetid: ed876f22-6892-4b9d-acbe-6a2d112f1cd1
active-directory Concept Azure Ad Connect Sync User And Contacts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/concept-azure-ad-connect-sync-user-and-contacts.md
description: Explains users, groups, and contacts in Azure AD Connect sync.
documentationcenter: '' -+ ms.assetid: 8d204647-213a-4519-bd62-49563c421602
active-directory Four Steps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/four-steps.md
Title: Four steps to a strong identity foundation - Azure AD
description: This topic describes four steps hybrid identity customers can take to build a strong identity foundation. -+ na
active-directory How To Connect Adconnectivitytools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-adconnectivitytools.md
Title: 'Azure AD Connect: What is the ADConnectivityTool PowerShell Module | Mic
description: This document introduces the new ADConnectivity PowerShell module and how it can be used to help troubleshoot. -+
active-directory How To Connect Azure Ad Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-azure-ad-trust.md
description: Operational details of Azure AD trust handling by Azure AD Connect.
documentationcenter: '' -+ ms.assetid: 2593b6c6-dc3f-46ef-8e02-a8e2dc4e9fb9
When you federate your AD FS with Azure AD, it is critical that the federation c
If you are using cloud Azure AD Multi-Factor Authentication with federated users, we highly recommend enabling additional security protection. This protection prevents the bypass of cloud Azure MFA when you're federated with Azure AD. When enabled for a federated domain in your Azure AD tenant, it ensures that a bad actor cannot bypass Azure MFA by asserting that multifactor authentication has already been performed by the identity provider. The protection can be enabled through the new security setting, `federatedIdpMfaBehavior`. For additional information, see [Best practices for securing Active Directory Federation Services](/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs#enable-protection-to-prevent-by-passing-of-cloud-azure-mfa-when-federated-with-azure-ad). A sketch of enabling this setting follows the next steps link below.

## Next steps
-* [Manage and customize Active Directory Federation Services using Azure AD Connect](how-to-connect-fed-management.md)
+* [Manage and customize Active Directory Federation Services using Azure AD Connect](how-to-connect-fed-management.md)
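A minimal PowerShell sketch of turning the protection on through Microsoft Graph, assuming the Microsoft Graph PowerShell SDK and an illustrative federated domain `contoso.com`:

```powershell
# Requires the Microsoft.Graph.Identity.DirectoryManagement module.
Connect-MgGraph -Scopes "Domain.ReadWrite.All"

# Look up the federation configuration for the federated domain.
$fed = Get-MgDomainFederationConfiguration -DomainId "contoso.com"

# Reject MFA claims issued by the federated IdP so Azure MFA cannot be bypassed.
Update-MgDomainFederationConfiguration `
    -DomainId "contoso.com" `
    -InternalDomainFederationId $fed.Id `
    -FederatedIdpMfaBehavior "rejectMfaByFederatedIdp"
```

With `rejectMfaByFederatedIdp`, Azure AD ignores MFA claims from the federated identity provider and always performs Azure MFA itself; `enforceMfaByFederatedIdp` instead redirects to the federated IdP to perform it.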
active-directory How To Connect Azureadaccount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-azureadaccount.md
description: This topic documents how to restore the Azure AD Connector account.
documentationcenter: '' -+ editor: '' ms.assetid: 6077043a-27f1-4304-a44b-81dc46620f24
active-directory How To Connect Configure Ad Ds Connector Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-configure-ad-ds-connector-account.md
Title: 'Azure AD Connect: Configure AD DS Connector Account Permissions | Micro
description: This document details how to configure the AD DS Connector account with the new ADSyncConfig PowerShell module -+
active-directory How To Connect Create Custom Sync Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-create-custom-sync-rule.md
description: Learn how to use the synchronization rule editor to edit or create
documentationcenter: '' -+ editor: curtand
active-directory How To Connect Device Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-device-options.md
description: This document details device options available in Azure AD Connect
documentationcenter: '' -+ editor: billmath ms.assetid: c0ff679c-7ed5-4d6e-ac6c-b2b6392e7892
active-directory How To Connect Device Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-device-writeback.md
description: This document details how to enable device writeback using Azure AD
documentationcenter: '' -+ editor: curtand- ms.assetid: c0ff679c-7ed5-4d6e-ac6c-b2b6392e7892
active-directory How To Connect Emergency Ad Fs Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-emergency-ad-fs-certificate-rotation.md
Title: Emergency Rotation of the AD FS certificates | Microsoft Docs description: This article explains how to revoke and update AD FS certificates immediately. -+
active-directory How To Connect Fed Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fed-compatibility.md
description: This page has non-Microsoft identity providers that can be used to
documentationcenter: '' -+ editor: curtand ms.assetid: 22c8693e-8915-446d-b383-27e9587988ec
active-directory How To Connect Fed Group Claims https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fed-group-claims.md
description: Get information on how to configure group claims for use with Azure
documentationcenter: '' -+
active-directory How To Connect Fed Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fed-management.md
keywords: AD FS, ADFS, AD FS management, AAD Connect, Connect, sign-in, AD FS cu
documentationcenter: '' -+ editor: '' ms.assetid: 2593b6c6-dc3f-46ef-8e02-a8e2dc4e9fb9
active-directory How To Connect Fed O365 Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fed-o365-certs.md
description: This article explains to Microsoft 365 users how to resolve issues
documentationcenter: '' -+ editor: curtand ms.assetid: 543b7dc1-ccc9-407f-85a1-a9944c0ba1be
active-directory How To Connect Fed Saml Idp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fed-saml-idp.md
Title: 'Azure AD Connect: Use a SAML 2.0 Identity Provider for Single Sign On -
description: This document describes using a SAML 2.0 compliant Idp for single sign on. -+
active-directory How To Connect Fed Sha256 Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fed-sha256-guidance.md
keywords: SHA1,SHA256,M365,federation,aadconnect,adfs,ad fs,change sha,federatio
documentationcenter: '' -+ editor: '' ms.assetid: cf6880e2-af78-4cc9-91bc-b64de4428bbd
active-directory How To Connect Fed Single Adfs Multitenant Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fed-single-adfs-multitenant-federation.md
keywords: federate, ADFS, AD FS, multiple tenants, single AD FS, one ADFS, multi
documentationcenter: '' -+ editor: '' ms.assetid:
active-directory How To Connect Fed Ssl Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fed-ssl-update.md
Title: Azure AD Connect - Update the TLS/SSL certificate for an AD FS farm | Microsoft Docs description: This document details the steps to update the TLS/SSL certificate of an AD FS farm by using Azure AD Connect. -+ editor: billmath ms.assetid: 7c781f61-848a-48ad-9863-eb29da78f53c
active-directory How To Connect Fed Whatis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fed-whatis.md
description: This page is the central location for all documentation regarding A
documentationcenter: '' -+ editor: '' ms.assetid: f9107cf5-0131-499a-9edf-616bf3afef4d
active-directory How To Connect Fix Default Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fix-default-rules.md
Title: 'How to fix modified default rules - Azure AD Connect | Microsoft Docs'
description: Learn how to fix modified default rules that come with Azure AD Connect. -+ editor: curtand
active-directory How To Connect Group Writeback Disable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-group-writeback-disable.md
Title: 'Disable group writeback in Azure AD Connect'
description: This article describes how to disable Group Writeback in Azure AD Connect. -+
To disable or roll back group writeback via PowerShell, do the following:
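The individual steps are elided in this digest; a minimal sketch of the rollback, assuming the ADSync PowerShell module on the Azure AD Connect server (verify the cmdlets against your installed version):

```powershell
# Pause the sync scheduler before changing the feature state.
Set-ADSyncScheduler -SyncCycleEnabled $false

# Turn off the group writeback v2 feature for the tenant.
Set-ADSyncAADCompanyFeature -GroupWritebackV2 $false

# Resume the scheduler once the change is complete.
Set-ADSyncScheduler -SyncCycleEnabled $true
```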
- [Azure AD Connect group writeback](how-to-connect-group-writeback-v2.md)
- [Modify Azure AD Connect group writeback default behavior](how-to-connect-modify-group-writeback.md)
- [Enable Azure AD Connect group writeback](how-to-connect-group-writeback-enable.md)
active-directory How To Connect Group Writeback Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-group-writeback-enable.md
Title: 'Enable Azure AD Connect group writeback'
description: This article describes how to enable Group Writeback in Azure AD Connect. -+
When configuring group writeback, there will be a checkbox at the bottom of the
active-directory How To Connect Group Writeback V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-group-writeback-v2.md
Title: 'Azure AD Connect: Group Writeback'
description: This article describes Group Writeback in Azure AD Connect. -+
While this release has undergone extensive testing, you may still encounter issu
- [Modify Azure AD Connect group writeback default behavior](how-to-connect-modify-group-writeback.md)
- [Enable Azure AD Connect group writeback](how-to-connect-group-writeback-enable.md)
-- [Disable Azure AD Connect group writeback](how-to-connect-group-writeback-disable.md)
+- [Disable Azure AD Connect group writeback](how-to-connect-group-writeback-disable.md)
active-directory How To Connect Health Ad Fs Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-health-ad-fs-sign-in.md
description: This document describes how to integrate AD FS sign-ins with the Az
documentationcenter: '' -+
active-directory How To Connect Health Adds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-health-adds.md
description: This is the Azure AD Connect Health page that will discuss how to m
documentationcenter: '' -+ editor: curtand ms.assetid: 19e3cf15-f150-46a3-a10c-2990702cd700
active-directory How To Connect Health Adfs Risky Ip Workbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-health-adfs-risky-ip-workbook.md
documentationcenter: '' -+
active-directory How To Connect Health Adfs Risky Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-health-adfs-risky-ip.md
documentationcenter: '' -+
active-directory How To Connect Health Adfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-health-adfs.md
documentationcenter: '' -+ editor: curtand ms.assetid: dc0e53d8-403e-462a-9543-164eaa7dd8b3
active-directory How To Connect Health Agent Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-health-agent-install.md
description: This Azure AD Connect Health article describes agent installation f
documentationcenter: '' -+ editor: curtand ms.assetid: 1cc8ae90-607d-4925-9c30-6770a4bd1b4e
active-directory How To Connect Health Alert Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-health-alert-catalog.md
description: This document shows the catalog of all alerts in Azure AD Connect H
documentationcenter: '' -+ editor: ''
active-directory How To Connect Health Data Freshness https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-health-data-freshness.md
description: This document describes the cause of "Health service data is not up
documentationcenter: '' -+ editor: ''
active-directory How To Connect Health Data Retrieval https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-health-data-retrieval.md
description: This page describes how to retrieve data from Azure AD Connect Heal
documentationcenter: '' -+
To retrieve accounts that were flagged with AD FS Bad Password attempts, use the
* [Azure AD Connect Health Agent Installation](how-to-connect-health-agent-install.md)
* [Azure AD Connect Health Operations](how-to-connect-health-operations.md)
* [Azure AD Connect Health FAQ](reference-connect-health-faq.yml)
-* [Azure AD Connect Health Version History](reference-connect-health-version-history.md)
+* [Azure AD Connect Health Version History](reference-connect-health-version-history.md)
active-directory How To Connect Health Diagnose Sync Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-health-diagnose-sync-errors.md
description: This document describes the diagnosis process of duplicated attribu
documentationcenter: '' -+ editor: billmath
active-directory How To Connect Health Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-health-operations.md
description: This article describes additional operations that can be performed
documentationcenter: '' -+ ms.assetid: 86cc3840-60fb-43f9-8b2a-8598a9df5c94
active-directory How To Connect Health Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-health-sync.md
description: This is the Azure AD Connect Health page that will discuss how to m
documentationcenter: '' -+ ms.assetid: 1dfbeaba-bda2-4f68-ac89-1dbfaf5b4015
active-directory How To Connect Import Export Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-import-export-config.md
Title: How to import and export Azure AD Connect configuration settings
description: This article describes frequently asked questions for cloud provisioning. -+
active-directory How To Connect Install Automatic Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-automatic-upgrade.md
description: This topic describes the built-in automatic upgrade feature in Azur
documentationcenter: '' -+ editor: '' ms.assetid: 6b395e8f-fa3c-4e55-be54-392dd303c472
Here is a list of the most common messages you find. It does not list all, but t
|UpgradeNotSupportedAADHealthUploadDisabled|Health data uploads have been disabled from the portal|

A sketch for checking the current automatic-upgrade state follows the next steps link below.

## Next steps
-Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
+Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
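As referenced above, the automatic-upgrade state (and the reason behind a suspension message such as `UpgradeNotSupportedAADHealthUploadDisabled`) can be inspected on the Connect server. A minimal sketch, assuming the ADSync PowerShell module:

```powershell
# Shows Enabled, Disabled, or Suspended.
Get-ADSyncAutoUpgrade

# -Detail also reports the suspension reason when the state is Suspended.
Get-ADSyncAutoUpgrade -Detail

# Re-enable automatic upgrade after resolving the blocking condition.
Set-ADSyncAutoUpgrade -AutoUpgradeState Enabled
```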
active-directory How To Connect Install Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-custom.md
keywords: what is Azure AD Connect, install Active Directory, required components for Azure AD documentationcenter: '' -+ ms.assetid: 6d42fb79-d9cf-48da-8445-f482c4c536af
active-directory How To Connect Install Existing Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-existing-database.md
description: This topic describes how to use an existing ADSync database.
documentationcenter: '' -+ editor: '' ms.assetid:
active-directory How To Connect Install Existing Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-existing-tenant.md
description: This topic describes how to use Connect when you have an existing A
+ Last updated 01/21/2022
active-directory How To Connect Install Express https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-express.md
Title: 'Azure AD Connect: Getting Started using express settings | Microsoft Doc
description: Learn how to download, install and run the setup wizard for Azure AD Connect. -+ editor: curtand ms.assetid: b6ce45fd-554d-4f4d-95d1-47996d561c9f
active-directory How To Connect Install Move Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-move-db.md
Title: 'Move Azure AD Connect database from SQL Server Express to SQL Server. |
description: This document describes how to move the Azure AD Connect database from the local SQL Server Express server to a remote SQL Server. -+
active-directory How To Connect Install Multiple Domains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-multiple-domains.md
description: This document describes setting up and configuring multiple top lev
documentationcenter: '' -+ editor: curtand ms.assetid: 5595fb2f-2131-4304-8a31-c52559128ea4
active-directory How To Connect Install Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-prerequisites.md
description: This article describes the prerequisites and the hardware requireme
documentationcenter: '' -+ editor: '' ms.assetid: 91b88fda-bca6-49a8-898f-8d906a661f07
active-directory How To Connect Install Roadmap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-roadmap.md
Title: 'Azure AD Connect and Azure AD Connect Health installation roadmap. | Mic
description: This document provides an overview of the installation options and paths available for installing Azure AD Connect and Connect Health. -+ na
active-directory How To Connect Install Select Installation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-select-installation.md
description: This topic walks you through how to select the installation type to
documentationcenter: '' -+ editor: '' ms.assetid:
active-directory How To Connect Install Sql Delegation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-sql-delegation.md
Title: 'Install Azure AD Connect using SQL delegated administrator permissions |
description: This topic describes an update to Azure AD Connect that allows for installation using an account that only has SQL dbo permissions. documentationcenter: '' -+ editor: '' ms.assetid:
active-directory How To Connect Installation Wizard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-installation-wizard.md
keywords: The Azure AD Connect installation wizard lets you configure maintenanc
documentationcenter: '' -+ editor: '' ms.assetid: d800214e-e591-4297-b9b5-d0b1581cc36a
active-directory How To Connect Migrate Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-migrate-groups.md
Title: 'Azure AD Connect: Migrate groups from one forest to another'
description: This article describes the steps needed to successfully migrate groups from one forest to another for Azure AD Connect. -+
active-directory How To Connect Modify Group Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-modify-group-writeback.md
Title: 'Modify group writeback in Azure AD Connect'
description: This article describes how to modify the default behavior for group writeback in Azure AD Connect. -+
Prior to re-enabling for writeback, or restoring from soft delete in Azure AD, t
- [Azure AD Connect group writeback](how-to-connect-group-writeback-v2.md)
- [Enable Azure AD Connect group writeback](how-to-connect-group-writeback-enable.md)
-- [Disable Azure AD Connect group writeback](how-to-connect-group-writeback-disable.md)
+- [Disable Azure AD Connect group writeback](how-to-connect-group-writeback-disable.md)
active-directory How To Connect Monitor Federation Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-monitor-federation-changes.md
description: This article explains how to monitor changes to your federation con
documentationcenter: '' -+
active-directory How To Connect Password Hash Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-password-hash-synchronization.md
description: Provides information about how password hash synchronization works
documentationcenter: '' -+ ms.assetid: 05f16c3e-9d23-45dc-afca-3d0fa9dbf501
active-directory How To Connect Post Installation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-post-installation.md
description: Learn how to extend the default configuration and operational tasks
documentationcenter: '' -+ editor: curtand ms.assetid: c18bee36-aebf-4281-b8fc-3fe14116f1a5
active-directory How To Connect Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-preview.md
description: This topic describes in more detail features which are in preview i
documentationcenter: '' -+ editor: '' ms.assetid: c75cd8cf-3eff-4619-bbca-66276757cc07
active-directory How To Connect Pta Current Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-pta-current-limitations.md
keywords: Azure AD Connect Pass-through Authentication, install Active Directory, required components for Azure AD, SSO, Single Sign-on documentationcenter: '' -+ ms.assetid: 9f994aca-6088-40f5-b2cc-c753a4f41da7
active-directory How To Connect Pta Disable Do Not Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-pta-disable-do-not-configure.md
Title: 'Disable pass-through authentication by using Azure AD Connect or PowerSh
description: This article describes how to disable pass-through authentication by using the Azure AD Connect Do Not Configure feature or by using PowerShell. -+
active-directory How To Connect Pta How It Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-pta-how-it-works.md
keywords: Azure AD Connect Pass-through Authentication, install Active Directory, required components for Azure AD, SSO, Single Sign-on documentationcenter: '' -+ ms.assetid: 9f994aca-6088-40f5-b2cc-c753a4f41da7
active-directory How To Connect Pta Quick Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-pta-quick-start.md
keywords: Azure AD Connect Pass-through Authentication, install Active Directory, required components for Azure AD, SSO, Single Sign-on documentationcenter: '' -+ ms.assetid: 9f994aca-6088-40f5-b2cc-c753a4f41da7
active-directory How To Connect Pta Security Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-pta-security-deep-dive.md
keywords: Azure AD Connect Pass-through Authentication, install Active Directory, required components for Azure AD, SSO, Single Sign-on documentationcenter: '' -+ na
active-directory How To Connect Pta Upgrade Preview Authentication Agents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-pta-upgrade-preview-authentication-agents.md
keywords: Azure AD Connect Pass-through Authentication, install Active Directory, required components for Azure AD, SSO, Single Sign-on documentationcenter: '' -+ ms.assetid: 9f994aca-6088-40f5-b2cc-c753a4f41da7
active-directory How To Connect Pta User Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-pta-user-privacy.md
keywords: Azure AD Connect Pass-through Authentication, GDPR, required components for Azure AD, SSO, Single Sign-on documentationcenter: '' -+ ms.assetid: 9f994aca-6088-40f5-b2cc-c753a4f41da7
If audit logging is enabled, this product may generate security logs for your Do
## Next steps
* [Review the Microsoft Privacy policy on Trust Center](https://www.microsoft.com/trustcenter)
-* [**Troubleshoot**](tshoot-connect-pass-through-authentication.md) - Learn how to resolve common issues with the feature.
+* [**Troubleshoot**](tshoot-connect-pass-through-authentication.md) - Learn how to resolve common issues with the feature.
active-directory How To Connect Pta https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-pta.md
keywords: what is Azure AD Connect Pass-through Authentication, install Active Directory, required components for Azure AD, SSO, Single Sign-on documentationcenter: '' -+ ms.assetid: 9f994aca-6088-40f5-b2cc-c753a4f41da7
active-directory How To Connect Selective Password Hash Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-selective-password-hash-synchronization.md
Title: 'Selective Password Hash Synchronization for Azure AD Connect'
description: This article describes how to set up and configure selective password hash synchronization to use with Azure AD Connect. -+
active-directory How To Connect Single Object Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-single-object-sync.md
Title: 'Azure AD Connect Single Object Sync'
description: Learn how to synchronize one object from Active Directory to Azure AD for troubleshooting. -+
active-directory How To Connect Sso How It Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sso-how-it-works.md
keywords: what is Azure AD Connect, install Active Directory, required components for Azure AD, SSO, Single Sign-on documentationcenter: '' -+ ms.assetid: 9f994aca-6088-40f5-b2cc-c753a4f41da7
active-directory How To Connect Sso Quick Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sso-quick-start.md
keywords: what is Azure AD Connect, install Active Directory, required components for Azure AD, SSO, Single Sign-on documentationcenter: '' -+ ms.assetid: 9f994aca-6088-40f5-b2cc-c753a4f41da7
active-directory How To Connect Sso User Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sso-user-privacy.md
keywords: what is Azure AD Connect, GDPR, required components for Azure AD, SSO, Single Sign-on documentationcenter: '' -+ ms.assetid: 9f994aca-6088-40f5-b2cc-c753a4f41da7
If audit logging is enabled, this product may generate security logs for your Do
* [Review the Microsoft Privacy policy on Trust Center](https://www.microsoft.com/trustcenter)
- [**Troubleshoot**](tshoot-connect-sso.md) - Learn how to resolve common issues with the feature.
- - [**UserVoice**](https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789) - For filing new feature requests.
+ - [**UserVoice**](https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789) - For filing new feature requests.
active-directory How To Connect Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sso.md
keywords: what is Azure AD Connect, install Active Directory, required components for Azure AD, SSO, Single Sign-on documentationcenter: '' -+ ms.assetid: 9f994aca-6088-40f5-b2cc-c753a4f41da7
active-directory How To Connect Staged Rollout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-staged-rollout.md
Title: 'Azure AD Connect: Cloud authentication via Staged Rollout | Microsoft Docs' description: This article explains how to migrate from federated authentication, to cloud authentication, by using a Staged Rollout. -+
active-directory How To Connect Sync Best Practices Changing Default Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-best-practices-changing-default-configuration.md
description: Provides best practices for changing the default configuration of A
documentationcenter: '' -+ editor: '' ms.assetid: 7638a031-1635-4942-94c3-fce8f09eed5e
active-directory How To Connect Sync Change Addsacct Pass https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-change-addsacct-pass.md
keywords: AD DS account, Active Directory account, password documentationcenter: '' -+ editor: '' ms.assetid: 76b19162-8b16-4960-9e22-bd64e6675ecc
active-directory How To Connect Sync Change Serviceacct Pass https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-change-serviceacct-pass.md
keywords: Azure AD sync service account, password documentationcenter: '' -+ editor: '' ms.assetid: 76b19162-8b16-4960-9e22-bd64e6675ecc
active-directory How To Connect Sync Change The Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-change-the-configuration.md
Title: 'Azure AD Connect sync: Make a change to the default configuration'
description: Walks you through how to make a change to the configuration in Azure AD Connect sync. -+ ms.assetid: 7b9df836-e8a5-4228-97da-2faec9238b31
active-directory How To Connect Sync Configure Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-configure-filtering.md
description: Explains how to configure filtering in Azure AD Connect sync.
documentationcenter: '' -+ editor: '' ms.assetid: 880facf6-1192-40e9-8181-544c0759d506
active-directory How To Connect Sync Endpoint Api V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-endpoint-api-v2.md
Title: 'Azure AD Connect sync V2 endpoint | Microsoft Docs'
description: This document covers updates to the Azure AD Connect sync v2 endpoints API. -+ editor: ''
active-directory How To Connect Sync Feature Directory Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-feature-directory-extensions.md
description: This topic describes the directory extensions feature in Azure AD C
documentationcenter: '' -+ editor: '' ms.assetid: 995ee876-4415-4bb0-a258-cca3cbb02193
One of the more useful scenarios is to use these attributes in dynamic security
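As an illustration of that scenario, a dynamic security group can key its membership rule off a synced directory extension attribute. A sketch using the Microsoft Graph PowerShell SDK, where the application ID and attribute name in the rule are hypothetical placeholders:

```powershell
# Extension attribute names follow the pattern extension_<appId-without-dashes>_<attributeName>.
$rule = '(user.extension_9d98ed114c4840d298fad781915f27e4_division -eq "Sales")'

New-MgGroup -DisplayName "Sales (dynamic)" `
    -MailEnabled:$false -MailNickname "sales-dynamic" -SecurityEnabled `
    -GroupTypes "DynamicMembership" `
    -MembershipRule $rule `
    -MembershipRuleProcessingState "On"
```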
## Next steps
Learn more about the [Azure AD Connect sync](how-to-connect-sync-whatis.md) configuration.
-Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
+Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
active-directory How To Connect Sync Feature Preferreddatalocation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-feature-preferreddatalocation.md
description: Describes how to put your Microsoft 365 user resources close to the
+ Last updated 01/21/2022
active-directory How To Connect Sync Feature Prevent Accidental Deletes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-feature-prevent-accidental-deletes.md
description: This topic describes how to prevent accidental deletes in Azure AD
documentationcenter: '' -+ editor: '' ms.assetid: 6b852cb4-2850-40a1-8280-8724081601f7
active-directory How To Connect Sync Feature Scheduler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-feature-scheduler.md
description: This topic describes the built-in scheduler feature in Azure AD Con
documentationcenter: '' -+ editor: '' ms.assetid: 6b1a598f-89c0-4244-9b20-f4aaad5233cf
active-directory How To Connect Sync Recycle Bin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-recycle-bin.md
keywords: AD Recycle Bin, accidental deletion, source anchor documentationcenter: '' -+ editor: '' ms.assetid: afec4207-74f7-4cdd-b13a-574af5223a90
active-directory How To Connect Sync Service Manager Ui Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-service-manager-ui-connectors.md
description: Understand the Connectors tab in the Synchronization Service Manage
documentationcenter: '' -+ editor: '' ms.assetid: 60f1d979-8e6d-4460-aaab-747fffedfc1e
active-directory How To Connect Sync Service Manager Ui Mvdesigner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-service-manager-ui-mvdesigner.md
description: Understand the Metaverse Designer tab in the Synchronization Servic
documentationcenter: '' -+ editor: '' ms.assetid: abaa9eb2-f105-42d1-b00a-2a63129a8ffb
active-directory How To Connect Sync Service Manager Ui Mvsearch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-service-manager-ui-mvsearch.md
description: Understand the Metaverse Search tab in the Synchronization Service
documentationcenter: '' -+ editor: '' ms.assetid: 20234dd4-3328-4817-b7ff-268f953d376d
active-directory How To Connect Sync Service Manager Ui Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-service-manager-ui-operations.md
description: Understand the Operations tab in the Synchronization Service Manage
documentationcenter: '' -+ editor: '' ms.assetid: 97a26565-618f-4313-8711-5925eeb47cdc
active-directory How To Connect Sync Service Manager Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-service-manager-ui.md
description: Understand Synchronization Service Manager for Azure AD Connect.
documentationcenter: '' -+ editor: '' ms.assetid: 5847c33f-aaa2-48f9-abe6-78c4a87a3b7c
active-directory How To Connect Sync Staging Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-staging-server.md
description: This topic describes operational tasks for Azure AD Connect sync an
documentationcenter: '' -+ editor: '' ms.assetid: b29c1790-37a3-470f-ab69-3cee824d220d
active-directory How To Connect Sync Technical Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-technical-concepts.md
description: Explains the technical concepts of Azure AD Connect sync.
documentationcenter: '' -+ editor: '' ms.assetid: 731cfeb3-beaf-4d02-aef4-b02a8f99fd11
active-directory How To Connect Sync Whatis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-whatis.md
description: Explains how Azure AD Connect sync works and how to customize.
documentationcenter: '' -+ editor: '' ms.assetid: ee4bf802-045b-4da0-986e-90aba2de58d6
The sync service consists of two components, the on-premises **Azure AD Connect
| [Functions Reference](reference-connect-sync-functions-reference.md) |Lists all functions available in declarative provisioning. |

## Additional Resources
-* [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md)
+* [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md)
active-directory How To Connect Syncservice Duplicate Attribute Resiliency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-syncservice-duplicate-attribute-resiliency.md
description: New behavior of how to handle objects with UPN or ProxyAddress conf
documentationcenter: '' -+ editor: '' ms.assetid: 537a92b7-7a84-4c89-88b0-9bce0eacd931
active-directory How To Connect Syncservice Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-syncservice-features.md
description: Describes service side features for Azure AD Connect sync service.
documentationcenter: '' -+ editor: '' ms.assetid: 213aab20-0a61-434a-9545-c4637628da81
active-directory How To Connect Syncservice Shadow Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-syncservice-shadow-attributes.md
+ Last updated 09/29/2021
active-directory How To Connect Uninstall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-uninstall.md
Title: Uninstall Azure AD Connect
description: This document describes how to uninstall Azure AD Connect. -+
active-directory How To Dirsync Upgrade Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-dirsync-upgrade-get-started.md
description: Learn how to upgrade from DirSync to Azure AD Connect. This article
documentationcenter: '' -+ editor: '' ms.assetid: baf52da7-76a8-44c9-8e72-33245790001c
active-directory How To Upgrade Previous Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-upgrade-previous-version.md
description: Explains the different methods to upgrade to the latest release of
documentationcenter: '' -+ editor: '' ms.assetid: 31f084d8-2b89-478c-9079-76cf92e6618f
active-directory Howto Troubleshoot Upn Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/howto-troubleshoot-upn-changes.md
Last updated 03/13/2020
active-directory Migrate From Federation To Cloud Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/migrate-from-federation-to-cloud-authentication.md
Previously updated : 04/15/2022 Last updated : 08/26/2022 -+
active-directory Plan Connect Design Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-connect-design-concepts.md
description: This topic details certain implementation design areas
documentationcenter: '' -+ editor: '' ms.assetid: 4114a6c0-f96a-493c-be74-1153666ce6c9
active-directory Plan Connect Performance Factors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-connect-performance-factors.md
Title: Factors influencing the performance of Azure AD Connect
description: This document explains how various factors influence the Azure AD Connect provisioning engine. These factors help organizations plan their Azure AD Connect deployment to make sure it meets their sync requirements. -+ tags: azuread
To optimize the performance of your Azure AD Connect implementation, consider th
- Monitor your [Azure AD Connect sync health](how-to-connect-health-agent-install.md) in Azure AD.

## Next steps
-Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
+Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
active-directory Plan Connect Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-connect-topologies.md
description: This topic details supported and unsupported topologies for Azure A
documentationcenter: '' -+ editor: '' ms.assetid: 1034c000-59f2-4fc8-8137-2416fa5e4bfe
To learn how to install Azure AD Connect for these scenarios, see [Custom instal
Learn more about the [Azure AD Connect sync](how-to-connect-sync-whatis.md) configuration.
-Learn more about [integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
+Learn more about [integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
active-directory Plan Connect User Signin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-connect-user-signin.md
description: Azure AD Connect user sign-in for custom settings.
documentationcenter: '' -+ editor: curtand ms.assetid: 547b118e-7282-4c7f-be87-c035561001df
On the **User sign-in** page, select the desired user sign-in.
## Next steps
- Learn more about [integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
-- Learn more about [Azure AD Connect design concepts](plan-connect-design-concepts.md).
+- Learn more about [Azure AD Connect design concepts](plan-connect-design-concepts.md).
active-directory Plan Connect Userprincipalname https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-connect-userprincipalname.md
Last updated 06/26/2018
Azure AD Tenant user object:
## Next Steps
- [Integrate your on-premises directories with Azure Active Directory](whatis-hybrid-identity.md)
-- [Custom installation of Azure AD Connect](how-to-connect-install-custom.md)
+- [Custom installation of Azure AD Connect](how-to-connect-install-custom.md)
active-directory Plan Hybrid Identity Design Considerations Accesscontrol Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-accesscontrol-requirements.md
description: Covers the pillars of identity, and identifying access requirements
documentationcenter: '' -+ editor: '' ms.assetid: e3b3b984-0d15-4654-93be-a396324b9f5e
active-directory Plan Hybrid Identity Design Considerations Business Needs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-business-needs.md
description: Identify the company's business needs that will lead you to defin
documentationcenter: '' -+ editor: '' ms.assetid: de690978-84ef-41ad-9dfe-785722d343a1
active-directory Plan Hybrid Identity Design Considerations Contentmgt Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-contentmgt-requirements.md
description: Provides insight into how to determine the content management requi
documentationcenter: '' -+ editor: '' ms.assetid: dd1ef776-db4d-4ab8-9761-2adaa5a4f004
When planning your hybrid identity solution, ensure that the following questions
[Determine access control requirements](plan-hybrid-identity-design-considerations-accesscontrol-requirements.md)

## See Also
-[Design considerations overview](plan-hybrid-identity-design-considerations-overview.md)
+[Design considerations overview](plan-hybrid-identity-design-considerations-overview.md)
active-directory Plan Hybrid Identity Design Considerations Data Protection Strategy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-data-protection-strategy.md
description: You define the data protection strategy for your hybrid identity so
documentationcenter: '' -+ editor: '' ms.assetid: e76fd1f4-340a-492a-84d9-e05f3b7cc396
active-directory Plan Hybrid Identity Design Considerations Dataprotection Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-dataprotection-requirements.md
description: When planning your hybrid identity solution, identify the data prot
documentationcenter: '' -+ editor: '' ms.assetid: 40dc4baa-fe82-4ab6-a3e4-f36fa9dcd0df
active-directory Plan Hybrid Identity Design Considerations Directory Sync Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-directory-sync-requirements.md
description: Identify what requirements are needed for synchronizing all the use
documentationcenter: '' -+ editor: '' ms.assetid: 593eaa71-17eb-4c16-8c98-43cc62987e65
You also need to determine the security requirements and constraints directory s
[Determine multi-factor authentication requirements](plan-hybrid-identity-design-considerations-multifactor-auth-requirements.md)

## See also
-[Design considerations overview](plan-hybrid-identity-design-considerations-overview.md)
+[Design considerations overview](plan-hybrid-identity-design-considerations-overview.md)
active-directory Plan Hybrid Identity Design Considerations Hybrid Id Management Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-hybrid-id-management-tasks.md
description: Azure AD checks the specific conditions you pick when authenticatin
documentationcenter: '' -+ editor: '' ms.assetid: 65f80aea-0426-4072-83e1-faf5b76df034
active-directory Plan Hybrid Identity Design Considerations Identity Adoption Strategy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-identity-adoption-strategy.md
description: With Conditional Access control, Azure AD checks the specific condi
documentationcenter: '' -+ editor: '' ms.assetid: b92fa5a9-c04c-4692-b495-ff64d023792c
active-directory Plan Hybrid Identity Design Considerations Incident Response Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-incident-response-requirements.md
description: Determine monitoring and reporting capabilities for the hybrid iden
documentationcenter: '' -+ editor: '' ms.assetid: a3d2a459-599b-4b67-8e51-7369ee25082d
During the damage control and risk reduction phase, it is important to quickly reduc
[Define data protection strategy](plan-hybrid-identity-design-considerations-data-protection-strategy.md)

## See Also
-[Design considerations overview](plan-hybrid-identity-design-considerations-overview.md)
+[Design considerations overview](plan-hybrid-identity-design-considerations-overview.md)
active-directory Plan Hybrid Identity Design Considerations Lifecycle Adoption Strategy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-lifecycle-adoption-strategy.md
description: Helps define the hybrid identity management tasks according to the
documentationcenter: '' -+ editor: '' ms.assetid: 420b6046-bd9b-4fce-83b0-72625878ae71
Review the following table to compare the synchronization options:
## See Also
-[Design considerations overview](plan-hybrid-identity-design-considerations-overview.md)
+[Design considerations overview](plan-hybrid-identity-design-considerations-overview.md)
active-directory Plan Hybrid Identity Design Considerations Multifactor Auth Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-multifactor-auth-requirements.md
description: With Conditional Access control, Azure AD verifies the specific con
documentationcenter: '' -+ editor: '' ms.assetid: 9c59fda9-47d0-4c7e-b3e7-3575c29beabe
active-directory Plan Hybrid Identity Design Considerations Nextsteps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-nextsteps.md
description: A synopsis and next steps after you have read the Hybrid Identity d
documentationcenter: '' -+ editor: '' ms.assetid: 02d48768-ea9e-4bfe-ae54-b54c4bd0a789
Monitoring the following resources often provides the latest news and updates on
* [Microsoft Endpoint Configuration Manager blog](https://techcommunity.microsoft.com/t5/Configuration-Manager-Blog/bg-p/ConfigurationManagerBlog)

## See also
-[Design considerations overview](plan-hybrid-identity-design-considerations-overview.md)
+[Design considerations overview](plan-hybrid-identity-design-considerations-overview.md)
active-directory Plan Hybrid Identity Design Considerations Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-overview.md
description: Overview and content map of Hybrid Identity design considerations g
documentationcenter: '' -+ editor: '' ms.assetid: 100509c4-0b83-4207-90c8-549ba8372cf7
active-directory Plan Hybrid Identity Design Considerations Tools Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-tools-comparison.md
description: This page provides a comprehensive table that compares the vario
documentationcenter: '' -+ ms.assetid: 1e62a4bd-4d55-4609-895e-70131dedbf52
Over the years the directory integration tools have grown and evolved.
To learn more about the differences between Azure AD Connect sync and Azure AD Connect cloud provisioning, see the article [What is Azure AD Connect cloud provisioning?](../cloud-sync/what-is-cloud-sync.md). For more information on deployment options with multiple HR sources or directories, see the article [parallel and combined identity infrastructure options](../fundamentals/azure-active-directory-parallel-identity-options.md).

## Next steps
-Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
+Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
active-directory Reference Connect Accounts Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-accounts-permissions.md
description: This topic describes the accounts used and created and permissions
documentationcenter: '' -+ editor: '' ms.assetid: b93e595b-354a-479d-85ec-a95553dd9cc2
If you did not read the documentation on [Integrating your on-premises identitie
|After installation | [Verify the installation and assign licenses](how-to-connect-post-installation.md)|

## Next steps
-Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
+Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
active-directory Reference Connect Adconnectivitytools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-adconnectivitytools.md
Title: 'Azure AD Connect: ADConnectivityTools PowerShell Reference | Microsoft Docs' description: This document provides reference information for the ADConnectivityTools.psm1 PowerShell module. -+ Last updated 05/31/2019
active-directory Reference Connect Adsync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-adsync.md
Title: 'Azure AD Connect: ADSync PowerShell Reference | Microsoft Docs' description: This document provides reference information for the ADSync.psm1 PowerShell module. -+ Last updated 11/30/2020
The following documentation provides reference information for the ADSync.psm1 P
## Next Steps
- [What is hybrid identity?](./whatis-hybrid-identity.md)
-- [What is Azure AD Connect and Connect Health?](whatis-azure-ad-connect.md)
+- [What is Azure AD Connect and Connect Health?](whatis-azure-ad-connect.md)
active-directory Reference Connect Adsyncconfig https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-adsyncconfig.md
Title: 'Azure AD Connect: ADSyncConfig PowerShell Reference | Microsoft Docs' description: This document provides reference information for the ADSyncConfig.psm1 PowerShell module. -+ Last updated 01/24/2019
active-directory Reference Connect Adsynctools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-adsynctools.md
Title: 'Azure AD Connect: ADSyncTools PowerShell Reference | Microsoft Docs' description: This document provides reference information for the ADSyncTools.psm1 PowerShell module. -+ Last updated 11/30/2020
active-directory Reference Connect Dirsync Deprecated https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-dirsync-deprecated.md
description: Describes how to upgrade from DirSync and Azure AD Sync to Azure AD
documentationcenter: '' -+ editor: '' ms.assetid: bd68fb88-110b-4d76-978a-233e15590803
active-directory Reference Connect Germany https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-germany.md
keywords: introduction to Azure AD Connect, Azure AD Connect overview, what is A
documentationcenter: '' -+ editor: '' ms.assetid: 2bcb0caf-5d97-46cb-8c32-bda66cc22dad
active-directory Reference Connect Government Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-government-cloud.md
Title: 'Azure AD Connect: Hybrid identity considerations for Azure Government cl
description: Special considerations for deploying Azure AD Connect with the Azure Government cloud. -+
If you have overridden the `AuthNegotiateDelegateWhitelist` or `AuthServerWh
## Next steps
- [Pass-through Authentication](how-to-connect-pta-quick-start.md#step-1-check-the-prerequisites)
-- [Single Sign-On](how-to-connect-sso-quick-start.md#step-1-check-the-prerequisites)
+- [Single Sign-On](how-to-connect-sso-quick-start.md#step-1-check-the-prerequisites)
active-directory Reference Connect Health User Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-health-user-privacy.md
description: This document describes user privacy with Azure AD Connect Health.
documentationcenter: '' -+ editor: ''
active-directory Reference Connect Health Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-health-version-history.md
description: This document describes the releases for Azure AD Connect Health an
documentationcenter: '' -+ editor: curtand ms.assetid: 8dd4e998-747b-4c52-b8d3-3900fe77d88f
active-directory Reference Connect Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-instances.md
description: This page documents special considerations for Azure AD instances.
documentationcenter: '' -+ editor: '' ms.assetid: f340ea11-8ff5-4ae6-b09d-e939c76355a3
active-directory Reference Connect Msexchuserholdpolicies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-msexchuserholdpolicies.md
description: This topic describes attribute behavior of the msExchUserHoldPolici
documentationcenter: '' -+ na
active-directory Reference Connect Ports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-ports.md
description: This page is a technical reference page for ports that are required
documentationcenter: '' -+ editor: curtand ms.assetid: de97b225-ae06-4afc-b2ef-a72a3643255b
active-directory Reference Connect Pta Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-pta-version-history.md
Title: 'Azure AD Pass-through Authentication: Version release history | Microsof
description: This article lists all releases of the Azure AD Pass-through Authentication agent -+ ms.assetid: ef2797d7-d440-4a9a-a648-db32ad137494
active-directory Reference Connect Sync Attributes Synchronized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-sync-attributes-synchronized.md
description: Lists the attributes that are synchronized to Azure Active Director
documentationcenter: '' -+ editor: '' ms.assetid: c2bb36e0-5205-454c-b9b6-f4990bcedf51
active-directory Reference Connect Sync Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-sync-functions-reference.md
description: Reference of declarative provisioning expressions in Azure AD Conne
documentationcenter: '' -+ editor: '' ms.assetid: 4f525ca0-be0e-4a2e-8da1-09b6b567ed5f
active-directory Reference Connect Tls Enforcement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-tls-enforcement.md
description: Learn how to force your Azure AD Connect server to use only Transpo
documentationcenter: '' -+ editor: ''
active-directory Reference Connect User Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-user-privacy.md
description: This document describes how to obtain GDPR compliancy with Azure AD
documentationcenter: '' -+ editor: ''
Use the following steps to schedule the script to run every 48 hours.
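For orientation, the end result of those steps is simply a recurring scheduled task on the server. A minimal PowerShell sketch, assuming the cleanup script has been saved locally (the script path and task name below are hypothetical placeholders, not names from the article):

```powershell
# Hypothetical script path and task name; substitute your own.
# Registers a task that runs the script every two days (48 hours) as SYSTEM.
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument '-NoProfile -ExecutionPolicy Bypass -File "C:\Scripts\PrivacyCleanup.ps1"'
$trigger = New-ScheduledTaskTrigger -Daily -DaysInterval 2 -At 3am
Register-ScheduledTask -TaskName 'AADConnectPrivacyCleanup' `
    -Action $action -Trigger $trigger -User 'SYSTEM' -RunLevel Highest
```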
## Next steps * [Review the Microsoft Privacy policy on Trust Center](https://www.microsoft.com/trustcenter)
-* [Azure AD Connect Health and User Privacy](reference-connect-health-user-privacy.md)
+* [Azure AD Connect Health and User Privacy](reference-connect-health-user-privacy.md)
active-directory Reference Connect Version History Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-version-history-archive.md
Title: 'Azure AD Connect: Version release history archive | Microsoft Docs'
description: This article lists all archived releases of Azure AD Connect and Azure AD Sync -+ ms.assetid:
active-directory Reference Connect Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-version-history.md
description: This article lists all releases of Azure AD Connect and Azure AD Sy
ms.assetid: ef2797d7-d440-4a9a-a648-db32ad137494 + Last updated 7/6/2022
active-directory Tshoot Connect Attribute Not Syncing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-attribute-not-syncing.md
description: This topic provides steps for how to troubleshoot issues with attri
documentationcenter: '' -+ editor: curtand
Before investigating attribute syncing issues, let's understand the **Azure AD
## Next Steps - [Azure AD Connect sync](how-to-connect-sync-whatis.md).-- [What is hybrid identity?](whatis-hybrid-identity.md).
+- [What is hybrid identity?](whatis-hybrid-identity.md).
active-directory Tshoot Connect Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-connectivity.md
description: Explains how to troubleshoot connectivity issues with Azure AD Conn
documentationcenter: '' -+ editor: '' ms.assetid: 3aa41bb5-6fcb-49da-9747-e7a3bd780e64
active-directory Tshoot Connect Install Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-install-issues.md
description: This topic provides steps for how to troubleshoot issues with insta
documentationcenter: '' -+ editor: curtand
However, if you don't meet the express installation criteria and must do the c
## Next steps - [Azure AD Connect sync](how-to-connect-sync-whatis.md).-- [What is hybrid identity?](whatis-hybrid-identity.md)
+- [What is hybrid identity?](whatis-hybrid-identity.md)
active-directory Tshoot Connect Largeobjecterror Usercertificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-largeobjecterror-usercertificate.md
description: This topic provides the remediation steps for LargeObject errors ca
documentationcenter: '' -+ editor: '' ms.assetid: 146ad5b3-74d9-4a83-b9e8-0973a19828d9
active-directory Tshoot Connect Object Not Syncing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-object-not-syncing.md
description: Troubleshoot an object that is not syncing with Azure Active Direct
documentationcenter: '' -+ editor: '' ms.assetid:
From the **Connectors** tab you can also go to the [connector space object](#con
## Next steps - Learn more about [Azure AD Connect sync](how-to-connect-sync-whatis.md).-- Learn more about [hybrid identity](whatis-hybrid-identity.md).
+- Learn more about [hybrid identity](whatis-hybrid-identity.md).
active-directory Tshoot Connect Objectsync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-objectsync.md
description: This topic provides steps for how to troubleshoot issues with objec
documentationcenter: '' -+ editor: curtand
active-directory Tshoot Connect Pass Through Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-pass-through-authentication.md
keywords: Troubleshoot Azure AD Connect Pass-through Authentication, install Active Directory, required components for Azure AD, SSO, Single Sign-on documentationcenter: '' -+ ms.assetid: 9f994aca-6088-40f5-b2cc-c753a4f41da7
active-directory Tshoot Connect Password Hash Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-password-hash-synchronization.md
description: This article provides information about how to troubleshoot passwor
documentationcenter: '' -+ editor: '' ms.assetid:
active-directory Tshoot Connect Recover From Localdb 10Gb Limit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-recover-from-localdb-10gb-limit.md
description: This topic describes how to recover Azure AD Connect Synchronizatio
documentationcenter: '' -+ editor: '' ms.assetid: 41d081af-ed89-4e17-be34-14f7e80ae358
active-directory Tshoot Connect Source Anchor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-source-anchor.md
Title: 'Azure AD Connect: Troubleshoot Source Anchor Issues during Installation
description: This topic provides steps for how to troubleshoot issues with the source anchor during installation. -+
active-directory Tshoot Connect Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-sso.md
description: This topic describes how to troubleshoot Azure Active Directory Sea
-+ ms.assetid: 9f994aca-6088-40f5-b2cc-c753a4f41da7
active-directory Tshoot Connect Sync Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-sync-errors.md
description: This article explains how to troubleshoot errors that occur during
documentationcenter: '' -+ ms.assetid: 2209d5ce-0a64-447b-be3a-6f06d47995f8
To resolve this issue:
* [Locate Active Directory objects in Active Directory Administrative Center](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd560661(v=ws.10)) * [Query Azure AD for an object by using Azure AD PowerShell](/previous-versions/azure/jj151815(v=azure.100)) * [End-to-end troubleshooting of Azure AD Connect objects and attributes](/troubleshoot/azure/active-directory/troubleshoot-aad-connect-objects-attributes)
-* [Azure AD Troubleshooting](/troubleshoot/azure/active-directory/welcome-azure-ad)
+* [Azure AD Troubleshooting](/troubleshoot/azure/active-directory/welcome-azure-ad)
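For the "query Azure AD for an object" step above, a minimal sketch with the AzureAD PowerShell module (the linked article uses the older module; this uses the AzureAD module instead, and the UPN is a placeholder) shows the attributes that most often surface in sync errors:

```powershell
# Look up the cloud object for a user and inspect the attributes that
# commonly collide during sync (the UPN below is a placeholder).
Connect-AzureAD
Get-AzureADUser -ObjectId 'user@contoso.com' |
    Select-Object UserPrincipalName, ObjectId, ImmutableId, Mail, ProxyAddresses
```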
active-directory Tshoot Connect Tshoot Sql Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-tshoot-sql-connectivity.md
description: Explains how to troubleshoot SQL connectivity issues that occur wit
documentationcenter: '' -+ na
active-directory Tutorial Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tutorial-federation.md
description: Demonstrates how to set up a hybrid identity environment using federat
documentationcenter: '' -+ na
active-directory Tutorial Passthrough Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tutorial-passthrough-authentication.md
Title: 'Tutorial: Integrate a single AD forest to Azure using PTA'
description: Demonstrates how to set up a hybrid identity environment using pass-through authentication. -+
You have now successfully set up a hybrid identity environment that you can use t
- [Hardware and prerequisites](how-to-connect-install-prerequisites.md) - [Customized settings](how-to-connect-install-custom.md)-- [Pass-through authentication](how-to-connect-pta.md)
+- [Pass-through authentication](how-to-connect-pta.md)
active-directory Tutorial Password Hash Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tutorial-password-hash-sync.md
description: Demonstrates how to set up a hybrid identity environment using passw
documentationcenter: '' -+ na
You have now successfully set up a hybrid identity environment that you can use t
- [Hardware and prerequisites](how-to-connect-install-prerequisites.md) - [Express settings](how-to-connect-install-express.md)-- [Password hash synchronization](how-to-connect-password-hash-synchronization.md)|
+- [Password hash synchronization](how-to-connect-password-hash-synchronization.md)
active-directory Tutorial Phs Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tutorial-phs-backup.md
description: Demonstrates how to turn on password hash sync as a backup and for
documentationcenter: '' -+
active-directory What Is Inter Directory Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/what-is-inter-directory-provisioning.md
Title: 'What is inter-directory provisioning with Azure Active Directory? | Micr
description: Describes overview of identity inter-directory provisioning. -+
active-directory Whatis Aadc Admin Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/whatis-aadc-admin-agent.md
Title: 'What is the Azure AD Connect Admin Agent - Azure AD Connect | Microsoft
description: Describes the tools used to synchronize and monitor your on-premises environment with Azure AD. -+
active-directory Whatis Azure Ad Connect V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/whatis-azure-ad-connect-v2.md
Title: 'What is Azure AD Connect v2.0? | Microsoft Docs'
description: Learn about the next version of Azure AD Connect. -+
active-directory Whatis Azure Ad Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/whatis-azure-ad-connect.md
Title: 'What is Azure AD Connect and Connect Health. | Microsoft Docs'
description: Learn about the tools used to synchronize and monitor your on-premises environment with Azure AD. -+
active-directory Whatis Fed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/whatis-fed.md
Title: 'What is federation with Azure AD? | Microsoft Docs'
description: Describes federation with Azure AD. -+ na
You can federate your on-premises environment with Azure AD and use this federat
- [What is federation?](whatis-fed.md) - [What is single-sign on?](how-to-connect-sso.md) - [How federation works](how-to-connect-fed-whatis.md)-- [Federation with PingFederate](how-to-connect-install-custom.md#configuring-federation-with-pingfederate)
+- [Federation with PingFederate](how-to-connect-install-custom.md#configuring-federation-with-pingfederate)
active-directory Whatis Hybrid Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/whatis-hybrid-identity.md
description: Hybrid identity is having a common user identity for authentication
keywords: introduction to Azure AD Connect, Azure AD Connect overview, what is Azure AD Connect, install active directory -+ ms.assetid: 59bd209e-30d7-4a89-ae7a-e415969825ea
Here are some common hybrid identity and access management scenarios with recomm
- [What is password hash synchronization (PHS)?](whatis-phs.md) - [What is pass-through authentication (PTA)?](how-to-connect-pta.md) - [What is federation?](whatis-fed.md) -- [What is single-sign on?](how-to-connect-sso.md)
+- [What is single-sign on?](how-to-connect-sso.md)
active-directory Whatis Phs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/whatis-phs.md
Title: 'What is password hash synchronization with Azure AD? | Microsoft Docs' description: Describes password hash synchronization. -+
For more information, see [What is hybrid identity?](whatis-hybrid-identity.md).
- [What is pass-through authentication (PTA)?](how-to-connect-pta.md) - [What is federation?](whatis-fed.md) - [What is single-sign on?](how-to-connect-sso.md)-- [How Password hash synchronization works](how-to-connect-password-hash-synchronization.md)
+- [How Password hash synchronization works](how-to-connect-password-hash-synchronization.md)
active-directory Azure Ad Pim Approval Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/azure-ad-pim-approval-workflow.md
description: Learn how to approve or deny requests for Azure AD roles in Azure A
documentationcenter: '' -+ editor: ''
active-directory Azure Pim Resource Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/azure-pim-resource-rbac.md
description: View activity and audit history for Azure resource roles in Azure A
documentationcenter: '' -+ editor: ''
active-directory Concept Privileged Access Versus Role Assignable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/concept-privileged-access-versus-role-assignable.md
description: Learn how to tell the difference between Privileged Access groups a
documentationcenter: '' -+ na
active-directory Groups Approval Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-approval-workflow.md
description: Learn how to approve or deny requests for role-assignable groups in
documentationcenter: '' -+ na
active-directory Groups Assign Member Owner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-assign-member-owner.md
description: Learn how to assign eligible owners or members of a role-assignable
documentationcenter: '' -+ na
active-directory Groups Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-audit.md
description: View activity and audit history for privileged access group assignm
documentationcenter: '' -+ editor: ''
active-directory Groups Discover Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-discover-groups.md
description: Learn how to onboard role-assignable groups to manage as privileged
documentationcenter: '' -+ na
active-directory Groups Renew Extend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-renew-extend.md
description: Learn how to extend or renew role-assignable group assignments in A
documentationcenter: '' -+
active-directory Groups Role Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-role-settings.md
description: Learn how to configure role-assignable groups settings in Azure AD
documentationcenter: '' -+ na
active-directory Pim Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-apis.md
description: Information for understanding the APIs in Azure AD Privileged Ident
documentationcenter: '' -+ editor: ''
active-directory Pim Complete Azure Ad Roles And Resource Roles Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-complete-azure-ad-roles-and-resource-roles-review.md
description: Learn how to complete an access review of Azure resource and Azure
documentationcenter: '' -+ editor: ''
active-directory Pim Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-configure.md
description: Provides an overview of Azure AD Privileged Identity Management (PI
documentationcenter: '' -+ editor: ''
active-directory Pim Create Azure Ad Roles And Resource Roles Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-create-azure-ad-roles-and-resource-roles-review.md
description: Learn how to create an access review of Azure resource and Azure AD
documentationcenter: '' -+ editor: ''
active-directory Pim Deployment Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-deployment-plan.md
description: Learn how to deploy Privileged Identity Management (PIM) in your Az
documentationcenter: '' -+ editor: ''
active-directory Pim Email Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-email-notifications.md
description: Describes email notifications in Azure AD Privileged Identity Manag
documentationcenter: '' -+ na
active-directory Pim Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-getting-started.md
description: Learn how to enable and get started using Azure AD Privileged Ident
documentationcenter: '' -+ editor: ''
active-directory Pim How To Activate Role https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-activate-role.md
description: Learn how to activate Azure AD roles in Azure AD Privileged Identit
documentationcenter: '' -+ editor: ''
active-directory Pim How To Add Role To User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-add-role-to-user.md
description: Learn how to assign Azure AD roles in Azure AD Privileged Identity
documentationcenter: '' -+ editor: ''
active-directory Pim How To Change Default Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-change-default-settings.md
description: Learn how to configure Azure AD role settings in Azure AD Privilege
documentationcenter: '' -+ editor: ''
active-directory Pim How To Configure Security Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-configure-security-alerts.md
description: Configure security alerts for Azure AD roles Privileged Identity Ma
documentationcenter: '' -+ editor: ''
active-directory Pim How To Renew Extend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-renew-extend.md
description: Learn how to extend or renew Azure Active Directory role assignment
documentationcenter: '' -+ editor: ''
active-directory Pim How To Require Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-require-mfa.md
description: Learn how Azure AD Privileged Identity Management (PIM) validates m
documentationcenter: '' -+ editor: ''
active-directory Pim How To Use Audit Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-use-audit-log.md
description: Learn how to view the audit log history for Azure AD roles in Azure
documentationcenter: '' -+ editor: ''
active-directory Pim Perform Azure Ad Roles And Resource Roles Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-perform-azure-ad-roles-and-resource-roles-review.md
description: Learn how to review access of Azure resource and Azure AD roles in
documentationcenter: '' -+ editor: ''
active-directory Pim Resource Roles Approval Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-approval-workflow.md
description: Learn how to approve or deny requests for Azure resource roles in A
documentationcenter: '' -+ na
active-directory Pim Resource Roles Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-assign-roles.md
description: Learn how to assign Azure resource roles in Azure AD Privileged Ide
documentationcenter: '' -+ na
active-directory Pim Resource Roles Configure Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-configure-alerts.md
description: Learn how to configure security alerts for Azure resource roles in
documentationcenter: '' -+ na
active-directory Pim Resource Roles Configure Role Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-configure-role-settings.md
description: Learn how to configure Azure resource role settings in Azure AD Pri
documentationcenter: '' -+ na
active-directory Pim Resource Roles Custom Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-custom-role-policy.md
description: Learn how to use Azure custom roles in Azure AD Privileged Identity
documentationcenter: '' -+ na
active-directory Pim Resource Roles Discover Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-discover-resources.md
description: Learn how to discover Azure resources to manage in Azure AD Privile
documentationcenter: '' -+ na
active-directory Pim Resource Roles Overview Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-overview-dashboards.md
description: Describes how to use a resource dashboard to perform an access revi
documentationcenter: '' -+ editor: markwahl-msft
active-directory Pim Resource Roles Renew Extend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-renew-extend.md
description: Learn how to extend or renew Azure resource role assignments in Azu
documentationcenter: '' -+ editor: ''
active-directory Pim Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-roles.md
description: Describes the roles you cannot manage in Azure AD Privileged Identi
documentationcenter: '' -+ editor: ''
active-directory Pim Security Wizard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-security-wizard.md
description: Discovery and insights (formerly Security Wizard) help you convert
documentationcenter: '' -+ editor: ''
active-directory Pim Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-troubleshoot.md
description: Learn how to troubleshoot system errors with roles in Azure AD Priv
documentationcenter: '' -+ editor: ''
active-directory Powershell For Azure Ad Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/powershell-for-azure-ad-roles.md
description: Manage Azure AD roles using PowerShell cmdlets in Azure AD Privileg
documentationcenter: '' -+ editor: ''
active-directory Subscription Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/subscription-requirements.md
description: Describes the licensing requirements to use Azure AD Privileged Ide
documentationcenter: '' -+ editor: '' ms.assetid: 34367721-8b42-4fab-a443-a2e55cdbf33d
active-directory Concept Activity Logs Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-activity-logs-azure-monitor.md
description: Introduction to Azure Active Directory activity logs in Azure Monit
documentationcenter: '' -+ editor: '' ms.assetid: 4b18127b-d1d0-4bdc-8f9c-6a4c991c5f75
na Previously updated : 03/11/2022 Last updated : 08/26/2022
active-directory Concept All Sign Ins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-all-sign-ins.md
description: Overview of the sign-in logs in Azure Active Directory including ne
documentationcenter: '' -+ editor: '' ms.assetid: 4b18127b-d1d0-4bdc-8f9c-6a4c991c5f75
na Previously updated : 05/02/2022 Last updated : 08/26/2022
active-directory Concept Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-audit-logs.md
description: Overview of the audit logs in Azure Active Directory.
documentationcenter: '' -+ editor: '' ms.assetid: a1f93126-77d1-4345-ab7d-561066041161
na Previously updated : 04/30/2021 Last updated : 08/26/2022
active-directory Concept Provisioning Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-provisioning-logs.md
description: Overview of the provisioning logs in Azure Active Directory.
documentationcenter: '' -+ editor: '' ms.assetid: 4b18127b-d1d0-4bdc-8f9c-6a4c991c5f75
na Previously updated : 12/20/2021 Last updated : 08/26/2022
active-directory Concept Reporting Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-reporting-api.md
description: How to get started with the Azure Active Directory reporting API
documentationcenter: '' -+ editor: '' ms.assetid: 8813b911-a4ec-4234-8474-2eef9afea11e
na Previously updated : 01/21/2021 Last updated : 08/26/2022
active-directory Concept Sign In Diagnostics Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-sign-in-diagnostics-scenarios.md
description: Lists the scenarios that are supported by the sign-in diagnostics f
documentationcenter: '' -+ editor: '' ms.assetid: e2b3d8ce-708a-46e4-b474-123792f35526
na Previously updated : 11/12/2021 Last updated : 08/26/2022
active-directory Concept Sign Ins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-sign-ins.md
description: Overview of the sign-in logs in Azure Active Directory.
documentationcenter: '' -+ editor: '' ms.assetid: 4b18127b-d1d0-4bdc-8f9c-6a4c991c5f75
na Previously updated : 05/02/2022 Last updated : 08/26/2022
active-directory Concept Usage Insights Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-usage-insights-report.md
description: Introduction to usage and insights report in the Azure Active Direc
documentationcenter: '' -+ editor: '' ms.assetid: 3fba300d-18fc-4355-9924-d8662f563a1f
na Previously updated : 05/27/2022 Last updated : 08/26/2022
active-directory Howto Access Activity Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-access-activity-logs.md
description: Learn how to choose the right method for accessing the activity log
documentationcenter: '' -+ editor: '' ms.assetid: ada19f69-665c-452a-8452-701029bf4252
na Previously updated : 11/24/2021 Last updated : 08/26/2022
active-directory Howto Analyze Activity Logs Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-analyze-activity-logs-log-analytics.md
description: Learn how to analyze Azure Active Directory activity logs using Azu
documentationcenter: '' -+ editor: '' ms.assetid: 4535ae65-8591-41ba-9a7d-b7f00c574426
na Previously updated : 08/19/2021 Last updated : 08/26/2022
active-directory Howto Configure Prerequisites For Reporting Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-configure-prerequisites-for-reporting-api.md
description: Learn about the prerequisites to access the Azure AD reporting API
documentationcenter: '' -+ editor: '' ms.assetid: ada19f69-665c-452a-8452-701029bf4252
na Previously updated : 08/21/2021 Last updated : 08/26/2022
active-directory Howto Download Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-download-logs.md
description: Learn how to download activity logs in Azure Active Directory.
documentationcenter: '' -+ editor: '' Previously updated : 02/25/2022 Last updated : 08/26/2022
active-directory Howto Find Activity Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-find-activity-reports.md
description: Learn where the Azure Active Directory user activity reports are in
documentationcenter: '' -+ editor: '' Previously updated : 11/13/2018 Last updated : 08/26/2022
active-directory Howto Install Use Log Analytics Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-install-use-log-analytics-views.md
description: Learn how to install and use the log analytics views for Azure Acti
documentationcenter: '' -+ editor: '' ms.assetid: 2290de3c-2858-4da0-b4ca-a00107702e26
na Previously updated : 04/18/2019 Last updated : 08/26/2022
active-directory Howto Integrate Activity Logs With Arcsight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-arcsight.md
description: Learn how to integrate Azure Active Directory logs with ArcSight us
documentationcenter: '' -+ editor: '' ms.assetid: b37bef0d-982e-4e28-86b2-6c61ca524ae1
na Previously updated : 04/19/2019 Last updated : 08/26/2022
active-directory Howto Integrate Activity Logs With Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md
description: Learn how to integrate Azure Active Directory logs with Azure Monit
documentationcenter: '' -+ editor: '' ms.assetid: 2c3db9a8-50fa-475a-97d8-f31082af6593
na Previously updated : 07/09/2021 Last updated : 08/22/2022
active-directory Howto Integrate Activity Logs With Splunk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-splunk.md
description: Learn how to integrate Azure Active Directory logs with Splunk usin
documentationcenter: '' -+ editor: '' ms.assetid: 2c3db9a8-50fa-475a-97d8-f31082af6593
na Previously updated : 08/05/2021 Last updated : 08/22/2022
active-directory Howto Integrate Activity Logs With Sumologic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-sumologic.md
description: Learn how to integrate Azure Active Directory logs with SumoLogic u
documentationcenter: '' -+ editor: '' ms.assetid: 2c3db9a8-50fa-475a-97d8-f31082af6593
na Previously updated : 04/18/2019 Last updated : 08/22/2022
active-directory Howto Manage Inactive User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-manage-inactive-user-accounts.md
description: Learn about how to detect and handle user accounts in Azure AD that
documentationcenter: '' -+ editor: '' ms.assetid: ada19f69-665c-452a-8452-701029bf4252
na Previously updated : 12/17/2021 Last updated : 08/26/2022
active-directory Howto Troubleshoot Sign In Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-troubleshoot-sign-in-errors.md
description: Learn how to troubleshoot sign-in errors using Azure Active Directo
documentationcenter: '' -+ editor: '' Previously updated : 11/13/2018 Last updated : 08/26/2022
active-directory Howto Use Azure Monitor Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-use-azure-monitor-workbooks.md
Title: Azure Monitor workbooks for reports | Microsoft Docs
description: Learn how to use Azure Monitor workbooks for Azure Active Directory reports. -+ ms.assetid: 4066725c-c430-42b8-a75b-fe2360699b82
Previously updated : 06/10/2022 Last updated : 08/26/2022
active-directory Overview Flagged Sign Ins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-flagged-sign-ins.md
description: Provides a general overview of flagged sign-ins in Azure Active Dir
documentationcenter: '' -+ editor: '' ms.assetid: e2b3d8ce-708a-46e4-b474-123792f35526
na Previously updated : 03/02/2022 Last updated : 08/26/2022
active-directory Overview Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-monitoring.md
description: Provides a general overview of Azure Active Directory monitoring.
documentationcenter: '' -+ editor: '' ms.assetid: e2b3d8ce-708a-46e4-b474-123792f35526
na Previously updated : 04/18/2019 Last updated : 08/26/2022
active-directory Overview Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-recommendations.md
description: Provides a general overview of Azure Active Directory recommendatio
documentationcenter: '' -+ editor: '' ms.assetid: e2b3d8ce-708a-46e4-b474-123792f35526
na Previously updated : 03/01/2022 Last updated : 08/22/2022
active-directory Overview Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-reports.md
description: Provides a general overview of Azure Active Directory reports.
documentationcenter: '' -+ editor: '' ms.assetid: 6141a333-38db-478a-927e-526f1e7614f4
na Previously updated : 09/30/2020 Last updated : 08/22/2022
active-directory Overview Service Health Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-service-health-notifications.md
description: Learn how Service Health notifications provide you with a customiza
documentationcenter: '' -+ editor: '' ms.assetid: 1c5002e4-079e-4c28-a4e8-a5841942030a6
na Previously updated : 06/15/2022 Last updated : 08/26/2022
active-directory Overview Sign In Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-sign-in-diagnostics.md
description: Provides a general overview of the sign-in diagnostic in Azure Acti
documentationcenter: '' -+ editor: '' ms.assetid: e2b3d8ce-708a-46e4-b474-123792f35526
na Previously updated : 11/12/2021 Last updated : 08/26/2022
active-directory Plan Monitoring And Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/plan-monitoring-and-reporting.md
Title: Plan reports & monitoring deployment - Azure AD
description: Describes how to plan and execute implementation of reporting and monitoring. -+ Previously updated : 11/13/2018 Last updated : 08/26/2022
active-directory Quickstart Access Log With Graph Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/quickstart-access-log-with-graph-api.md
Previously updated : 06/03/2021 Last updated : 08/26/2022 -+
active-directory Quickstart Analyze Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/quickstart-analyze-sign-in.md
Previously updated : 06/03/2021 Last updated : 08/26/2021 -+
active-directory Quickstart Azure Monitor Route Logs To Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/quickstart-azure-monitor-route-logs-to-storage-account.md
description: Learn how to set up Azure Diagnostics to push Azure Active Director
documentationcenter: '' -+ editor: '' ms.assetid: 045f94b3-6f12-407a-8e9c-ed13ae7b43a3
na Previously updated : 05/05/2021 Last updated : 08/26/2022
active-directory Quickstart Filter Audit Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/quickstart-filter-audit-log.md
Previously updated : 06/11/2021 Last updated : 08/26/2022 -+
active-directory Recommendation Integrate Third Party Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-integrate-third-party-apps.md
description: Learn why you should integrate third party apps with Azure AD
documentationcenter: '' -+ editor: '' ms.assetid: 9b88958d-94a2-4f4b-a18c-616f0617a24e
na Previously updated : 03/02/2022 Last updated : 08/26/2022
active-directory Recommendation Mfa From Known Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-mfa-from-known-devices.md
description: Learn why you should minimize MFA prompts from known devices in Azu
documentationcenter: '' -+ editor: '' ms.assetid: 9b88958d-94a2-4f4b-a18c-616f0617a24e
na Previously updated : 03/31/2022 Last updated : 08/26/2022
active-directory Recommendation Migrate Apps From Adfs To Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-migrate-apps-from-adfs-to-azure-ad.md
description: Learn why you should migrate apps from ADFS to Azure AD in Azure AD
documentationcenter: '' -+ editor: '' ms.assetid: 9b88958d-94a2-4f4b-a18c-616f0617a24e
na Previously updated : 03/31/2022 Last updated : 08/26/2022
active-directory Recommendation Migrate To Authenticator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-migrate-to-authenticator.md
description: Learn why you should migrate your users to the Microsoft authentica
documentationcenter: '' -+ editor: '' ms.assetid: 9b88958d-94a2-4f4b-a18c-616f0617a24e
na Previously updated : 03/02/2022 Last updated : 08/26/2022
active-directory Recommendation Turn Off Per User Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-turn-off-per-user-mfa.md
description: Learn why you should turn off per user MFA in Azure AD
documentationcenter: '' -+ editor: '' ms.assetid: 9b88958d-94a2-4f4b-a18c-616f0617a24e
na Previously updated : 03/31/2022 Last updated : 08/26/2022
active-directory Reference Audit Activities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-audit-activities.md
description: Get an overview of the audit activities that can be logged in your
documentationcenter: '' -+ editor: '' ms.assetid: a1f93126-77d1-4345-ab7d-561066041161
na Previously updated : 01/24/2019 Last updated : 08/26/2022
active-directory Reference Azure Ad Sla Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-azure-ad-sla-performance.md
description: Learn about the Azure AD SLA performance
documentationcenter: '' -+ editor: '' ms.assetid: 9b88958d-94a2-4f4b-a18c-616f0617a24e
na Previously updated : 03/15/2022 Last updated : 08/26/2022
For each month, we truncate the SLA attainment at three places after the decimal
| Month | 2021 | 2022 |
| --- | --- | --- |
-| January | | 99.999% |
+| January | | 99.998% |
| February | 99.999% | 99.999% |
-| March | 99.568% | 99.999% |
+| March | 99.568% | 99.998% |
| April | 99.999% | 99.999% |
| May | 99.999% | 99.999% |
| June | 99.999% | 99.999% |
-| July | 99.999% | |
+| July | 99.999% | 99.999% |
| August | 99.999% | |
| September | 99.999% | |
| October | 99.999% | |
active-directory Reference Azure Monitor Sign Ins Log Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-azure-monitor-sign-ins-log-schema.md
description: Describe the Azure AD sign in log schema for use in Azure Monitor
documentationcenter: '' -+ editor: '' ms.assetid: 4b18127b-d1d0-4bdc-8f9c-6a4c991c5f75
na Previously updated : 12/17/2021 Last updated : 08/26/2022
active-directory Reference Basic Info Sign In Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-basic-info-sign-in-logs.md
description: Learn what the basic info in the sign-in logs is about.
documentationcenter: '' -+ editor: '' ms.assetid: 9b88958d-94a2-4f4b-a18c-616f0617a24e
na Previously updated : 05/05/2022 Last updated : 08/26/2022
active-directory Reference Powershell Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-powershell-reporting.md
description: Reference of the Azure AD PowerShell cmdlets for reporting.
documentationcenter: '' -+ editor: '' ms.assetid: a1f93126-77d1-4345-ab7d-561066041161
na Previously updated : 08/07/2020 Last updated : 08/26/2022
To install the public preview release, use the following.

```powershell
Install-module AzureADPreview
```
-For more infromation on how to connect to Azure AD using PowerShell, please see the article [Azure AD PowerShell for Graph](/powershell/azure/active-directory/install-adv2).
+For more information on how to connect to Azure AD using PowerShell, please see the article [Azure AD PowerShell for Graph](/powershell/azure/active-directory/install-adv2).
With Azure Active Directory (Azure AD) reports, you can get details on activities around all the write operations in your directory (audit logs) and authentication data (sign-in logs). Although the information is available by using the MS Graph API, now you can retrieve the same data by using the Azure AD PowerShell cmdlets for reporting.
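As a quick illustration, retrieving both data sets with the preview module looks roughly like the following sketch (the date filter values are examples only):

```powershell
# Connect, then pull a few recent audit and sign-in events with the
# AzureADPreview reporting cmdlets (date filter values are examples).
Connect-AzureAD
Get-AzureADAuditDirectoryLogs -Filter "activityDateTime gt 2022-08-01" |
    Select-Object -First 10
Get-AzureADAuditSignInLogs -Filter "createdDateTime gt 2022-08-01" |
    Select-Object -First 10
```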
active-directory Reference Reports Data Retention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-reports-data-retention.md
description: Learn how long Azure stores the various types of reporting data.
documentationcenter: '' -+ editor: '' ms.assetid: 183e53b0-0647-42e7-8abe-3e9ff424de12
Previously updated : 11/05/2020 Last updated : 08/26/2022
You can retain the audit and sign-in activity data for longer than the default r
### Can I see last month's data after getting an Azure AD premium license?
-**No**, you can't. Azure stores up to seven days of activity data for a free version. This means, when you switch from a free to a to a premium version, you can only see up to 7 days of data.
+**No**, you can't. Azure stores up to seven days of activity data for a free version. This means, when you switch from a free to a premium version, you can only see up to 7 days of data.
active-directory Reference Reports Latencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-reports-latencies.md
description: Learn about the amount of time it takes for reporting events to sho
documentationcenter: '' -+ editor: '' ms.assetid: 9b88958d-94a2-4f4b-a18c-616f0617a24e
na Previously updated : 05/13/2019 Last updated : 08/26/2022
active-directory Troubleshoot Audit Data Verified Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/troubleshoot-audit-data-verified-domain.md
description: Provides you with information that will appear in the Azure Active
documentationcenter: '' -+ editor: '' na Previously updated : 07/22/2020 Last updated : 08/26/2022
active-directory Troubleshoot Graph Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/troubleshoot-graph-api.md
description: Provides you with a resolution to errors while calling Azure Active
documentationcenter: '' -+ editor: '' ms.assetid: 0030c5a4-16f0-46f4-ad30-782e7fea7e40
na Previously updated : 11/13/2018 Last updated : 08/26/2022
This article lists the common error messages you may run into while accessing ac
### 500 HTTP internal server error while accessing Microsoft Graph V2 endpoint
-We do not currently support the Microsoft Graph v2 endpoint - make sure to access the activity logs using the Microsoft Graph v1 endpoint.
+We don't currently support the Microsoft Graph v2 endpoint - make sure to access the activity logs using the Microsoft Graph v1 endpoint.
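For orientation, a request against the supported endpoint looks roughly like the following sketch; `$token` is assumed to be an already-acquired access token with audit log read permission, and token acquisition is out of scope here:

```powershell
# Call the Microsoft Graph v1.0 endpoint for sign-in events.
# $token is assumed to hold a valid access token (for example, one granted
# the AuditLog.Read.All permission); acquiring it is not shown.
$headers = @{ Authorization = "Bearer $token" }
Invoke-RestMethod -Uri 'https://graph.microsoft.com/v1.0/auditLogs/signIns?$top=10' -Headers $headers
```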
### Error: Neither tenant is B2C or tenant doesn't have premium license Accessing sign-in reports requires an Azure Active Directory premium 1 (P1) license. If you see this error message while accessing sign-ins, make sure that your tenant is licensed with an Azure AD P1 license.
-### Error: User is not in the allowed roles
+### Error: User isn't in the allowed roles
If you see this error message while trying to access audit logs or sign-ins using the API, make sure that your account is part of the **Security Reader** or **Report Reader** role in your Azure Active Directory tenant. ### Error: Application missing AAD 'Read directory data' permission
-Please follow the steps in the [Prerequisites to access the Azure Active Directory reporting API](howto-configure-prerequisites-for-reporting-api.md) to ensure your application is running with the right set of permissions.
+Follow the steps in the [Prerequisites to access the Azure Active Directory reporting API](howto-configure-prerequisites-for-reporting-api.md) to ensure your application is running with the right set of permissions.
### Error: Application missing Microsoft Graph API 'Read all audit log data' permission
-Please follow the steps in the [Prerequisites to access the Azure Active Directory reporting API](howto-configure-prerequisites-for-reporting-api.md) to ensure your application is running with the right set of permissions.
+Follow the steps in the [Prerequisites to access the Azure Active Directory reporting API](howto-configure-prerequisites-for-reporting-api.md) to ensure your application is running with the right set of permissions.
## Next Steps
active-directory Troubleshoot Missing Audit Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/troubleshoot-missing-audit-data.md
description: Provides you with a resolution to missing data in Azure Active Dire
documentationcenter: '' -+ editor: '' ms.assetid: 7cbe4337-bb77-4ee0-b254-3e368be06db7
na Previously updated : 01/15/2018 Last updated : 08/26/2022
active-directory Troubleshoot Missing Data Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/troubleshoot-missing-data-download.md
description: Provides you with a resolution to missing data in downloaded Azure
documentationcenter: '' -+ editor: '' ms.assetid: ffce7eb1-99da-4ea7-9c4d-2322b755c8ce
na Previously updated : 11/13/2018 Last updated : 08/26/2022
active-directory Tutorial Access Api With Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/tutorial-access-api-with-certificates.md
description: This tutorial explains how to use the Azure AD Reporting API with c
documentationcenter: '' -+ ms.assetid:
na Previously updated : 11/13/2018 Last updated : 08/26/2022
active-directory Tutorial Azure Monitor Stream Logs To Event Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md
description: Learn how to set up Azure Diagnostics to push Azure Active Director
documentationcenter: '' -+ editor: '' ms.assetid: 045f94b3-6f12-407a-8e9c-ed13ae7b43a3
na Previously updated : 09/02/2021 Last updated : 08/26/2022
After data is displayed in the event hub, you can access and read the data in tw
* **Splunk**: For more information about integrating Azure AD logs with Splunk, see [Integrate Azure AD logs with Splunk by using Azure Monitor](./howto-integrate-activity-logs-with-splunk.md).
- * **IBM QRadar**: The DSM and Azure Event Hub Protocol are available for download at [IBM support](https://www.ibm.com/support). For more information about integration with Azure, go to the [IBM QRadar Security Intelligence Platform 7.3.0](https://www.ibm.com/support/knowledgecenter/SS42VS_DSM/c_dsm_guide_microsoft_azure_overview.html?cp=SS42VS_7.3.0) site.
+ * **IBM QRadar**: The DSM and Azure Event Hubs Protocol are available for download at [IBM support](https://www.ibm.com/support). For more information about integration with Azure, go to the [IBM QRadar Security Intelligence Platform 7.3.0](https://www.ibm.com/support/knowledgecenter/SS42VS_DSM/c_dsm_guide_microsoft_azure_overview.html?cp=SS42VS_7.3.0) site.
* **Sumo Logic**: To set up Sumo Logic to consume data from an event hub, see [Install the Azure AD app and view the dashboards](https://help.sumologic.com/Send-Data/Applications-and-Other-Data-Sources/Azure_Active_Directory/Install_the_Azure_Active_Directory_App_and_View_the_Dashboards).
active-directory Tutorial Log Analytics Wizard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/tutorial-log-analytics-wizard.md
Previously updated : 08/05/2020 Last updated : 08/26/2022 -+ #Customer intent: As an IT admin, I want to set up log analytics so I can analyze the health of my environment.
active-directory Workbook Authentication Prompts Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/workbook-authentication-prompts-analysis.md
description: Learn how to use the authentication prompts analysis workbook.
documentationcenter: '' -+ editor: '' Previously updated : 02/22/2022 Last updated : 08/26/2022
active-directory Workbook Conditional Access Gap Analyzer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/workbook-conditional-access-gap-analyzer.md
description: Learn how to use the conditional access gap analyzer workbook.
documentationcenter: '' -+ editor: '' Previously updated : 11/05/2021 Last updated : 08/26/2022
active-directory Workbook Cross Tenant Access Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/workbook-cross-tenant-access-activity.md
description: Learn how to use the cross-tenant access activity workbook.
documentationcenter: '' -+ editor: '' Previously updated : 02/14/2022 Last updated : 08/26/2022
active-directory Workbook Legacy Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/workbook-legacy authentication.md
description: Learn how to use the sign-ins using legacy authentication workbook.
documentationcenter: '' -+ editor: '' Previously updated : 03/16/2022 Last updated : 08/26/2022
This workbook supports multiple filters:
- Many email protocols that once relied on legacy authentication now support more secure modern authentication methods. If you see legacy email authentication protocols in this workbook, consider migrating to modern authentication for email instead. For more information, see [Deprecation of Basic authentication in Exchange Online](/exchange/clients-and-mobile-in-exchange-online/deprecation-of-basic-authentication-exchange-online). -- Some clients can use either legacy or modern authentication, depending on client configuration. If you see "modern mobile/desktop client" or "browser" for a client in the Azure AD logs, it is using modern authentication. If it has a specific client or protocol name, such as "Exchange ActiveSync", it is using legacy authentication to connect to Azure AD. The client types in conditional access, and the Azure AD reporting page in the Azure Portal demarcate modern authentication clients and legacy authentication clients for you, and only legacy authentication is captured in this workbook.
+- Some clients can use either legacy or modern authentication, depending on client configuration. If you see "modern mobile/desktop client" or "browser" for a client in the Azure AD logs, it is using modern authentication. If it has a specific client or protocol name, such as "Exchange ActiveSync", it is using legacy authentication to connect to Azure AD. The client types in conditional access, and the Azure AD reporting page in the Azure portal demarcate modern authentication clients and legacy authentication clients for you, and only legacy authentication is captured in this workbook.
## Next steps
active-directory Workbook Risk Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/workbook-risk-analysis.md
description: Learn how to use the identity protection risk analysis workbook.
documentationcenter: '' -+ editor: '' Previously updated : 03/08/2022 Last updated : 08/26/2022
active-directory Workbook Sensitive Operations Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/workbook-sensitive-operations-report.md
description: Learn how to use the sensitive operations report workbook.
documentationcenter: '' -+ editor: '' Previously updated : 11/05/2021 Last updated : 08/26/2022
active-directory Lr Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lr-tutorial.md
In this section, you enable Azure AD single sign-on in the LoginRadius Admin Con
1. Log in to your LoginRadius [Admin Console](https://adminconsole.loginradius.com/login) account.
-2. Go to your **Team Management** section in the [LoginRadius Admin Console](https://secure.loginradius.com/account/team).
+2. Go to your **Team Management** section in the [LoginRadius Admin Console](https://www.loginradius.com/docs/api/v2/admin-console/overview/).
3. Select the **Single Sign-On** tab, and then select **Azure AD**:
active-directory Verifiable Credentials Configure Verifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-verifier.md
Now you are ready to present and verify your first verified credential expert ca
1. From Visual Studio Code, run the *Verifiable_credentials_DotNet* project. Or from the command shell, run the following commands: ```bash
- cd active-directory-verifiable-credentials-dotnet/1. asp-net-core-api-idtokenhint
+ cd active-directory-verifiable-credentials-dotnet/1-asp-net-core-api-idtokenhint
dotnet build "asp-net-core-api-idtokenhint.csproj" -c Debug -o .\bin\Debug\netcoreapp3.1
dotnet run
```
aks Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/availability-zones.md
AKS clusters can currently be created using availability zones in the following
* South Africa North * South Central US * Sweden Central
+* Switzerland North
* UK South * US Gov Virginia * West Europe
aks Open Service Mesh About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-about.md
The OSM project was originated by Microsoft and has since been donated and is go
OSM can be added to your Azure Kubernetes Service (AKS) cluster by enabling the OSM add-on using the [Azure CLI][osm-azure-cli] or a [Bicep template][osm-bicep]. The OSM add-on provides a fully supported installation of OSM that is integrated with AKS. > [!IMPORTANT]
-> The OSM add-on installs version *1.1.1* of OSM on clusters running Kubernetes version 1.23.5 and higher. The OSM add-on installs version *1.0.0.* on clusters running a Kubernetes version below 1.23.5.
+> Based on the version of Kubernetes your cluster is running, the OSM add-on installs a different version of OSM:
+> - If your cluster is running Kubernetes version 1.24.0 or greater, the OSM add-on installs version *1.2.0* of OSM.
+> - If your cluster is running a version of Kubernetes between 1.23.5 and 1.24.0, the OSM add-on installs version *1.1.1* of OSM.
+> - If your cluster is running a version of Kubernetes below 1.23.5, the OSM add-on installs version *1.0.0* of OSM.
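Since the installed OSM version follows from the cluster's Kubernetes version, checking the latter tells you what to expect. A minimal sketch (resource group and cluster names are placeholders):

```powershell
# Check the cluster's Kubernetes version to predict which OSM version the
# add-on will install (resource group and cluster names are placeholders).
az aks show --resource-group myResourceGroup --name myAKSCluster `
    --query kubernetesVersion --output tsv
```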
## Capabilities and features
aks Open Service Mesh Binary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-binary.md
zone_pivot_groups: client-operating-system
This article will discuss how to download the OSM client library to be used to operate and configure the OSM add-on for AKS, and how to configure the binary for your environment.
-> [!WARNING]
-> If you are using a Kubernetes version below 1.23.5, the OSM add-on installs version *1.0.0.* of OSM on your cluster, and you must use the OSM client library version *1.0.0* with the following commands.
+> [!IMPORTANT]
+> Based on the version of Kubernetes your cluster is running, the OSM add-on installs a different version of OSM:
+> - If your cluster is running Kubernetes version 1.24.0 or greater, the OSM add-on installs version *1.2.0* of OSM.
+> - If your cluster is running a version of Kubernetes between 1.23.5 and 1.24.0, the OSM add-on installs version *1.1.1* of OSM.
+> - If your cluster is running a version of Kubernetes below 1.23.5, the OSM add-on installs version *1.0.0* of OSM.
+ ::: zone pivot="client-operating-system-linux"
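For example, a minimal sketch for downloading and installing a matching OSM client binary on Linux, assuming the add-on installed OSM *1.2.0* on your cluster (the URL follows the OSM project's GitHub release naming; verify the version and URL before use):

```bash
# Assumption: the cluster's OSM add-on version is 1.2.0
OSM_VERSION=v1.2.0
curl -sL "https://github.com/openservicemesh/osm/releases/download/${OSM_VERSION}/osm-${OSM_VERSION}-linux-amd64.tar.gz" | tar -xzf -
sudo mv ./linux-amd64/osm /usr/local/bin/osm
# Confirm the client version matches the add-on's OSM version
osm version
```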
aks Open Service Mesh Deploy Addon Az Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-deploy-addon-az-cli.md
This article shows you how to install the Open Service Mesh (OSM) add-on on an Azure Kubernetes Service (AKS) cluster and verify that it's installed and running. > [!IMPORTANT]
-> The OSM add-on installs version *1.1.1* of OSM on clusters running Kubernetes version 1.23.5 and higher. The OSM add-on installs version *1.0.0.* on clusters running a Kubernetes version below 1.23.5.
+> Based on the version of Kubernetes your cluster is running, the OSM add-on installs a different version of OSM:
+> - If your cluster is running Kubernetes version 1.24.0 or greater, the OSM add-on installs version *1.2.0* of OSM.
+> - If your cluster is running a version of Kubernetes between 1.23.5 and 1.24.0, the OSM add-on installs version *1.1.1* of OSM.
+> - If your cluster is running a version of Kubernetes below 1.23.5, the OSM add-on installs version *1.0.0* of OSM.
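As a preview of the step this article builds toward, enabling the add-on with the Azure CLI looks similar to the following sketch, which assumes an existing cluster named *myAKSCluster* in *myResourceGroup*:

```azurecli-interactive
# Enable the Open Service Mesh add-on on an existing AKS cluster
az aks enable-addons --resource-group myResourceGroup --name myAKSCluster --addons open-service-mesh
```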
## Prerequisites
aks Open Service Mesh Deploy Addon Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-deploy-addon-bicep.md
This article shows you how to deploy the Open Service Mesh (OSM) add-on to Azure Kubernetes Service (AKS) by using a [Bicep](../azure-resource-manager/bicep/index.yml) template. > [!IMPORTANT]
-> The OSM add-on installs version *1.1.1* of OSM on clusters running Kubernetes version 1.23.5 and higher. The OSM add-on installs version *1.0.0.* on clusters running a Kubernetes version below 1.23.5.
+> Based on the version of Kubernetes your cluster is running, the OSM add-on installs a different version of OSM:
+> - If your cluster is running Kubernetes version 1.24.0 or greater, the OSM add-on installs version *1.2.0* of OSM.
+> - If your cluster is running a version of Kubernetes between 1.23.5 and 1.24.0, the OSM add-on installs version *1.1.1* of OSM.
+> - If your cluster is running a version of Kubernetes below 1.23.5, the OSM add-on installs version *1.0.0* of OSM.
[Bicep](../azure-resource-manager/bicep/overview.md) is a domain-specific language that uses declarative syntax to deploy Azure resources. You can use Bicep in place of creating [Azure Resource Manager templates](../azure-resource-manager/templates/overview.md) to deploy your infrastructure-as-code Azure resources.
aks Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade-cluster.md
For AKS clusters that use multiple node pools or Windows Server nodes, see [Upgr
### [Azure CLI](#tab/azure-cli)
-This article requires that you are running the Azure CLI version 2.0.65 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+This article requires that you're running the Azure CLI version 2.34.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
### [Azure PowerShell](#tab/azure-powershell)
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --outpu
> [!NOTE] > When you upgrade a supported AKS cluster, Kubernetes minor versions can't be skipped. All upgrades must be performed sequentially by minor version number. For example, upgrades between *1.14.x* -> *1.15.x* or *1.15.x* -> *1.16.x* are allowed; however, *1.14.x* -> *1.16.x* is not allowed.
->
+>
> Skipping multiple versions can only be done when upgrading from an _unsupported version_ back to a _supported version_. For example, an upgrade from an unsupported *1.10.x* -> a supported *1.15.x* can be completed if available. The following example output shows that the cluster can be upgraded to versions *1.19.1* and *1.19.3*:
Name ResourceGroup MasterVersion Upgrades
default myResourceGroup 1.18.10 1.19.1, 1.19.3 ```
-The following output shows that no upgrades are available (or it may also be possible that cli is not upgraded):
+The following example output means that the appservice-kube extension isn't compatible with your Azure CLI version (a minimum of version 2.34.1 is required):
+
+```console
+The 'appservice-kube' extension is not compatible with this version of the CLI.
+You have CLI core version 2.0.81 and this extension requires a min of 2.34.1.
+Table output unavailable. Use the --query option to specify an appropriate query. Use --debug for more info.
+```
+
+If you receive this output, you need to update your Azure CLI version. The `az upgrade` command was added in version 2.11.0 and doesn't work with versions prior to 2.11.0. Older versions can be updated by reinstalling Azure CLI as described in [Install the Azure CLI](/cli/azure/install-azure-cli). If your Azure CLI version is 2.11.0 or later, you'll receive a message to run `az upgrade` to upgrade Azure CLI to the latest version.
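For example, the following sketch checks the installed version and then upgrades in place (the `az upgrade` step assumes you're already on version 2.11.0 or later):

```azurecli-interactive
# Check the installed Azure CLI version
az --version

# Upgrade the Azure CLI in place (available in version 2.11.0 and later)
az upgrade
```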
+
+If your Azure CLI is updated and you receive the following example output, it means that no upgrades are available:
```console
ERROR: Table output unavailable. Use the --query option to specify an appropriate query. Use --debug for more info.
```
-> [!IMPORTANT]
-> If no upgrade is available, create a new cluster with a supported version of Kubernetes and migrate your workloads from the existing cluster to the new cluster. Attempting to upgrade a cluster to a newer Kubernetes version when `az aks get-upgrades` shows no upgrades available is not supported.
+If no upgrades are available, create a new cluster with a supported version of Kubernetes and migrate your workloads from the existing cluster to the new cluster. It's not supported to upgrade a cluster to a newer Kubernetes version when `az aks get-upgrades` shows that no upgrades are available.
### [Azure PowerShell](#tab/azure-powershell) To check which Kubernetes releases are available for your cluster, use the [Get-AzAksUpgradeProfile][get-azaksupgradeprofile] command. The following example checks for available upgrades to *myAKSCluster* in *myResourceGroup*: ```azurepowershell-interactive
-Get-AzAksUpgradeProfile -ResourceGroupName myResourceGroup -ClusterName myAKSCluster |
+ Get-AzAksUpgradeProfile -ResourceGroupName myResourceGroup -ClusterName myAKSCluster |
Select-Object -Property Name, ControlPlaneProfileKubernetesVersion -ExpandProperty ControlPlaneProfileUpgrade | Format-Table -Property * ``` > [!NOTE] > When you upgrade a supported AKS cluster, Kubernetes minor versions can't be skipped. All upgrades must be performed sequentially by minor version number. For example, upgrades between *1.14.x* -> *1.15.x* or *1.15.x* -> *1.16.x* are allowed; however, *1.14.x* -> *1.16.x* is not allowed.
->
+>
> Skipping multiple versions can only be done when upgrading from an _unsupported version_ back to a _supported version_. For example, an upgrade from an unsupported *1.10.x* -> a supported *1.15.x* can be completed if available. The following example output shows that the cluster can be upgraded to versions *1.19.1* and *1.19.3*:
default 1.18.10 1.19.1
default 1.18.10 1.19.3 ```
-> [!IMPORTANT]
-> If no upgrade is available, create a new cluster with a supported version of Kubernetes and migrate your workloads from the existing cluster to the new cluster. Attempting to upgrade a cluster to a newer Kubernetes version when `Get-AzAksUpgradeProfile` shows no upgrades available is not supported.
+If no upgrade is available, create a new cluster with a supported version of Kubernetes and migrate your workloads from the existing cluster to the new cluster. It's not supported to upgrade a cluster to a newer Kubernetes version when `Get-AzAksUpgradeProfile` shows that no upgrades are available.
## Customize node surge upgrade
-> [!Important]
+> [!IMPORTANT]
> Node surges require subscription quota for the requested max surge count for each upgrade operation. For example, a cluster that has 5 node pools, each with a count of 4 nodes, has a total of 20 nodes. If each node pool has a max surge value of 50%, additional compute and IP quota of 10 nodes (2 nodes * 5 pools) is required to complete the upgrade. > > If using Azure CNI, validate there are available IPs in the subnet as well to [satisfy IP requirements of Azure CNI](configure-azure-cni.md).
-By default, AKS configures upgrades to surge with one extra node. A default value of one for the max surge settings will enable AKS to minimize workload disruption by creating an extra node before the cordon/drain of existing applications to replace an older versioned node. The max surge value may be customized per node pool to enable a trade-off between upgrade speed and upgrade disruption. By increasing the max surge value, the upgrade process completes faster, but setting a large value for max surge may cause disruptions during the upgrade process.
+By default, AKS configures upgrades to surge with one extra node. A default value of one for the max surge settings will enable AKS to minimize workload disruption by creating an extra node before the cordon/drain of existing applications to replace an older versioned node. The max surge value may be customized per node pool to enable a trade-off between upgrade speed and upgrade disruption. By increasing the max surge value, the upgrade process completes faster, but setting a large value for max surge may cause disruptions during the upgrade process.
For example, a max surge value of 100% provides the fastest possible upgrade process (doubling the node count) but also causes all nodes in the node pool to be drained simultaneously. You may wish to use a higher value such as this for testing environments. For production node pools, we recommend a max_surge setting of 33%.
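For example, a sketch that sets the recommended production value on a hypothetical existing node pool named *mynodepool*:

```azurecli-interactive
# Update the max surge value for an existing node pool to 33%
az aks nodepool update --resource-group myResourceGroup --cluster-name myAKSCluster --name mynodepool --max-surge 33%
```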
AKS accepts both integer values and a percentage value for max surge. An integer
During an upgrade, the max surge value can be a minimum of 1 and a maximum value equal to the number of nodes in your node pool. You can set larger values, but the maximum number of nodes used for max surge won't be higher than the number of nodes in the pool at the time of upgrade.
-> [!Important]
+> [!IMPORTANT]
> The max surge setting on a node pool is persistent. Subsequent Kubernetes upgrades or node version upgrades will use this setting. You may change the max surge value for your node pools at any time. For production node pools, we recommend a max-surge setting of 33%. Use the following commands to set max surge values for new or existing node pools.
az aks nodepool update -n mynodepool -g MyResourceGroup --cluster-name MyManaged
### [Azure CLI](#tab/azure-cli)
-With a list of available versions for your AKS cluster, use the [az aks upgrade][az-aks-upgrade] command to upgrade. During the upgrade process, AKS will:
-- add a new buffer node (or as many nodes as configured in [max surge](#customize-node-surge-upgrade)) to the cluster that runs the specified Kubernetes version. -- [cordon and drain][kubernetes-drain] one of the old nodes to minimize disruption to running applications (if you're using max surge, it will [cordon and drain][kubernetes-drain] as many nodes at the same time as the number of buffer nodes specified). -- When the old node is fully drained, it will be reimaged to receive the new version and it will become the buffer node for the following node to be upgraded. -- This process repeats until all nodes in the cluster have been upgraded.
+With a list of available versions for your AKS cluster, use the [az aks upgrade][az-aks-upgrade] command to upgrade. During the upgrade process, AKS will:
+
+- Add a new buffer node (or as many nodes as configured in [max surge](#customize-node-surge-upgrade)) to the cluster that runs the specified Kubernetes version.
+- [Cordon and drain][kubernetes-drain] one of the old nodes to minimize disruption to running applications. If you're using max surge, it will [cordon and drain][kubernetes-drain] as many nodes at the same time as the number of buffer nodes specified.
+- When the old node is fully drained, it will be reimaged to receive the new version, and it will become the buffer node for the following node to be upgraded.
+- This process repeats until all nodes in the cluster have been upgraded.
- At the end of the process, the last buffer node will be deleted, maintaining the existing agent node count and zone balance. [!INCLUDE [alias minor version callout](./includes/aliasminorversion/alias-minor-version-upgrade.md)]
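For example, a sketch that starts the upgrade, reusing the cluster name and target version from the earlier output:

```azurecli-interactive
# Upgrade the cluster to the target Kubernetes version
az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.19.3
```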
myAKSCluster eastus myResourceGroup 1.19.1 Succeeded
### [Azure PowerShell](#tab/azure-powershell)
-With a list of available versions for your AKS cluster, use the [Set-AzAksCluster][set-azakscluster] cmdlet to upgrade. During the upgrade process, AKS will:
-- add a new buffer node (or as many nodes as configured in [max surge](#customize-node-surge-upgrade)) to the cluster that runs the specified Kubernetes version. -- [cordon and drain][kubernetes-drain] one of the old nodes to minimize disruption to running applications (if you're using max surge it will [cordon and drain][kubernetes-drain] as many nodes at the same time as the number of buffer nodes specified). -- When the old node is fully drained, it will be reimaged to receive the new version and it will become the buffer node for the following node to be upgraded. -- This process repeats until all nodes in the cluster have been upgraded.
+With a list of available versions for your AKS cluster, use the [Set-AzAksCluster][set-azakscluster] cmdlet to upgrade. During the upgrade process, AKS will:
+
+- Add a new buffer node (or as many nodes as configured in [max surge](#customize-node-surge-upgrade)) to the cluster that runs the specified Kubernetes version.
+- [Cordon and drain][kubernetes-drain] one of the old nodes to minimize disruption to running applications. If you're using max surge, it will [cordon and drain][kubernetes-drain] as many nodes at the same time as the number of buffer nodes specified.
+- When the old node is fully drained, it will be reimaged to receive the new version, and it will become the buffer node for the following node to be upgraded.
+- This process repeats until all nodes in the cluster have been upgraded.
- At the end of the process, the last buffer node will be deleted, maintaining the existing agent node count and zone balance. [!INCLUDE [alias minor version callout](./includes/aliasminorversion/alias-minor-version-upgrade.md)]
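For example, a sketch that starts the upgrade with Azure PowerShell, reusing the cluster name and target version from the earlier output:

```azurepowershell-interactive
# Upgrade the cluster to the target Kubernetes version
Set-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -KubernetesVersion 1.19.3
```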
myAKSCluster eastus 1.19.1 Succeeded myakscluster-dns-379cb
## View the upgrade events
-When you upgrade your cluster, the following Kubenetes events may occur on each node:
+When you upgrade your cluster, the following Kubernetes events may occur on each node:
- Surge – Create surge node.
- Drain – Pods are being evicted from the node. Each pod has a 30-minute timeout to complete the eviction.
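To observe these events while an upgrade is in progress, a command like the following may work:

```bash
# List recent events in the current namespace, including surge and drain events
kubectl get events
```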
default 9m22s Normal Surge node/aks-nodepool1-96663640-vmss000002 Created a surg
## Set auto-upgrade channel
-In addition to manually upgrading a cluster, you can set an auto-upgrade channel on your cluster. For more information, see [Auto-upgrading an AKS cluster][aks-auto-upgrade].
+In addition to manually upgrading a cluster, you can set an auto-upgrade channel on your cluster. For more information, see [Auto-upgrading an AKS cluster][aks-auto-upgrade].
## Special considerations for node pools that span multiple Availability Zones

AKS uses best-effort zone balancing in node groups. During an upgrade surge, the zones for the surge nodes in virtual machine scale sets are unknown ahead of time, which can temporarily cause an unbalanced zone configuration during an upgrade. However, AKS deletes the surge nodes once the upgrade completes and preserves the original zone balance. If you want to keep your zones balanced during upgrades, increase the surge to a multiple of three nodes. Virtual machine scale sets will then balance your nodes across Availability Zones with best-effort zone balancing.
-If you have PVCs backed by Azure LRS Disks, theyΓÇÖll be bound to a particular zone and may fail to recover immediately if the surge node doesnΓÇÖt match the zone of the PVC. This could cause downtime on your application when the Upgrade operation continues to drain nodes but the PVs are bound to a zone. To handle this case and maintain high availability, configure a [Pod Disruption Budget](https://kubernetes.io/docs/tasks/run-application/configure-pdb/) on your application. This allows Kubernetes to respect your availability requirements during Upgrade's drain operation.
+If you have PVCs backed by Azure LRS Disks, they'll be bound to a particular zone, and they may fail to recover immediately if the surge node doesn't match the zone of the PVC. This could cause downtime on your application when the upgrade operation continues to drain nodes while the PVs are bound to a zone. To handle this case and maintain high availability, configure a [Pod Disruption Budget](https://kubernetes.io/docs/tasks/run-application/configure-pdb/) on your application. This allows Kubernetes to respect your availability requirements during the upgrade's drain operation.
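For reference, a minimal Pod Disruption Budget sketch; the name and label selector here are hypothetical and should match your own application:

```yml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: myapp-pdb
spec:
  # Keep at least two pods available during voluntary disruptions such as node drains
  minAvailable: 2
  selector:
    matchLabels:
      app: myapp
```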
## Next steps
api-management Compute Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/compute-infrastructure.md
The following table summarizes the compute platforms currently used for instance
| Version | Description | Architecture | Tiers |
| - | - | - | - |
-| `stv2` | Single-tenant v2 | [Virtual machine scale sets](../virtual-machine-scale-sets/overview.md) | Developer, Basic, Standard, Premium<sup>1</sup> |
-| `stv1` | Single-tenant v1 | [Cloud Service (classic)](../cloud-services/cloud-services-choose-me.md) | Developer, Basic, Standard, Premium |
-| `mtv1` | Multi-tenant v1 | [App service](../app-service/overview.md) | Consumption |
+| `stv2` | Single-tenant v2 | Azure-allocated compute infrastructure that supports availability zones and private endpoints | Developer, Basic, Standard, Premium<sup>1</sup> |
+| `stv1` | Single-tenant v1 | Azure-allocated compute infrastructure | Developer, Basic, Standard, Premium |
+| `mtv1` | Multi-tenant v1 | Shared infrastructure that supports native autoscaling and scaling down to zero in times of no traffic | Consumption |
+ <sup>1</sup> Newly created instances in these tiers, created using the Azure portal or specifying API version 2021-01-01-preview or later. Includes some existing instances in Developer and Premium tiers configured with virtual networks or availability zones.
applied-ai-services Form Recognizer Container Install Run https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/containers/form-recognizer-container-install-run.md
Below is a self-contained `docker compose` example to run the Form Recognizer L
```yml
version: "3.9"
-azure-cognitive-service-layout:
+ azure-cognitive-service-layout:
  container_name: azure-cognitive-service-layout
  image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout
  environment:
azure-cognitive-service-layout:
networks:
  ocrvnet:
    driver: bridge
```

Now, you can start the service with the [**docker compose**](https://docs.docker.com/compose/) command:
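For example, run the following from the directory that contains your *docker-compose.yml* file:

```bash
docker-compose up
```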
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md
Form Recognizer service is updated on an ongoing basis. Bookmark this page to st
### Form Recognizer v3.0 generally available
-**Form Recognizer REST API v3.0 is now generally available and ready for use in production applications!**
+**Form Recognizer REST API v3.0 is now generally available and ready for use in production applications!** Update your applications with [**REST API version 2022-08-31**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument).
#### The August release introduces the following new capabilities and updates:
Complete a [quickstart](./quickstarts/try-sdk-rest-api.md) to get started writin
## See also
-* [What is Form Recognizer?](./overview.md)
+* [What is Form Recognizer?](./overview.md)
automanage Arm Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/arm-deploy.md
The following ARM template will onboard your specified machine onto Azure Automa
"machineName": { "type": "String" },
- "configurationProfile": {
+ "configurationProfileName": {
"type": "String" } },
The following ARM template will onboard your specified machine onto Azure Automa
"apiVersion": "2021-04-30-preview", "name": "[concat(parameters('machineName'), '/Microsoft.Automanage/default')]", "properties": {
- "configurationProfile": "[parameters('configurationProfile')]"
+ "configurationProfile": "[parameters('configurationProfileName')]"
} } ]
This ARM template will create a configuration profile assignment for your specif
The `configurationProfile` value can be one of the following values: * "/providers/Microsoft.Automanage/bestPractices/AzureBestPracticesProduction" * "/providers/Microsoft.Automanage/bestPractices/AzureBestPracticesDevTest"
+* "/subscriptions/[sub ID]/resourceGroups/resourceGroupName/providers/Microsoft.Automanage/configurationProfiles/customProfileName (for custom profiles)
Follow these steps to deploy the ARM template: 1. Save this ARM template as `azuredeploy.json` 1. Run this ARM template deployment with `az deployment group create --resource-group myResourceGroup --template-file azuredeploy.json`
-1. Provide the values for machineName, and configurationProfileAssignment when prompted
+1. Provide the values for machineName and configurationProfileName when prompted
1. You're ready to deploy As with any ARM template, it's possible to factor out the parameters into a separate `azuredeploy.parameters.json` file and use that as an argument when deploying.
automanage Automanage Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/automanage-arc.md
Automanage supports the following operating systems for Azure Arc-enabled server
|[Update Management](../automation/update-management/overview.md) |You can use Update Management in Azure Automation to manage operating system updates for your machines. You can quickly assess the status of available updates on all agent machines and manage the process of installing required updates for servers. |Production, Dev/Test | |[Microsoft Antimalware](../security/fundamentals/antimalware.md) |Microsoft Antimalware for Azure is a free real-time protection that helps identify and remove viruses, spyware, and other malicious software. It generates alerts when known malicious or unwanted software tries to install itself or run on your Azure systems. **Note:** Microsoft Antimalware requires that there be no other antimalware software installed, or it may fail to work. This is also only supported for Windows Server 2016 and above. |Production, Dev/Test | |[Change Tracking & Inventory](../automation/change-tracking/overview.md) |Change Tracking and Inventory combines change tracking and inventory functions to allow you to track virtual machine and server infrastructure changes. The service supports change tracking across services, daemons software, registry, and files in your environment to help you diagnose unwanted changes and raise alerts. Inventory support allows you to query in-guest resources for visibility into installed applications and other configuration items. |Production, Dev/Test |
-|[Azure Guest Configuration](../governance/machine-configuration/overview.md) | Guest Configuration policy is used to monitor the configuration and report on the compliance of the machine. The Automanage service will install the Azure security baseline using the Guest Configuration extension. For Arc machines, the guest configuration service will install the baseline in audit-only mode. You will be able to see where your VM is out of compliance with the baseline, but noncompliance won't be automatically remediated. |Production, Dev/Test |
+|[Machine Configuration](../governance/machine-configuration/overview.md) | Machine Configuration policy is used to monitor the configuration and report on the compliance of the machine. The Automanage service will install the Azure security baseline using the Guest Configuration extension. For Arc machines, the machine configuration service will install the baseline in audit-only mode. You will be able to see where your VM is out of compliance with the baseline, but noncompliance won't be automatically remediated. |Production, Dev/Test |
|[Azure Automation Account](../automation/automation-create-standalone-account.md) |Azure Automation supports management throughout the lifecycle of your infrastructure and applications. |Production, Dev/Test | |[Log Analytics Workspace](../azure-monitor/logs/log-analytics-overview.md) |Azure Monitor stores log data in a Log Analytics workspace, which is an Azure resource and a container where data is collected, aggregated, and serves as an administrative boundary. |Production, Dev/Test |
automanage Automanage Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/automanage-linux.md
Automanage supports the following Linux distributions and versions:
|[Microsoft Defender for Cloud](../security-center/security-center-introduction.md) |Microsoft Defender for Cloud is a unified infrastructure security management system that strengthens the security posture of your data centers, and provides advanced threat protection across your hybrid workloads in the cloud. Learn [more](../security-center/security-center-introduction.md). Automanage will configure the subscription where your VM resides to the free-tier offering of Microsoft Defender for Cloud (Enhanced security off). If your subscription is already onboarded to Microsoft Defender for Cloud, then Automanage will not reconfigure it. |Production, Dev/Test | |[Update Management](../automation/update-management/overview.md) |You can use Update Management in Azure Automation to manage operating system updates for your machines. You can quickly assess the status of available updates on all agent machines and manage the process of installing required updates for servers. Learn [more](../automation/update-management/overview.md). |Production, Dev/Test | |[Change Tracking & Inventory](../automation/change-tracking/overview.md) |Change Tracking and Inventory combines change tracking and inventory functions to allow you to track virtual machine and server infrastructure changes. The service supports change tracking across services, daemons software, registry, and files in your environment to help you diagnose unwanted changes and raise alerts. Inventory support allows you to query in-guest resources for visibility into installed applications and other configuration items. Learn [more](../automation/change-tracking/overview.md). |Production, Dev/Test |
-|[Guest configuration](../governance/machine-configuration/overview.md) | Guest configuration is used to monitor the configuration and report on the compliance of the machine. The Automanage service will install the Azure Linux baseline using the guest configuration extension. For Linux machines, the guest configuration service will install the baseline in audit-only mode. You will be able to see where your VM is out of compliance with the baseline, but noncompliance won't be automatically remediated. Learn [more](../governance/machine-configuration/overview.md). |Production, Dev/Test |
+|[Machine configuration](../governance/machine-configuration/overview.md) | Machine configuration is used to monitor the configuration and report on the compliance of the machine. The Automanage service will install the Azure Linux baseline using the guest configuration extension. For Linux machines, the machine configuration service will install the baseline in audit-only mode. You will be able to see where your VM is out of compliance with the baseline, but noncompliance won't be automatically remediated. Learn [more](../governance/machine-configuration/overview.md). |Production, Dev/Test |
|[Boot Diagnostics](../virtual-machines/boot-diagnostics.md) | Boot diagnostics is a debugging feature for Azure virtual machines (VM) that allows diagnosis of VM boot failures. Boot diagnostics enables a user to observe the state of their VM as it is booting up by collecting serial log information and screenshots. This will only be enabled for machines that are using managed disks. |Production, Dev/Test | |[Azure Automation Account](../automation/automation-create-standalone-account.md) |Azure Automation supports management throughout the lifecycle of your infrastructure and applications. Learn [more](../automation/automation-intro.md). |Production, Dev/Test | |[Log Analytics Workspace](../azure-monitor/logs/log-analytics-workspace-overview.md) |Azure Monitor stores log data in a Log Analytics workspace, which is an Azure resource and a container where data is collected, aggregated, and serves as an administrative boundary. Learn [more](../azure-monitor/logs/workspace-design.md). |Production, Dev/Test |
automanage Automanage Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/automanage-virtual-machines.md
Previously updated : 5/12/2022 Last updated : 8/25/2022
-# Preview: Azure Automanage for machine best practices
+# Preview: Azure Automanage machine best practices
This article covers Azure Automanage machine best practices, which provide the following benefits:
Azure Automanage also automatically monitors for drift and corrects for it when
Automanage doesn't store/process customer data outside the geography your VMs are located. In the Southeast Asia region, Automanage does not store/process data outside of Southeast Asia. > [!NOTE]
-> Automanage can be enabled on Azure virtual machines as well as Azure Arc-enabled servers. Automanage is not available in US Government Cloud at this time.
+> Automanage can be enabled on Azure virtual machines and Azure Arc-enabled servers. Automanage is not available in US Government Cloud at this time.
## Prerequisites There are several prerequisites to consider before trying to enable Azure Automanage on your virtual machines. - Supported [Windows Server versions](automanage-windows-server.md#supported-windows-server-versions) and [Linux distros](automanage-linux.md#supported-linux-distributions-and-versions)-- VMs must be in a supported region (see below)-- User must have correct permissions (see below)
+- Machines must be in a [supported region](#supported-regions)
+- User must have correct [permissions](#required-rbac-permissions)
+- Machines must meet the [eligibility requirements](#enabling-automanage-for-vms-in-azure-portal)
- Automanage does not support Sandbox subscriptions at this time-- Automanage does not support Windows client images at this time
+- Automanage does not support [Trusted Launch VMs](../virtual-machines/trusted-launch.md)
### Supported regions Automanage only supports VMs located in the following regions:
Automanage only supports VMs located in the following regions:
* AU Southeast * Southeast Asia
+> [!NOTE]
+> If the machine is connected to a Log Analytics workspace, that workspace must be located in one of the supported regions listed above.
+ ### Required RBAC permissions To onboard, Automanage requires slightly different RBAC roles depending on whether you are enabling Automanage for the first time in a subscription.
In the Machine selection pane in the portal, you will notice the **Eligibility**
- Machine is not using one of the supported images: [Windows Server versions](automanage-windows-server.md#supported-windows-server-versions) and [Linux distros](automanage-linux.md#supported-linux-distributions-and-versions) - Machine is not located in a supported [region](#supported-regions) - Machine's log analytics workspace is not located in a supported [region](#supported-regions)-- User does not have permissions to the log analytics workspace's subscription. Check out the [required permissions](#required-rbac-permissions)-- The Automanage resource provider is not registered on the subscription. Check out [how to register a Resource Provider](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider-1) with the Automanage resource provider: *Microsoft.Automanage*
+- User does not have sufficient permissions to the log analytics workspace or to the machine. Check out the [required permissions](#required-rbac-permissions)
- Machine does not have necessary VM agents installed which the Automanage service requires. Check out the [Windows agent installation](../virtual-machines/extensions/agent-windows.md) and the [Linux agent installation](../virtual-machines/extensions/agent-linux.md) - Arc machine is not connected. Learn more about the [Arc agent status](../azure-arc/servers/overview.md#agent-status) and [how to connect](../azure-arc/servers/deployment-options.md#agent-installation-details)
+> [!NOTE]
+> If the machine is powered off, you can still onboard the machine to Automanage. However, Automanage will report the machine as "Unknown" in the Automanage status because Automanage needs the machine to be powered on to assess if the machine is configured to the profile. Once you power on your machine, Automanage will try to onboard the machine to the selected configuration profile.
+
Once you have selected your eligible machines, click **Enable**, and you're done. The only time you might need to interact with this machine to manage these services is if we attempted to remediate your VM but failed to do so. If we successfully remediate your VM, we bring it back into compliance without even alerting you. For more details, see [Status of VMs](#status-of-vms).
The only time you might need to interact with this machine to manage these servi
## Enabling Automanage for VMs using Azure Policy You can also enable Automanage on VMs at scale using the built-in Azure Policy. The policy has a DeployIfNotExists effect, which means that all eligible VMs located within the scope of the policy will be automatically onboarded to Automanage VM Best Practices.
-A direct link to the policy is [here](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff889cab7-da27-4c41-a3b0-de1f6f87c55).
+A direct link to the policy using the built-in profiles is [here](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff889cab7-da27-4c41-a3b0-de1f6f87c550).
+
+A direct link to the policy using a custom configuration profile is [here](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb025cfb4-3702-47c2-9110-87fe0cfcc99b).
For more information, check out how to enable the [Automanage built-in policy](virtual-machines-policy-enable.md).
In the Azure portal, go to the **Automanage ΓÇô Azure machine best practices** p
For each listed machine, the following details are displayed: Name, Configuration profile, Status, Resource type, Resource group, Subscription. The **Status** column can display the following states:-- *In progress* - the VM was just enabled and is being configured
+- *In progress* - the VM is being configured
- *Conformant* - the VM is configured and no drift is detected-- *Not conformant* - the VM has drifted and we were unable to remediate or the machine is powered off and Automanage will attempt to onboard or remediate the VM when it is next running
+- *Not conformant* - the VM has drifted and Automanage was unable to correct one or more services to the assigned configuration profile
- *Needs upgrade* - the VM is onboarded to an earlier version of Automanage and needs to be [upgraded](automanage-upgrade.md) to the latest version-- *Error* - the Automanage service is unable to monitor one or more resources
+- *Unknown* - the Automanage service is unable to determine the desired configuration of the machine. This is usually because the VM agent is not installed or the machine is not running. It can also indicate that the Automanage service does not have the necessary permissions to determine the desired configuration.
+- *Error* - the Automanage service encountered an error while attempting to determine whether the machine conforms to the desired configuration.
If you see the **Status** as *Not conformant* or *Error*, you can troubleshoot by clicking on the status in the portal and using the troubleshooting links provided
automanage Automanage Windows Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/automanage-windows-server.md
Automanage supports the following Windows versions:
|[Microsoft Antimalware](../security/fundamentals/antimalware.md) |Microsoft Antimalware for Azure is a free real-time protection that helps identify and remove viruses, spyware, and other malicious software. It generates alerts when known malicious or unwanted software tries to install itself or run on your Azure systems. **Note:** Microsoft Antimalware requires that there be no other antimalware software installed, or it may fail to work. |Production, Dev/Test | |[Update Management](../automation/update-management/overview.md) |You can use Update Management in Azure Automation to manage operating system updates for your machines. You can quickly assess the status of available updates on all agent machines and manage the process of installing required updates for servers. |Production, Dev/Test | |[Change Tracking & Inventory](../automation/change-tracking/overview.md) |Change Tracking and Inventory combines change tracking and inventory functions to allow you to track virtual machine and server infrastructure changes. The service supports change tracking across services, daemons software, registry, and files in your environment to help you diagnose unwanted changes and raise alerts. Inventory support allows you to query in-guest resources for visibility into installed applications and other configuration items. |Production, Dev/Test |
-|[Guest configuration](../governance/machine-configuration/overview.md) | Guest configuration policy is used to monitor the configuration and report on the compliance of the machine. The Automanage service will install the [Windows security baselines](/windows/security/threat-protection/windows-security-baselines) using the guest configuration extension. For Windows machines, the guest configuration service will install the baseline in audit-only mode. You will be able to see where your VM is out of compliance with the baseline, but noncompliance won't be automatically remediated. Learn [more](../governance/machine-configuration/overview.md). To modify the audit mode for Windows machines, use a custom profile to choose your audit mode setting. [Learn more](virtual-machines-custom-profile.md) |Production, Dev/Test |
+|[Machine configuration](../governance/machine-configuration/overview.md) | Machine configuration policy is used to monitor the configuration and report on the compliance of the machine. The Automanage service will install the [Windows security baselines](/windows/security/threat-protection/windows-security-baselines) using the guest configuration extension. For Windows machines, the machine configuration service will install the baseline in audit-only mode. You will be able to see where your VM is out of compliance with the baseline, but noncompliance won't be automatically remediated. Learn [more](../governance/machine-configuration/overview.md). To modify the audit mode for Windows machines, use a custom profile to choose your audit mode setting. [Learn more](virtual-machines-custom-profile.md) |Production, Dev/Test |
|[Boot Diagnostics](../virtual-machines/boot-diagnostics.md) | Boot diagnostics is a debugging feature for Azure virtual machines (VM) that allows diagnosis of VM boot failures. Boot diagnostics enables a user to observe the state of their VM as it is booting up by collecting serial log information and screenshots. This will only be enabled for machines that are using managed disks. |Production, Dev/Test | |[Windows Admin Center](/windows-server/manage/windows-admin-center/azure/manage-vm) | Use Windows Admin Center (preview) in the Azure portal to manage the Windows Server operating system inside an Azure VM. This is only supported for machines using Windows Server 2016 or higher. Automanage configures Windows Admin Center over a Private IP address. If you wish to connect with Windows Admin Center over a Public IP address, please open an inbound port rule for port 6516. Automanage onboards Windows Admin Center for the Dev/Test profile by default. Use the preferences to enable or disable Windows Admin Center for the Production and Dev/Test environments. |Production, Dev/Test | |[Azure Automation Account](../automation/automation-create-standalone-account.md) |Azure Automation supports management throughout the lifecycle of your infrastructure and applications. |Production, Dev/Test |
automanage Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/common-errors.md
Onboarding a machine to Automanage will result in an Azure Resource Manager depl
:::image type="content" source="media\common-errors\failure-flyout.png" alt-text="Automanage failure detail flyout."::: ### Check the deployments for the resource group containing the failed machine
-The failure flyout will contain a link to the deployments in the resource group containing the machine that failed onboarding. The flyout will also contain a prefix name you can use to filter deployments with. Clicking the deployment link will take you to the deployments blade, where you can then filter deployments to see Automanage deployments to your machine. If you're deploying across multiple regions, ensure that you click on the deployment in the correct region.
+The failure flyout will contain a link to the deployments in the resource group containing the machine that failed onboarding. Clicking the deployment link will take you to the deployments blade where you can see the Automanage deployments to your machine. If you're deploying across multiple regions, ensure that you click on the deployment in the correct region.
### Check the deployments for the subscription containing the failed machine
-If you don't see any failures in the resource group deployment, then your next step would be to look at the deployments in your subscription containing the machine that failed onboarding. Click the **Deployments for subscription** link in the failure flyout and filter deployments using the **Automanage-DefaultResourceGroup** filter. Use the resource group name from the failure blade to filter deployments. The deployment name will be suffixed with a region name. If you're deploying across multiple regions, ensure that you click on the deployment in the correct region.
+If you don't see any failures in the resource group deployment, then your next step would be to look at the deployments in your subscription containing the machine that failed onboarding. Click the **Deployments for subscription** link in the failure flyout to see all Automanage related deployments for further troubleshooting.
### Check deployments in a subscription linked to a Log Analytics workspace If you don't see any failed deployments in the resource group or subscription containing your failed machine, and if your failed machine is connected to a Log Analytics workspace in a different subscription, then go to the subscription linked to your Log Analytics workspace and check for failed deployments.
If you don't see any failed deployments in the resource group or subscription co
Error | Mitigation :--|:-|
-Automanage account insufficient permissions error | This error may occur if you have recently moved a subscription containing a new Automanage Account into a new tenant. Steps to resolve this error are located [here](./repair-automanage-account.md).
-Workspace region not matching region mapping requirements | Automanage was unable to onboard your machine because the Log Analytics workspace that the machine is currently linked to is not mapped to a supported Automation region. Ensure that your existing Log Analytics workspace and Automation account are located in a [supported region mapping](../automation/how-to/region-mappings.md).
+Automanage account insufficient permissions error | This error may occur if you've recently moved a subscription containing a new Automanage Account into a new tenant. Steps to resolve this error are located [here](./repair-automanage-account.md).
+Workspace region not matching region mapping requirements | Automanage was unable to onboard your machine because the Log Analytics workspace that the machine is currently linked to isn't mapped to a supported Automation region. Ensure that your existing Log Analytics workspace and Automation account are located in a [supported region mapping](../automation/how-to/region-mappings.md).
+The template deployment failed because of policy violation | Automanage was unable to onboard your machine because it violates an existing policy. If the policy violation is related to tags, you can [deploy a custom configuration profile](./virtual-machines-custom-profile.md#create-a-custom-profile-using-azure-resource-manager-templates) with tags for the following ARM resources: default resource group, Automation account, Recovery Services vault, and Log Analytics workspace.
"Access denied because of the deny assignment with name 'System deny assignment created by managed application'" | A [denyAssignment](../role-based-access-control/deny-assignments.md) was created on your resource, which prevented Automanage from accessing your resource. This denyAssignment may have been created by either a [Blueprint](../governance/blueprints/concepts/resource-locking.md) or a [Managed Application](../azure-resource-manager/managed-applications/overview.md). "OS Information: Name='(null)', ver='(null)', agent status='Not Ready'." | Ensure that you're running a [minimum supported agent version](/troubleshoot/azure/virtual-machines/support-extensions-agent-version), the agent is running ([Linux](/troubleshoot/azure/virtual-machines/linux-azure-guest-agent) and [Windows](/troubleshoot/azure/virtual-machines/windows-azure-guest-agent)), and that the agent is up to date ([Linux](../virtual-machines/extensions/update-linux-agent.md) and [Windows](../virtual-machines/extensions/agent-windows.md)).
-"Unable to determine the OS for the VM OS Name:, ver . Please check that the VM Agent is running, the current status is Ready." | Ensure that you're running a [minimum supported agent version](/troubleshoot/azure/virtual-machines/support-extensions-agent-version), the agent is running ([Linux](/troubleshoot/azure/virtual-machines/linux-azure-guest-agent) and [Windows](/troubleshoot/azure/virtual-machines/windows-azure-guest-agent)), and that the agent is up to date ([Linux](../virtual-machines/extensions/update-linux-agent.md) and [Windows](../virtual-machines/extensions/agent-windows.md)).
+"Unable to determine the OS for the VM. Check that the VM Agent is running, the current status is Ready." | Ensure that you're running a [minimum supported agent version](/troubleshoot/azure/virtual-machines/support-extensions-agent-version), the agent is running ([Linux](/troubleshoot/azure/virtual-machines/linux-azure-guest-agent) and [Windows](/troubleshoot/azure/virtual-machines/windows-azure-guest-agent)), and that the agent is up to date ([Linux](../virtual-machines/extensions/update-linux-agent.md) and [Windows](../virtual-machines/extensions/agent-windows.md)).
"VM has reported a failure when processing extension 'IaaSAntimalware'" | Ensure you don't have another antimalware/antivirus offering already installed on your VM. If that fails, contact support.
-ASC workspace: Automanage does not currently support the Log Analytics service in _location_. | Check that your VM is located in a [supported region](./automanage-virtual-machines.md#supported-regions).
-The template deployment failed because of policy violation. Please see details for more information. | There is a policy preventing Automanage from onboarding your VM. Check the policies that are applied to your subscription or resource group containing your VM you want to onboard to Automanage.
-"The assignment has failed; there is no additional information available" | Please open a case with Microsoft Azure support.
+ASC workspace: Automanage doesn't currently support the Log Analytics service in _location_. | Check that your VM is located in a [supported region](./automanage-virtual-machines.md#supported-regions).
+"The assignment has failed; there is no additional information available" | Open a case with Microsoft Azure support.
## Next steps
automanage Virtual Machines Policy Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/virtual-machines-policy-enable.md
If you don't have an Azure subscription, [create an account](https://azure.micro
> The following Azure RBAC permission is needed to enable Automanage: **Owner** role or **Contributor** along with **User Access Administrator** roles. ## Direct link to Policy
-The Automanage policy definition can be found in the Azure portal by the name of [Configure virtual machines to be onboarded to Azure Automanage](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff889cab7-da27-4c41-a3b0-de1f6f87c550). If you click on this link, skip directly to step 8 in [Locate and assign the policy](#locate-and-assign-the-policy) below.
+There are two Automanage built-in policies:
+1. Built-in Automanage profiles (dev/test and production): The Automanage policy definition can be found in the Azure portal by the name of [Configure virtual machines to be onboarded to Azure Automanage](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff889cab7-da27-4c41-a3b0-de1f6f87c550).
+1. Custom configuration profiles: The Automanage policy definition can be found in the Azure portal by the name of [Configure virtual machines to be onboarded to Azure Automanage with Custom Configuration Profile](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb025cfb4-3702-47c2-9110-87fe0cfcc99b).
+
+If you click on either of these links, skip directly to step 8 in [Locate and assign the policy](#locate-and-assign-the-policy) below.
## Sign in to Azure
Sign in to the [Azure portal](https://portal.azure.com/).
1. Click the **Categories** dropdown to see the available options 1. Select the **Automanage** option 1. Now the list will update to show a built-in policy with a name that starts with *Configure virtual machines to be onboarded to Azure Automanage*
-1. Click on the *Configure virtual machines to be onboarded to Azure Automanage* built-in policy name
+1. Click on the *Configure virtual machines to be onboarded to Azure Automanage* built-in policy name. Choose the *Configure virtual machines to be onboarded to Azure Automanage with Custom Configuration Profile* policy if you would like to use an Automanage custom profile.
1. After clicking on the policy, you can now see the **Definition** tab > [!NOTE]
Sign in to the [Azure portal](https://portal.azure.com/).
> The Scope lets you define which VMs this policy applies to. You can set application at the subscription level or resource group level. If you set a resource group, all VMs that are currently in that resource group or any future VMs we add to it will have Automanage automatically enabled. 1. Click on the **Parameters** tab and set the **Configuration Profile** and the desired **Effect**
+ > [!NOTE]
+ > If you would like the policy to apply only to resources with a certain tag (key/value pair), enter it in the "Inclusion Tag Name" and "Inclusion Tag Values" parameters. You need to uncheck "Only show parameters that need input or review" to see these options.
1. Under the **Review + create** tab, review the settings 1. Apply the Assignment by clicking **Create** 1. View your assignments in the **Assignments** tab next to **Definition**
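If you prefer scripting the assignment instead of using the portal, a sketch with a recent Azure CLI might look like the following; the assignment name and scope are placeholders, DeployIfNotExists policies require a managed identity and location, and any policy parameters (such as the configuration profile) can be supplied with `--params`:

```azurecli-interactive
# Assign the built-in Automanage policy at resource group scope (names are placeholders)
az policy assignment create \
    --name automanage-best-practices \
    --policy "f889cab7-da27-4c41-a3b0-de1f6f87c550" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup" \
    --mi-system-assigned \
    --location eastus
```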
azure-app-configuration Howto Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-geo-replication.md
# Enable geo-replication (Preview)
-This article covers replication of Azure App Configuration stores. You'll learn about how to create and delete a replica in your configuration store.
+This article covers replication of Azure App Configuration stores. You'll learn how to create, use, and delete a replica in your configuration store.
To learn more about the concept of geo-replication, see [Geo-replication in Azure App Configuration](./concept-geo-replication.md).
To delete a replica in the portal, follow the steps below.
-->
+## Use replicas
+
+Each replica you create has its own dedicated endpoint. If your application spans multiple geolocations, you can update each deployment of your application to connect to the replica closest to that location, which helps minimize network latency between your application and App Configuration. Since each replica has its own separate request quota, this setup also improves the scalability of your application as it grows into a multi-region distributed service.
+
+When geo-replication is enabled and one replica isn't accessible, you can let your application fail over to another replica for improved resiliency. App Configuration provider libraries have built-in failover support through accepting multiple replica endpoints. You can provide a list of your replica endpoints in order from the most preferred to the least preferred endpoint. When the current endpoint isn't accessible, the provider library fails over to a less preferred endpoint, but it tries to connect to the more preferred endpoints from time to time. When a more preferred endpoint becomes available, it switches to it for future requests. You can update your application, as shown in the sample code below, to take advantage of the failover feature.
+
+> [!NOTE]
+> You can only use Azure AD authentication to connect to replicas. Authentication with access keys is not supported during the preview.
+
+<!-- ### [.NET](#tab/dotnet) -->
+
+```csharp
+configurationBuilder.AddAzureAppConfiguration(options =>
+{
+ // Provide an ordered list of replica endpoints
+ var endpoints = new Uri[] {
+ new Uri("https://<first-replica-endpoint>.azconfig.io"),
+ new Uri("https://<second-replica-endpoint>.azconfig.io") };
+
+ // Connect to replica endpoints using AAD authentication
+ options.Connect(endpoints, new DefaultAzureCredential());
+
+ // Other changes to options
+});
+```
+
+> [!NOTE]
+> The failover support is available if you use version **5.3.0-preview** or later of any of the following packages.
+> - `Microsoft.Extensions.Configuration.AzureAppConfiguration`
+> - `Microsoft.Azure.AppConfiguration.AspNetCore`
+> - `Microsoft.Azure.AppConfiguration.Functions.Worker`
+
+<!-- ### [Java Spring](#tab/spring)
+Placeholder for Java Spring instructions
+ -->
+
+Failover may occur if the App Configuration provider observes any of the following conditions:
+- It receives responses with a service unavailable status (HTTP status code 500 or above).
+- It experiences network connectivity issues.
+- Its requests are throttled (HTTP status code 429).
+
+The failover won't happen for client errors like authentication failures.
+ ## Next steps > [!div class="nextstepaction"]
azure-arc Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/private-link.md
+
+ Title: Private connectivity for Arc-enabled Kubernetes clusters using private link (preview)
Last updated : 04/08/2021+
+description: With Azure Arc, you can use a Private Link Scope model to allow multiple Kubernetes clusters to use a single private endpoint.
+++
+# Private connectivity for Arc-enabled Kubernetes clusters using private link (preview)
+
+[Azure Private Link](/azure/private-link/private-link-overview) allows you to securely link Azure services to your virtual network using private endpoints. This means you can connect your on-premises Kubernetes clusters with Azure Arc and send all traffic over an Azure ExpressRoute or site-to-site VPN connection instead of using public networks. In Azure Arc, you can use a Private Link Scope model to allow multiple Kubernetes clusters to communicate with their Azure Arc resources using a single private endpoint.
+
+This document covers when to use and how to set up Azure Arc Private Link (preview).
+
+> [!IMPORTANT]
+> The Azure Arc Private Link feature is currently in PREVIEW in all regions where Azure Arc-enabled Kubernetes is present, except Southeast Asia.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Advantages
+
+With Private Link you can:
+
+* Connect privately to Azure Arc without opening up any public network access.
+* Ensure data from the Arc-enabled Kubernetes cluster is only accessed through authorized private networks.
+* Prevent data exfiltration from your private networks by defining specific Azure Arc-enabled Kubernetes clusters and other Azure service resources, such as Azure Monitor, that connect through your private endpoint.
+* Securely connect your private on-premises network to Azure Arc using ExpressRoute and Private Link.
+* Keep all traffic inside the Microsoft Azure backbone network.
+
+For more information, see [Key benefits of Azure Private Link](/azure/private-link/private-link-overview#key-benefits).
+
+## How it works
+
+Azure Arc Private Link Scope connects private endpoints (and the virtual networks they're contained in) to an Azure resource, in this case Azure Arc-enabled Kubernetes clusters. When you enable any of the supported cluster extensions, such as Azure Monitor, connections to other Azure resources may be required. For example, with Azure Monitor, the logs collected from the cluster are sent to a Log Analytics workspace.
+
+Connectivity from an Arc-enabled Kubernetes cluster to the other Azure resources listed earlier requires configuring Private Link for each service. For example, see [Private Link for Azure Monitor](/azure/azure-monitor/logs/private-link-security).
+
+## Current limitations
+
+Consider these current limitations when planning your Private Link setup.
+
+* You can associate at most one Azure Arc Private Link Scope with a virtual network.
+* An Azure Arc-enabled Kubernetes cluster can only connect to one Azure Arc Private Link Scope.
+* All on-premises Kubernetes clusters need to use the same private endpoint by resolving the correct private endpoint information (FQDN record name and private IP address) using the same DNS forwarder. For more information, see [Azure Private Endpoint DNS configuration](/azure/private-link/private-endpoint-dns). The Azure Arc-enabled Kubernetes cluster, Azure Arc Private Link Scope, and virtual network must be in the same Azure region. The Private Endpoint and the virtual network must also be in the same Azure region, but this region can be different from that of your Azure Arc Private Link Scope and Arc-enabled Kubernetes cluster.
+* Traffic to Azure Active Directory, Azure Resource Manager and Microsoft Container Registry service tags must be allowed through your on-premises network firewall during the preview.
+* Other Azure services that you'll use, such as Azure Monitor, require their own private endpoints in your virtual network.
+
+ > [!NOTE]
+ > The [Cluster Connect](conceptual-cluster-connect.md) feature (and hence the [Custom location](custom-locations.md) feature) is not supported on Azure Arc-enabled Kubernetes clusters with private connectivity enabled. Support is planned and will be added later. Network connectivity through private links is also not yet supported for Azure Arc services that use these features, such as Azure Arc-enabled data services and Azure Arc-enabled App services. Refer to the section below for a list of [cluster extensions or Azure Arc services that support network connectivity through private links](#cluster-extensions-that-support-network-connectivity-through-private-links).
+
+## Cluster extensions that support network connectivity through private links
+
+On Azure Arc-enabled Kubernetes clusters configured with private links, the following extensions support end-to-end connectivity through private links. Refer to the guidance linked to each cluster extension for additional configuration steps and details on support for private links.
+
+* [Azure GitOps](conceptual-gitops-flux2.md)
+* [Azure Monitor](/azure/azure-monitor/logs/private-link-security)
+
+## Planning your Private Link setup
+
+To connect your Kubernetes cluster to Azure Arc over a private link, you need to configure your network to accomplish the following:
+
+1. Establish a connection between your on-premises network and an Azure virtual network using a [site-to-site VPN](/azure/vpn-gateway/tutorial-site-to-site-portal) or [ExpressRoute](/azure/expressroute/expressroute-howto-linkvnet-arm) circuit.
+1. Deploy an Azure Arc Private Link Scope, which controls which Kubernetes clusters can communicate with Azure Arc over private endpoints, and associate it with your Azure virtual network using a private endpoint.
+1. Update the DNS configuration on your local network to resolve the private endpoint addresses.
+1. Configure your local firewall to allow access to Azure Active Directory, Azure Resource Manager and Microsoft Container Registry.
+1. Associate the Azure Arc-enabled Kubernetes clusters with the Azure Arc Private Link Scope.
+1. Optionally, deploy private endpoints for the other Azure services that manage your Azure Arc-enabled Kubernetes cluster, such as Azure Monitor.
+
+The rest of this document assumes you have already set up your ExpressRoute circuit or site-to-site VPN connection.
+
+## Network configuration
+
+Azure Arc-enabled Kubernetes integrates with several Azure services to bring cloud management and governance to your hybrid Kubernetes clusters. Most of these services already offer private endpoints, but you need to configure your firewall and routing rules to allow access to Azure Active Directory and Azure Resource Manager over the internet until these services offer private endpoints. You also need to allow access to Microsoft Container Registry (and AzureFrontDoor.FirstParty as a precursor for Microsoft Container Registry) to pull images and Helm charts, both to enable services like Azure Monitor and for initial setup of the Azure Arc agents on the Kubernetes clusters.
+
+There are two ways you can achieve this:
+
+* If your network is configured to route all internet-bound traffic through the Azure VPN or ExpressRoute circuit, you can configure the network security group (NSG) associated with your subnet in Azure to allow outbound TCP 443 (HTTPS) access to Azure AD, Azure Resource Manager, Azure Front Door, and Microsoft Container Registry using [service tags](/azure/virtual-network/service-tags-overview). The NSG rules should look like the following (a CLI sketch follows this list):
+
+ | Setting | Azure AD rule | Azure Resource Manager rule | AzureFrontDoorFirstParty rule | Microsoft Container Registry rule |
+ |---------|---------------|-----------------------------|-------------------------------|-----------------------------------|
+ | Source | Virtual Network | Virtual Network | Virtual Network | Virtual Network |
+ | Source port ranges | * | * | * | * |
+ | Destination | Service Tag | Service Tag | Service Tag | Service Tag |
+ | Destination service tag | AzureActiveDirectory | AzureResourceManager | FrontDoor.FirstParty | MicrosoftContainerRegistry |
+ | Destination port ranges | 443 | 443 | 443 | 443 |
+ | Protocol | TCP | TCP | TCP | TCP |
+ | Action | Allow | Allow | Allow (both inbound and outbound) | Allow |
+ | Priority | 150 (must be lower than any rules that block internet access) | 151 (must be lower than any rules that block internet access) | 152 (must be lower than any rules that block internet access) | 153 (must be lower than any rules that block internet access) |
+ | Name | AllowAADOutboundAccess | AllowAzOutboundAccess | AllowAzureFrontDoorFirstPartyAccess | AllowMCROutboundAccess |
+
+* Configure the firewall on your local network to allow outbound TCP 443 (HTTPS) access to Azure AD, Azure Resource Manager, and Microsoft Container Registry, and inbound & outbound access to Azure FrontDoor.FirstParty using the downloadable service tag files. The JSON file contains all the public IP address ranges used by Azure AD, Azure Resource Manager, Azure FrontDoor.FirstParty, and Microsoft Container Registry and is updated monthly to reflect any changes. Azure Active Directory's service tag is AzureActiveDirectory, Azure Resource Manager's service tag is AzureResourceManager, Microsoft Container Registry's service tag is MicrosoftContainerRegistry, and Azure Front Door's service tag is FrontDoor.FirstParty. Consult with your network administrator and network firewall vendor to learn how to configure your firewall rules.
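+
+As a minimal sketch of the first approach, the following Azure CLI command creates one of the four outbound NSG rules from the table above. The resource group and NSG names are placeholders for your environment; repeat the command for the `AzureResourceManager`, `FrontDoor.FirstParty`, and `MicrosoftContainerRegistry` service tags with priorities 151-153.
+
+```azurecli
+# Sketch: outbound rule for the AzureActiveDirectory service tag (priority 150).
+# <resource-group-name> and <nsg-name> are placeholders; adjust to your environment.
+az network nsg rule create \
+  --resource-group <resource-group-name> \
+  --nsg-name <nsg-name> \
+  --name AllowAADOutboundAccess \
+  --priority 150 \
+  --direction Outbound \
+  --access Allow \
+  --protocol Tcp \
+  --source-address-prefixes VirtualNetwork \
+  --source-port-ranges '*' \
+  --destination-address-prefixes AzureActiveDirectory \
+  --destination-port-ranges 443
+```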
+
+## Create an Azure Arc Private Link Scope
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Go to **Create a resource** in the Azure portal, then search for Azure Arc Private Link Scope. Or you can go directly to the [Azure Arc Private Link Scope page](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.HybridCompute%2FprivateLinkScopes) in the portal.
+
+1. Select **Create**.
+1. Select a subscription and resource group. During the preview, your virtual network and Azure Arc-enabled Kubernetes clusters must be in the same subscription as the Azure Arc Private Link Scope.
+1. Give the Azure Arc Private Link Scope a name.
+1. You can optionally require every Arc-enabled Kubernetes cluster associated with this Azure Arc Private Link Scope to send data to the service through the private endpoint. If you select **Enable public network access**, Kubernetes clusters associated with this Azure Arc Private Link Scope can communicate with the service over both private and public networks. You can change this setting after creating the scope if needed.
+1. Select **Review + Create**.
+
+ :::image type="content" source="media/private-link/create-private-link-scope.png" alt-text="Screenshot of the Azure Arc Private Link Scope creation screen in the Azure portal.":::
+
+1. After the validation completes, select **Create**.
+
+### Create a private endpoint
+
+Once your Azure Arc Private Link Scope is created, you need to connect it with one or more virtual networks using a private endpoint. The private endpoint exposes access to the Azure Arc services on a private IP in your virtual network address space.
+
+The Private Endpoint on your virtual network allows it to reach Azure Arc-enabled Kubernetes cluster endpoints through private IPs from your network's pool, instead of the public IPs of these endpoints. That allows you to keep using your Azure Arc-enabled Kubernetes clusters without opening your VNet to unrequested outbound traffic. Traffic from the Private Endpoint to your resources goes through Microsoft Azure and isn't routed to public networks.
+
+1. In your scope resource, select **Private Endpoint connections** in the left-hand resource menu. Select **Add** to start the endpoint creation process. You can also approve connections that were started in the Private Link center by selecting them, then selecting **Approve**.
+
+ :::image type="content" source="media/private-link/create-private-endpoint.png" alt-text="Screenshot of the Private Endpoint connections screen in the Azure portal.":::
+
+1. Pick the subscription, resource group, and name of the endpoint, and the region you want to use. This must be the same region as your virtual network.
+1. Select **Next: Resource**.
+1. On the **Resource** page, perform the following:
+ 1. Select the subscription that contains your Azure Arc Private Link Scope resource.
+ 1. For **Resource type**, choose Microsoft.HybridCompute/privateLinkScopes.
+ 1. From the **Resource** drop-down, choose the Azure Arc Private Link Scope that you created earlier.
+ 1. Select **Next: Configuration**.
+1. On the **Configuration** page, perform the following:
+ 1. Choose the virtual network and subnet from which you want to connect to Azure Arc-enabled Kubernetes clusters.
+ 1. For **Integrate with private DNS zone**, select **Yes**. A new Private DNS Zone will be created. The actual DNS zones may be different from what is shown in the screenshot below.
+
+ :::image type="content" source="media/private-link/create-private-endpoint-2.png" alt-text="Screenshot of the Configuration step to create a private endpoint in the Azure portal.":::
+
+ > [!NOTE]
+ > If you choose **No** and prefer to manage DNS records manually, first complete setting up your Private Link, including this private endpoint and the Private Scope configuration. Next, configure your DNS according to the instructions in [Azure Private Endpoint DNS configuration](/azure/private-link/private-endpoint-dns). Make sure not to create empty records as preparation for your Private Link setup. The DNS records you create can override existing settings and impact your connectivity with Arc-enabled Kubernetes clusters.
+ 1. Select **Review + create**.
+ 1. Let validation pass.
+ 1. Select **Create**.
+
+## Configure on-premises DNS forwarding
+
+Your on-premises Kubernetes clusters need to be able to resolve the private link DNS records to the private endpoint IP addresses. How you configure this depends on whether you are using Azure private DNS zones to maintain DNS records or using your own DNS server on-premises, along with how many clusters you are configuring.
+
+### DNS configuration using Azure-integrated private DNS zones
+
+If you set up private DNS zones for Azure Arc-enabled Kubernetes clusters when creating the private endpoint, your on-premises Kubernetes clusters must be able to forward DNS queries to the built-in Azure DNS servers to resolve the private endpoint addresses correctly. You need a DNS forwarder in Azure (either a purpose-built VM or an Azure Firewall instance with DNS proxy enabled), after which you can configure your on-premises DNS server to forward queries to Azure to resolve private endpoint IP addresses.
+
+The private endpoint documentation provides guidance for configuring [on-premises workloads using a DNS forwarder](/azure/private-link/private-endpoint-dns#on-premises-workloads-using-a-dns-forwarder).
+
+### Manual DNS server configuration
+
+If you opted out of using Azure private DNS zones during private endpoint creation, you'll need to create the required DNS records in your on-premises DNS server.
+
+1. Go to the Azure portal.
+1. Navigate to the private endpoint resource associated with your virtual network and Azure Arc Private Link Scope.
+1. From the left-hand pane, select **DNS configuration** to see a list of the DNS records and corresponding IP addresses you'll need to set up on your DNS server. The FQDNs and IP addresses will change based on the region you selected for your private endpoint and the available IP addresses in your subnet.
+
+ :::image type="content" source="media/private-link/update-dns-configuration.png" alt-text="Screenshot showing manual DNS server configuration in the Azure portal.":::
+
+1. Follow the guidance from your DNS server vendor to add the necessary DNS zones and A records to match the table in the portal. Ensure that you select a DNS server that is appropriately scoped for your network. Every Kubernetes cluster that uses this DNS server now resolves the private endpoint IP addresses and must be associated with the Azure Arc Private Link Scope, or the connection will be refused.
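+
+As a hedged illustration, the resulting records typically take the shape below. The FQDNs and private IP addresses shown here are placeholders only; use the exact values from the **DNS configuration** page of your private endpoint.
+
+```console
+; Example A records (placeholder private IPs from your private endpoint subnet)
+gbl.his.arc.azure.com.                         IN A 10.0.0.4
+agentserviceapi.guestconfiguration.azure.com.  IN A 10.0.0.5
+dp.kubernetesconfiguration.azure.com.          IN A 10.0.0.6
+```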
+
+## Configure private links
+
+> [!NOTE]
+> Configuring private links for Azure Arc-enabled Kubernetes clusters is supported starting from version 1.3.0 of the connectedk8s CLI extension. Ensure that you're using connectedk8s CLI extension version 1.3.0 or later.
+
+You can configure private links for an existing Azure Arc-enabled Kubernetes cluster or when onboarding a Kubernetes cluster to Azure Arc for the first time using the command below:
+
+```azurecli
+az connectedk8s connect -g <resource-group-name> -n <connected-cluster-name> -l <location> --enable-private-link true --private-link-scope-resource-id <pls-arm-id>
+```
+
+| Parameter name | Description |
+| -- | -- |
+| --enable-private-link | Property to enable or disable the private links feature. Set it to `true` to enable connectivity with private links. |
+| --private-link-scope-resource-id | ID of the private link scope resource created earlier. For example: `/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.HybridCompute/privateLinkScopes/<scope-name>` |
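+
+For example, with hypothetical resource names:
+
+```azurecli
+# Sketch: onboard a cluster with private links enabled (all names are placeholders)
+az connectedk8s connect -g myResourceGroup -n myCluster -l eastus \
+  --enable-private-link true \
+  --private-link-scope-resource-id "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.HybridCompute/privateLinkScopes/myPrivateLinkScope"
+```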
+
+For Azure Arc-enabled Kubernetes clusters that were set up prior to configuring the Azure Arc private link scope, you can configure private links through the Azure portal using the following steps:
+
+1. In the Azure portal, navigate to your Azure Arc Private Link Scope resource.
+1. From the left pane, select **Azure Arc resources** and then **+ Add**.
+1. Select the Kubernetes clusters in the list that you want to associate with the Private Link Scope, and then choose **Select** to save your changes.
+
+ > [!NOTE]
+ > The list only shows Azure Arc-enabled Kubernetes clusters that are within the same subscription and region as your Private Link Scope.
+
+ :::image type="content" source="media/private-link/select-clusters.png" alt-text="Screenshot of the list of Kubernetes clusters for the Azure Arc Private Link Scope." lightbox="media/private-link/select-clusters.png":::
+
+## Troubleshooting
+
+If you run into problems, the following suggestions may help:
+
+* Check your on-premises DNS servers to verify that they're either forwarding to Azure DNS or configured with the appropriate A records in your private link zone. The following lookup commands should return private IP addresses in your Azure virtual network. If they resolve public IP addresses, double-check your machine's or server's DNS configuration and your network's DNS configuration.
+
+ ```console
+ nslookup gbl.his.arc.azure.com
+ nslookup agentserviceapi.guestconfiguration.azure.com
+ nslookup dp.kubernetesconfiguration.azure.com
+ ```
+
+* If you're having trouble onboarding your Kubernetes cluster, confirm that you've added the Azure Active Directory, Azure Resource Manager, AzureFrontDoor.FirstParty, and Microsoft Container Registry service tags to your local network firewall.
+
+## Next steps
+
+* Learn more about [Azure Private Endpoint](/azure/private-link/private-link-overview).
+* Learn how to [troubleshoot Azure Private Endpoint connectivity problems](/azure/private-link/troubleshoot-private-endpoint-connectivity).
+* Learn how to [configure Private Link for Azure Monitor](/azure/azure-monitor/logs/private-link-security).
azure-arc Quickstart Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/quickstart-connect-cluster.md
Title: "Quickstart: Connect an existing Kubernetes cluster to Azure Arc" description: In this quickstart, you learn how to connect an Azure Arc-enabled Kubernetes cluster. Previously updated : 08/03/2022 Last updated : 08/25/2022 ms.devlang: azurecli
For a conceptual look at connecting clusters to Azure Arc, see [Azure Arc-enable
|`*.servicebus.windows.net`, `guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com`, `sts.windows.net`, `https://k8sconnectcsp.azureedge.net` | For [Cluster Connect](cluster-connect.md) and for [Custom Location](custom-locations.md) based scenarios. | |`https://k8connecthelm.azureedge.net` | `az connectedk8s connect` uses Helm 3 to deploy Azure Arc agents on the Kubernetes cluster. This endpoint is needed for Helm client download to facilitate deployment of the agent helm chart. |
+> [!NOTE]
+> To translate the `*.servicebus.windows.net` wildcard into specific endpoints, use the command `\GET /urls/allowlist?api-version=2020-01-01&location=<location>`. Within this command, the region must be specified for the `<location>` placeholder.
+ ## Create a resource group Run the following command:
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/validation-program.md
The following Microsoft provided Kubernetes distributions and infrastructure pro
| - | - | | Cluster API Provider on Azure | Release version: [0.4.12](https://github.com/kubernetes-sigs/cluster-api-provider-azure/releases/tag/v0.4.12); Kubernetes version: [1.18.2](https://github.com/kubernetes/kubernetes/releases/tag/v1.18.2) | | AKS on Azure Stack HCI | Release version: [December 2020 Update](https://github.com/Azure/aks-hci/releases/tag/AKS-HCI-2012); Kubernetes version: [1.18.8](https://github.com/kubernetes/kubernetes/releases/tag/v1.18.8) |
+| K8s on Azure Stack Edge | Release version: Azure Stack Edge 2207 (2.2.2037.5375); Kubernetes version: [1.22.6](https://github.com/kubernetes/kubernetes/releases/tag/v1.22.6) |
The following providers and their corresponding Kubernetes distributions have successfully passed the conformance tests for Azure Arc-enabled Kubernetes:
azure-arc Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/security-overview.md
Title: Azure Arc resource bridge (preview) security overview description: Security information about Azure resource bridge (preview). Previously updated : 07/14/2022 Last updated : 08/25/2022 # Azure Arc resource bridge (preview) security overview
Azure Arc resource bridge (preview) is represented as a resource in a resource g
Users and applications who are granted the [Contributor](../../role-based-access-control/built-in-roles.md#contributor) or Administrator role to the resource group can make changes to the resource bridge, including deploying or deleting cluster extensions.
+## Data residency
+
+Azure Arc resource bridge follows data residency regulations specific to each region. If applicable, data is backed up in a secondary pair region in accordance with data residency regulations. Otherwise, data resides only in that specific region. Data isn't stored or processed across different geographies.
+ ## Data encryption at rest
-The Azure Arc resource bridge stores resource information in Azure Cosmos DB. As described in [Encryption at rest in Azure Cosmos DB](../../cosmos-db/database-encryption-at-rest.md), all the data is encrypted at rest.
+Azure Arc resource bridge stores resource information in Azure Cosmos DB. As described in [Encryption at rest in Azure Cosmos DB](../../cosmos-db/database-encryption-at-rest.md), all the data is encrypted at rest.
## Security audit logs
-The [activity log](../../azure-monitor/essentials/activity-log.md) is a platform log in Azure that provides insight into subscription-level events. This includes tracking when the Azure Arc resource bridge is modified, deleted, or added. You can [view the activity log](../../azure-monitor/essentials/activity-log.md#view-the-activity-log) in the Azure portal or retrieve entries with PowerShell and Azure CLI. By default, activity log events are [retained for 90 days](../../azure-monitor/essentials/activity-log.md#retention-period) and then deleted.
+The [activity log](../../azure-monitor/essentials/activity-log.md) is an Azure platform log that provides insight into subscription-level events. This includes tracking when the Azure Arc resource bridge is modified, deleted, or added. You can [view the activity log](../../azure-monitor/essentials/activity-log.md#view-the-activity-log) in the Azure portal or retrieve entries with PowerShell and Azure CLI. By default, activity log events are [retained for 90 days](../../azure-monitor/essentials/activity-log.md#retention-period) and then deleted.
## Next steps
azure-arc Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/network-requirements.md
The table below lists the URLs that must be available in order to install and us
|`dc.services.visualstudio.com`|Agent telemetry|Optional| Public | > [!NOTE]
-> To translate the `*.servicebus.windows.net` wildcard into specific endpoints, use the command `\GET /urls/allowlist?api-version=2020-01-01&location=<location>`. Within this command, the region must be specified for the <location> placeholder.
+> To translate the `*.servicebus.windows.net` wildcard into specific endpoints, use the command `\GET /urls/allowlist?api-version=2020-01-01&location=<location>`. Within this command, the region must be specified for the `<location>` placeholder.
### [Azure Government](#tab/azure-government)
azure-functions Durable Functions Http Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-http-api.md
The response payload for the **HTTP 202** cases is a JSON object with the follow
| **`terminatePostUri`** |The "terminate" URL of the orchestration instance. | | **`purgeHistoryDeleteUri`** |The "purge history" URL of the orchestration instance. | | **`rewindPostUri`** |(preview) The "rewind" URL of the orchestration instance. |
+| **`suspendPostUri`** |The "suspend" URL of the orchestration instance. |
+| **`resumePostUri`** |The "resume" URL of the orchestration instance. |
The data type of all fields is `string`.
Here is an example response payload for an orchestration instance with `abc123`
"purgeHistoryDeleteUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/abc123?code=XXX", "sendEventPostUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/abc123/raiseEvent/{eventName}?code=XXX", "statusQueryGetUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/abc123?code=XXX",
- "terminatePostUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/abc123/terminate?reason={text}&code=XXX"
+ "terminatePostUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/abc123/terminate?reason={text}&code=XXX",
+ "suspendPostUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/abc123/suspend?reason={text}&code=XXX",
+ "resumePostUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/abc123/resume?reason={text}&code=XXX"
} ```
The response payload for the **HTTP 200** and **HTTP 202** cases is a JSON objec
| Field | Data type | Description | |--|--|-|
-| **`runtimeStatus`** | string | The runtime status of the instance. Values include *Running*, *Pending*, *Failed*, *Canceled*, *Terminated*, *Completed*. |
+| **`runtimeStatus`** | string | The runtime status of the instance. Values include *Running*, *Pending*, *Failed*, *Canceled*, *Terminated*, *Completed*, *Suspended*. |
| **`input`** | JSON | The JSON data used to initialize the instance. This field is `null` if the `showInput` query string parameter is set to `false`.| | **`customStatus`** | JSON | The JSON data used for custom orchestration status. This field is `null` if not set. | | **`output`** | JSON | The JSON output of the instance. This field is `null` if the instance is not in a completed state. |
POST /admin/extensions/DurableTaskExtension/instances/bcf6fb5067b046fbb021b52ba7
The responses for this API do not contain any content.
+## Suspend instance (preview)
+
+Suspends a running orchestration instance.
+
+### Request
+
+In version 2.x of the Functions runtime, the request is formatted as follows (multiple lines are shown for clarity):
+
+```http
+POST /runtime/webhooks/durabletask/instances/{instanceId}/suspend
+ ?reason={text}
+ &taskHub={taskHub}
+ &connection={connectionName}
+ &code={systemKey}
+```
+
+| Field | Parameter Type | Description |
+|-|--|-|
+| **`instanceId`** | URL | The ID of the orchestration instance. |
+| **`reason`** | Query string | Optional. The reason for suspending the orchestration instance. |
+
+Several possible status code values can be returned.
+
+* **HTTP 202 (Accepted)**: The suspend request was accepted for processing.
+* **HTTP 404 (Not Found)**: The specified instance was not found.
+* **HTTP 410 (Gone)**: The specified instance has completed, failed, or terminated.
+
+The responses for this API do not contain any content.
+
+## Resume instance (preview)
+
+Resumes a suspended orchestration instance.
+
+### Request
+
+In version 2.x of the Functions runtime, the request is formatted as follows (multiple lines are shown for clarity):
+
+```http
+POST /runtime/webhooks/durabletask/instances/{instanceId}/resume
+ ?reason={text}
+ &taskHub={taskHub}
+ &connection={connectionName}
+ &code={systemKey}
+```
+
+| Field | Parameter Type | Description |
+|-|--|-|
+| **`instanceId`** | URL | The ID of the orchestration instance. |
+| **`reason`** | Query string | Optional. The reason for resuming the orchestration instance. |
+
+Several possible status code values can be returned.
+
+* **HTTP 202 (Accepted)**: The resume request was accepted for processing.
+* **HTTP 404 (Not Found)**: The specified instance was not found.
+* **HTTP 410 (Gone)**: The specified instance has completed, failed, or terminated.
+
+The responses for this API do not contain any content.
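+
+As a quick smoke test, you can call both endpoints with any HTTP client. The sketch below assumes a locally running Functions host, an instance ID of `abc123`, and a placeholder system key:
+
+```bash
+# Suspend the instance (replace the instance ID and code with your own values)
+curl -X POST "http://localhost:7071/runtime/webhooks/durabletask/instances/abc123/suspend?reason=Pausing%20for%20maintenance&code=XXX"
+
+# Later, resume the same instance
+curl -X POST "http://localhost:7071/runtime/webhooks/durabletask/instances/abc123/resume?reason=Maintenance%20complete&code=XXX"
+```
+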
+ ## Rewind instance (preview) Restores a failed orchestration instance into a running state by replaying the most recent failed operations.
azure-functions Durable Functions Instance Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-instance-management.md
ms.devlang: csharp, java, javascript, python
# Manage instances in Durable Functions in Azure
-Orchestrations in Durable Functions are long-running stateful functions that can be started, queried, and terminated using built-in management APIs. Several other instance management APIs are also exposed by the Durable Functions [orchestration client binding](durable-functions-bindings.md#orchestration-client), such as sending external events to instances, purging instance history, etc. This article goes into the details of all supported instance management operations.
+Orchestrations in Durable Functions are long-running stateful functions that can be started, queried, suspended, resumed, and terminated using built-in management APIs. Several other instance management APIs are also exposed by the Durable Functions [orchestration client binding](durable-functions-bindings.md#orchestration-client), such as sending external events to instances, purging instance history, etc. This article goes into the details of all supported instance management operations.
## Start instances
The method returns an object with the following properties:
* **LastUpdatedTime**: The time at which the orchestration last checkpointed. * **Input**: The input of the function as a JSON value. This field isn't populated if `showInput` is false. * **CustomStatus**: Custom orchestration status in JSON format.
-* **Output**: The output of the function as a JSON value (if the function has completed). If the orchestrator function failed, this property includes the failure details. If the orchestrator function was terminated, this property includes the reason for the termination (if any).
+* **Output**: The output of the function as a JSON value (if the function has completed). If the orchestrator function failed, this property includes the failure details. If the orchestrator function was suspended or terminated, this property includes the reason for the suspension or termination (if any).
* **RuntimeStatus**: One of the following values: * **Pending**: The instance has been scheduled but has not yet started running. * **Running**: The instance has started running.
The method returns an object with the following properties:
* **ContinuedAsNew**: The instance has restarted itself with a new history. This state is a transient state. * **Failed**: The instance failed with an error. * **Terminated**: The instance was stopped abruptly.
+ * **Suspended**: The instance was suspended and may be resumed at a later point in time.
* **History**: The execution history of the orchestration. This field is only populated if `showHistory` is set to `true`. > [!NOTE]
A terminated instance will eventually transition into the `Terminated` state. Ho
> [!NOTE] > Instance termination doesn't currently propagate. Activity functions and sub-orchestrations run to completion, regardless of whether you've terminated the orchestration instance that called them.
+## Suspend and Resume instances (preview)
+
+Suspending an orchestration allows you to pause a running orchestration. Unlike termination, you have the option to resume a suspended orchestration at a later point in time.
+
+The two parameters for the suspend API are an instance ID and a reason string, which is written to logs and to the instance status.
+
+# [C#](#tab/csharp)
+
+```csharp
+[FunctionName("SuspendResumeInstance")]
+public static async Task Run(
+ [DurableClient] IDurableOrchestrationClient client,
+ [QueueTrigger("suspend-resume-queue")] string instanceId)
+{
+ string suspendReason = "Need to pause workflow";
+ await client.SuspendAsync(instanceId, suspendReason);
+
+ // ... wait for some period of time since suspending is an async operation...
+
+ string resumeReason = "Continue workflow";
+ await client.ResumeAsync(instanceId, resumeReason);
+}
+```
+
+# [JavaScript](#tab/javascript)
+> [!NOTE]
+> This feature is currently not supported in JavaScript.
+# [Python](#tab/python)
+> [!NOTE]
+> This feature is currently not supported in Python.
+# [Java](#tab/java)
+> [!NOTE]
+> This feature is currently not supported in Java.
+++
+A suspended instance will eventually transition to the `Suspended` state. However, this transition won't happen immediately. Rather, the suspend operation is queued in the task hub along with other operations for that instance. You can use the instance query APIs to know when a running instance has actually reached the `Suspended` state.
+
+When a suspended orchestrator is resumed, its status will change back to `Running`.
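+
+Because suspension is processed asynchronously, you may want to poll the instance status before taking a dependent action. A minimal C# sketch, assuming a package version that includes the `Suspended` runtime status:
+
+```csharp
+// Poll until the suspend operation has actually been processed.
+DurableOrchestrationStatus status = await client.GetStatusAsync(instanceId);
+while (status.RuntimeStatus != OrchestrationRuntimeStatus.Suspended)
+{
+    await Task.Delay(TimeSpan.FromSeconds(1));
+    status = await client.GetStatusAsync(instanceId);
+}
+```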
+ ### Azure Functions Core Tools You can also terminate an orchestration instance directly, by using the [`func durable terminate` command](../functions-core-tools-reference.md#func-durable-terminate) in Core Tools.
public HttpResponseMessage httpStartAndWait(
-Call the function with the following line. Use 2 seconds for the timeout and 0.5 seconds for the retry interval:
+Call the function with the following line. Use 2 seconds for the timeout and 0.5 second for the retry interval:
```bash curl -X POST "http://localhost:7071/orchestrators/E1_HelloSequence/wait?timeout=2&retryInterval=0.5"
Transfer-Encoding: chunked
"id": "d3b72dddefce4e758d92f4d411567177", "sendEventPostUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/d3b72dddefce4e758d92f4d411567177/raiseEvent/{eventName}?taskHub={taskHub}&connection={connection}&code={systemKey}", "statusQueryGetUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/d3b72dddefce4e758d92f4d411567177?taskHub={taskHub}&connection={connection}&code={systemKey}",
- "terminatePostUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/d3b72dddefce4e758d92f4d411567177/terminate?reason={text}&taskHub={taskHub}&connection={connection}&code={systemKey}"
+ "terminatePostUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/d3b72dddefce4e758d92f4d411567177/terminate?reason={text}&taskHub={taskHub}&connection={connection}&code={systemKey}",
+ "suspendPostUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/d3b72dddefce4e758d92f4d411567177/suspend?reason={text}&taskHub={taskHub}&connection={connection}&code={systemKey}",
+ "resumePostUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/d3b72dddefce4e758d92f4d411567177/resume?reason={text}&taskHub={taskHub}&connection={connection}&code={systemKey}"
} ```
The methods return an object with the following string properties:
* **SendEventPostUri**: The "raise event" URL of the orchestration instance. * **TerminatePostUri**: The "terminate" URL of the orchestration instance. * **PurgeHistoryDeleteUri**: The "purge history" URL of the orchestration instance.
+* **SuspendPostUri**: The "suspend" URL of the orchestration instance.
+* **ResumePostUri**: The "resume" URL of the orchestration instance.
Functions can send instances of these objects to external systems to monitor or raise events on the corresponding orchestrations, as shown in the following examples:
azure-monitor App Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-map.md
For [ASP.NET Core](asp-net-core.md#adding-telemetryinitializers) applications, a
# [Java](#tab/java)
-**Java agent**
-
-For [Java agent 3.0](./java-in-process-agent.md) the cloud role name is set as follows:
+The cloud role name is set as follows:
```json {
For [Java agent 3.0](./java-in-process-agent.md) the cloud role name is set as f
} ```
-You can also set the cloud role name using the environment variable ```APPLICATIONINSIGHTS_ROLE_NAME```.
-
-**Java SDK**
-
-If you're using the SDK, starting with Application Insights Java SDK 2.5.0, you can specify the cloud role name
-by adding `<RoleName>` to your `ApplicationInsights.xml` file, for example.
-
-```xml
-<?xml version="1.0" encoding="utf-8"?>
-<ApplicationInsights xmlns="http://schemas.microsoft.com/ApplicationInsights/2013/Settings" schemaVersion="2014-05-30">
- <InstrumentationKey>** Your instrumentation key **</InstrumentationKey>
- <RoleName>** Your role name **</RoleName>
- ...
-</ApplicationInsights>
-```
-
-If you use Spring Boot with the Application Insights Spring Boot starter, the only required change is to set your custom name for the application in the application.properties file.
-
-`spring.application.name=<name-of-app>`
-
-The Spring Boot starter will automatically assign cloud role name to the value you enter for the spring.application.name property.
+You can also set the cloud role name via an environment variable or system property. See [configuring cloud role name](./java-standalone-config.md#cloud-role-name) for details.
# [Node.js](#tab/nodejs)
azure-monitor Asp Net Trace Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-trace-logs.md
System.Diagnostics.Tracing has an [Autoflush feature](/dotnet/api/system.diagnos
### How do I do this for Java?
-In Java codeless instrumentation (recommended) the logs are collected out of the box, use [Java 3.0 agent](./java-in-process-agent.md).
-If you're using the Java SDK, use the [Java log adapters](java-2x-trace-logs.md).
+The Application Insights Java agent collects logs from Log4j, Logback, and java.util.logging out of the box.
### There's no Application Insights option on the project context menu * Make sure that Developer Analytics Tools is installed on the development machine. At Visual Studio **Tools** > **Extensions and Updates**, look for **Developer Analytics Tools**. If it isn't on the **Installed** tab, open the **Online** tab and install it.
azure-monitor Auto Collect Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/auto-collect-dependencies.md
Below is the currently supported list of dependency calls that are automatically
## Java
-| App servers | Versions |
-|-|-|
-| [Tomcat](https://tomcat.apache.org/) | 7, 8 |
-| [JBoss EAP](https://developers.redhat.com/products/eap/download/) | 6, 7 |
-| [Jetty](https://www.eclipse.org/jetty/) | 9 |
-| <b>App frameworks </b> | |
-| [Spring](https://spring.io/) | 3.0 |
-| [Spring Boot](https://spring.io/projects/spring-boot) | 1.5.9+<sup>*</sup> |
-| Java Servlet | 3.1+ |
-| <b>Communication libraries</b> | |
-| [Apache Http Client](https://mvnrepository.com/artifact/org.apache.httpcomponents/httpclient) | 4.3+<sup>ΓÇá</sup> |
-| <b>Storage clients</b> | |
-| [SQL Server]( https://mvnrepository.com/artifact/com.microsoft.sqlserver/mssql-jdbc) | 1+<sup>ΓÇá</sup> |
-| [PostgreSQL (Beta Support)](https://github.com/Microsoft/ApplicationInsights-Jav#version-240-beta) | |
-| [Oracle]( https://www.oracle.com/technetwork/database/application-development/jdbc/downloads/https://docsupdatetracker.net/index.html) | 1+<sup>ΓÇá</sup> |
-| [MySql]( https://mvnrepository.com/artifact/mysql/mysql-connector-java) | 1+<sup>ΓÇá</sup> |
-| <b>Logging libraries</b> | |
-| [Logback](https://logback.qos.ch/) | 1+ |
-| [Log4j](https://logging.apache.org/log4j/) | 1.2+ |
-| <b>Metrics libraries</b> | |
-| JMX | 1.0+ |
-> [!NOTE]
-> *Except reactive programing support.
-> <br>ΓÇáRequires installation of [JVM Agent](java-2x-agent.md#install-the-application-insights-agent-for-java).
+See the list of Application Insights Java's
+[autocollected dependencies](java-in-process-agent.md#autocollected-dependencies).
## Node.js
A list of the latest [currently-supported modules](https://github.com/microsoft/
## Next steps - Set up custom dependency tracking for [.NET](./asp-net-dependencies.md).-- Set up custom dependency tracking for [Java](java-2x-agent.md).
+- Set up custom dependency tracking for [Java](java-in-process-agent.md#add-spans).
- Set up custom dependency tracking for [OpenCensus Python](./opencensus-python-dependency.md). - [Write custom dependency telemetry](./api-custom-events-metrics.md#trackdependency) - See [data model](./data-model.md) for Application Insights types and data model.
azure-monitor Correlation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/correlation.md
The Application Insights .NET SDK uses `DiagnosticSource` and `Activity` to coll
<a name="java-correlation"></a> ## Telemetry correlation in Java
-[Java agent](./java-in-process-agent.md) supports automatic correlation of telemetry. It automatically populates `operation_id` for all telemetry (like traces, exceptions, and custom events) issued within the scope of a request. It also propagates the correlation headers (described earlier) for service-to-service calls via HTTP, if the [Java SDK agent](java-2x-agent.md) is configured.
+[Application Insights Java](./java-in-process-agent.md) supports automatic correlation of telemetry.
+It automatically populates `operation_id` for all telemetry (like traces, exceptions, and custom events) issued within the scope of a request. It also propagates the correlation headers (described earlier) for service-to-service calls via HTTP, RPC, and messaging. See the list of Application Insights Java's
+[autocollected dependencies which support distributed trace propagation](java-in-process-agent.md#autocollected-dependencies).
> [!NOTE]
-> Application Insights Java agent auto-collects requests and dependencies for JMS, Kafka, Netty/Webflux, and more. For Java SDK only calls made via Apache HttpClient are supported for the correlation feature. Automatic context propagation across messaging technologies (like Kafka, RabbitMQ, and Azure Service Bus) isn't supported in the SDK.
-
-> [!NOTE]
-> To collect custom telemetry you need to instrument the application with Java 2.6 SDK.
+> See [custom telemetry](./java-in-process-agent.md#custom-telemetry) if the auto-instrumentation does not cover all
+> of your needs.
### Role names You might want to customize the way component names are displayed in the [Application Map](../../azure-monitor/app/app-map.md). To do so, you can manually set the `cloud_RoleName` by taking one of the following actions: -- For Application Insights Java agent 3.0, set the cloud role name as follows:
+- For Application Insights Java, set the cloud role name as follows:
```json {
You might want to customize the way component names are displayed in the [Applic
} } ```
- You can also set the cloud role name using the environment variable `APPLICATIONINSIGHTS_ROLE_NAME`.
--- With Application Insights Java SDK 2.5.0 and later, you can specify the `cloud_RoleName`
- by adding `<RoleName>` to your ApplicationInsights.xml file:
--
- ```xml
- <?xml version="1.0" encoding="utf-8"?>
- <ApplicationInsights xmlns="http://schemas.microsoft.com/ApplicationInsights/2013/Settings" schemaVersion="2014-05-30">
- <ConnectionString>InstrumentationKey=00000000-0000-0000-0000-000000000000</ConnectionString>
- <RoleName>** Your role name **</RoleName>
- ...
- </ApplicationInsights>
- ```
--- If you use Spring Boot with the Application Insights Spring Boot Starter, you just need to set your custom name for the application in the application.properties file:-
- `spring.application.name=<name-of-app>`
- The Spring Boot Starter automatically assigns `cloudRoleName` to the value you enter for the `spring.application.name` property.
+ You can also set the cloud role name via an environment variable or system property. See [configuring cloud role name](./java-standalone-config.md#cloud-role-name) for details.
## Next steps
azure-monitor Data Model Dependency Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-dependency-telemetry.md
Indication of successful or unsuccessful call.
## Next steps - Set up dependency tracking for [.NET](./asp-net-dependencies.md).-- Set up dependency tracking for [Java](java-2x-agent.md).
+- Set up dependency tracking for [Java](./java-in-process-agent.md).
- [Write custom dependency telemetry](./api-custom-events-metrics.md#trackdependency) - See [data model](data-model.md) for Application Insights types and data model. - Check out [platforms](./platforms.md) supported by Application Insights.
azure-monitor Data Model Trace Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-trace-telemetry.md
Trace severity level. Value can be `Verbose`, `Information`, `Warning`, `Error`,
## Next steps - [Explore .NET trace logs in Application Insights](./asp-net-trace-logs.md).-- [Explore Java trace logs in Application Insights](java-2x-trace-logs.md).
+- [Explore Java trace logs in Application Insights](./java-in-process-agent.md#autocollected-logs).
- See [data model](data-model.md) for Application Insights types and data model. - [Write custom trace telemetry](./api-custom-events-metrics.md#tracktrace) - Check out [platforms](./platforms.md) supported by Application Insights.
azure-monitor Data Retention Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-retention-privacy.md
There are three sources of data:
* Each SDK has many [modules](./configuration-with-applicationinsights-config.md), which use different techniques to collect different types of telemetry. * If you install the SDK in development, you can use its API to send your own telemetry, in addition to the standard modules. This custom telemetry can include any data you want to send.
-* In some web servers, there are also agents that run alongside the app and send telemetry about CPU, memory, and network occupancy. For example, Azure VMs, Docker hosts, and [Java EE servers](java-2x-agent.md) can have such agents.
+* In some web servers, there are also agents that run alongside the app and send telemetry about CPU, memory, and network occupancy. For example, Azure VMs, Docker hosts, and [Java application servers](./java-in-process-agent.md) can have such agents.
* [Availability tests](./monitor-web-app-availability.md) are processes run by Microsoft that send requests to your web app at regular intervals. The results are sent to the Application Insights service. ### What kinds of data are collected?
azure-monitor Diagnostic Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/diagnostic-search.md
The first time you do this, you are asked to configure a link to your Azure DevO
In addition to the out-of-the-box telemetry sent by Application Insights SDK, you can:
-* Capture log traces from your favorite logging framework in [.NET](./asp-net-trace-logs.md) or [Java](java-2x-trace-logs.md). This means you can search through your log traces and correlate them with page views, exceptions, and other events.
+* Capture log traces from your favorite logging framework in [.NET](./asp-net-trace-logs.md) or [Java](./java-in-process-agent.md#autocollected-logs). This means you can search through your log traces and correlate them with page views, exceptions, and other events.
* [Write code](./api-custom-events-metrics.md) to send custom events, page views, and exceptions. [Learn how to send logs and custom telemetry to Application Insights](./asp-net-trace-logs.md).
azure-monitor Platforms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/platforms.md
## Logging frameworks * [ILogger](./ilogger.md) * [Log4Net, NLog, or System.Diagnostics.Trace](./asp-net-trace-logs.md)
-* [Java, Log4J, or Logback](java-2x-trace-logs.md)
+* [Log4J, Logback, or java.util.logging](./java-in-process-agent.md#autocollected-logs)
* [LogStash plugin](https://github.com/Azure/azure-diagnostics-tools/tree/master/Logstash/logstash-output-applicationinsights) * [Azure Monitor](/archive/blogs/msoms/application-insights-connector-in-oms)
azure-monitor Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling.md
By default no sampling is enabled in the Java auto-instrumentation and SDK. Curr
* To configure sampling overrides that override the default sampling rate and apply different sampling rates to selected requests and dependencies, use the [sampling override guide](./java-standalone-sampling-overrides.md#getting-started). * To configure fixed-rate sampling that applies to all of your telemetry, use the [fixed rate sampling guide](./java-standalone-config.md#sampling).
-#### Configuring Java 2.x SDK
-
-1. Download and configure your web application with the latest [Application Insights Java SDK](./java-2x-get-started.md).
-
-2. **Enable the fixed-rate sampling module** by adding the following snippet to `ApplicationInsights.xml` file:
-
- ```xml
- <TelemetryProcessors>
- <BuiltInProcessors>
- <Processor type="FixedRateSamplingTelemetryProcessor">
- <!-- Set a percentage close to 100/N where N is an integer. -->
- <!-- E.g. 50 (=100/2), 33.33 (=100/3), 25 (=100/4), 20, 1 (=100/100), 0.1 (=100/1000) -->
- <Add name="SamplingPercentage" value="50" />
- </Processor>
- </BuiltInProcessors>
- </TelemetryProcessors>
- ```
-
-3. You can include or exclude specific types of telemetry from sampling using the following tags inside the `Processor` tag's `FixedRateSamplingTelemetryProcessor`:
-
- ```xml
- <ExcludedTypes>
- <ExcludedType>Request</ExcludedType>
- </ExcludedTypes>
-
- <IncludedTypes>
- <IncludedType>Exception</IncludedType>
- </IncludedTypes>
- ```
-
-The telemetry types that can be included or excluded from sampling are: `Dependency`, `Event`, `Exception`, `PageView`, `Request`, and `Trace`.
- > [!NOTE] > For the sampling percentage, choose a percentage that is close to 100/N where N is an integer. Currently sampling doesn't support other values.
azure-monitor Sdk Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-connection-string.md
You can set the connection string in the `applicationinsights.json` configuratio
For more information, [connection string configuration](./java-standalone-config.md#connection-string).
-For Application Insights Java 2.x, you can set the connection string in the `ApplicationInsights.xml` configuration file:
-
-```xml
-<?xml version="1.0" encoding="utf-8"?>
-<ApplicationInsights xmlns="http://schemas.microsoft.com/ApplicationInsights/2013/Settings">
- <ConnectionString>InstrumentationKey=00000000-0000-0000-0000-000000000000</ConnectionString>
-</ApplicationInsights>
-```
- # [JavaScript](#tab/js) Important: JavaScript doesn't support the use of Environment Variables.
azure-monitor Container Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-overview.md
Monitoring your containers is critical, especially when you're running a product
Container insights gives you performance visibility by collecting memory and processor metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. After you enable monitoring from Kubernetes clusters, metrics and Container logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux. Metrics are sent to the [metrics database in Azure Monitor](../essentials/data-platform-metrics.md). Log data is sent to your [Log Analytics workspace](../logs/log-analytics-workspace-overview.md). + ## Features of Container insights
azure-monitor Container Insights Transition Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-transition-hybrid.md
- Title: "Transition to using Container Insights on Azure Arc-enabled Kubernetes clusters" Previously updated : 08/25/2022---
-description: "Learn how to migrate from using script-based hybrid monitoring solutions to Container Insights on Azure Arc-enabled Kubernetes clusters"
---
-# Transition to using Container Insights on Azure Arc-enabled Kubernetes
-
-On May 31, 2022 Container Insights support for Azure Red Hat OpenShift v4.x was retired. If you use the script-based model of Container Insights for Azure Red Hat OpenShift v4.x, make sure to transition to Container Insights on [Azure Arc enabled Kubernetes](./container-insights-enable-arc-enabled-clusters.md) prior to that date.
-
-## Steps to complete the transition
-
-To transition to Container Insights on Azure Arc enabled Kubernetes, we recommend the following approach.
-
-1. Learn about the feature differences between Container Insights with Azure Red Hat OpenShift v4.x and Azure Arc enabled Kubernetes
-2. [Disable your existing monitoring](./container-insights-optout-openshift-v4.md) for your Azure Red Hat OpenShift cluster
-3. Read documentation on the [Azure Arc enabled Kubernetes cluster extensions](../../azure-arc/kubernetes/extensions.md) and the [Container Insights onboarding prerequisites](./container-insights-enable-arc-enabled-clusters.md#prerequisites) to understand the requirements
-4. [Connect your cluster](../../azure-arc/kubernetes/quickstart-connect-cluster.md) to Azure Arc enabled Kubernetes platform
-5. [Turn on Container Insights](./container-insights-enable-arc-enabled-clusters.md) for Azure Arc enabled Kubernetes using Azure portal, CLI, or ARM.
-6. [Validate](./container-insights-enable-arc-enabled-clusters.md#verify-extension-installation-status) the current configuration is working
-
-## Container Insights on Azure Red Hat OpenShift v4.x vs Azure Arc enabled Kubernetes
-
-The following table highlights the key differences between monitoring using the Azure Red Hat OpenShift v4.x script versus through Azure Arc enabled Kubernetes cluster extensions. Container Insights on Azure Arc enabled Kubernetes offers a substantial upgrade to that on Azure Red Hat OpenShift v4.x.
-
-| Feature Differences | Azure Red Hat OpenShift v.4x monitoring | Azure Arc enabled Kubernetes monitoring |
-| - | -- | - |
-| Onboarding | Manual script-based installation only | Single click onboarding using Azure Arc cluster extensions via Azure portal, CLI, or ARM |
-| Alerting | Log based alerts only | Log based alerting and [recommended metric-based](./container-insights-metric-alerts.md) alerts |
-| Metrics | Does not support Azure Monitor metrics | Supports Azure Monitor metrics |
-| Consumption | Viewable only from Azure Monitor blade | Accessible from both Azure Monitor and Azure Arc enabled Kubernetes resource blade |
-| Agent | Manual agent upgrades | Automatic updates for monitoring agent with version control through Azure Arc cluster extensions |
-| Feature parity | No additional updates beyond May 2022 | First class parity and updates inline with Container Insights on AKS |
-
-## Next steps
--- [Disable existing monitoring](./container-insights-optout-openshift-v4.md) for your Azure Red Hat OpenShift v4.x cluster -- [Connect your cluster](../../azure-arc/kubernetes/quickstart-connect-cluster.md) to the Azure Arc enabled Kubernetes platform-- [Configure Container Insights](./container-insights-enable-arc-enabled-clusters.md) on Azure Arc enabled Kubernetes
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
na Previously updated : 08/24/2022 Last updated : 08/26/2022 # Guidelines for Azure NetApp Files network planning
Azure NetApp Files Standard network features are supported for the following reg
* Australia Southeast * Canada Central * Central US
+* East Asia
* East US * East US 2 * France Central
Azure NetApp Files Standard network features are supported for the following reg
* Japan East * North Central US * North Europe
+* Norway East
* South Central US * Southeast Asia * Switzerland North
azure-portal Azure Portal Video Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-video-series.md
Title: Azure portal how-to video series description: Find video demos for how to work with Azure services in the portal. View and link directly to the latest how-to videos.
-keywords:
Previously updated : 03/03/2022- Last updated : 08/25/2022+ # Azure portal how-to video series
The [Azure portal how-to video series](https://www.youtube.com/playlist?list=PLL
## Featured video
-In this featured video, we show you how to move your resources in Azure between resource groups and locations.
+In this featured video, we show you how to create a storage account in the Azure portal.
-> [!VIDEO https://www.youtube.com/embed/8HVAP4giLdc]
+> [!VIDEO https://www.youtube.com/embed/AhuNgBafmUo]
-[How to move Azure resources](https://www.youtube.com/watch?v=8HVAP4giLdc)
+[How to create a storage account](https://www.youtube.com/watch?v=AhuNgBafmUo)
Catch up on these videos you may have missed:
-| [How to easily manage your virtual machine](https://www.youtube.com/watch?v=vQClJHt2ulQ) | [How to use pills to filter in the Azure portal](https://www.youtube.com/watch?v=XyKh_3NxUlM) | [How to get a visualization view of your resources](https://www.youtube.com/watch?v=wudqkkJd5E4) |
+| [How to use search in the Azure portal](https://www.youtube.com/watch?v=PcHF_DzsETA) | [How to check your subscription's secure score](https://www.youtube.com/watch?v=yqb3qvsjqXY) | [How to find and use Translator](https://www.youtube.com/watch?v=6xBHkHkFmZ4) |
| | | |
-| [![Image of YouTube video about how to easily manage your virtual machine](https://i.ytimg.com/vi/vQClJHt2ulQ/hqdefault.jpg)](http://www.youtube.com/watch?v=vQClJHt2ulQ) | [![Image of YouTube video about how to use pills to filter in the Azure portal](https://i.ytimg.com/vi/XyKh_3NxUlM/hqdefault.jpg)](https://www.youtube.com/watch?v=XyKh_3NxUlM) | [![Image of YouTube video about how to get a visualization view of your resources](https://i.ytimg.com/vi/wudqkkJd5E4/hqdefault.jpg)](http://www.youtube.com/watch?v=wudqkkJd5E4) |
+| [![Image of YouTube video about how to use search in the Azure portal](https://i.ytimg.com/vi/PcHF_DzsETA/hqdefault.jpg)](http://www.youtube.com/watch?v=PcHF_DzsETA) | [![Image of YouTube video about how to check your subscription's secure score](https://i.ytimg.com/vi/yqb3qvsjqXY/hqdefault.jpg)](https://www.youtube.com/watch?v=yqb3qvsjqXY) | [![Image of YouTube video about how to find and use Translator](https://i.ytimg.com/vi/6xBHkHkFmZ4/hqdefault.jpg)](http://www.youtube.com/watch?v=6xBHkHkFmZ4) |
## Video playlist
azure-resource-manager Template Tutorial Add Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-add-parameters.md
# Tutorial: Add parameters to your ARM template
-In the [previous tutorial](template-tutorial-add-resource.md), you learned how to add an [Azure storage account](../../storage/common/storage-account-create.md) to the template and deploy it. In this tutorial, you learn how to improve the Azure Resource Manager template (ARM template) by adding parameters. This tutorial takes about **14 minutes** to complete.
+In the [previous tutorial](template-tutorial-add-resource.md), you learned how to add an [Azure storage account](../../storage/common/storage-account-create.md) to the template and deploy it. In this tutorial, you learn how to improve the Azure Resource Manager template (ARM template) by adding parameters. This tutorial takes about **14 minutes** to complete.
## Prerequisites We recommend that you complete the [tutorial about resources](template-tutorial-add-resource.md), but it's not required.
-You need to have [Visual Studio Code](https://code.visualstudio.com/) installed and working with the Azure Resource Manager Tools extension, and either Azure PowerShell or Azure CLI. For more information, see [template tools](template-tutorial-create-first-template.md#get-tools).
+You need to have [Visual Studio Code](https://code.visualstudio.com/) installed and working with the Azure Resource Manager Tools extension, and either Azure PowerShell or Azure Command-Line Interface (CLI). For more information, see [template tools](template-tutorial-create-first-template.md#get-tools).
## Review template
azure-resource-manager Template Tutorial Add Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-add-resource.md
description: Describes the steps to create your first Azure Resource Manager tem
Last updated 06/14/2022 -+ # Tutorial: Add a resource to your ARM template
-In the [previous tutorial](template-tutorial-create-first-template.md), you learned how to create and deploy your first blank Azure Resource Manager template (ARM template). Now, you're ready to deploy an actual resource to that template. In this case, an [Azure storage account](../../storage/common/storage-account-create.md). It takes about **9 minutes** to complete this instruction.
+In the [previous tutorial](template-tutorial-create-first-template.md), you learned how to create and deploy your first blank Azure Resource Manager template (ARM template). Now, you're ready to deploy an actual resource to that template. In this case, an [Azure storage account](../../storage/common/storage-account-create.md). This tutorial takes about **9 minutes** to complete.
## Prerequisites We recommend that you complete the [introductory tutorial about templates](template-tutorial-create-first-template.md), but it's not required.
-You need to have [Visual Studio Code](https://code.visualstudio.com/) installed and working with the Azure Resource Manager Tools extension, and either Azure PowerShell or Azure CLI. For more information, see [template tools](template-tutorial-create-first-template.md#get-tools).
+You need to have [Visual Studio Code](https://code.visualstudio.com/) installed and working with the Azure Resource Manager Tools extension, and either Azure PowerShell or Azure Command-Line Interface (CLI). For more information, see [template tools](template-tutorial-create-first-template.md#get-tools).
## Add resource
azure-resource-manager Template Tutorial Create First Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-create-first-template.md
# Tutorial: Create and deploy your first ARM template
-This tutorial introduces you to Azure Resource Manager templates (ARM templates). It shows you how to create a starter template and deploy it to Azure. It teaches you about the template structure and the tools you need to work with templates. It takes about **12 minutes** to complete this tutorial, but the actual time varies based on how many tools you need to install.
+This tutorial introduces you to Azure Resource Manager templates (ARM templates). It shows you how to create a starter template and deploy it to Azure. It teaches you about the template structure and the tools you need to work with templates. This tutorial takes about **12 minutes** to complete, but the actual finish time varies based on how many tools you need to install.
This tutorial is the first of a series. As you progress through the series, you modify the starting template, step by step, until you explore all of the core parts of an ARM template. These elements are the building blocks for more complex templates. We hope by the end of the series you're confident in creating your own templates and ready to automate your deployments with templates.
azure-resource-manager Error Not Found https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/error-not-found.md
Last updated 11/30/2021
-# Resolve resource not found errors
+# Resolve Resource Not Found errors
This article describes the error you see when a resource can't be found during an operation. Typically, you see this error when deploying resources with a Bicep file or Azure Resource Manager template (ARM template). You also see this error when doing management tasks and Azure Resource Manager can't find the required resource. For example, if you try to add tags to a resource that doesn't exist, you receive this error.
-## Symptom
+## Symptoms
There are two error codes that indicate the resource can't be found. The `NotFound` error returns a result similar to:
group {resource group name} was not found.
Resource Manager needs to retrieve the properties for a resource, but can't find the resource in your subscription.
-## Solution 1 - check resource properties
+## Solution 1 - Check resource properties
When you receive this error while doing a management task, check the values you provided for the resource. The three values to check are:
If you're using PowerShell or Azure CLI, check that you're running commands in t
If you can't verify the properties, sign in to the [Microsoft Azure portal](https://portal.azure.com). Find the resource you're trying to use and examine the resource name, resource group, and subscription.
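As an editorial sketch of that check (the resource name, group, and type values below are placeholders), the Azure CLI can confirm whether the resource exists where you expect it:

```bash
# Confirm which subscription the CLI is currently targeting.
az account show --query name -o tsv

# Confirm the resource exists in the expected resource group.
az resource show --name myStorageAccount \
  --resource-group myResourceGroup \
  --resource-type "Microsoft.Storage/storageAccounts"
```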
-## Solution 2 - set dependencies
+## Solution 2 - Set dependencies
If you get this error when deploying a template, you may need to add a dependency. Resource Manager optimizes deployments by creating resources in parallel, when possible.
When you see dependency problems, you need to gain insight into the order of res
:::image type="content" source="media/error-not-found/deployment-events-sequence.png" alt-text="Screenshot of activity log for resources deployed in sequential order.":::
-## Solution 3 - get external resource
+## Solution 3 - Get external resource
# [Bicep](#tab/bicep)
The following example gets the resource ID for a resource that exists in a diffe
-## Solution 4 - get managed identity from resource
+## Solution 4 - Get managed identity from resource
# [Bicep](#tab/bicep)
Or, to get the tenant ID for a managed identity that is applied to a virtual mac
-## Solution 5 - check functions
+## Solution 5 - Check functions
# [Bicep](#tab/bicep)
When deploying a template, look for expressions that use the [reference](../temp
-## Solution 6 - after deleting resource
+## Solution 6 - After deleting resource
When you delete a resource, there might be a short amount of time when the resource appears in the portal but isn't available. If you select the resource, you'll get an error that the resource is **Not found**.
azure-video-indexer Connect To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/connect-to-azure.md
For the pricing details, see [pricing](https://azure.microsoft.com/pricing/detai
> [!NOTE] > Use the same Azure AD user you used when connecting to Azure.
-It's mandatory to have the following three accounts located in the same region:
+It's strongly recommended to have the following three accounts located in the same region:
* The Azure Video Indexer account that you're creating. * The Azure Video Indexer account that you're connecting with the Media Services account.
If your storage account is behind a firewall, see [storage account that is behin
The following Azure Media Services related considerations apply:
-* If you plan to connect to an existing Media Services account, make sure the Media Services account was created with the classic APIs.
-
- ![Media Services classic API](./media/create-account/enable-classic-api.png)
* If you connect to a new Media Services account, Azure Video Indexer automatically starts the default **Streaming Endpoint** in it: ![Media Services streaming endpoint](./media/create-account/ams-streaming-endpoint.png)
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
In addition, the model now includes people and locations in-context which are no
### Azure Video Indexer is deployed on US Government cloud You can now create an Azure Video Indexer paid account on US government cloud in Virginia and Arizona regions.
-Azure Video Indexer free trial offering isn't available in the mentioned region. For more information go to Azure Video Indexer Documentation.
+Azure Video Indexer trial offering isn't available in the mentioned regions. For more information, go to the Azure Video Indexer documentation.
### Azure Video Indexer deployed in the India Central region
azure-video-indexer Upload Index Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/upload-index-videos.md
This article shows how to upload and index videos by using the Azure Video Index
When you're creating an Azure Video Indexer account, you choose between: -- A free trial account. Azure Video Indexer provides up to 600 minutes of free indexing to website users and up to 2,400 minutes of free indexing to API users.
+- A trial account. Azure Video Indexer provides up to 600 minutes of free indexing to website users and up to 2,400 minutes of free indexing to API users.
- A paid option where you're not limited by a quota. You create an Azure Video Indexer account that's [connected to your Azure subscription and an Azure Media Services account](connect-to-azure.md). You pay for indexed minutes. For more information about account types, see [Media Services pricing](https://azure.microsoft.com/pricing/details/media-services/).
azure-video-indexer Video Indexer Use Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-use-apis.md
Azure Video Indexer consolidates various audio and video artificial intelligence (AI) technologies offered by Microsoft into one integrated service, making development simpler. The APIs are designed to enable developers to focus on consuming Media AI technologies without worrying about scale, global reach, availability, and reliability of cloud platforms. You can use the API to upload your files, get detailed video insights, get URLs of embeddable insight and player widgets, and more.
-When creating an Azure Video Indexer account, you can choose a free trial account (where you get a certain number of free indexing minutes) or a paid option (where you're not limited by the quota). With a free trial, Azure Video Indexer provides up to 600 minutes of free indexing to website users and up to 2400 minutes of free indexing to API users. With a paid option, you create an Azure Video Indexer account that's [connected to your Azure subscription and an Azure Media Services account](connect-to-azure.md). You pay for minutes indexed, for more information, see [Media Services pricing](https://azure.microsoft.com/pricing/details/media-services/).
+When creating an Azure Video Indexer account, you can choose a trial account (where you get a certain number of free indexing minutes) or a paid option (where you're not limited by the quota). With a trial account, Azure Video Indexer provides up to 600 minutes of free indexing to website users and up to 2,400 minutes of free indexing to API users. With a paid option, you create an Azure Video Indexer account that's [connected to your Azure subscription and an Azure Media Services account](connect-to-azure.md). You pay for minutes indexed. For more information, see [Media Services pricing](https://azure.microsoft.com/pricing/details/media-services/).
This article shows how the developers can take advantage of the [Azure Video Indexer API](https://api-portal.videoindexer.ai/).
backup Backup Azure Mabs Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-mabs-troubleshoot.md
Title: Troubleshoot Azure Backup Server
description: Troubleshoot installation, registration of Azure Backup Server, and backup and restore of application workloads. Previously updated : 07/05/2019 Last updated : 08/26/2022+++ # Troubleshoot Azure Backup Server
Reg query "HKLM\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Setup"
| Operation | Error details | Workaround | | | | |
-| Registering to a vault | Invalid vault credentials provided. The file is corrupted or does not have the latest credentials associated with the recovery service. | Recommended action: <br> <ul><li> Download the latest credentials file from the vault and try again. <br>(OR)</li> <li> If the previous action didn't work, try downloading the credentials to a different local directory or create a new vault. <br>(OR)</li> <li> Try updating the date and time settings as described in [this article](./backup-azure-mars-troubleshoot.md#invalid-vault-credentials-provided). <br>(OR)</li> <li> Check to see if c:\windows\temp has more than 65000 files. Move stale files to another location or delete the items in the Temp folder. <br>(OR)</li> <li> Check the status of certificates. <br> a. Open **Manage Computer Certificates** (in Control Panel). <br> b. Expand the **Personal** node and its child node **Certificates**.<br> c. Remove the certificate **Windows Azure Tools**. <br> d. Retry the registration in the Azure Backup client. <br> (OR) </li> <li> Check to see if any group policy is in place. </li></ul> |
+| Registering to a vault | Invalid vault credentials provided. The file is corrupted or doesn't have the latest credentials associated with the recovery service. | Recommended action: <br> <ul><li> Download the latest credentials file from the vault and try again. <br>(OR)</li> <li> If the previous action didn't work, try downloading the credentials to a different local directory or create a new vault. <br>(OR)</li> <li> Try updating the date and time settings as described in [this article](./backup-azure-mars-troubleshoot.md#invalid-vault-credentials-provided). <br>(OR)</li> <li> Check to see if c:\windows\temp has more than 65000 files. Move stale files to another location or delete the items in the Temp folder. <br>(OR)</li> <li> Check the status of certificates. <br> a. Open **Manage Computer Certificates** (in Control Panel). <br> b. Expand the **Personal** node and its child node **Certificates**.<br> c. Remove the certificate **Windows Azure Tools**. <br> d. Retry the registration in the Azure Backup client. <br> (OR) </li> <li> Check to see if any group policy is in place. </li> <li> To prevent errors during vault registration, ensure that you have MARS agent version 2.0.9249.0 or later installed. If not, we recommend that you download and install the latest version [from here](https://aka.ms/azurebackup_agent). </li></ul> |
## Replica is inconsistent
Reg query "HKLM\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Setup"
| Operation | Error details | Workaround | | | | |
-| Backup | Online recovery point creation failed | **Error Message**: Windows Azure Backup Agent was unable to create a snapshot of the selected volume. <br> **Workaround**: Try increasing the space in replica and recovery point volume.<br> <br> **Error Message**: The Windows Azure Backup Agent cannot connect to the OBEngine service <br> **Workaround**: verify that the OBEngine exists in the list of running services on the computer. If the OBEngine service is not running, use the "net start OBEngine" command to start the OBEngine service. <br> <br> **Error Message**: The encryption passphrase for this server is not set. Please configure an encryption passphrase. <br> **Workaround**: Try configuring an encryption passphrase. If it fails, take the following steps: <br> <ol><li>Verify that the scratch location exists. This is the location that's mentioned in the registry **HKEY_LOCAL_MACHINE\Software\Microsoft\Windows Azure Backup\Config**, with the name **ScratchLocation** should exist.</li><li> If the scratch location exists, try re-registering by using the old passphrase. *Whenever you configure an encryption passphrase, save it in a secure location.*</li><ol>|
+| Backup | Online recovery point creation failed | **Error Message**: Windows Azure Backup Agent was unable to create a snapshot of the selected volume. <br> **Workaround**: Try increasing the space in the replica and recovery point volume.<br> <br> **Error Message**: The Windows Azure Backup Agent can't connect to the OBEngine service <br> **Workaround**: Verify that the OBEngine exists in the list of running services on the computer. If the OBEngine service isn't running, use the "net start OBEngine" command to start it. <br> <br> **Error Message**: The encryption passphrase for this server isn't set. Please configure an encryption passphrase. <br> **Workaround**: Try configuring an encryption passphrase. If it fails, take the following steps: <br> <ol><li>Verify that the scratch location exists. This is the location mentioned in the registry key **HKEY_LOCAL_MACHINE\Software\Microsoft\Windows Azure Backup\Config**, under the value name **ScratchLocation**.</li><li> If the scratch location exists, try re-registering by using the old passphrase. *Whenever you configure an encryption passphrase, save it in a secure location.*</li></ol>|
## The original and external DPM servers must be registered to the same vault | Operation | Error details | Workaround | | | | |
-| Restore | **Error code**: CBPServerRegisteredVaultDontMatchWithCurrent/Vault Credentials Error: 100110 <br/> <br/>**Error message**: The original and external DPM servers must be registered to the same vault | **Cause**: This issue occurs when you're trying to restore files to the alternate server from the original server using the External DPM recovery option, and if the server that's being recovered and the original server from where the data is backed-up are not associated with the same Recovery Services vault.<br/> <br/>**Workaround** To resolve this issue ensure both the original and alternate server are registered to the same vault.|
+| Restore | **Error code**: CBPServerRegisteredVaultDontMatchWithCurrent/Vault Credentials Error: 100110 <br/> <br/>**Error message**: The original and external DPM servers must be registered to the same vault | **Cause**: This issue occurs when you're trying to restore files to the alternate server from the original server using the External DPM recovery option, and if the server that's being recovered and the original server from where the data is backed-up aren't associated with the same Recovery Services vault.<br/> <br/>**Workaround**: To resolve this issue, ensure both the original and alternate servers are registered to the same vault.|
## Online recovery point creation jobs for VMware VM fail | Operation | Error details | Workaround | | | | |
-| Backup | Online recovery point creation jobs for VMware VM fail. DPM encountered an error from VMware while trying to get ChangeTracking information. ErrorCode - FileFaultFault (ID 33621) | <ol><li> Reset the CTK on VMware for the affected VMs.</li> <li>Check that independent disk is not in place on VMware.</li> <li>Stop protection for the affected VMs and reprotect with the **Refresh** button. </li><li>Run a CC for the affected VMs.</li></ol>|
+| Backup | Online recovery point creation jobs for VMware VM fail. DPM encountered an error from VMware while trying to get ChangeTracking information. ErrorCode - FileFaultFault (ID 33621) | <ol><li> Reset the CTK on VMware for the affected VMs.</li> <li>Check that no independent disks are in place on VMware.</li> <li>Stop protection for the affected VMs and reprotect with the **Refresh** button. </li><li>Run a consistency check (CC) for the affected VMs.</li></ol>|
## The agent operation failed because of a communication error with the DPM agent coordinator service on the server
backup Backup Azure Microsoft Azure Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-microsoft-azure-backup.md
Title: Use Azure Backup Server to back up workloads description: In this article, learn how to prepare your environment to protect and back up workloads using Microsoft Azure Backup Server (MABS). Previously updated : 04/14/2021 Last updated : 08/26/2022+++ # Install and upgrade Azure Backup Server
Always join Azure Backup Server to a domain. Moving an existing Azure Backup Ser
Whether you send backup data to Azure, or keep it locally, Azure Backup Server must be registered with a Recovery Services vault.
+>[!Note]
+>If you encounter difficulties registering to a vault, or other errors, ensure that you have MARS agent version 2.0.9249.0 or later installed. If not, we recommend that you install the latest version [from here](https://aka.ms/azurebackup_agent).
+ [!INCLUDE [backup-create-rs-vault.md](../../includes/backup-create-rs-vault.md)] ### Set storage replication
Once the extraction process complete, check the box to launch the freshly extrac
3. The Azure Backup Server installation package comes bundled with the appropriate SQL Server binaries needed. When starting a new Azure Backup Server installation, pick the option **Install new Instance of SQL Server with this Setup** and select the **Check and Install** button. Once the prerequisites are successfully installed, select **Next**. >[!NOTE]
+ >
>If you wish to use your own SQL server, the supported SQL Server versions are SQL Server 2014 SP1 or higher, 2016 and 2017. All SQL Server versions should be Standard or Enterprise 64-bit. >Azure Backup Server won't work with a remote SQL Server instance. The instance being used by Azure Backup Server needs to be local. If you're using an existing SQL server for MABS, the MABS setup only supports the use of *named instances* of SQL server.
Once the extraction process complete, check the box to launch the freshly extrac
![Summary of settings](./media/backup-azure-microsoft-azure-backup/summary-screen.png) 8. The installation happens in phases. In the first phase, the Microsoft Azure Recovery Services Agent is installed on the server. The wizard also checks for Internet connectivity. If Internet connectivity is available, you can continue with the installation. If not, you need to provide proxy details to connect to the Internet.
+ >[!Important]
+ >If you run into errors during vault registration, ensure that you're using the latest version of the MARS agent instead of the version packaged with the MABS server. You can download the latest version [from here](https://aka.ms/azurebackup_agent) and replace the *MARSAgentInstaller.exe* file in the *System Center Microsoft Azure Backup Server v3\MARSAgent* folder before installation and registration on new servers.
+ The next step is to configure the Microsoft Azure Recovery Services Agent. As a part of the configuration, you'll have to provide your vault credentials to register the machine to the Recovery Services vault. You'll also provide a passphrase to encrypt/decrypt the data sent between Azure and your premises. You can automatically generate a passphrase or provide your own minimum 16-character passphrase. Continue with the wizard until the agent has been configured. ![Register Server Wizard](./media/backup-azure-microsoft-azure-backup/mars/04.png)
backup Install Mars Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/install-mars-agent.md
The data that's available for backup depends on where the agent is installed.
* Make sure that you have an Azure account if you need to back up a server or client to Azure. If you don't have an account, you can create a [free one](https://azure.microsoft.com/free/) in just a few minutes. * Verify internet access on the machines that you want to back up. * Ensure the user installing and configuring the MARS agent has local administrator privileges on the server to be protected.
-* To prevent errors during vault registration, ensure that the MARS agent version 2.0.9249.0 or above is installed. If not, we recommend you to install it [from here](https://aka.ms/azurebackup_agent).
+* To prevent errors during vault registration, ensure that the latest MARS agent version is used. If not, we recommend that you download it [from here](https://aka.ms/azurebackup_agent) or [from the Azure portal as mentioned in this section](#download-the-mars-agent).
[!INCLUDE [How to create a Recovery Services vault](../../includes/backup-create-rs-vault.md)]
If you've already installed the agent on any machines, ensure you're running the
1. Select **Finish**. The agent is now installed, and your machine is registered to the vault. You're ready to configure and schedule your backup.
+ If you run into issues during vault registration, see the [troubleshooting guide](backup-azure-mars-troubleshoot.md#invalid-vault-credentials-provided).
+ >[!Note] >We strongly recommend you save your passphrase in an alternate secure location, such as the Azure key vault. Microsoft can't recover the data without the passphrase. [Learn](../key-vault/secrets/quick-create-portal.md) how to store a secret in a key vault.
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/how-to/call-api.md
First you will need to get your resource key and endpoint:
Single label classification: * [C#](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples/Sample9_SingleLabelClassify.md)
- * [Java](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro/ClassifyDocumentSingleCategory.java)
+ * [Java](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro/SingleLabelClassifyDocument.java)
* [JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-text-analytics_6.0.0-beta.1/sdk/textanalytics/ai-text-analytics/samples/v5/javascript/customText.js) * [Python](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_single_label_classify.py) Multi label classification: * [C#](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples/Sample10_MultiLabelClassify.md)
- * [Java](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro/ClassifyDocumentMultiCategory.java)
+ * [Java](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro/MultiLabelClassifyDocument.java)
* [JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-text-analytics_6.0.0-beta.1/sdk/textanalytics/ai-text-analytics/samples/v5/javascript/customText.js) * [Python](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_multi_label_classify.py)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/overview.md
As you use custom text classification, see the following reference documentation
|REST APIs (Authoring) | [REST API documentation](https://aka.ms/ct-authoring-swagger) | | |REST APIs (Runtime) | [REST API documentation](https://aka.ms/ct-runtime-swagger) | | |C# (Runtime) | [C# documentation](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true) | [C# samples - Single label classification](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples/Sample9_SingleLabelClassify.md) [C# samples - Multi label classification](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples/Sample10_MultiLabelClassify.md) |
-| Java (Runtime) | [Java documentation](/java/api/overview/azure/ai-textanalytics-readme?view=azure-java-preview&preserve-view=true) | [Java Samples - Single label classification](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro/ClassifyDocumentSingleCategory.java) [Java Samples - Multi label classification](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro/ClassifyDocumentMultiCategory.java) |
+| Java (Runtime) | [Java documentation](/java/api/overview/azure/ai-textanalytics-readme?view=azure-java-preview&preserve-view=true) | [Java Samples - Single label classification](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro/SingleLabelClassifyDocument.java) [Java Samples - Multi label classification](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/textanalytics/azure-ai-textanalytics/src/samples/java/com/azure/ai/textanalytics/lro/MultiLabelClassifyDocument.java) |
|JavaScript (Runtime) | [JavaScript documentation](/javascript/api/overview/azure/ai-text-analytics-readme?view=azure-node-preview&preserve-view=true) | [JavaScript samples - Single label classification](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-text-analytics_6.0.0-beta.1/sdk/textanalytics/ai-text-analytics/samples/v5/javascript/customText.js) [JavaScript samples - Multi label classification](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-text-analytics_6.0.0-beta.1/sdk/textanalytics/ai-text-analytics/samples/v5/javascript/customText.js) | |Python (Runtime)| [Python documentation](/python/api/azure-ai-textanalytics/azure.ai.textanalytics?view=azure-python-preview&preserve-view=true) | [Python samples - Single label classification](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_single_label_classify.py) [Python samples - Multi label classification](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_multi_label_classify.py) |
cognitive-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/models.md
Another area where Davinci excels is in understanding the intent of text. Davinc
### Curie
-Curie is extremely powerful, yet very fast. While Davinci is stronger when it comes to analyzing complicated text, Curie is quite capable for many nuanced tasks like sentiment classification and summarization. Curie is also good at answering questions and performing Q&A and as a general service chatbot.
+Curie is powerful, yet fast. While Davinci is stronger when it comes to analyzing complicated text, Curie is capable of many nuanced tasks like sentiment classification and summarization. Curie is also good at answering questions, performing Q&A, and serving as a general service chatbot.
**Use for**: Language translation, complex classification, text sentiment, summarization ### Babbage
-Babbage can perform straightforward tasks like simple classification. ItΓÇÖs also quite capable when it comes to Semantic Search ranking how well documents match up with search queries.
+Babbage can perform straightforward tasks like simple classification. It's also capable when it comes to semantic search, ranking how well documents match up with search queries.
**Use for**: Moderate classification, semantic search classification
Ada is usually the fastest model and can perform tasks like parsing text, addres
The Codex models are descendants of our base GPT-3 models that can understand and generate code. Their training data contains both natural language and billions of lines of public code from GitHub.
-TheyΓÇÖre most capable in Python and proficient in over a dozen languages including JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell.
+They're most capable in Python and proficient in over a dozen languages, including C#, JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell.
Currently we only offer one Codex model: `code-cushman-001`.
Ada (1024 dimensions),
Babbage (2048 dimensions), Curie (4096 dimensions), Davinci (12,288 dimensions).
-Davinci is the most capable, but is slower and more expensive than the other models. Ada is the least capable, but is significantly faster and cheaper.
+Davinci is the most capable, but is slower and more expensive than the other models. Ada is the least capable, but is both faster and cheaper.
These embedding models are specifically created to be good at a particular task.
These models help measure whether long documents are relevant to a short search
### Code search embeddings
-Similarly to search embeddings, there are two types: one for embedding code snippets to be retrieved and one for embedding natural language search queries.
+Similar to text search embeddings, there are two types: one for embedding code snippets to be retrieved and one for embedding natural language search queries.
| USE CASES | AVAILABLE MODELS | ||| | Code search and relevance | code-search-ada-code-001, <br> code-search-ada-text-001, <br> code-search-babbage-code-001, <br> code-search-babbage-text-001 |
-When using our embedding models, please keep in mind their limitations and risks.
+When using our embedding models, keep in mind their limitations and risks.
## Finding the right model
-We recommend starting with our Davinci model since it will be the best way to understand what the service is capable of. After you have an idea of what you want to accomplish, you can either stay with Davinci if youΓÇÖre not concerned about cost and speed or move onto Curie or another model and try to optimize around its capabilities.
+We recommend starting with our Davinci model since it will be the best way to understand what the service is capable of. After you have an idea of what you want to accomplish, you can either stay with Davinci if youΓÇÖre not concerned about cost and speed, or you can move onto Curie or another model and try to optimize around its capabilities.
## Next steps
cognitive-services Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/embeddings.md
An embedding is a special format of data representation that can be easily utili
## How to get embeddings
-To obtain an embedding vector for a piece of text we make a request to the embeddings endpoint as shown in the following code snippets:
+To obtain an embedding vector for a piece of text, we make a request to the embeddings endpoint as shown in the following code snippets:
```console curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/embeddings?api-version=2022-06-01-preview\
curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYM
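Because the snippet above is truncated in this digest, here's a minimal, self-contained sketch of the same request; the body shape (`input`) and `api-key` header follow the Azure OpenAI REST conventions, and the resource, deployment, and key values are placeholders:

```bash
curl "https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/embeddings?api-version=2022-06-01-preview" \
  -H "Content-Type: application/json" \
  -H "api-key: YOUR_API_KEY" \
  -d '{ "input": "The food was delicious and the waiter was friendly." }'
```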
### Verify inputs don't exceed the maximum length
-The maximum length of input text for our embedding models are 2048 tokens (approximately equivalent to around 2-3 pages of text). You should verify that your inputs don't exceed this limit before making a request.
+The maximum length of input text for our embedding models is 2048 tokens (equivalent to around 2-3 pages of text). You should verify that your inputs don't exceed this limit before making a request.
### Choose the best model for your task
-For the search models you can obtain embeddings in two ways. The `<search_model>-doc` model is used for longer pieces of text (to be searched over) and the `<search_model>-query` model is used for shorter pieces of text, typically queries or class labels in zero shot classification. You can read more about all of the embeddings models in our [Models](../concepts/models.md) guide.
+For the search models, you can obtain embeddings in two ways. The `<search_model>-doc` model is used for longer pieces of text (to be searched over) and the `<search_model>-query` model is used for shorter pieces of text, typically queries or class labels in zero shot classification. You can read more about all of the Embeddings models in our [Models](../concepts/models.md) guide.
### Replace newlines with a single space
-Unless you are embedding code, we suggest replacing newlines (\n) in your input with a single space, as we have observed inferior results when newlines are present.
+Unless you're embedding code, we suggest replacing newlines (\n) in your input with a single space, as we have observed inferior results when newlines are present.
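As a small illustration of that preprocessing step (an editorial sketch; it assumes `jq` is installed and the input lives in a local *input.txt*):

```bash
# Collapse newlines to single spaces, then build the JSON body safely with jq.
text=$(tr '\n' ' ' < input.txt)
curl "https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/embeddings?api-version=2022-06-01-preview" \
  -H "Content-Type: application/json" \
  -H "api-key: YOUR_API_KEY" \
  -d "$(jq -n --arg t "$text" '{input: $t}')"
```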
## Limitations & risks
-Our embedding models may be unreliable or pose social risks in certain cases, and may cause harm in the absence of mitigations. Please review our Responsible AI content for more information on how to approach their use responsibly.
+Our embedding models may be unreliable or pose social risks in certain cases, and may cause harm in the absence of mitigations. Review our Responsible AI content for more information on how to approach their use responsibly.
## Next steps
cognitive-services Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/managed-identity.md
Assigning yourself to the Cognitive Services User role will allow you to use you
Use the access token to authorize your API call by setting the `Authorization` header value. ```bash
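    # Editorial sketch, not part of the original article: one way to obtain the
    # $accessToken used below is the Azure CLI, assuming you're signed in with an
    # identity that holds the Cognitive Services User role.
    accessToken=$(az account get-access-token --resource https://cognitiveservices.azure.com --query accessToken -o tsv)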
- curl ${endpoint%/}/openai/deployment/YOUR_DEPLOYMNET_NAME/search?api-version=2022-06-01-preview \
+ curl ${endpoint%/}/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2022-06-01-preview \
-H "Content-Type: application/json" \ -H "Authorization: Bearer $accessToken" \
- -d '{ "documents": ["White House", "hospital", "school"], "query": "the president"}'
+ -d '{ "prompt": "Once upon a time" }'
``` ## Authorize access to managed identities
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/quickstart.md
Title: 'Quickstart - Deploy a model and generate text using Azure OpenAI'
-description: Walkthrough on how to get started with Azure OpenAI and make your first completions and search calls.
+description: Walkthrough on how to get started with Azure OpenAI and make your first completions call.
communication-services Call Recording https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/call-recording.md
-# Calling Recording overview
+# Call Recording overview
[!INCLUDE [Public Preview](../../includes/public-preview-include-document.md)] > [!NOTE] > Call Recording is not enabled for [Teams interoperability](../teams-interop.md).
-Call Recording provides a set of APIs to start, stop, pause and resume recording. These APIs can be accessed from server-side business logic or via events triggered by user actions. Recorded media output is in MP4 Audio+Video format, which is the same format that Teams uses to record media. Notifications related to media and metadata are emitted via Event Grid. Recordings are stored for 48 hours on built-in temporary storage for retrieval and movement to a long-term storage solution of choice. Call Recording supports all ACS data regions.
+Call Recording provides a set of APIs to start, stop, pause and resume recording. These APIs can be accessed from server-side business logic or via events triggered by user actions. Recorded media output is in MP4 Audio+Video format, which is the same format that Teams uses to record media. Notifications related to media and metadata are emitted via Event Grid. Recordings are stored for 48 hours on built-in temporary storage for retrieval and movement to a long-term storage solution of choice. Call Recording supports all Azure Communication Services data regions.
![Call recording concept diagram](../media/call-recording-concept.png) ## Media output types
-Call recording currently supports mixed audio+video MP4 and mixed audio-only MP3/WAV output formats in Public Preview. The mixed audio+video output media matches meeting recordings produced via Microsoft Teams recording.
+Call recording currently supports mixed audio+video MP4 and mixed audio MP3/WAV output formats in Public Preview. The mixed audio+video output media matches meeting recordings produced via Microsoft Teams recording.
| Content Type | Content Format | Channel Type | Video | Audio | | :-- | :- | :-- | :- | : |
-| audioVideo | mp4 | mixed | 1920x1080 8 FPS video of all participants in default tile arrangement | 16kHz mp4a mixed audio of all participants |
-| audioOnly| mp3/wav | mixed | N/A | 16kHz mp3/wav mixed audio of all participants |
-| audioOnly| wav | unmixed | N/A | 16kHz wav, 0-5 channels for each participant |
+| audio + video | mp4 | mixed | 1920x1080 8 FPS video of all participants in default tile arrangement | 16kHz mp4a mixed audio of all participants |
+| audio| mp3/wav | mixed | N/A | 16kHz mp3/wav mixed audio of all participants |
+| audio| wav | unmixed | N/A | 16kHz wav, 0-5 channels, 1 for each participant |
## Channel types > [!NOTE]
-> **Unmixed audio-only** is still in a **Private Preview** and NOT enabled for Teams Interop meetings.
+> **Unmixed audio** is in **Private Preview**.
-| Channel type | Content format | Output | Scenario |
-||--|||
-| Mixed audio-video | Mp4 | Single file, single channel | Keeping records and meeting notes Coaching and Training |
-| Mixed audio-only | Mp3 (lossy)/ wav (lossless) | Single file, single channel | Compliance & Adherence Coaching and Training |
-| **Unmixed audio-only** | Mp3/wav | Single file, multiple channels maximum number of channels is 6 for mp3 and 50 for wav | Quality Assurance Analytics |
+| Channel type | Content format | Output | Scenario | Release Stage |
+|--|--|--|--|--|
+| Mixed audio+video | Mp4 | Single file, single channel | Keeping records and meeting notes; coaching and training | Public Preview |
+| Mixed audio | Mp3 (lossy)/ wav (lossless) | Single file, single channel | Compliance and adherence; coaching and training | Public Preview |
+| **Unmixed audio** | wav | Single file, up to 5 wav channels | Quality assurance; analytics | **Private Preview** |
## Run-time Control APIs Run-time control APIs can be used to manage recording via internal business logic triggers, such as an application creating a group call and recording the conversation, or from a user-triggered action that tells the server application to start recording. Call Recording APIs are [Out-of-Call APIs](./call-automation-apis.md#out-of-call-apis), using the `serverCallId` to initiate recording. When creating a call, a `serverCallId` is returned via the `Microsoft.Communication.CallLegStateChanged` event after a call has been established. The `serverCallId` can be found in the `data.serverCallId` field. See our [Call Recording Quickstart Sample](../../quickstarts/voice-video-calling/call-recording-sample.md) to learn about retrieving the `serverCallId` from the Calling Client SDK. A `recordingOperationId` is returned when recording is started, which is then used for follow-on operations like pause and resume.
Run-time control APIs can be used to manage recording via internal business logi
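As a rough illustration only (the route, API version, and body shape below are assumptions for sketch purposes; take the exact request from the Call Recording Quickstart Sample), a start-recording call could look like:

```bash
# Hypothetical sketch: start recording using the serverCallId obtained earlier.
curl -X POST "https://YOUR_RESOURCE.communication.azure.com/calling/recordings?api-version=2021-06-15-preview" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \
  -d '{ "callLocator": { "kind": "serverCallLocator", "serverCallId": "YOUR_SERVER_CALL_ID" } }'
```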
## Event Grid notifications
+> [!NOTE]
> Azure Communication Services provides short term media storage for recordings. **Export any recorded content you wish to preserve within 48 hours.** After 48 hours, recordings will no longer be available. An Event Grid notification `Microsoft.Communication.RecordingFileStatusUpdated` is published when a recording is ready for retrieval, typically a few minutes after the recording process has completed (e.g. meeting ended, recording stopped). Recording event notifications include `contentLocation` and `metadataLocation`, which are used to retrieve both recorded media and a recording metadata file.
An Event Grid notification `Microsoft.Communication.RecordingFileStatusUpdated`
"eventTime": string // ISO 8601 date time for when the event was created } ```
+## Metadata Schema
+```typescript
+{
+ "resourceId": <string>, // stable resource id of the ACS resource recording
+ "callId": <string>, // group id of the call
+ "chunkDocumentId": <string>, // object identifier for the chunk this metadata corresponds to
+ "chunkIndex": <number>, // index of this chunk with respect to all chunks in the recording
+ "chunkStartTime": <string>, // ISO 8601 datetime for the start time of the chunk this metadata corresponds to
+ "chunkDuration": <number>, // duration of the chunk this metadata corresponds to in milliseconds
+ "pauseResumeIntervals": [
+ "startTime": <string>, // ISO 8601 datetime for the time at which the recording was paused
+ "duration": <number> // duration of the pause in the recording in milliseconds
+ ],
+ "recordingInfo": {
+ "contentType": <string>, // content type of recording, e.g. audio/audioVideo
+ "channelType": <string>, // channel type of recording, e.g. mixed/unmixed
+ "format": <string>, // format of the recording, e.g. mp4/mp3/wav
+ "audioConfiguration": {
+ "sampleRate": <number>, // sample rate for audio recording
+ "bitRate": <number>, // bitrate for audio recording
+ "channels": <number> // number of audio channels in output recording
+ }
+ },
+ "participants": [
+ {
+ "participantId": <string>, // participant identifier of a participant captured in the recording
+ "channel": <number> // channel the participant was assigned to if the recording is unmixed
+ }
+ ]
+}
+
+```
+ ## Regulatory and privacy concerns Many countries and states have laws and regulations that apply to the recording of PSTN, voice, and video calls, which often require that users consent to the recording of their communications. It is your responsibility to use the call recording capabilities in compliance with the law. You must obtain consent from the parties of recorded communications in a manner that complies with the laws applicable to each participant. Regulations around the maintenance of personal data require the ability to export user data. In order to support these requirements, recording metadata files include the participantId for each call participant in the `participants` array. You can cross-reference the MRIs in the `participants` array with your internal user identities to identify participants in a call. An example of a recording metadata file is provided below for reference.
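As a small editorial illustration of that cross-referencing (assumes the `jq` tool and a downloaded metadata file named *metadata.json*):

```bash
# List each recorded participant and, for unmixed recordings, its assigned channel.
jq '.participants[] | {participantId, channel}' metadata.json
```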
-## Availability
+## Available Languages
Currently, Azure Communication Services Call Recording APIs are available in C# and Java. ## Next steps
communication-services Chat Android Push Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/chat-android-push-notification.md
# Enable push notifications Push notifications let clients be notified for incoming messages and other operations occurring in a chat thread in situations where the mobile app isn't running in the foreground. Azure Communication Services supports a [list of events that you can subscribe to](../concepts/chat/concepts.md#push-notifications). > [!NOTE]
-> Chat push notifications are supported for Android SDK in versions starting from 1.1.0-beta.4 and 1.1.0. It is recommended that you use version 1.2.0 or newer, as older versions have a known issue with the registration renewal. Steps from 8 to 12 are only needed for versions equal to or greater than 1.2.0.
+> Chat push notifications are supported for Android SDK in versions starting from 1.1.0-beta.4 and 1.1.0. It's recommended that you use version 2.0.0 or newer, as older versions have a known issue with the registration renewal. Steps 8 through 12 are only needed for versions equal to or greater than 2.0.0.
1. Set up Firebase Cloud Messaging for the ChatQuickstart project. Complete steps `Create a Firebase project`, `Register your app with Firebase`, `Add a Firebase configuration file`, `Add Firebase SDKs to your app`, and `Edit your app manifest` in [Firebase Documentation](https://firebase.google.com/docs/cloud-messaging/android/client).
connectors Apis List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/apis-list.md
Title: Overview about connectors in Azure Logic Apps
-description: Learn about connectors to create automated integration workflows in Azure Logic Apps.
+ Title: Azure Logic Apps connectors overview
+description: Overview about connectors for workflows in Azure Logic Apps.
ms.suite: integration Previously updated : 05/10/2022 Last updated : 08/25/2022
When you build workflows using Azure Logic Apps, you can use *connectors* to help you quickly and easily access data, events, and resources in other apps, services, systems, protocols, and platforms - often without writing any code. A connector provides prebuilt operations that you can use as steps in your workflows. Azure Logic Apps provides hundreds of connectors that you can use. If no connector is available for the resource that you want to access, you can use the generic HTTP operation to communicate with the service, or you can [create a custom connector](#custom-connectors-and-apis).
-This overview provides a high-level introduction to connectors and how they generally work. For information about the more popular and commonly used connectors in Azure Logic Apps, review the following documentation:
+This overview provides a high-level introduction to connectors and how they generally work.
+
+## What are connectors?
+
+Technically, many connectors provide a proxy or a wrapper around an API that the underlying service uses to communicate with Azure Logic Apps. Such a connector provides operations that you use in your workflows to perform tasks. An operation is available either as a *trigger* or *action* with properties you can configure. Some triggers and actions also require that you first [create and configure a connection](#connection-configuration) to the underlying service or system, for example, so that you can authenticate access to a user account. For more overview information, review [Connectors overview for Azure Logic Apps, Microsoft Power Automate, and Microsoft Power Apps](/connectors).
+
+For information about the more popular and commonly used connectors in Azure Logic Apps, review the following documentation:
* [Connectors reference for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors) * [Built-in connectors for Azure Logic Apps](built-in.md)
This overview provides a high-level introduction to connectors and how they gene
* [Pricing and billing models in Azure Logic Apps](../logic-apps/logic-apps-pricing.md) * [Azure Logic Apps pricing details](https://azure.microsoft.com/pricing/details/logic-apps/)
-## What are connectors?
-
-Technically, a connector is a proxy or a wrapper around an API that the underlying service uses to communicate with Azure Logic Apps. This connector provides operations that you use in your workflows to perform tasks. An operation is available either as a *trigger* or *action* with properties you can configure. Some triggers and actions also require that you first [create and configure a connection](#connection-configuration) to the underlying service or system, for example, so that you can authenticate access to a user account. For more overview information, review [Connectors overview for Azure Logic Apps, Microsoft Power Automate, and Microsoft Power Apps](/connectors).
- ### Triggers A *trigger* specifies the event that starts the workflow and is always the first step in any workflow. Each trigger also follows a specific firing pattern that controls how the trigger monitors and responds to events. Usually, a trigger follows the *polling* pattern or *push* pattern, but sometimes, a trigger is available in both versions.
A trigger also passes along any inputs and other required data into your workflo
### Actions
-An *action* is an operation that follows the trigger and performs some kind of task in your workflow. You can use multiple actions in your workflow. For example, you might start the workflow with a SQL trigger that detects new customer data in an SQL database. Following the trigger, your workflow can have a SQL action that gets the customer data. Following the SQL action, your workflow can have another action, not necessarily SQL, that processes the data.
+An *action* is an operation that follows the trigger and performs some kind of task in your workflow. You can use multiple actions in your workflow. For example, you might start the workflow with a SQL trigger that detects new customer data in an SQL database. Following the trigger, your workflow can have a SQL action that gets the customer data. Following the SQL action, your workflow can have a different action that processes the data.
## Connector categories
For more information, review the following documentation:
### Recurrence for connection-based triggers
-In recurring connection-based triggers, such as Office 365 Outlook, the schedule isn't the only driver that controls execution. The time zone only determines the initial start time. Subsequent runs depend on the recurrence schedule, the last trigger execution, and other factors that might cause run times to drift or produce unexpected behavior, for example:
+For recurring connection-based triggers, such as Office 365 Outlook, the schedule isn't the only driver that controls execution. The time zone only determines the initial start time. Subsequent runs depend on the recurrence schedule, the last trigger execution, and other factors that might cause run times to drift or produce unexpected behavior, for example:
* Whether the trigger accesses a server that has more data, which the trigger immediately tries to fetch.
* Any failures or retries that the trigger incurs.
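For reference, the initial start time and time zone described above are set on the trigger's recurrence definition; a minimal sketch with illustrative values:

```json
"recurrence": {
    "frequency": "Day",
    "interval": 1,
    "startTime": "2022-09-01T08:00:00",
    "timeZone": "Eastern Standard Time"
}
```

Only the first run is anchored to `startTime`; later runs are scheduled relative to the previous trigger execution, which is where the drift described above can come from.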
connectors Built In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/built-in.md
ms.suite: integration Previously updated : 06/10/2022 Last updated : 08/25/2022 # Built-in connectors in Azure Logic Apps
In Standard logic app workflows, a built-in connector that has the following att
* Runs in the same process as the redesigned Azure Logic Apps runtime.
-These service provider-based built-in connectors are available alongside their [managed connector versions](managed.md).
+Service provider-based built-in connectors are available alongside their [managed connector versions](managed.md).
In contrast, a built-in connector that's *not a service provider* has the following attributes:
connectors Connectors Native Delay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-delay.md
tags: connectors
# Delay running the next action in Azure Logic Apps
-To have your logic app wait an amount of time before running the next action, you can add the built-in **Delay - Schedule** action before an action in your logic app's workflow. Or, you can add the built-in **Delay until - Schedule** action to wait until a specific date and time before running the next action. For more information about the built-in Schedule actions and triggers, see [Schedule and run recurring automated, tasks, and workflows with Azure Logic Apps](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md).
+
+To have your logic app wait an amount of time before running the next action, you can add the built-in **Delay** action before an action in your logic app's workflow. Or, you can add the built-in **Delay until** action to wait until a specific date and time before running the next action. For more information about the built-in Schedule actions and triggers, see [Schedule and run recurring automated tasks and workflows with Azure Logic Apps](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md).
* **Delay**: Wait for the specified number of time units, such as seconds, minutes, hours, days, weeks, or months, before the next action runs.
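In the underlying workflow definition, both variants map to the **Wait** action type; a minimal sketch of each, with illustrative counts and timestamps:

```json
"Delay": {
    "type": "Wait",
    "inputs": {
        "interval": {
            "count": 10,
            "unit": "Minute"
        }
    },
    "runAfter": {}
},
"Delay_until": {
    "type": "Wait",
    "inputs": {
        "until": {
            "timestamp": "2022-09-01T08:00:00Z"
        }
    },
    "runAfter": {}
}
```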
connectors Connectors Native Sliding Window https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-sliding-window.md
ms.suite: integration Previously updated : 03/25/2020 Last updated : 08/25/2022 # Schedule and run tasks for contiguous data by using the Sliding Window trigger in Azure Logic Apps + To regularly run tasks, processes, or jobs that must handle data in contiguous chunks, you can start your logic app workflow with the **Sliding Window** trigger. You can set a date and time as well as a time zone for starting the workflow and a recurrence for repeating that workflow. If recurrences are missed for any reason, for example, due to disruptions or disabled workflows, this trigger processes those missed recurrences. For example, when synchronizing data between your database and backup storage, use the Sliding Window trigger so that the data gets synchronized without incurring gaps. For more information about the built-in Schedule triggers and actions, see [Schedule and run recurring automated tasks and workflows with Azure Logic Apps](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md). Here are some patterns that this trigger supports:
connectors Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/managed.md
ms.suite: integration Previously updated : 05/10/2022 Last updated : 08/25/2022 # Managed connectors in Azure Logic Apps
container-apps Compare Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/compare-options.md
You can get started building your first container app [using the quickstarts](ge
[Azure Functions](../azure-functions/functions-overview.md) is a serverless Functions-as-a-Service (FaaS) solution. It's optimized for running event-driven applications using the functions programming model. It shares many characteristics with Azure Container Apps around scale and integration with events, but is optimized for ephemeral functions deployed as either code or containers. The Azure Functions programming model provides productivity benefits for teams looking to trigger the execution of their functions on events and bind to other data sources. When building FaaS-style functions, Azure Functions is the ideal option. The Azure Functions programming model is available as a base container image, making it portable to other container-based compute platforms, allowing teams to reuse code as environment requirements change.

### Azure Spring Apps
-[Azure Spring Apps](../spring-apps/overview.md) is a platform as a service (PaaS) for Spring developers. If you want to run Spring Boot, Sprng Cloud or any other Spring applications on Azure, Azure Spring Apps is an ideal option. The service manages the infrastructure of Spring applications so developers can focus on their code. Azure Spring Apps provides lifecycle management using comprehensive monitoring and diagnostics, configuration management, service discovery, CI/CD integration, blue-green deployments, and more.
+[Azure Spring Apps](../spring-apps/overview.md) is a platform as a service (PaaS) for Spring developers. If you want to run Spring Boot, Spring Cloud or any other Spring applications on Azure, Azure Spring Apps is an ideal option. The service manages the infrastructure of Spring applications so developers can focus on their code. Azure Spring Apps provides lifecycle management using comprehensive monitoring and diagnostics, configuration management, service discovery, CI/CD integration, blue-green deployments, and more.
### Azure Red Hat OpenShift

[Azure Red Hat OpenShift](../openshift/intro-openshift.md) is jointly engineered, operated, and supported by Red Hat and Microsoft to provide an integrated product and support experience for running Kubernetes-powered OpenShift. With Azure Red Hat OpenShift, teams can choose their own registry, networking, storage, and CI/CD solutions, or use the built-in solutions for automated source code management, container and application builds, deployments, scaling, health management, and more from OpenShift. If your team or organization is using OpenShift, Azure Red Hat OpenShift is an ideal option.
cosmos-db How To Configure Vnet Service Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-configure-vnet-service-endpoint.md
description: This document describes the steps required to set up a virtual netw
Previously updated : 07/07/2021 Last updated : 08/25/2022 # Configure access to Azure Cosmos DB from virtual networks (VNet)+ [!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]
-You can configure the Azure Cosmos account to allow access only from a specific subnet of virtual network (VNet). By enabling [Service endpoint](../virtual-network/virtual-network-service-endpoints-overview.md) to access Azure Cosmos DB on the subnet within a virtual network, the traffic from that subnet is sent to Azure Cosmos DB with the identity of the subnet and Virtual Network. Once the Azure Cosmos DB service endpoint is enabled, you can limit access to the subnet by adding it to your Azure Cosmos account.
+You can configure the Azure Cosmos account to allow access only from a specific subnet of a virtual network (VNET). Enable [service endpoint](../virtual-network/virtual-network-service-endpoints-overview.md) on a subnet within a virtual network to control access to Azure Cosmos DB. The traffic from that subnet is sent to Azure Cosmos DB with the identity of the subnet and Virtual Network. Once the Azure Cosmos DB service endpoint is enabled, you can limit access to the subnet by adding it to your Azure Cosmos account.
-By default, an Azure Cosmos account is accessible from any source if the request is accompanied by a valid authorization token. When you add one or more subnets within VNets, only requests originating from those subnets will get a valid response. Requests originating from any other source will receive a 403 (Forbidden) response.
+By default, an Azure Cosmos account is accessible from any source if the request is accompanied by a valid authorization token. When you add one or more subnets within VNets, only requests originating from those subnets will get a valid response. Requests originating from any other source will receive a 403 (Forbidden) response.
You can configure Azure Cosmos DB accounts to allow access from only a specific subnet of an Azure virtual network. To limit access to an Azure Cosmos DB account with connections from a subnet in a virtual network:
-1. Enable the subnet to send the subnet and virtual network identity to Azure Cosmos DB. You can achieve this by enabling a service endpoint for Azure Cosmos DB on the specific subnet.
+1. Enable the service endpoint for Azure Cosmos DB to send the subnet and virtual network identity to Azure Cosmos DB.
1. Add a rule in the Azure Cosmos DB account to specify the subnet as a source from which the account can be accessed.
The following sections describe how to configure a virtual network service endpo
### Configure a service endpoint for an existing Azure virtual network and subnet
-1. From the **All resources** blade, find the Azure Cosmos DB account that you want to secure.
+1. From the **All resources** pane, find the Azure Cosmos DB account that you want to secure.
+
+1. Select **Networking** from the settings menu.
+
+ :::image type="content" source="./media/how-to-configure-vnet-service-endpoint/networking-pane.png" alt-text="Screenshot of the networking menu option.":::
-1. Select **Firewalls and virtual networks** from the settings menu, and choose to allow access from **Selected networks**.
+1. Choose to allow access from **Selected networks**.
1. To grant access to an existing virtual network's subnet, under **Virtual networks**, select **Add existing Azure virtual network**.

1. Select the **Subscription** from which you want to add an Azure virtual network. Select the Azure **Virtual networks** and **Subnets** that you want to provide access to your Azure Cosmos DB account. Next, select **Enable** to enable selected networks with service endpoints for "Microsoft.AzureCosmosDB". When it's complete, select **Add**.
- :::image type="content" source="./media/how-to-configure-vnet-service-endpoint/choose-subnet-and-vnet.png" alt-text="Select virtual network and subnet":::
+ :::image type="content" source="./media/how-to-configure-vnet-service-endpoint/choose-subnet-and-vnet.png" alt-text="Screenshot of the dialog to select an existing Azure Virtual Network and subnet with an Azure Cosmos DB service endpoint.":::
> [!NOTE]
> Configuring a VNET service endpoint may take up to 15 minutes to propagate and the endpoint may exhibit an inconsistent behavior during this period.

1. After the Azure Cosmos DB account is enabled for access from a virtual network, it will allow traffic from only this chosen subnet. The virtual network and subnet that you added should appear as shown in the following screenshot:
- :::image type="content" source="./media/how-to-configure-vnet-service-endpoint/vnet-and-subnet-configured-successfully.png" alt-text="Virtual network and subnet configured successfully":::
+ :::image type="content" source="./media/how-to-configure-vnet-service-endpoint/vnet-and-subnet-configured-successfully.png" alt-text="Screenshot of an Azure Virtual Network and subnet configured successfully in the list.":::
> [!NOTE]
> To enable virtual network service endpoints, you need the following subscription permissions:
-> * Subscription with virtual network: Network contributor
-> * Subscription with Azure Cosmos DB account: DocumentDB account contributor
-> * If your virtual network and Azure Cosmos DB account are in different subscriptions, make sure that the subscription that has virtual network also has `Microsoft.DocumentDB` resource provider registered. To register a resource provider, see [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md) article.
+>
+> - Subscription with virtual network: Network contributor
+> - Subscription with Azure Cosmos DB account: DocumentDB account contributor
+> - If your virtual network and Azure Cosmos DB account are in different subscriptions, make sure that the subscription that has virtual network also has `Microsoft.DocumentDB` resource provider registered. To register a resource provider, see [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md) article.
+>
Here are the directions for registering a subscription with a resource provider.

### Configure a service endpoint for a new Azure virtual network and subnet
-1. From the **All resources** blade, find the Azure Cosmos DB account that you want to secure.
+1. From the **All resources** pane, find the Azure Cosmos DB account that you want to secure.
-1. Select **Firewalls and Azure virtual networks** from the settings menu, and choose to allow access from **Selected networks**.
+1. Select **Networking** from the settings menu, and choose to allow access from **Selected networks**.
1. To grant access to a new Azure virtual network, under **Virtual networks**, select **Add new virtual network**.

1. Provide the details required to create a new virtual network, and then select **Create**. The subnet will be created with a service endpoint for "Microsoft.AzureCosmosDB" enabled.
- :::image type="content" source="./media/how-to-configure-vnet-service-endpoint/choose-subnet-and-vnet-new-vnet.png" alt-text="Select a virtual network and subnet for a new virtual network":::
+ :::image type="content" source="./media/how-to-configure-vnet-service-endpoint/choose-subnet-and-vnet-new-vnet.png" alt-text="Screenshot of the dialog to create a new Azure Virtual Network, configure a subnet, and then enable the Azure Cosmos DB service endpoint.":::
If your Azure Cosmos DB account is used by other Azure services like Azure Cognitive Search, or is accessed from Stream Analytics or Power BI, you can allow access by selecting **Accept connections from within global Azure datacenters**.
To ensure that you have access to Azure Cosmos DB metrics from the portal, you n
## <a id="remove-vnet-or-subnet"></a>Remove a virtual network or subnet
-1. From the **All resources** blade, find the Azure Cosmos DB account for which you assigned service endpoints.
+1. From the **All resources** pane, find the Azure Cosmos DB account for which you assigned service endpoints.
-1. Select **Firewalls and virtual networks** from the settings menu.
+1. Select **Networking** from the settings menu.
1. To remove a virtual network or subnet rule, select **...** next to the virtual network or subnet, and select **Remove**.
- :::image type="content" source="./media/how-to-configure-vnet-service-endpoint/remove-a-vnet.png" alt-text="Remove a virtual network":::
+ :::image type="content" source="./media/how-to-configure-vnet-service-endpoint/remove-a-vnet.png" alt-text="Screenshot of the menu option to remove an associated Azure Virtual Network.":::
1. Select **Save** to apply your changes.
Use the following steps to configure a service endpoint to an Azure Cosmos DB ac
-Id $subnetId ```
-1. Update Azure Cosmos DB account properties with the new Virtual Network endpoint configuration:
+1. Update Azure Cosmos DB account properties with the new Virtual Network endpoint configuration:
```powershell
$accountName = "<Cosmos DB account name>"
Use the following steps to configure a service endpoint to an Azure Cosmos DB ac
## <a id="configure-using-cli"></a>Configure a service endpoint by using the Azure CLI
-Azure Cosmos accounts can be configured for service endpoints when they are created or updated at a later time if the subnet is already configured for them. Service endpoints can also be enabled on the Cosmos account where the subnet is not yet configured for them and then will begin to work when the subnet is configured later. This flexibility allows for administrators who do not have access to both the Cosmos account and virtual network resources to make their configurations independent of each other.
+Azure Cosmos accounts can be configured for service endpoints when they're created or updated at a later time if the subnet is already configured for them. Service endpoints can also be enabled on the Cosmos account where the subnet isn't yet configured. Then the service endpoint will begin to work when the subnet is configured later. This flexibility allows for administrators who don't have access to both the Cosmos account and virtual network resources to make their configurations independent of each other.
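For example, the two halves of that configuration map to two independent CLI calls; a sketch with placeholder resource names:

```azurecli-interactive
# Enable the Azure Cosmos DB service endpoint on an existing subnet
az network vnet subnet update \
    --resource-group myResourceGroup \
    --vnet-name myVnet \
    --name mySubnet \
    --service-endpoints Microsoft.AzureCosmosDB

# Add the subnet as an allowed source on the Azure Cosmos account
az cosmosdb network-rule add \
    --resource-group myResourceGroup \
    --name mycosmosaccount \
    --vnet-name myVnet \
    --subnet mySubnet
```

If you add the account rule before the subnet's service endpoint exists, include the `--ignore-missing-vnet-service-endpoint` parameter described in the following sections.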
### Create a new Cosmos account and connect it to a back end subnet for a new virtual network
-In this example the virtual network and subnet is created with service endpoints enabled for both when they are created.
+In this example, the virtual network and subnet are created with service endpoints enabled for both when they're created.
```azurecli-interactive
# Create an Azure Cosmos Account with a service endpoint connected to a backend subnet
az cosmosdb create \
### Connect and configure a Cosmos account to a back end subnet independently
-This sample is intended to show how to connect an Azure Cosmos account to an existing new virtual network where the subnet is not yet configured for service endpoints. This is done by using the `--ignore-missing-vnet-service-endpoint` parameter. This allows the configuration for the Cosmos account to complete without error before the configuration to the virtual network's subnet is complete. Once the subnet configuration is complete, the Cosmos account will then be accessible through the configured subnet.
+This sample is intended to show how to connect an Azure Cosmos account to an existing or new virtual network. In this example, the subnet isn't yet configured for service endpoints. Configure the service endpoint by using the `--ignore-missing-vnet-service-endpoint` parameter. This parameter allows the Cosmos account configuration to complete without error before the configuration to the virtual network's subnet is complete. Once the subnet configuration is complete, the Cosmos account will then be accessible through the configured subnet.
```azurecli-interactive
# Create an Azure Cosmos Account with a service endpoint connected to a backend subnet
When you're using service endpoints with an Azure Cosmos account through a direc
To migrate an Azure Cosmos DB account from using IP firewall rules to using virtual network service endpoints, use the following steps.
-After an Azure Cosmos DB account is configured for a service endpoint for a subnet, requests from that subnet are sent to Azure Cosmos DB with virtual network and subnet source information instead of a source public IP address. These requests will no longer match an IP filter configured on the Azure Cosmos DB account, which is why the following steps are necessary to avoid downtime.
+After an Azure Cosmos DB account is configured for a service endpoint for a subnet, each request from that subnet is sent differently to Azure Cosmos DB. The requests are sent with virtual network and subnet source information instead of a source public IP address. These requests will no longer match an IP filter configured on the Azure Cosmos DB account, which is why the following steps are necessary to avoid downtime.
Before proceeding, enable the Azure Cosmos DB service endpoint on the virtual network and subnet using the step shown above in "Enable the service endpoint for an existing subnet of a virtual network".
Here are some frequently asked questions about configuring access from virtual n
### Are Notebooks and Mongo/Cassandra Shell currently compatible with Virtual Network enabled accounts?
-At the moment the [Mongo shell](https://devblogs.microsoft.com/cosmosdb/preview-native-mongo-shell/) and [Cassandra shell](https://devblogs.microsoft.com/cosmosdb/announcing-native-cassandra-shell-preview/) integrations in the Cosmos DB Data Explorer, and the [Jupyter Notebooks service](./cosmosdb-jupyter-notebooks.md), are not supported with VNET access. This is currently in active development.
+At the moment the [Mongo shell](https://devblogs.microsoft.com/cosmosdb/preview-native-mongo-shell/) and [Cassandra shell](https://devblogs.microsoft.com/cosmosdb/announcing-native-cassandra-shell-preview/) integrations in the Cosmos DB Data Explorer, and the [Jupyter Notebooks service](./cosmosdb-jupyter-notebooks.md), aren't supported with VNET access. This integration is currently in active development.
-### Can I specify both virtual network service endpoint and IP access control policy on an Azure Cosmos account?
+### Can I specify both virtual network service endpoint and IP access control policy on an Azure Cosmos account?
-You can enable both the virtual network service endpoint and an IP access control policy (also known as firewall) on your Azure Cosmos account. These two features are complementary and collectively ensure isolation and security of your Azure Cosmos account. Using IP firewall ensures that static IPs can access your account.
+You can enable both the virtual network service endpoint and an IP access control policy (also known as firewall) on your Azure Cosmos account. These two features are complementary and collectively ensure isolation and security of your Azure Cosmos account. Using IP firewall ensures that static IPs can access your account.
-### How do I limit access to subnet within a virtual network?
+### How do I limit access to subnet within a virtual network?
-There are two steps required to limit access to Azure Cosmos account from a subnet. First, you allow traffic from subnet to carry its subnet and virtual network identity to Azure Cosmos DB. This is done by enabling service endpoint for Azure Cosmos DB on the subnet. Next is adding a rule in the Azure Cosmos account specifying this subnet as a source from which account can be accessed.
+There are two steps required to limit access to an Azure Cosmos account from a subnet. First, you allow traffic from the subnet to carry its subnet and virtual network identity to Azure Cosmos DB. Changing the identity of the traffic is done by enabling the service endpoint for Azure Cosmos DB on the subnet. Next, add a rule in the Azure Cosmos account specifying this subnet as a source from which the account can be accessed.
-### Will virtual network ACLs and IP Firewall reject requests or connections?
+### Will virtual network ACLs and IP Firewall reject requests or connections?
-When IP firewall or virtual network access rules are added, only requests from allowed sources get valid responses. Other requests are rejected with a 403 (Forbidden). It is important to distinguish Azure Cosmos account's firewall from a connection level firewall. The source can still connect to the service and the connections themselves aren't rejected.
+When IP firewall or virtual network access rules are added, only requests from allowed sources get valid responses. Other requests are rejected with a 403 (Forbidden). It's important to distinguish Azure Cosmos account's firewall from a connection level firewall. The source can still connect to the service and the connections themselves aren't rejected.
### My requests started getting blocked when I enabled service endpoint to Azure Cosmos DB on the subnet. What happened?
-Once service endpoint for Azure Cosmos DB is enabled on a subnet, the source of the traffic reaching the account switches from public IP to virtual network and subnet. If your Azure Cosmos account has IP-based firewall only, traffic from service enabled subnet would no longer match the IP firewall rules and therefore be rejected. Go over the steps to seamlessly migrate from IP-based firewall to virtual network-based access control.
+Once service endpoint for Azure Cosmos DB is enabled on a subnet, the source of the traffic reaching the account switches from public IP to virtual network and subnet. If your Azure Cosmos account has IP-based firewall only, traffic from service enabled subnet would no longer match the IP firewall rules, and therefore be rejected. Go over the steps to seamlessly migrate from IP-based firewall to virtual network-based access control.
-### Are additional Azure RBAC permissions needed for Azure Cosmos accounts with VNET service endpoints?
+### Are extra Azure role-based access control permissions needed for Azure Cosmos accounts with VNET service endpoints?
After you add the VNet service endpoints to an Azure Cosmos account, to make any changes to the account settings, you need access to the `Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/action` action for all the VNETs configured on your Azure Cosmos account. This permission is required because the authorization process validates access to resources (such as database and virtual network resources) before evaluating any properties.
-
-The authorization validates permission for VNet resource action even if the user doesn't specify the VNET ACLs using Azure CLI. Currently, the Azure Cosmos account's control plane supports setting the complete state of the Azure Cosmos account. One of the parameters to the control plane calls is `virtualNetworkRules`. If this parameter is not specified, the Azure CLI makes a get database call to retrieves the `virtualNetworkRules` and uses this value in the update call.
-### Do the peered virtual networks also have access to Azure Cosmos account?
-Only virtual network and their subnets added to Azure Cosmos account have access. Their peered VNets cannot access the account until the subnets within peered virtual networks are added to the account.
+The authorization validates permission for VNet resource action even if the user doesn't specify the VNET ACLs using Azure CLI. Currently, the Azure Cosmos account's control plane supports setting the complete state of the Azure Cosmos account. One of the parameters to the control plane calls is `virtualNetworkRules`. If this parameter isn't specified, the Azure CLI makes a get database call to retrieve the `virtualNetworkRules` and uses this value in the update call.
+
+### Do the peered virtual networks also have access to Azure Cosmos account?
+
+Only virtual network and their subnets added to Azure Cosmos account have access. Their peered VNets can't access the account until the subnets within peered virtual networks are added to the account.
+
+### What is the maximum number of subnets allowed to access a single Cosmos account?
-### What is the maximum number of subnets allowed to access a single Cosmos account?
Currently, you can have at most 256 subnets allowed for an Azure Cosmos account.
-### Can I enable access from VPN and Express Route?
-For accessing Azure Cosmos account over Express route from on premises, you would need to enable Microsoft peering. Once you put IP firewall or virtual network access rules, you can add the public IP addresses used for Microsoft peering on your Azure Cosmos account IP firewall to allow on premises services access to Azure Cosmos account.
+### Can I enable access from VPN and Express Route?
+
+To access an Azure Cosmos account over ExpressRoute from on-premises, you need to enable Microsoft peering. Once you configure IP firewall or virtual network access rules, you can add the public IP addresses used for Microsoft peering to your Azure Cosmos account's IP firewall to allow on-premises services access to the Azure Cosmos account.
-### Do I need to update the Network Security Groups (NSG) rules?
-NSG rules are used to limit connectivity to and from a subnet with virtual network. When you add service endpoint for Azure Cosmos DB to the subnet, there is no need to open outbound connectivity in NSG for your Azure Cosmos account.
+### Do I need to update the Network Security Groups (NSG) rules?
+
+NSG rules are used to limit connectivity to and from a subnet within a virtual network. When you add a service endpoint for Azure Cosmos DB to the subnet, there's no need to open outbound connectivity in the NSG for your Azure Cosmos account.
### Are service endpoints available for all VNets?
+
No. Only Azure Resource Manager virtual networks can have service endpoints enabled. Classic virtual networks don't support service endpoints.
-### When should I "Accept connections from within public Azure datacenters" for an Azure Cosmos DB account?
+### When should I accept connections from within global Azure datacenters for an Azure Cosmos DB account?
+ This setting should only be enabled when you want your Azure Cosmos DB account to be accessible to any Azure service in any Azure region. Other Azure first party services such as Azure Data Factory and Azure Cognitive Search provide documentation for how to secure access to data sources including Azure Cosmos DB accounts, for example:
-* [Azure Data Factory Managed Virtual Network](../data-factory/managed-virtual-network-private-endpoint.md)
-* [Azure Cognitive Search Indexer access to protected resources](../search/search-indexer-securing-resources.md)
+- [Azure Data Factory Managed Virtual Network](../data-factory/managed-virtual-network-private-endpoint.md)
+- [Azure Cognitive Search Indexer access to protected resources](../search/search-indexer-securing-resources.md)
## Next steps
-* To configure a firewall for Azure Cosmos DB, see the [Firewall support](how-to-configure-firewall.md) article.
+- To configure a firewall for Azure Cosmos DB, see the [Firewall support](how-to-configure-firewall.md) article.
cost-management-billing Ea Azure Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-azure-marketplace.md
Previously updated : 06/08/2022 Last updated : 08/26/2022
Some third-party reseller services available on Azure Marketplace now consume yo
### Partners

> [!NOTE]
-> The Azure Marketplace price list feature in the EA portal is retired.
-
-LSPs can download an Azure Marketplace price list from the price sheet page in the Azure Enterprise portal. Select the **Marketplace Price list** link in the upper right. Azure Marketplace price list shows all available services and their prices.
-
-To download the price list:
-
-1. In the Azure Enterprise portal, go to **Reports** > **Price Sheet**.
-1. In the top-right corner, find the link to Azure Marketplace price list under your username.
-1. Select and hold (or right-click) the link and select **Save Target As**.
-1. On the **Save** window, change the title of the document to `AzureMarketplacePricelist.zip`, which will change the file from an .xlsx to a .zip file.
-1. After the download is complete, you'll have a zip file with country-specific price lists.
-1. LSPs should reference the individual country file for country-specific pricing. LSPs can use the **Notifications** tab to be aware of SKUs that are net new or retired.
-1. Price changes occur infrequently. LSPs get email notifications of price increases and foreign exchange (FX) changes 30 days in advance.
-1. LSPs receive one invoice per enrollment, per ISV, per quarter.
+> The Azure Marketplace price list feature in the EA portal is retired. Currently, EA customers can't get a Marketplace price sheet.
### Enabling Azure Marketplace purchases
data-factory Connector Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-postgresql.md
A typical connection string is `Server=<server>;Database=<database>;Port=<port>;
| EncryptionMethod (EM)| The method the driver uses to encrypt data sent between the driver and the database server. E.g., `EncryptionMethod=<0/1/6>;`| 0 (No Encryption) **(Default)** / 1 (SSL) / 6 (RequestSSL) | No |
| ValidateServerCertificate (VSC) | Determines whether the driver validates the certificate that is sent by the database server when SSL encryption is enabled (Encryption Method=1). E.g., `ValidateServerCertificate=<0/1>;`| 0 (Disabled) **(Default)** / 1 (Enabled) | No |
+> [!NOTE]
+> In order to have full SSL verification via the ODBC connection when using the Self Hosted Integration Runtime you must use an ODBC type connection instead of the PostgreSQL connector explicitly, and complete the following configuration:
+>
+> 1. Set up the DSN on any SHIR servers.
+> 1. Put the proper certificate for PostgreSQL in C:\Users\DIAHostService\AppData\Roaming\postgresql\root.crt on the SHIR servers. This is where the ODBC driver looks for the SSL cert to verify when it connects to the database.
+> 1. In your data factory connection, use an ODBC type connection, with your connection string pointing to the DSN you created on your SHIR servers.
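As a rough illustration of the last step in the note above, an ODBC-type linked service that points at such a DSN might look like the following sketch; the DSN, integration runtime, and credential placeholders are hypothetical:

```json
{
    "name": "PostgreSqlViaOdbc",
    "properties": {
        "type": "Odbc",
        "typeProperties": {
            "connectionString": "DSN=MyPostgreSqlDsn;",
            "authenticationType": "Basic",
            "userName": "<username>",
            "password": {
                "type": "SecureString",
                "value": "<password>"
            }
        },
        "connectVia": {
            "referenceName": "MySelfHostedIR",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```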
+
**Example:**

```json
data-factory Continuous Integration Delivery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery.md
Previously updated : 08/15/2022 Last updated : 08/23/2022
If you're using Git integration with your data factory and have a CI/CD pipeline
- **Integration runtimes and sharing**. Integration runtimes don't change often and are similar across all stages in your CI/CD. So Data Factory expects you to have the same name and type of integration runtime across all stages of CI/CD. If you want to share integration runtimes across all stages, consider using a ternary factory just to contain the shared integration runtimes. You can use this shared factory in all of your environments as a linked integration runtime type.
+ >[!Note]
+ >The integration runtime sharing is only available for self-hosted integration runtimes. Azure-SSIS integration runtimes don't support sharing.
- **Managed private endpoint deployment**. If a private endpoint already exists in a factory and you try to deploy an ARM template that contains a private endpoint with the same name but with modified properties, the deployment will fail. In other words, you can successfully deploy a private endpoint as long as it has the same properties as the one that already exists in the factory. If any property is different between environments, you can override it by parameterizing that property and providing the respective value during deployment.

- **Key Vault**. When you use linked services whose connection information is stored in Azure Key Vault, it is recommended to keep separate key vaults for different environments. You can also configure separate permission levels for each key vault. For example, you might not want your team members to have permissions to production secrets. If you follow this approach, we recommend that you keep the same secret names across all stages. If you keep the same secret names, you don't need to parameterize each connection string across CI/CD environments because the only thing that changes is the key vault name, which is a separate parameter.
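As an illustration of that last point, a linked service that resolves its connection string from Key Vault references the secret only by name, so nothing in it needs to change between stages as long as the secret names match; a sketch with hypothetical names:

```json
{
    "name": "AzureSqlViaKeyVault",
    "properties": {
        "type": "AzureSqlDatabase",
        "typeProperties": {
            "connectionString": {
                "type": "AzureKeyVaultSecret",
                "store": {
                    "referenceName": "MyKeyVaultLinkedService",
                    "type": "LinkedServiceReference"
                },
                "secretName": "sql-connection-string"
            }
        }
    }
}
```

Only the Key Vault linked service itself then needs a parameterized vault URL across CI/CD environments.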
data-factory Copy Activity Performance Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-performance-features.md
Note in the following scenarios, single copy activity execution can leverage mul
## Parallel copy
-You can set parallel copy (`parallelCopies` property) on copy activity to indicate the parallelism that you want the copy activity to use. You can think of this property as the maximum number of threads within the copy activity that read from your source or write to your sink data stores in parallel.
+You can set parallel copy (`parallelCopies` property in the JSON definition of the Copy activity, or `Degree of parallelism` setting in the **Settings** tab of the Copy activity properties in the user interface) on copy activity to indicate the parallelism that you want the copy activity to use. You can think of this property as the maximum number of threads within the copy activity that read from your source or write to your sink data stores in parallel.
The parallel copy is orthogonal to [Data Integration Units](#data-integration-units) or [Self-hosted IR nodes](#self-hosted-integration-runtime-scalability). It is counted across all the DIUs or Self-hosted IR nodes.
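For example, both settings can be pinned explicitly in the copy activity's JSON definition; a minimal sketch with illustrative names and values:

```json
{
    "name": "CopyFromBlobToSql",
    "type": "Copy",
    "typeProperties": {
        "source": { "type": "BlobSource" },
        "sink": { "type": "SqlSink" },
        "parallelCopies": 8,
        "dataIntegrationUnits": 16
    }
}
```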
See the other copy activity articles:
- [Copy activity performance and scalability guide](copy-activity-performance.md)
- [Troubleshoot copy activity performance](copy-activity-performance-troubleshooting.md)
- [Use Azure Data Factory to migrate data from your data lake or data warehouse to Azure](data-migration-guidance-overview.md)
-- [Migrate data from Amazon S3 to Azure Storage](data-migration-guidance-s3-azure-storage.md)
+- [Migrate data from Amazon S3 to Azure Storage](data-migration-guidance-s3-azure-storage.md)
data-factory Monitor Metrics Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-metrics-alerts.md
Sign in to the Azure portal, and select **Monitor** > **Alerts** to create alert
1. Define the action group.
+ > [!NOTE]
+ > The action group must be created within the same resource group as the data factory instance in order to be available for use from the data factory.
:::image type="content" source="media/monitor-using-azure-monitor/alerts_image9.png" alt-text="Screenshot that shows creating a rule, with New action group highlighted.":::

:::image type="content" source="media/monitor-using-azure-monitor/alerts_image10.png" alt-text="Screenshot that shows creating a new action group.":::
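If you prefer to script this setup, the action group and a metric alert rule can also be created from the command line; a sketch with placeholder names, where `PipelineFailedRuns` is one of the data factory metrics you can alert on:

```azurecli-interactive
# Create the action group in the same resource group as the data factory
az monitor action-group create \
    --resource-group myResourceGroup \
    --name myActionGroup \
    --short-name myag \
    --action email oncall oncall@contoso.com

# Alert when any pipeline run fails in the data factory
az monitor metrics alert create \
    --resource-group myResourceGroup \
    --name PipelineFailureAlert \
    --scopes "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.DataFactory/factories/myDataFactory" \
    --condition "total PipelineFailedRuns > 0" \
    --action myActionGroup
```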
Sign in to the Azure portal, and select **Monitor** > **Alerts** to create alert
## Next steps
-[Configure diagnostics settings and workspace](monitor-configure-diagnostics.md)
+[Configure diagnostics settings and workspace](monitor-configure-diagnostics.md)
data-factory Monitor Using Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-using-azure-monitor.md
Last updated 09/02/2021
Cloud applications are complex and have many moving parts. Monitors provide data to help ensure that your applications stay up and running in a healthy state. Monitors also help you avoid potential problems and troubleshoot past ones. You can use monitoring data to gain deep insights about your applications. This knowledge helps you improve application performance and maintainability. It also helps you automate actions that otherwise require manual intervention.
-Azure Monitor provides base-level infrastructure metrics and logs for most Azure services. Azure diagnostic logs are emitted by a resource and provide rich, frequent data about the operation of that resource. Azure Data Factory (ADF) can write diagnostic logs in Azure Monitor. For a seven-minute introduction and demonstration of this feature, watch the following video:
-
-> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Monitor-Data-Factory-pipelines-using-Operations-Management-Suite-OMS/player]
+Azure Monitor provides base-level infrastructure metrics and logs for most Azure services. Azure diagnostic logs are emitted by a resource and provide rich, frequent data about the operation of that resource. Azure Data Factory (ADF) can write diagnostic logs in Azure Monitor.
For more information, see [Azure Monitor overview](../azure-monitor/overview.md).
ddos-protection Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/telemetry.md
Title: View and configure DDoS protection telemetry for Azure DDoS Protection Standard
+ Title: 'Tutorial: View and configure DDoS protection telemetry for Azure DDoS Protection Standard'
description: Learn how to view and configure DDoS protection telemetry for Azure DDoS Protection Standard. documentationcenter: na
na Previously updated : 12/28/2020 Last updated : 08/25/2022
-# View and configure DDoS protection telemetry
+# Tutorial: View and configure DDoS protection telemetry
Azure DDoS Protection standard provides detailed attack insights and visualization with DDoS Attack Analytics. Customers protecting their virtual networks against DDoS attacks have detailed visibility into attack traffic and actions taken to mitigate the attack via attack mitigation reports & mitigation flow logs. Rich telemetry is exposed via Azure Monitor including detailed metrics during the duration of a DDoS attack. Alerting can be configured for any of the Azure Monitor metrics exposed by DDoS Protection. Logging can be further integrated with [Microsoft Sentinel](../sentinel/data-connectors-reference.md#azure-ddos-protection), Splunk (Azure Event Hubs), OMS Log Analytics, and Azure Storage for advanced analysis via the Azure Monitor Diagnostics interface.
In this tutorial, you'll learn how to:
> * View DDoS mitigation policies
> * Validate and test DDoS protection telemetry
-### Metrics
-The metric names present different packet types, and bytes vs. packets, with a basic construct of tag names on each metric as follows:
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-- **Dropped tag name** (for example, **Inbound Packets Dropped DDoS**): The number of packets dropped/scrubbed by the DDoS protection system.
-- **Forwarded tag name** (for example **Inbound Packets Forwarded DDoS**): The number of packets forwarded by the DDoS system to the destination VIP – traffic that was not filtered.
-- **No tag name** (for example **Inbound Packets DDoS**): The total number of packets that came into the scrubbing system – representing the sum of the packets dropped and forwarded.
-
-> [!NOTE]
-> While multiple options for **Aggregation** are displayed on Azure portal, only the aggregation types listed in the table below are supported for each metric. We apologize for this confusion and we are working to resolve it.
-
-The following [metrics](../azure-monitor/essentials/metrics-supported.md#microsoftnetworkpublicipaddresses) are available for Azure DDoS Protection Standard. These metrics are also exportable via diagnostic settings (see [View and configure DDoS diagnostic logging](diagnostic-logging.md)).
--
-| Metric | Metric Display Name | Unit | Aggregation Type | Description |
-| | | | | |
-| BytesDroppedDDoS | Inbound bytes dropped DDoS | BytesPerSecond | Maximum | Inbound bytes dropped DDoS |
-| BytesForwardedDDoS | Inbound bytes forwarded DDoS | BytesPerSecond | Maximum | Inbound bytes forwarded DDoS |
-| BytesInDDoS | Inbound bytes DDoS | BytesPerSecond | Maximum | Inbound bytes DDoS |
-| DDoSTriggerSYNPackets | Inbound SYN packets to trigger DDoS mitigation | CountPerSecond | Maximum | Inbound SYN packets to trigger DDoS mitigation |
-| DDoSTriggerTCPPackets | Inbound TCP packets to trigger DDoS mitigation | CountPerSecond | Maximum | Inbound TCP packets to trigger DDoS mitigation |
-| DDoSTriggerUDPPackets | Inbound UDP packets to trigger DDoS mitigation | CountPerSecond | Maximum | Inbound UDP packets to trigger DDoS mitigation |
-| IfUnderDDoSAttack | Under DDoS attack or not | Count | Maximum | Under DDoS attack or not |
-| PacketsDroppedDDoS | Inbound packets dropped DDoS | CountPerSecond | Maximum | Inbound packets dropped DDoS |
-| PacketsForwardedDDoS | Inbound packets forwarded DDoS | CountPerSecond | Maximum | Inbound packets forwarded DDoS |
-| PacketsInDDoS | Inbound packets DDoS | CountPerSecond | Maximum | Inbound packets DDoS |
-| TCPBytesDroppedDDoS | Inbound TCP bytes dropped DDoS | BytesPerSecond | Maximum | Inbound TCP bytes dropped DDoS |
-| TCPBytesForwardedDDoS | Inbound TCP bytes forwarded DDoS | BytesPerSecond | Maximum | Inbound TCP bytes forwarded DDoS |
-| TCPBytesInDDoS | Inbound TCP bytes DDoS | BytesPerSecond | Maximum | Inbound TCP bytes DDoS |
-| TCPPacketsDroppedDDoS | Inbound TCP packets dropped DDoS | CountPerSecond | Maximum | Inbound TCP packets dropped DDoS |
-| TCPPacketsForwardedDDoS | Inbound TCP packets forwarded DDoS | CountPerSecond | Maximum | Inbound TCP packets forwarded DDoS |
-| TCPPacketsInDDoS | Inbound TCP packets DDoS | CountPerSecond | Maximum | Inbound TCP packets DDoS |
-| UDPBytesDroppedDDoS | Inbound UDP bytes dropped DDoS | BytesPerSecond | Maximum | Inbound UDP bytes dropped DDoS |
-| UDPBytesForwardedDDoS | Inbound UDP bytes forwarded DDoS | BytesPerSecond | Maximum | Inbound UDP bytes forwarded DDoS |
-| UDPBytesInDDoS | Inbound UDP bytes DDoS | BytesPerSecond | Maximum | Inbound UDP bytes DDoS |
-| UDPPacketsDroppedDDoS | Inbound UDP packets dropped DDoS | CountPerSecond | Maximum | Inbound UDP packets dropped DDoS |
-| UDPPacketsForwardedDDoS | Inbound UDP packets forwarded DDoS | CountPerSecond | Maximum | Inbound UDP packets forwarded DDoS |
-| UDPPacketsInDDoS | Inbound UDP packets DDoS | CountPerSecond | Maximum | Inbound UDP packets DDoS |
## Prerequisites

-- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
- Before you can complete the steps in this tutorial, you must first create an [Azure DDoS Standard protection plan](manage-ddos-protection.md), and DDoS Protection Standard must be enabled on a virtual network.
- DDoS monitors public IP addresses assigned to resources within a virtual network. If you don't have any resources with public IP addresses in the virtual network, you must first create a resource with a public IP address. You can monitor the public IP address of all resources deployed through Resource Manager (not classic) listed in [Virtual network for Azure services](../virtual-network/virtual-network-for-azure-services.md#services-that-can-be-deployed-into-a-virtual-network) (including Azure Load Balancers where the backend virtual machines are in the virtual network), except for Azure App Service Environments. To continue with this tutorial, you can quickly create a [Windows](../virtual-machines/windows/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) or [Linux](../virtual-machines/linux/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) virtual machine.

## View DDoS protection telemetry
-Telemetry for an attack is provided through Azure Monitor in real time. While [mitigation triggers](#view-ddos-mitigation-policies) for TCP SYN, TCP & UDP are available during peace-time, other telemetry is available only when a public IP address has been under mitigation.
+Telemetry for an attack is provided through Azure Monitor in real time. While [mitigation triggers](#view-ddos-mitigation-policies) for TCP SYN, TCP & UDP are available during peace-time, other telemetry is available only when a public IP address has been under mitigation.
You can view DDoS telemetry for a protected public IP address through three different resource types: DDoS protection plan, virtual network, and public IP address.
-### DDoS protection plan
-1. Sign in to the [Azure portal](https://portal.azure.com/) and browse to your DDoS protection plan.
-2. Under **Monitoring**, select **Metrics**.
-3. Select **Scope**. Select the **Subscription** that contains the public IP address you want to log, select **Public IP Address** for **Resource type**, then select the specific public IP address you want to log metrics for, and then select **Apply**.
-4. Select the **Aggregation** type as **Max**.
+> [!NOTE]
+> While multiple options for **Aggregation** are displayed on Azure portal, only the aggregation types listed in the table below are supported for each metric. We apologize for this confusion and we are working to resolve it.
+
+### View metrics from DDoS protection plan
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. On the Azure portal menu, search for and select **DDoS protection plans**, and then select your DDoS protection plan.
+1. Under **Monitoring**, select **Metrics**.
+1. Select **Add metric**, and then select **Scope**.
+1. In the **Select a scope** menu select the **Subscription** that contains the public IP address you want to log.
+1. Select **Public IP Address** for **Resource type** then select the specific public IP address you want to log metrics for, and then select **Apply**.
+1. For **Metric** select **Under DDoS attack or not**.
+1. Select the **Aggregation** type as **Max**.
++
+### View metrics from virtual network
-### Virtual network
1. Sign in to the [Azure portal](https://portal.azure.com/) and browse to your virtual network that has DDoS protection enabled.
-2. Under **Monitoring**, select **Metrics**.
-3. Select **Scope**. Select the **Subscription** that contains the public IP address you want to log, select **Public IP Address** for **Resource type**, then select the specific public IP address you want to log metrics for, and then select **Apply**.
-4. Select the **Aggregation** type as **Max**.
-5. Select **Add filter**. Under **Property**, select **Protected IP Address**, and the operator should be set to **=**. Under **Values**, you will see a dropdown of public IP addresses, associated with the virtual network, that are protected by DDoS protection enabled.
+1. Under **Monitoring**, select **Metrics**.
+1. Select **Add metric**, and then select **Scope**.
+1. In the **Select a scope** menu select the **Subscription** that contains the public IP address you want to log.
+1. Select **Public IP Address** for **Resource type** then select the specific public IP address you want to log metrics for, and then select **Apply**.
+1. Under **Metric**, select your chosen metric, and then under **Aggregation**, select **Max**.
+
+>[!NOTE]
+>To filter IP addresses, select **Add filter**. Under **Property**, select **Protected IP Address**, and set the operator to **=**. Under **Values**, you'll see a dropdown of the public IP addresses, associated with the virtual network, that are protected by DDoS protection.
+
-![DDoS Diagnostic Settings](./media/ddos-attack-telemetry/vnet-ddos-metrics.png)
+### View metrics from Public IP address
-### Public IP address
1. Sign in to the [Azure portal](https://portal.azure.com/) and browse to your public IP address.
-2. Under **Monitoring**, select **Metrics**.
-3. Select the **Aggregation** type as **Max**.
+1. On the Azure portal menu, select or search for and select **Public IP addresses** then select your public IP address.
+1. Under **Monitoring**, select **Metrics**.
+1. Select **Add metric**, and then select **Scope**.
+1. In the **Select a scope** menu select the **Subscription** that contains the public IP address you want to log.
+1. Select **Public IP Address** for **Resource type** then select the specific public IP address you want to log metrics for, and then select **Apply**.
+1. Under **Metric**, select your chosen metric, and then under **Aggregation**, select **Max**.
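The same metrics are also queryable outside the portal; for example, a sketch with the Azure CLI that reads the **Under DDoS attack or not** (`IfUnderDDoSAttack`) metric for a public IP address, with a placeholder resource ID:

```azurecli-interactive
az monitor metrics list \
    --resource "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/publicIPAddresses/myPublicIp" \
    --metric IfUnderDDoSAttack \
    --aggregation Maximum \
    --interval PT1M
```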
## View DDoS mitigation policies

DDoS Protection Standard applies three auto-tuned mitigation policies (TCP SYN, TCP & UDP) for each public IP address of the protected resource, in the virtual network that has DDoS protection enabled. You can view the policy thresholds by selecting the **Inbound TCP packets to trigger DDoS mitigation** and **Inbound UDP packets to trigger DDoS mitigation** metrics with **aggregation** type as 'Max', as shown in the following picture:
-![View mitigation policies](./media/manage-ddos-protection/view-mitigation-policies.png)
## Validate and test

To simulate a DDoS attack to validate DDoS protection telemetry, see [Validate DDoS detection](test-through-simulations.md).
event-hubs Event Hubs Premium Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-premium-overview.md
It replicates events to three replicas, distributed across Azure availability zo
In addition to these storage-related features and all capabilities and protocol support of the standard tier, the isolation model of the premium tier enables features like [dynamic partition scale-up](dynamically-add-partitions.md). You also get far more generous [quota allocations](event-hubs-quotas.md). Event Hubs Capture is included at no extra cost. > [!NOTE]
-> Event Hubs Premium supports TLS 1.2 or greater .
+> Event Hubs Premium supports TLS 1.2 or greater.
## Why premium?

The premium tier offers three compelling benefits for customers who require better isolation in a multitenant environment with low latency and high throughput data ingestion needs.
Therefore, the premium tier is often a more cost effective option for event stre
For the extra robustness gained by availability-zone support, the minimal deployment scale for the dedicated tier is 8 capacity units (CU), but you'll have availability zone support in the premium tier from the first PU in all availability zone regions.
-You can purchase 1, 2, 4, 8 and 16 processing units for each namespace. As the premium tier is a capacity-based offering, the achievable throughput isn't set by a throttle as it's' in the standard tier, but depends on the work you ask Event Hubs to do, similar to the dedicated tier. The effective ingest and stream throughput per PU will depend on various factors, including:
+You can purchase 1, 2, 4, 8 and 16 processing units for each namespace. As the premium tier is a capacity-based offering, the achievable throughput isn't set by a throttle as it is in the standard tier, but depends on the work you ask Event Hubs to do, similar to the dedicated tier. The effective ingest and stream throughput per PU will depend on various factors, including:
* Number of producers and consumers
* Payload size
event-hubs Event Hubs Resource Manager Namespace Event Hub Enable Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-resource-manager-namespace-event-hub-enable-capture.md
Title: Create an event hub with capture enabled - Azure Event Hubs | Microsoft Docs description: Create an Azure Event Hubs namespace with one event hub and enable Capture using Azure Resource Manager template Previously updated : 09/28/2021 Last updated : 08/26/2022 ms.devlang: azurecli
For more information about creating templates, see [Authoring Azure Resource Man
For more information about patterns and practices for Azure Resources naming conventions, see [Azure Resources naming conventions][Azure Resources naming conventions].
-For the complete templates, click the following GitHub links:
+For the complete templates, select the following GitHub links:
- [Event hub and enable Capture to Storage template][Event Hub and enable Capture to Storage template]
- [Event hub and enable Capture to Azure Data Lake Store template][Event Hub and enable Capture to Azure Data Lake Store template]
For the complete templates, click the following GitHub links:
## What will you deploy?
-With this template, you deploy an Event Hubs namespace with an event hub, and also enable [Event Hubs Capture](event-hubs-capture-overview.md). Event Hubs Capture enables you to automatically deliver the streaming data in Event Hubs to Azure Blob storage or Azure Data Lake Store, within a specified time or size interval of your choosing. Click the following button to enable Event Hubs Capture into Azure Storage:
+With this template, you deploy an Event Hubs namespace with an event hub, and also enable [Event Hubs Capture](event-hubs-capture-overview.md). Event Hubs Capture enables you to automatically deliver the streaming data in Event Hubs to Azure Blob storage or Azure Data Lake Store, within a specified time or size interval of your choosing. Select the following button to enable Event Hubs Capture into Azure Storage:
[![Deploy to Azure](./media/event-hubs-resource-manager-namespace-event-hub/deploybutton.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.eventhub%2Feventhubs-create-namespace-and-enable-capture%2Fazuredeploy.json)
-Click the following button to enable Event Hubs Capture into Azure Data Lake Store:
+Select the following button to enable Event Hubs Capture into Azure Data Lake Store:
[![Deploy to Azure](./media/event-hubs-resource-manager-namespace-event-hub/deploybutton.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.eventhub%2Feventhubs-create-namespace-and-enable-capture%2Fazuredeploy.json)

## Parameters
-With Azure Resource Manager, you define parameters for values you want to specify when the template is deployed. The template includes a section called `Parameters` that contains all the parameter values. You should define a parameter for those values that vary based on the project you are deploying or based on the environment you are deploying to. Do not define parameters for values that always stay the same. Each parameter value is used in the template to define the resources that are deployed.
+With Azure Resource Manager, you define parameters for values you want to specify when the template is deployed. The template includes a section called `Parameters` that contains all the parameter values. You should define a parameter for those values that vary based on the project you're deploying or based on the environment you're deploying to. Don't define parameters for values that always stay the same. Each parameter value is used in the template to define the resources that are deployed.
The template defines the following parameters.
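As an illustration, parameter values can be supplied when you deploy the template with Azure PowerShell. This is a hedged sketch; the parameter names in the hashtable are hypothetical and must match the names defined in the template's `Parameters` section:

```powershell
# Deploy the template, passing values for two hypothetical parameters.
New-AzResourceGroupDeployment -ResourceGroupName "exampleRG" `
    -TemplateFile ".\azuredeploy.json" `
    -TemplateParameterObject @{
        eventHubNamespaceName = "examplens"   # hypothetical parameter name
        eventHubName          = "examplehub"  # hypothetical parameter name
    }
```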
The destination folder path for the captured events. This is the folder in your
}
```
-## Resources to deploy for Azure Storage as destination to captured events
+## Azure Storage or Azure Data Lake Storage Gen 2 as destination
-Creates a namespace of type **EventHub**, with one event hub, and also enables Capture to Azure Blob Storage.
+Creates a namespace of type **EventHub**, with one event hub, and also enables Capture to Azure Blob Storage or Azure Data Lake Storage Gen2.
```json
"resources":[
Creates a namespace of type **EventHub**, with one event hub, and also enables C
]
```
-## Resources to deploy for Azure Data Lake Store as destination
+## Azure Data Lake Storage Gen1 as destination
-Creates a namespace of type **EventHub**, with one event hub, and also enables Capture to Azure Data Lake Store.
+Creates a namespace of type **EventHub**, with one event hub, and also enables Capture to Azure Data Lake Storage Gen1. If you're using Gen2 of Data Lake Storage, see the previous section.
```json
"resources": [
event-hubs Private Link Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/private-link-service.md
Title: Integrate Azure Event Hubs with Azure Private Link Service description: Learn how to integrate Azure Event Hubs with Azure Private Link Service Previously updated : 05/10/2021 Last updated : 08/26/2022
If you already have an Event Hubs namespace, you can create a private link conne
1. Select the **Azure subscription** in which you want to create the private endpoint.
2. Select the **resource group** for the private endpoint resource.
3. Enter a **name** for the private endpoint.
- 5. Select a **region** for the private endpoint. Your private endpoint must be in the same region as your virtual network, but can be in a different region from the private link resource that you are connecting to.
- 6. Select **Next: Resource >** button at the bottom of the page.
-
- ![Create Private Endpoint - Basics page](./media/private-link-service/create-private-endpoint-basics-page.png)
-8. On the **Resource** page, follow these steps:
- 1. For connection method, if you select **Connect to an Azure resource in my directory**, follow these steps:
- 1. Select the **Azure subscription** in which your **Event Hubs namespace** exists.
- 2. For **Resource type**, Select **Microsoft.EventHub/namespaces** for the **Resource type**.
- 3. For **Resource**, select an Event Hubs namespace from the drop-down list.
- 4. Confirm that the **Target subresource** is set to **namespace**.
- 5. Select **Next: Configuration >** button at the bottom of the page.
-
- ![Create Private Endpoint - Resource page](./media/private-link-service/create-private-endpoint-resource-page.png)
- 2. If you select **Connect to an Azure resource by resource ID or alias**, follow these steps:
- 1. Enter the **resource ID** or **alias**. It can be the resource ID or alias that someone has shared with you. The easiest way to get the resource ID is to navigate to the Event Hubs namespace in the Azure portal and copy the portion of URI starting from `/subscriptions/`. See the following image for an example.
- 2. For **Target sub-resource**, enter **namespace**. It's the type of the sub-resource that your private endpoint can access.
- 3. (optional) Enter a **request message**. The resource owner sees this message while managing private endpoint connection.
- 4. Then, select **Next: Configuration >** button at the bottom of the page.
-
- ![Create Private Endpoint - Connect using resource ID](./media/private-link-service/connect-resource-id.png)
-9. On the **Configuration** page, you select the subnet in a virtual network to where you want to deploy the private endpoint.
+ 1. Enter a **name for the network interface**.
+ 1. Select a **region** for the private endpoint. Your private endpoint must be in the same region as your virtual network, but can be in a different region from the private link resource that you are connecting to.
+ 1. Select **Next: Resource >** button at the bottom of the page.
+
+ :::image type="content" source="./media/private-link-service/create-private-endpoint-basics-page.png" alt-text="Screenshot showing the Basics page of the Create private endpoint wizard.":::
+8. On the **Resource** page, review settings, and select **Next: Virtual Network**.
+
+ :::image type="content" source="./media/private-link-service/create-private-endpoint-resource-page.png" alt-text="Screenshot showing the Resource page of the Create private endpoint wizard.":::
+9. On the **Virtual Network** page, you select the subnet in a virtual network where you want to deploy the private endpoint.
1. Select a **virtual network**. Only virtual networks in the currently selected subscription and location are listed in the drop-down list.
2. Select a **subnet** in the virtual network you selected.
- 3. Select **Next: Tags >** button at the bottom of the page.
-
- ![Create Private Endpoint - Configuration page](./media/private-link-service/create-private-endpoint-configuration-page.png)
-10. On the **Tags** page, create any tags (names and values) that you want to associate with the private endpoint resource. Then, select **Review + create** button at the bottom of the page.
-11. On the **Review + create**, review all the settings, and select **Create** to create the private endpoint.
+ 1. Notice that the **network policy for private endpoints** is disabled. If you want to enable it, select **edit**, update the setting, and select **Save**.
+ 1. For **Private IP configuration**, by default, the **Dynamically allocate IP address** option is selected. If you want to assign a static IP address, select **Statically allocate IP address**.
+ 1. For **Application security group**, select an existing application security group or create one to associate with the private endpoint.
+ 1. Select **Next: DNS >** button at the bottom of the page.
+
+ :::image type="content" source="./media/private-link-service/create-private-endpoint-configuration-page.png" alt-text="Screenshot showing the Virtual Network page of the Create private endpoint wizard.":::
+10. On the **DNS** page, select whether you want the private endpoint to be integrated with a private DNS zone, and then select **Next: Tags**.
+1. On the **Tags** page, create any tags (names and values) that you want to associate with the private endpoint resource. Then, select **Review + create** button at the bottom of the page.
+1. On the **Review + create**, review all the settings, and select **Create** to create the private endpoint.
![Create Private Endpoint - Review and Create page](./media/private-link-service/create-private-endpoint-review-create-page.png)

12. Confirm that the private endpoint connection you created shows up in the list of endpoints. In this example, the private endpoint is auto-approved because you connected to an Azure resource in your directory and you have sufficient permissions.
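The same private endpoint can also be created without the portal. The following is a sketch using the Az.Network and Az.EventHub PowerShell modules; all resource names are placeholders:

```powershell
# Look up the Event Hubs namespace the endpoint should target.
$namespace = Get-AzEventHubNamespace -ResourceGroupName "exampleRG" -Name "examplens"

# The "namespace" group ID corresponds to the Target subresource in the portal steps.
$connection = New-AzPrivateLinkServiceConnection -Name "exampleConnection" `
    -PrivateLinkServiceId $namespace.Id -GroupId "namespace"

# Pick the subnet where the private endpoint's network interface will live.
$vnet   = Get-AzVirtualNetwork -ResourceGroupName "exampleRG" -Name "exampleVNet"
$subnet = $vnet.Subnets | Where-Object { $_.Name -eq "default" }

New-AzPrivateEndpoint -ResourceGroupName "exampleRG" -Name "exampleEndpoint" `
    -Location "eastus" -Subnet $subnet -PrivateLinkServiceConnection $connection
```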
There are four provisioning states:
1. To remove a private endpoint connection, select it in the list, and select **Remove** on the toolbar.
2. On the **Delete connection** page, select **Yes** to confirm the deletion of the private endpoint. If you select **No**, nothing happens.
-3. You should see the status changed to **Disconnected**. Then, you'll see the endpoint disappear from the list.
+3. You should see the status changed to **Disconnected**. Then, the endpoint will disappear from the list.
## Validate that the private link connection works
expressroute Expressroute Howto Erdirect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-erdirect.md
ExpressRoute Direct and ExpressRoute circuit(s) in different subscriptions or Az
1. To enroll in the preview, send an e-mail to ExpressRouteDirect@microsoft.com with the ExpressRoute Direct and target ExpressRoute circuit Azure subscription IDs. You'll receive an e-mail once the feature is enabled for your subscriptions.
+
+1. Sign in to Azure and select the ExpressRoute Direct subscription.
+
+ ```powershell
+ Connect-AzAccount
+
+ Select-AzSubscription -Subscription "<SubscriptionID or SubscriptionName>"
+ ```
++
+1. Get the ExpressRoute Direct details:
+
+ ```powershell
+ Get-AzExpressRoutePort
+
+ $ERPort = Get-AzExpressRoutePort -Name $Name -ResourceGroupName $ResourceGroupName
+ ```
+
1. Create the ExpressRoute Direct authorization by running the following commands in PowerShell:

   ```powershell
- Add-AzExpressRoutePortAuthorization -Name $Name -ExpressRoutePort $ERPort
+ Add-AzExpressRoutePortAuthorization -Name $AuthName -ExpressRoutePort $ERPort
   ```

   Sample output:
ExpressRoute Direct and ExpressRoute circuit(s) in different subscriptions or Az
1. Verify the authorization was created successfully and store the ExpressRoute Direct authorization into a variable:

   ```powershell
- $ERDirectAuthorization = Get-AzExpressRoutePortAuthorization -ExpressRoutePortObject $ERPort -Name $Name
+ $ERDirectAuthorization = Get-AzExpressRoutePortAuthorization -ExpressRoutePortObject $ERPort -Name $AuthName
   $ERDirectAuthorization
   ```
ExpressRoute Direct and ExpressRoute circuit(s) in different subscriptions or Az
1. Redeem the authorization to create the ExpressRoute Direct circuit in a different subscription or Azure Active Directory tenant with the following command:

   ```powershell
- New-AzExpressRouteCircuit -Name $Name -ResourceGroupName $RGName -Location $Location -SkuTier $SkuTier -SkuFamily $SkuFamily -BandwidthInGbps $BandwidthInGbps -AuthorizationKey $$ERDirectAuthorization.AuthorizationKey
+ Select-AzSubscription -Subscription "<SubscriptionID or SubscriptionName>"
+
+ New-AzExpressRouteCircuit -Name $Name -ResourceGroupName $RGName -Location $Location -SkuTier $SkuTier -SkuFamily $SkuFamily -BandwidthInGbps $BandwidthInGbps -AuthorizationKey $ERDirectAuthorization.AuthorizationKey
   ```

## Next steps
firewall Deploy Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/deploy-template.md
The template used in this quickstart is from [Azure Quickstart Templates](https:
Multiple Azure resources are defined in the template:

-- [**Microsoft.Storage/storageAccounts**](/azure/templates/microsoft.storage/storageAccounts)
-- [**Microsoft.Network/routeTables**](/azure/templates/microsoft.network/routeTables)
-- [**Microsoft.Network/networkSecurityGroups**](/azure/templates/microsoft.network/networksecuritygroups)
-- [**Microsoft.Network/virtualNetworks**](/azure/templates/microsoft.network/virtualnetworks)
-- [**Microsoft.Network/publicIPAddresses**](/azure/templates/microsoft.network/publicipaddresses)
-- [**Microsoft.Network/networkInterfaces**](/azure/templates/microsoft.network/networkinterfaces)
-- [**Microsoft.Compute/virtualMachines**](/azure/templates/microsoft.compute/virtualmachines)
-- [**Microsoft.Network/azureFirewalls**](/azure/templates/microsoft.network/azureFirewalls)
+- [**Microsoft.Storage/storageAccounts**](/azure/templates/microsoft.storage/storageAccounts?pivots=deployment-language-arm-template)
+- [**Microsoft.Network/routeTables**](/azure/templates/microsoft.network/routeTables?pivots=deployment-language-arm-template)
+- [**Microsoft.Network/networkSecurityGroups**](/azure/templates/microsoft.network/networksecuritygroups?pivots=deployment-language-arm-template)
+- [**Microsoft.Network/virtualNetworks**](/azure/templates/microsoft.network/virtualnetworks?pivots=deployment-language-arm-template)
+- [**Microsoft.Network/publicIPAddresses**](/azure/templates/microsoft.network/publicipaddresses?pivots=deployment-language-arm-template)
+- [**Microsoft.Network/networkInterfaces**](/azure/templates/microsoft.network/networkinterfaces?pivots=deployment-language-arm-template)
+- [**Microsoft.Compute/virtualMachines**](/azure/templates/microsoft.compute/virtualmachines?pivots=deployment-language-arm-template)
+- [**Microsoft.Network/azureFirewalls**](/azure/templates/microsoft.network/azureFirewalls?pivots=deployment-language-arm-template)
## Deploy the template
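As a sketch, the template can be deployed with Azure PowerShell; the template URI below is a placeholder for the quickstart template's raw URL:

```powershell
# Create a resource group, then deploy the quickstart template into it.
New-AzResourceGroup -Name "exampleRG" -Location "eastus"
New-AzResourceGroupDeployment -ResourceGroupName "exampleRG" `
    -TemplateUri "<quickstart-template-raw-url>"
```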
firewall Quick Create Ipgroup Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/quick-create-ipgroup-bicep.md
+
+ Title: 'Quickstart: Create an Azure Firewall and IP Groups - Bicep'
+description: In this quickstart, you learn how to use a Bicep file to create an Azure Firewall and IP Groups.
+++++ Last updated : 08/25/2022+++
+# Quickstart: Create an Azure Firewall and IP Groups - Bicep
+
+In this quickstart, you use a Bicep file to deploy an Azure Firewall with sample IP Groups used in a network rule and application rule. An IP Group is a top-level resource that allows you to define and group IP addresses, ranges, and subnets into a single object. IP Groups are useful for managing IP addresses in Azure Firewall rules. You can either manually enter IP addresses or import them from a file.
++
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+## Review the Bicep file
+
+This Bicep file creates an Azure Firewall and IP Groups, along with the necessary resources to support the Azure Firewall.
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/azurefirewall-create-with-ipgroups-and-linux-jumpbox).
++
+Multiple Azure resources are defined in the Bicep file:
+
+- [**Microsoft.Network/ipGroups**](/azure/templates/microsoft.network/ipGroups?pivots=deployment-language-bicep)
+- [**Microsoft.Storage/storageAccounts**](/azure/templates/microsoft.storage/storageAccounts?pivots=deployment-language-bicep)
+- [**Microsoft.Network/routeTables**](/azure/templates/microsoft.network/routeTables?pivots=deployment-language-bicep)
+- [**Microsoft.Network/networkSecurityGroups**](/azure/templates/microsoft.network/networksecuritygroups?pivots=deployment-language-bicep)
+- [**Microsoft.Network/virtualNetworks**](/azure/templates/microsoft.network/virtualnetworks?pivots=deployment-language-bicep)
+- [**Microsoft.Network/publicIPAddresses**](/azure/templates/microsoft.network/publicipaddresses?pivots=deployment-language-bicep)
+- [**Microsoft.Network/networkInterfaces**](/azure/templates/microsoft.network/networkinterfaces?pivots=deployment-language-bicep)
+- [**Microsoft.Compute/virtualMachines**](/azure/templates/microsoft.compute/virtualmachines?pivots=deployment-language-bicep)
+- [**Microsoft.Network/azureFirewalls**](/azure/templates/microsoft.network/azureFirewalls?pivots=deployment-language-bicep)
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep
+ ```
+
+
+
+You'll be prompted to enter the following values:
+
+- **Admin Username**: Type a username for the administrator user account
+- **Admin Password**: Type an administrator password or key
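If you'd rather not be prompted interactively, the values can be passed as deployment parameters. This sketch assumes the Bicep file's parameter names are `adminUsername` and `adminPassword`; verify them against the actual **main.bicep** before using it:

```powershell
# Prompt once for the password as a SecureString, then pass both values explicitly.
$password = Read-Host -AsSecureString -Prompt "Admin password"
New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep `
    -adminUsername "azureuser" -adminPassword $password
```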
+
+When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Review deployed resources
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to validate the deployment and review the deployed resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+To learn about the Bicep syntax and properties for a firewall in a Bicep file, see [Microsoft.Network azureFirewalls template reference](/azure/templates/microsoft.network/azurefirewalls?pivots=deployment-language-bicep).
+
+## Clean up resources
+
+When you no longer need them, use the Azure portal, Azure CLI, or Azure PowerShell to remove the resource group, firewall, and all related resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Tutorial: Deploy and configure Azure Firewall in a hybrid network using the Azure portal](tutorial-hybrid-portal.md)
firewall Quick Create Ipgroup Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/quick-create-ipgroup-template.md
The template used in this quickstart is from [Azure Quickstart Templates](https:
Multiple Azure resources are defined in the template:

-- [**Microsoft.Network/ipGroups**](/azure/templates/microsoft.network/ipGroups)
-- [**Microsoft.Storage/storageAccounts**](/azure/templates/microsoft.storage/storageAccounts)
-- [**Microsoft.Network/routeTables**](/azure/templates/microsoft.network/routeTables)
-- [**Microsoft.Network/networkSecurityGroups**](/azure/templates/microsoft.network/networksecuritygroups)
-- [**Microsoft.Network/virtualNetworks**](/azure/templates/microsoft.network/virtualnetworks)
-- [**Microsoft.Network/publicIPAddresses**](/azure/templates/microsoft.network/publicipaddresses)
-- [**Microsoft.Network/networkInterfaces**](/azure/templates/microsoft.network/networkinterfaces)
-- [**Microsoft.Compute/virtualMachines**](/azure/templates/microsoft.compute/virtualmachines)
-- [**Microsoft.Network/azureFirewalls**](/azure/templates/microsoft.network/azureFirewalls)
+- [**Microsoft.Network/ipGroups**](/azure/templates/microsoft.network/ipGroups?pivots=deployment-language-arm-template)
+- [**Microsoft.Storage/storageAccounts**](/azure/templates/microsoft.storage/storageAccounts?pivots=deployment-language-arm-template)
+- [**Microsoft.Network/routeTables**](/azure/templates/microsoft.network/routeTables?pivots=deployment-language-arm-template)
+- [**Microsoft.Network/networkSecurityGroups**](/azure/templates/microsoft.network/networksecuritygroups?pivots=deployment-language-arm-template)
+- [**Microsoft.Network/virtualNetworks**](/azure/templates/microsoft.network/virtualnetworks?pivots=deployment-language-arm-template)
+- [**Microsoft.Network/publicIPAddresses**](/azure/templates/microsoft.network/publicipaddresses?pivots=deployment-language-arm-template)
+- [**Microsoft.Network/networkInterfaces**](/azure/templates/microsoft.network/networkinterfaces?pivots=deployment-language-arm-template)
+- [**Microsoft.Compute/virtualMachines**](/azure/templates/microsoft.compute/virtualmachines?pivots=deployment-language-arm-template)
+- [**Microsoft.Network/azureFirewalls**](/azure/templates/microsoft.network/azureFirewalls?pivots=deployment-language-arm-template)
## Deploy the template
firewall Quick Create Multiple Ip Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/quick-create-multiple-ip-template.md
Title: 'Quickstart: Create an Azure Firewall with multiple public IP addresses - Resource Manager template'
-description: In this quickstart, you learn how to use a Azure Resource Manager template (ARM template) to create an Azure Firewall with multiple public IP addresses.
+description: In this quickstart, you learn how to use an Azure Resource Manager template (ARM template) to create an Azure Firewall with multiple public IP addresses.
The template used in this quickstart is from [Azure Quickstart Templates](https:
Multiple Azure resources are defined in the template:

-- [**Microsoft.Network/networkSecurityGroups**](/azure/templates/microsoft.network/networksecuritygroups)
-- [**Microsoft.Network/publicIPPrefix**](/azure/templates/microsoft.network/publicipprefixes)
-- [**Microsoft.Network/publicIPAddresses**](/azure/templates/microsoft.network/publicipaddresses)
-- [**Microsoft.Network/virtualNetworks**](/azure/templates/microsoft.network/virtualnetworks)
-- [**Microsoft.Compute/virtualMachines**](/azure/templates/microsoft.compute/virtualmachines)
-- [**Microsoft.Storage/storageAccounts**](/azure/templates/microsoft.storage/storageAccounts)
-- [**Microsoft.Network/networkInterfaces**](/azure/templates/microsoft.network/networkinterfaces)
-- [**Microsoft.Network/azureFirewalls**](/azure/templates/microsoft.network/azureFirewalls)
-- [**Microsoft.Network/routeTables**](/azure/templates/microsoft.network/routeTables)
+- [**Microsoft.Network/networkSecurityGroups**](/azure/templates/microsoft.network/networksecuritygroups?pivots=deployment-language-arm-template)
+- [**Microsoft.Network/publicIPPrefix**](/azure/templates/microsoft.network/publicipprefixes?pivots=deployment-language-arm-template)
+- [**Microsoft.Network/publicIPAddresses**](/azure/templates/microsoft.network/publicipaddresses?pivots=deployment-language-arm-template)
+- [**Microsoft.Network/virtualNetworks**](/azure/templates/microsoft.network/virtualnetworks?pivots=deployment-language-arm-template)
+- [**Microsoft.Compute/virtualMachines**](/azure/templates/microsoft.compute/virtualmachines?pivots=deployment-language-arm-template)
+- [**Microsoft.Storage/storageAccounts**](/azure/templates/microsoft.storage/storageAccounts?pivots=deployment-language-arm-template)
+- [**Microsoft.Network/networkInterfaces**](/azure/templates/microsoft.network/networkinterfaces?pivots=deployment-language-arm-template)
+- [**Microsoft.Network/azureFirewalls**](/azure/templates/microsoft.network/azureFirewalls?pivots=deployment-language-arm-template)
+- [**Microsoft.Network/routeTables**](/azure/templates/microsoft.network/routeTables?pivots=deployment-language-arm-template)
## Deploy the template
Deploy the ARM template to Azure:
In the Azure portal, review the deployed resources. Note the firewall public IP addresses.
-Use Remote Desktop Connection to connect to the firewall public IP addresses. Successful connections demonstrates firewall NAT rules that allow the connection to the backend servers.
+Use Remote Desktop Connection to connect to the firewall public IP addresses. Successful connections demonstrate firewall NAT rules that allow the connection to the backend servers.
## Clean up resources
-When you no longer need the resources that you created with the firewall, delete the resource group. This removes the firewall and all the related resources.
+When you no longer need the resources that you created with the firewall, delete the resource group. Deleting the resource group removes the firewall and all the related resources.
To delete the resource group, call the `Remove-AzResourceGroup` cmdlet:
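For example, assuming the resource group is named `exampleRG`:

```powershell
# Delete the resource group; this removes the firewall and all related resources.
Remove-AzResourceGroup -Name "exampleRG" -Force
```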
frontdoor Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/billing.md
+
+ Title: Understand Azure Front Door billing
+description: Learn how you're billed when you use Azure Front Door.
+++++ Last updated : 08/25/2022+++
+# Understand Azure Front Door billing
+
+Azure Front Door provides a rich set of features for your internet-facing workloads. Front Door helps you to accelerate your application's performance, improves your security, and provides you with tools to inspect and modify your HTTP traffic.
+
+Front Door's billing model includes several components. Front Door charges a base fee for each profile that you deploy. You're also charged for requests and data transfer based on your usage. *Billing meters* collect information about your Front Door usage. Your monthly Azure bill aggregates the billing information across the month and applies the pricing to determine the amount you need to pay.
+
+This article explains how Front Door pricing works so that you can understand and predict your monthly Azure Front Door bill.
+
+For Azure Front Door pricing information, see [Azure Front Door pricing](https://azure.microsoft.com/pricing/details/frontdoor/).
+
+> [!TIP]
+> The Azure pricing calculator helps you to calculate a pricing estimate for your requirements. Use the [pre-created pricing calculator estimate](https://azure.com/e/bdc0d6531fbb4760bf5cdd520af1e4cc?azure-portal=true) as a starting point, and customize it for your own solution.
+
+> [!NOTE]
+> This article explains how billing works for Azure Front Door Standard and Premium SKUs. For information about Azure Front Door (classic), see [Azure Front Door pricing](https://azure.microsoft.com/pricing/details/frontdoor/).
+
+## Base fees
+
+Each Front Door profile incurs an hourly fee. You're billed for each hour, or partial hour, that your profile is deployed. The rate you're charged depends on the Front Door SKU that you deploy.
+
+A single Front Door profile can contain multiple [endpoints](endpoint.md). You're not billed extra for each endpoint.
+
+You don't pay extra fees to use features like [traffic acceleration](front-door-traffic-acceleration.md), [response caching](front-door-caching.md), [response compression](front-door-caching.md#file-compression), the [rules engine](front-door-rules-engine.md), [Front Door's inherent DDoS protection](front-door-ddos.md), and [custom web application firewall (WAF) rules](web-application-firewall.md#custom-rules). If you use Front Door Premium, you also don't pay extra fees to use [managed WAF rule sets](web-application-firewall.md#managed-rules) or [Private Link origins](private-link.md).
+
+## Request processing and traffic fees
+
+Each request that goes through Front Door incurs request processing and traffic fees:
++
+Each part of the request process is billed separately:
+
+1. Number of requests from client to Front Door
+1. Data transfer from Front Door edge to origin
+1. Data transfer from origin to Front Door (non-billable)
+1. Data transfer from Front Door to client
+
+The following sections describe each of these request components in more detail.
+
+### Number of requests from client to Front Door
+
+Front Door charges a fee for the number of requests that are received at a Front Door edge location for your profile. Front Door identifies requests by using the `Host` header on the HTTP request. If the `Host` header matches one from your Front Door profile, it counts as a request to your profile.
+
+The price is different depending on the geographical region of the Front Door edge location that serves the request. The price is also different for the Standard and Premium SKUs.
+
+### Data transfer from Front Door edge to origin
+
+Front Door charges for the bytes that are sent from the Front Door edge location to your origin server. The price is different depending on the geographical region of the Front Door edge location that serves the request. The location of the origin doesn't affect the price.
+
+The price per gigabyte is lower when you have higher volumes of traffic.
+
+If the request can be served from the Front Door edge location's cache, Front Door doesn't send any request to the origin server, and you aren't billed for this component.
+
+### Data transfer from origin to Front Door
+
+When your origin server processes a request, it sends data back to Front Door so that it can be returned to the client. This traffic isn't billed by Front Door, even if the origin is in a different region from the Front Door edge location for the request.
+
+If your origin is within Azure, the data egress from the Azure origin to Front Door isn't charged. However, you should determine whether the Azure services that host your origin might bill you for processing your requests.
+
+If your origin is outside of Azure, you might incur charges from other network providers.
+
+### Data transfer from Front Door to client
+
+Front Door charges for the bytes that are sent from the Front Door edge location back to the client. The price is different depending on the geographical region of the Front Door edge location that serves the request.
+
+If a response is compressed, Front Door only charges for the compressed data.
+
+## Private Link origins
+
+When you use the Premium SKU, Front Door can [connect to your origin by using Private Link](private-link.md).
+
+Front Door Premium has a higher base fee and request processing fee. You don't pay extra for Private Link traffic compared to traffic that uses an origin's public endpoint.
+
+When you configure a Private Link origin, you select a region for the private endpoint to use. A [subset of Azure regions supports Private Link traffic for Front Door](private-link.md#region-availability). If the region you select is different from the region the origin is deployed to, you won't be charged extra for cross-region traffic. However, the request latency will likely be greater.
+
+## Cross-region traffic
+
+Some of the Front Door billing meters have different rates depending on the location of the Front Door edge location that processes a request. Usually, [the Front Door edge location that processes a request is the one that's closest to the client](front-door-traffic-acceleration.md#select-the-front-door-edge-location-for-the-request-anycast), which helps to reduce latency and maximize performance.
+
+Front Door charges for traffic from the edge location to the origin. Traffic is charged at different rates depending on the location of the Front Door edge location. If your origin is in a different Azure region, you aren't billed extra for inter-region traffic.
+
+## Example scenarios
+
+### Example 1: Azure origin without caching
+
+Contoso hosts their website on Azure App Service, which runs in the West US region. Contoso has deployed Front Door with the standard SKU. They have disabled caching.
+
+Suppose a request from a client in California is sent to the Contoso website, sending a 1 KB request and receiving a 100 KB response:
++
+The following billing meters are incremented:
+
+| Meter | Incremented by | Billing region |
+|-|-|-|
+| Number of requests from client to Front Door | 1 | North America |
+| Data transfer from Front Door edge to origin | 1 KB | North America |
+| Data transfer from Front Door to client | 100 KB | North America |
+
+Azure App Service might charge other fees.
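To see how meters like these roll up into a monthly estimate, here's a back-of-the-envelope sketch. Every rate below is a placeholder, not a real price; use the Azure pricing calculator for actual rates:

```powershell
# Hypothetical monthly usage.
$requests         = 10e6    # client-to-Front Door requests
$egressToOriginGB = 10      # GB from Front Door edge to origin
$egressToClientGB = 1000    # GB from Front Door edge to clients

# Placeholder rates -- substitute real values from the pricing calculator.
$baseFee            = 35      # USD per profile per month
$ratePer10kRequests = 0.009   # USD per 10,000 requests
$ratePerGBToOrigin  = 0.02    # USD per GB
$ratePerGBToClient  = 0.08    # USD per GB

$total = $baseFee +
         ($requests / 10000) * $ratePer10kRequests +
         $egressToOriginGB * $ratePerGBToOrigin +
         $egressToClientGB * $ratePerGBToClient
"Estimated monthly cost: {0:N2} USD" -f $total
```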
+
+### Example 2: Azure origin with compression enabled
+
+Suppose Contoso updates their Front Door configuration to enable [content compression](front-door-caching.md#file-compression). Now, the same response as in example 1 might compress down to 30 KB:
++
+The following billing meters are incremented:
+
+| Meter | Incremented by | Billing region |
+|-|-|-|
+| Number of requests from client to Front Door | 1 | North America |
+| Data transfer from Front Door edge to origin | 1 KB | North America |
+| Data transfer from Front Door to client | 30 KB | North America |
+
+Azure App Service might charge other fees.
+
+### Example 3: Request served from cache
+
+Suppose a second request arrives at the same Front Door edge location and a valid cached response is available:
++
+The following billing meters are incremented:
+
+| Meter | Incremented by | Billing region |
+|-|-|-|
+| Number of requests from client to Front Door | 1 | North America |
+| Data transfer from Front Door edge to origin | *none when request is served from cache* | |
+| Data transfer from Front Door to client | 30 KB | North America |
+
+### Example 4: Cross-region traffic
+
+Suppose a request to Contoso's website comes from a client in Australia, and can't be served from cache:
++
+The following billing meters are incremented:
+
+| Meter | Incremented by | Billing region |
+|-|-|-|
+| Number of requests from client to Front Door | 1 | Australia |
+| Data transfer from Front Door edge to origin | 1 KB | Australia |
+| Data transfer from Front Door to client | 30 KB | Australia |
+
+### Example 5: Non-Azure origin
+
+Fabrikam runs an eCommerce site on another cloud provider. Their site is hosted in Europe. They use Azure Front Door to serve the traffic. They haven't enabled caching or compression.
+
+Suppose a request is sent to the Fabrikam website from a client in New York. The client sends a 2 KB request and receives a 350 KB response:
++
+The following billing meters are incremented:
+
+| Meter | Incremented by | Billing region |
+|-|-|-|
+| Number of requests from client to Front Door | 1 | North America |
+| Data transfer from Front Door edge to origin | 2 KB | North America |
+| Data transfer from Front Door to client | 350 KB | North America |
+
+The external cloud provider might charge other fees.
+
+### Example 6: Request blocked by web application firewall
+
+When a request is blocked by the web application firewall (WAF), it isn't sent to the origin. However, Front Door charges for the request, and also charges for sending the response.
+
+Suppose a Front Door profile includes a custom WAF rule to block requests from a specific IP address in South America. The WAF is configured with a custom error response page, which is 1 KB in size. If a client from the blocked IP address sends a 1 KB request:
++
+The following billing meters are incremented:
+
+| Meter | Incremented by | Billing region |
+|-|-|-|
+| Number of requests from client to Front Door | 1 | South America |
+| Data transfer from Front Door edge to origin | *none* | South America |
+| Data transfer from Front Door to client | 1 KB | South America |
+
+## Next steps
+
+Learn how to [create a Front Door profile](create-front-door-portal.md).
hdinsight Apache Hadoop Run Samples Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-run-samples-linux.md
description: Get started using MapReduce samples in jar files included in HDInsi
Previously updated : 12/12/2019 Last updated : 08/26/2022 # Run the MapReduce examples included in HDInsight
hdinsight Apache Hbase Migrate New Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-migrate-new-version.md
description: Learn how to migrate Apache HBase clusters in Azure HDInsight to a
Previously updated : 05/06/2021 Last updated : 08/26/2022 # Migrate an Apache HBase cluster to a new version
hdinsight Hdinsight 50 Component Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-50-component-versioning.md
Last updated 08/25/2022
In this article, you learn about the open-source components and their versions in Azure HDInsight 5.0.
-Starting June 1, 2022, we have started rolling out a new version of HDInsight 5.0, this version is backward compatible with HDInsight 4.0. All new open-source releases will be added as incremental releases on HDInsight 5.0.
+Starting June 1, 2022, we started rolling out a new version of HDInsight 5.0. This version is backward compatible with HDInsight 4.0. All new open-source releases will be added as incremental releases on HDInsight 5.0.
## Open-source components available with HDInsight version 5.0
The Open-source component versions associated with HDInsight 5.0 are listed in t
| Component | HDInsight 5.0 | HDInsight 4.0 |
| -- | -- | -- |
-|Apache Spark | 3.1.2 | 2.4.4, 3.1 |
+|Apache Spark | 3.1.2 | 2.4.4|
|Apache Hive | 3.1.2 | 3.1.2 |
-|Apache Kafka | 2.4.1 | 2.1.1, 2.4.1(Preview) |
+|Apache Kafka | 2.4.1 | 2.1.1|
|Apache Hadoop |3.1.1 | 3.1.1 |
-|Apache Tez | 0.9.1 | 0.9.1 |
-|Apache Pig | 0.16.0 | 0.16.1 |
+|Apache Tez |0.9.1 | 0.9.1 |
+|Apache Pig | 0.16.1 | 0.16.1 |
|Apache Ranger | 1.1.0 | 1.1.0 |
|Apache Sqoop | 1.5.0 | 1.5.0 |
|Apache Oozie | 4.3.1 | 4.3.1 |
you need to select this version Interactive Query 3.1 (HDI 5.0).
## Kafka
-**Known Issue** – Current ARM template supports only 4.0 even though it shows 5.0 image in portal Cluster creation may fail with the following error message if you select version 5.0 in the UI.
+**Known Issue –** The current ARM template supports only 4.0 even though it shows the 5.0 image in the portal. Cluster creation may fail with the following error message if you select version 5.0 in the UI.
`HDI Version'5.0" is not supported for clusterType ''Kafka" and component Version ΓÇÿ2.4'.,Cluster component version is not applicable for HDI version: 5.0 cluster type: KAFKA (Code: BadRequest)` We're working on this issue, and a fix will be rolled out shortly. ### Upcoming version upgrades.
-HDInsight team is working on upgrading other open-source components.
+HDInsight team is working on upgrading other open-source components.
+
1. Spark 3.2.0
1. Kafka 3.2.1
-1. HBase 2.4.9
+1. HBase 2.4.11
## Next steps

- [Cluster setup for Apache Hadoop, Spark, and more on HDInsight](hdinsight-hadoop-provision-linux-clusters.md)
- [Enterprise Security Package](./enterprise-security-package.md)
- [Work in Apache Hadoop on HDInsight from a Windows PC](hdinsight-hadoop-windows-tools.md)
+
hdinsight Hdinsight Apps Install Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-apps-install-applications.md
description: Learn how to install third-party Apache Hadoop applications on Azur
Previously updated : 06/17/2019 Last updated : 08/26/2022 # Install third-party Apache Hadoop applications on Azure HDInsight
hdinsight Hdinsight Delete Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-delete-cluster.md
description: Information on the various ways that you can delete an Azure HDInsi
Previously updated : 11/29/2019 Last updated : 08/26/2022 # Delete an HDInsight cluster using your browser, PowerShell, or the Azure CLI
hdinsight Apache Kafka Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-quickstart-powershell.md
description: In this quickstart, you learn how to create an Apache Kafka cluster
Previously updated : 06/12/2019 Last updated : 08/26/2022 #Customer intent: I need to create a Kafka cluster so that I can use it to process streaming data
hdinsight Apache Kafka Quickstart Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-quickstart-resource-manager-template.md
description: In this quickstart, you learn how to create an Apache Kafka cluster
Previously updated : 03/13/2020 Last updated : 08/26/2022 #Customer intent: I need to create a Kafka cluster so that I can use it to process streaming data
hdinsight Optimize Hive Ambari https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/optimize-hive-ambari.md
Title: Optimize Apache Hive with Apache Ambari in Azure HDInsight
description: Use the Apache Ambari web UI to configure and optimize Apache Hive. Previously updated : 05/04/2020 Last updated : 08/26/2022 # Optimize Apache Hive with Apache Ambari in Azure HDInsight
hdinsight Apache Spark Perf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-perf.md
Title: Optimize Spark jobs for performance - Azure HDInsight
description: Show common strategies for the best performance of Apache Spark clusters in Azure HDInsight. Previously updated : 08/21/2020 Last updated : 08/26/2022 # Optimize Apache Spark applications in HDInsight
hdinsight Apache Spark Structured Streaming Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-structured-streaming-overview.md
description: How to use Spark Structured Streaming applications on HDInsight Spa
Previously updated : 12/24/2019 Last updated : 08/26/2022 # Overview of Apache Spark Structured Streaming
healthcare-apis Find Identity Object Ids https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/find-identity-object-ids.md
$(Get-AzureADUser -Filter "UserPrincipalName eq 'myuser@contoso.com'").ObjectId
or you can use the Azure CLI:

```azurecli-interactive
-az ad user show --id myuser@contoso.com --query objectId --out tsv
+az ad user show --id myuser@contoso.com --query id --out tsv
```

## Find service principal object ID
$(Get-AzureADServicePrincipal -Filter "DisplayName eq 'testapp'").ObjectId
If you're using the Azure CLI, you can use:

```azurecli-interactive
-az ad sp show --id XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX --query objectId --out tsv
+az ad sp show --id XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX --query id --out tsv
```

## Find a security group object ID
Where `mygroup` is the name of the group you're interested in.
If you're using the Azure CLI, you can use:

```azurecli-interactive
-az ad group show --group "mygroup" --query objectId --out tsv
+az ad group show --group "mygroup" --query id --out tsv
```

## Next steps
healthcare-apis Convert Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/convert-data.md
A `$convert-data` API call packages the health data for conversion inside a JSON
| Parameter Name | Description | Accepted values |
| -- | -- | -- |
| `inputData` | Data payload to be converted to FHIR. | For `Hl7v2`: string <br> For `Ccda`: XML <br> For `Json`: JSON |
-| `inputDataType` | Type of data input. | ```HL7v2```, ``Ccda``, ``Json`` |
+| `inputDataType` | Type of data input. | ```Hl7v2```, ``Ccda``, ``Json`` |
| `templateCollectionReference` | Reference to an [OCI image](https://github.com/opencontainers/image-spec) template collection in [Azure Container Registry (ACR)](https://azure.microsoft.com/services/container-registry/). The reference is to an image containing Liquid templates to use for conversion. This can be a reference either to default templates or a custom template image that is registered within the FHIR service. See below to learn about customizing the templates, hosting them on ACR, and registering to the FHIR service. | For ***default/sample*** templates: <br> **HL7v2** templates: <br>```microsofthealth/fhirconverter:default``` <br>``microsofthealth/hl7v2templates:default``<br> **C-CDA** templates: <br> ``microsofthealth/ccdatemplates:default`` <br> **JSON** templates: <br> ``microsofthealth/jsontemplates:default`` <br><br> For ***custom*** templates: <br> `<RegistryServer>/<imageName>@<imageDigest>`, `<RegistryServer>/<imageName>:<imageTag>` |
| `rootTemplate` | The root template to use while transforming the data. | For **HL7v2**:<br> "ADT_A01", "ADT_A02", "ADT_A03", "ADT_A04", "ADT_A05", "ADT_A08", "ADT_A11", "ADT_A13", "ADT_A14", "ADT_A15", "ADT_A16", "ADT_A25", "ADT_A26", "ADT_A27", "ADT_A28", "ADT_A29", "ADT_A31", "ADT_A47", "ADT_A60", "OML_O21", "ORU_R01", "ORM_O01", "VXU_V04", "SIU_S12", "SIU_S13", "SIU_S14", "SIU_S15", "SIU_S16", "SIU_S17", "SIU_S26", "MDM_T01", "MDM_T02"<br><br> For **C-CDA**:<br> "CCD", "ConsultationNote", "DischargeSummary", "HistoryandPhysical", "OperativeNote", "ProcedureNote", "ProgressNote", "ReferralNote", "TransferSummary" <br><br> For **JSON**: <br> "ExamplePatient", "Stu3ChargeItem" <br> |
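To illustrate how these parameters fit together, here's a hedged sketch of calling `$convert-data` from PowerShell. The `$fhirUrl` and `$token` variables are placeholders you must supply, and the HL7v2 message is truncated for brevity:

```powershell
# Build the FHIR Parameters resource that $convert-data expects.
$body = @{
    resourceType = "Parameters"
    parameter    = @(
        @{ name = "inputData";                   valueString = "MSH|^~\&|..." }  # truncated sample
        @{ name = "inputDataType";               valueString = "Hl7v2" }
        @{ name = "templateCollectionReference"; valueString = "microsofthealth/hl7v2templates:default" }
        @{ name = "rootTemplate";                valueString = "ADT_A01" }
    )
} | ConvertTo-Json -Depth 4

# POST to the $convert-data endpoint (note the escaped $ in the URL).
Invoke-RestMethod -Method Post -Uri "$fhirUrl/`$convert-data" `
    -Headers @{ Authorization = "Bearer $token" } `
    -ContentType "application/json" -Body $body
```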
healthcare-apis Overview Of Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/overview-of-search.md
The Fast Healthcare Interoperability Resources (FHIR&#174;) specification defines an API for querying resources in a FHIR server database. This article will guide you through some key aspects of querying data in FHIR. For complete details about the FHIR search API, refer to the HL7 [FHIR Search](https://www.hl7.org/fhir/search.html) documentation.
-Throughout this article, we'll demonstrate FHIR search syntax in example API calls with the placeholder `{{FHIR_URL}}` to represent the FHIR server URL. In the case of the FHIR service in Azure Health Data Services, this URL would be `https://<WORKSPACE-NAME>-<FHIR-SERVICE-NAME>.fhir.azurehealthcareapis.com`.
+Throughout this article, we'll demonstrate FHIR search syntax in example API calls with the `{{FHIR_URL}}` placeholder to represent the FHIR server URL. In the case of the FHIR service in Azure Health Data Services, this URL would be `https://<WORKSPACE-NAME>-<FHIR-SERVICE-NAME>.fhir.azurehealthcareapis.com`.
FHIR searches can be against a specific resource type, a specified [compartment](https://www.hl7.org/fhir/compartmentdefinition.html), or all resources in the FHIR server database. The simplest way to execute a search in FHIR is to use a `GET` request. For example, if you want to pull all `Patient` resources in the database, you could use the following request:
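```
GET {{FHIR_URL}}/Patient
```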
healthcare-apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/overview.md
# What is the FHIR service in Azure Health Data Services?
-The FHIR service in Azure Health Data Services enables rapid exchange of health data using the Fast Healthcare Interoperability Resources (FHIR®) data standard. Offered as a managed Platform-as-a-Service (PaaS) for the storage and exchange of FHIR data, the FHIR service makes it easy for anyone working with health data to securely manage Protected Health Information ([PHI](https://www.hhs.gov/answers/hipaa/what-is-phi/index.html)) in the cloud.
+The FHIR service in Azure Health Data Services enables rapid exchange of health data using the Fast Healthcare Interoperability Resources (FHIR®) data standard. Offered as a managed Platform-as-a-Service (PaaS), the FHIR service makes it easy for anyone working with health data to securely store and exchange Protected Health Information ([PHI](https://www.hhs.gov/answers/hipaa/what-is-phi/index.html)) in the cloud.
The FHIR service offers the following:
The FHIR service offers the following:
- Secure management of Protected Health Information (PHI) in a compliant cloud environment
- SMART on FHIR for mobile and web clients
- Controlled access to FHIR data at scale with Azure Active Directory-backed Role-Based Access Control (RBAC)
-- Audit log tracking for access, creation, modification, and reads within the FHIR service data store
+- Audit log tracking for access, creation, and modification within the FHIR service data store
The FHIR service allows you to quickly create and deploy a FHIR server in just minutes to leverage the elastic scale of the cloud for ingesting, persisting, and querying FHIR data. The Azure services that power the FHIR service are designed for high performance no matter how much data you're working with.
The FHIR API provisioned in the FHIR service enables any FHIR-compliant system t
## Leveraging the power of your data with FHIR
-The healthcare industry is rapidly adopting [FHIR®](https://hl7.org/fhir) as the industry-wide standard for health data storage, querying, and exchange. FHIR provides a robust, extensible data model with standardized semantics that all FHIR-compliant systems can use interchangeably. With FHIR, organizations can unify disparate electronic health record systems (EHRs) and other health data repositories – allowing for all data to be persisted and exchanged in a single, universal format. With the addition of SMART on FHIR, user-facing mobile and web-based applications can securely interact with FHIR data – opening a new range of possibilities for health data access. Most of all, FHIR simplifies the process of assembling large health datasets for research – providing a path for researchers and clinicians to unlock health insights through machine learning and analytics.
+The healthcare industry is rapidly adopting [FHIR®](https://hl7.org/fhir) as the industry-wide standard for health data storage, querying, and exchange. FHIR provides a robust, extensible data model with standardized semantics that all FHIR-compliant systems can use interchangeably. With FHIR, organizations can unify disparate electronic health record systems (EHRs) and other health data repositories – allowing all data to be persisted and exchanged in a single, universal format. With the addition of SMART on FHIR, user-facing mobile and web-based applications can securely interact with FHIR data – opening a new range of possibilities for patient and provider access to PHI. Most of all, FHIR simplifies the process of assembling large health datasets for research – enabling researchers and clinicians to apply machine learning and analytics at scale for gaining new health insights.
### Securely manage health data in the cloud
-The FHIR service in Azure Health Data Services makes FHIR data available to clients through a FHIR RESTful API – an implementation of the HL7 FHIR API specification. Provisioned as a managed PaaS offering in Azure, the FHIR service gives organizations a scalable and secure environment for the storage and exchange of Protected Health Information (PHI) in the native FHIR format.
+The FHIR service in Azure Health Data Services makes FHIR data available to clients through a RESTful API. This API is an implementation of the HL7 FHIR API specification. As a managed PaaS offering in Azure, the FHIR service gives organizations a scalable and secure environment for the storage and exchange of Protected Health Information (PHI) in the native FHIR format.
### Free up your resources to innovate
-You could invest resources building and running your own FHIR server, but with the FHIR service in Azure Health Data Services, Microsoft handles setting up the server's components, ensuring all compliance requirements are met so you can focus on building innovative solutions.
+You could invest resources building and maintaining your own FHIR server, but with the FHIR service in Azure Health Data Services, Microsoft handles setting up the server's components, ensuring all compliance requirements are met so you can focus on building innovative solutions.
### Enable interoperability with FHIR
With the FHIR service, you control your data – at scale. The FHIR service's Ro
### Secure your data
-As part of the Azure family of services, the FHIR service protects your organization's PHI with an unparalleled level of security. In Azure Health Data Services, your FHIR data is isolated to a unique database per FHIR service instance and protected with multi-region failover. On top of this, FHIR service implements a layered, in-depth defense and advanced threat protection for your data – giving you peace of mind that your organization's PHI is guarded by Azure's industry-leading security.
+As part of the Azure family of services, the FHIR service protects your organization's PHI with an unparalleled level of security. In Azure Health Data Services, your FHIR data is isolated to a unique database per FHIR service instance and protected with multi-region failover. On top of this, FHIR service implements a layered, in-depth defense and advanced threat detection for your data – giving you peace of mind that your organization's PHI is guarded by Azure's industry-leading security.
## Applications for the FHIR service

FHIR servers are essential for interoperability of health data. The FHIR service is designed as a managed FHIR server with a RESTful API for connecting to a broad range of client systems and applications. Some of the key use cases for the FHIR service are listed below:

-- **Startup App Development:** Customers developing a patient- or provider-centric app (mobile or web) can leverage FHIR service as a fully managed backend for their health data transactions. The FHIR service enables secure transfer of PHI, and with SMART on FHIR, app developers can take advantage of the robust identities management in Azure AD for authorization of FHIR RESTful API actions.
+- **Startup App Development:** Customers developing a patient- or provider-centric app (mobile or web) can leverage FHIR service as a fully managed backend for health data transactions. The FHIR service enables secure transfer of PHI, and with SMART on FHIR, app developers can take advantage of the robust identity management in Azure AD for authorization of FHIR RESTful API actions.
- **Healthcare Ecosystems:** While EHRs exist as the primary 'source of truth' in many clinical settings, it isn't uncommon for providers to have multiple databases that aren't connected to one another (often because the data is stored in different formats). Utilizing the FHIR service as a conversion layer between these systems allows organizations to standardize data in the FHIR format. Ingesting and persisting in FHIR enables health data querying and exchange across multiple disparate systems.

-- **Research:** Health researchers have embraced the FHIR standard as it gives the community a shared data model and removes barriers to assembling large datasets for machine learning and analytics. With the FHIR service's data conversion and PHI de-identification capabilities, researchers can prepare HIPAA-compliant secondary-use data before sending it to Azure machine learning and analytics pipelines. The FHIR service's audit logging and alert mechanisms also play an important role in research workflows.
+- **Research:** Health researchers have embraced the FHIR standard as it gives the community a shared data model and removes barriers to assembling large datasets for machine learning and analytics. With the FHIR service's data conversion and PHI de-identification capabilities, researchers can prepare HIPAA-compliant data for secondary use before sending the data to Azure machine learning and analytics pipelines. The FHIR service's audit logging and alert mechanisms also play an important role in research workflows.
## FHIR platforms from Microsoft

FHIR capabilities from Microsoft are available in three configurations:
-* The **FHIR service** is a managed platform as a service (PaaS) that operates as part of Azure Health Data Services. In addition to the FHIR service, Azure Health Data Services includes managed services for other types of health data, such as the DICOM service for medical imaging data and the MedTech service for medical IoT data. All services (FHIR service, DICOM service, and MedTech service) can be connected and administered within an Azure Health Data Services workspace.
+* The **FHIR service** is a managed platform as a service (PaaS) that operates as part of Azure Health Data Services. In addition to the FHIR service, Azure Health Data Services includes managed services for other types of health data such as the DICOM service for medical imaging data and the MedTech service for medical IoT data. All services (FHIR service, DICOM service, and MedTech service) can be connected and administered within an Azure Health Data Services workspace.
* **Azure API for FHIR** is a managed FHIR server offered as a PaaS in Azure – easily provisioned in the Azure portal. Azure API for FHIR is not part of Azure Health Data Services and lacks some of the features of the FHIR service.
* **FHIR Server for Azure**, an open-source FHIR server that can be deployed into your Azure subscription, is available on GitHub at https://github.com/Microsoft/fhir-server.
-For use cases that require customizing a FHIR server or that require access to the underlying services – such as access to the database without going through the FHIR API, developers should choose the open-source FHIR Server for Azure. For implementation of a turnkey, production-ready FHIR API with a provisioned database backend (i.e., data can only be accessed through the FHIR API - not the database directly), developers should choose the FHIR service.
+For use cases that require customizing a FHIR server with admin access to the underlying services (e.g., access to the database without going through the FHIR API), developers should choose the open-source FHIR Server for Azure. For implementation of a turnkey, production-ready FHIR API with a provisioned database backend (i.e., data can only be accessed through the FHIR API - not the database directly), developers should choose the FHIR service.
## Next Steps
healthcare-apis Iot Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-connector-overview.md
In order to implement MedTech service, you need to have an Azure subscription an
### Devices
-When the PaaS deployment is completed, high-velocity and low-velocity patient medical data can be collected from a wide range of JSON-compatible IoMT devices, systems, and formats.
+After the PaaS deployment is completed, high-velocity and low-velocity patient medical data can be collected from a wide range of JSON-compatible IoMT devices, systems, and formats.
### Event Hubs service
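As a hedged sketch (not part of the original article), the event hub that the MedTech service reads device messages from can be provisioned with the Azure CLI; the namespace and hub names below are placeholders:

```azurecli
# Create an Event Hubs namespace and an event hub to receive JSON IoMT device messages (placeholder names)
az eventhubs namespace create --resource-group my-rg --name my-iomt-ns --location eastus
az eventhubs eventhub create --resource-group my-rg --namespace-name my-iomt-ns --name devicedata
```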
iot-central Tutorial Create Telemetry Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/tutorial-create-telemetry-rules.md
To create a telemetry rule, the device template must include at least one teleme
1. In the left pane, select **Rules**.
-1. If you haven't created any rules yet, you see the following screen:
-
- :::image type="content" source="media/tutorial-create-telemetry-rules/rules-landing-page.png" alt-text="Screenshot that shows the empty list of rules":::
- 1. Select **+ New** to add a new rule. 1. Enter the name _Temperature monitor_ to identify the rule and press Enter.
Conditions define the criteria that the rule monitors. In this tutorial, you con
1. Next, choose **Is greater than** as the **Operator** and enter _70_ as the **Value**.
- :::image type="content" source="media/tutorial-create-telemetry-rules/condition-filled-out.png" alt-text="Screenshot that shows the temperature condition for the rule":::
- 1. Optionally, you can set a **Time aggregation**. When you select a time aggregation, you must also select an aggregation type, such as average or sum from the aggregation drop-down. * Without aggregation, the rule triggers for each telemetry data point that meets the condition. For example, if you configure the rule to trigger when temperature is above 70 then the rule triggers almost instantly when the device temperature exceeds this value.
iot-central Tutorial Use Device Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/tutorial-use-device-groups.md
To analyze the telemetry for a device group:
1. Choose **Data explorer** on the left pane and select **Create a query**.
-1. Select the **Contoso devices** device group you created. Then add both the **Temperature** and **Humidity** telemetry types:
-
- :::image type="content" source="media/tutorial-use-device-groups/create-analysis.png" alt-text="Screenshot that shows the telemetry types selected for analysis":::
+1. Select the **Contoso devices** device group you created. Then add both the **Temperature** and **Humidity** telemetry types.
Use the ellipsis icons next to the telemetry types to select an aggregation type. The default is **Average**. Use **Group by** to change how the aggregate data is shown. For example, if you split by device ID, you see a plot for each device when you select **Analyze**.
-1. Select **Analyze** to view the average telemetry values:
-
- :::image type="content" source="media/tutorial-use-device-groups/view-analysis.png" alt-text="Screenshot that shows average values for all the Contoso devices":::
+1. Select **Analyze** to view the average telemetry values.
You can customize the view, change the time period shown, and export the data as CSV or view the data as a table.
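If you'd rather run the same aggregation programmatically, the sketch below assumes the `azure-iot` CLI extension's IoT Central query command and a placeholder app ID and device template model ID; adjust the query to your own telemetry names:

```azurecli
# Average temperature and humidity over the past day (app ID and model ID are placeholders)
az iot central query --app-id <your-app-id> \
  --query-string "SELECT AVG(temperature), AVG(humidity) FROM dtmi:contoso:mydevice;1 WHERE WITHIN_WINDOW(P1D)"
```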
iot-central Tutorial Connected Waste Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/government/tutorial-connected-waste-management.md
The dashboard consists of different tiles:
* **Waste monitoring area map**: This tile uses Azure Maps, which you can configure directly in Azure IoT Central. The map tile displays device [location](../core/howto-use-location-data.md). Hover over the map and try its controls, like zoom in, zoom out, or expand.
- :::image type="content" source="media/tutorial-connectedwastemanagement/connected-waste-management-dashboard-map.png" alt-text="Screenshot of Connected Waste Management Template Dashboard map.":::
--- * **Fill, odor, weight level bar chart**: You can visualize one or more kinds of device telemetry data in a bar chart. You can also expand the bar chart. :::image type="content" source="media/tutorial-connectedwastemanagement/connected-waste-management-dashboard-bar-chart.png" alt-text="Screenshot of Connected Waste Management Template Dashboard bar chart.":::
To view the device template:
1. In Azure IoT Central, from the left pane of your app, select **Device templates**.
- :::image type="content" source="media/tutorial-connectedwastemanagement/connected-waste-management-device-template.png" alt-text="Screenshot showing the list of device templates in the application.":::
-- 1. In the **Device templates** list, select **Connected Waste Bin**. 1. Examine the device template capabilities. You can see that it defines sensors like **Fill level**, **Odor meter**, **Weight**, and **Location**.
The Connected waste management application has two simulated devices associated
### View the devices
-1. From the left pane of Azure IoT Central, select **Device**.
-
- :::image type="content" source="media/tutorial-connectedwastemanagement/connected-waste-management-devices.png" alt-text="Screenshot of Connected Waste Management Template devices.":::
-
+1. From the left pane of Azure IoT Central, select **Device**.
1. Select the **Connected Waste Bin** device. :::image type="content" source="media/tutorial-connectedwastemanagement/connected-waste-management-devices-bin-1.png" alt-text="Screenshot of Connected Waste Management Template Device Properties."::: - Explore the **Device Properties** and **Device Dashboard** tabs. > [!NOTE]
The Connected waste management application has four sample rules.
1. From the left pane of Azure IoT Central, select **Rules**.
- :::image type="content" source="media/tutorial-connectedwastemanagement/connected-waste-management-rules.png" alt-text="Screenshot of Connected Waste Management Template Rules.":::
-- 1. Select **Bin full alert**. :::image type="content" source="media/tutorial-connectedwastemanagement/connected-waste-management-bin-full-alert.png" alt-text="Screenshot of Bin full alert.":::
iot-central Tutorial Water Consumption Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/government/tutorial-water-consumption-monitoring.md
An active Azure subscription. If you don't have an Azure subscription, create a
Create the application using the following steps:
-1. Navigate to the [Azure IoT Central Build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account. Select **Build** from the left-hand navigation bar and then select the **Government** tab:
- :::image type="content" source="media/tutorial-waterconsumptionmonitoring/iot-central-government-tab-overview1.png" alt-text="Application template":::
+1. Navigate to the [Azure IoT Central Build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account. Select **Build** from the left-hand navigation bar and then select the **Government** tab.
1. Select **Create app** under **Water consumption monitoring**.
The dashboard consists of different kinds of tiles:
* **Water distribution area map**: The map uses Azure Maps, which you can configure directly in Azure IoT Central. The map tile displays the device [location](../core/howto-use-location-data.md). Hover over the map and try its controls, like *zoom in*, *zoom out*, or *expand*.
- :::image type="content" source="media/tutorial-waterconsumptionmonitoring/water-consumption-monitoring-dashboard-map.png" alt-text="Water consumption monitoring dashboard map":::
- * **Average water flow line chart** and **Environmental condition line chart**: You can visualize one or multiple device telemetries plotted as a line chart over a desired time range. * **Average valve pressure heatmap chart**: You can choose the heatmap visualization type of device telemetry data you want to see distributed over a time range with a color index. * **Reset alert thresholds content tile**: You can include call-to-action content tiles and embed a link to an action page. In this case, the reset alert threshold takes you to the application **Jobs**, where you can run updates to device properties. You'll explore this option later in the [Configure jobs](../government/tutorial-water-consumption-monitoring.md#configure-jobs) section of this tutorial.
You can customize views in the dashboard for operators.
1. Select **Edit** to customize the **Wide World water consumption dashboard**. After the dashboard is in edit mode, you can add new tiles or configure existing ones.
- :::image type="content" source="media/tutorial-waterconsumptionmonitoring/water-consumption-monitoring-edit-dashboard.png" alt-text="Edit dashboard":::
- To learn more, see [Create and customize dashboards](../core/howto-manage-dashboards.md). ## Explore the device template
To view the device template:
1. Select **Device templates** on the left pane of your application in Azure IoT Central. In the **Device templates** list, you'll see two device templates, **Smart Valve** and **Flow meter**.
- ![Device template](./media/tutorial-waterconsumptionmonitoring/water-consumption-monitoring-device-template.png)
- 1. Select the **Flow meter** device template, and familiarize yourself with the device capabilities. ![Device template Flow meter](./media/tutorial-waterconsumptionmonitoring/water-consumption-monitoring-device-template-flow-meter.png)
In Azure IoT Central, you can create simulated devices to test your device templ
1. Select **Devices** > **All devices** on the left pane.
- :::image type="content" source="media/tutorial-waterconsumptionmonitoring/water-consumption-monitoring-devices.png" alt-text="All devices pane":::
- 1. Select **Smart Valve 1**. :::image type="content" source="media/tutorial-waterconsumptionmonitoring/water-consumption-monitor-device-1.png" alt-text="Smart Valve 1":::
The water consumption monitoring application you created has three preconfigured
1. Select **Rules** on the left pane.
- :::image type="content" source="media/tutorial-waterconsumptionmonitoring/water-consumption-monitoring-rules.png" alt-text="Rules pane":::
- 1. Select **High water flow alert**, which is one of the preconfigured rules in the application. :::image type="content" source="media/tutorial-waterconsumptionmonitoring/water-consumption-monitoring-high-flow-alert.png" alt-text="High pH alert":::
iot-central Tutorial Water Quality Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/government/tutorial-water-quality-monitoring.md
An active Azure subscription. If you don't have an Azure subscription, create a
Create the application using the following steps:
-1. Navigate to the [Azure IoT Central Build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account. Select **Build** from the left-hand navigation bar and then select the **Government** tab:
- :::image type="content" source="media/tutorial-waterqualitymonitoring/iot-central-government-tab-overview1.png" alt-text="Application template":::
+1. Navigate to the [Azure IoT Central Build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account. Select **Build** from the left-hand navigation bar and then select the **Government** tab.
1. Select **Create app** under **Water quality monitoring**.
As a builder, you can customize views on the dashboard for use by operators.
1. Select **Edit** to customize the **Wide World water quality dashboard** pane. You can customize the dashboard by selecting commands on the **Edit** menu. After the dashboard is in edit mode, you can add new tiles, or you can configure the existing tiles.
- :::image type="content" source="media/tutorial-waterqualitymonitoring/edit-dashboard.png" alt-text="Edit your dashboard.":::
- 1. Select **+ New** to create a new dashboard that you can configure. You can have multiple dashboards and can navigate among them from the dashboard menu. ## Explore a water quality monitoring device template
The water quality monitoring application you created from the application templa
1. Select **Devices** on the leftmost pane of your application.
- :::image type="content" source="media/tutorial-waterqualitymonitoring/water-quality-monitoring-devices.png" alt-text="Devices":::
- 1. Select one simulated device. :::image type="content" source="media/tutorial-waterqualitymonitoring/water-quality-monitor-device1.png" alt-text="Select device 1":::
The water quality monitoring application you created has two preconfigured rules
1. Select **Rules** on the leftmost pane of your application.
- :::image type="content" source="media/tutorial-waterqualitymonitoring/water-quality-monitoring-rules.png" alt-text="Rules":::
- 1. Select **High pH alert**, which is one of the preconfigured rules in the application. :::image type="content" source="media/tutorial-waterqualitymonitoring/water-quality-monitoring-high-ph-alert.png" alt-text="The high pH alert rule.":::
iot-central Tutorial Continuous Patient Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/healthcare/tutorial-continuous-patient-monitoring.md
An active Azure subscription. If you don't have an Azure subscription, create a
## Create application
-1. Navigate to the [Azure IoT Central Build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account. Select **Build** from the left-hand navigation bar and then select the **Healthcare** tab:
- :::image type="content" source="media/app-manager-health.png" alt-text="Application template":::
+1. Navigate to the [Azure IoT Central Build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account. Select **Build** from the left-hand navigation bar and then select the **Healthcare** tab.
1. Select **Create app** under **Continuous patient monitoring**.
After deploying the application template, you'll first land on the **Lamna in-pa
* Change the **patient status** of your device to indicate if the device is being used for an in-patient or remote scenario. - You can also select **Go to remote patient dashboard** to see the Burkville Hospital operator dashboard. This dashboard contains a similar set of actions, telemetry, and information. You can also see multiple devices in use and choose to **update the firmware** on each. :::image type="content" source="media/lamna-remote.png" alt-text="Remote operator dashboard":::
iot-central Tutorial In Store Analytics Create App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-in-store-analytics-create-app.md
An active Azure subscription. If you don't have an Azure subscription, create a
Create the application using the following steps:
-1. Navigate to the [Azure IoT Central Build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account. Select **Build** from the left-hand navigation bar and then select the **Retail** tab:
- :::image type="content" source="media/tutorial-in-store-analytics-create-app/iotc-retail-homepage.png" alt-text="Connected logistics template":::
+1. Navigate to the [Azure IoT Central Build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account. Select **Build** from the left-hand navigation bar and then select the **Retail** tab.
1. Select **Create app** under **In-store analytics - checkout**.
To create a custom theme:
1. Optionally, replace the default **Browser colors** by adding HTML hexadecimal color codes. For the **Header**, add *#008575*. For the **Accent**, add *#A1F3EA*.
-1. Select **Save**.
-
- :::image type="content" source="media/tutorial-in-store-analytics-create-app/select-application-logo.png" alt-text="Azure IoT Central customized logo.":::
+1. Select **Save**.
After you save, the application updates the browser colors, the logo in the masthead, and the browser icon.
To add a RuuviTag device template to your application:
1. Select **Next: Review**.
- :::image type="content" source="media/tutorial-in-store-analytics-create-app/ruuvitag-device-template.png" alt-text="Screenshot that highlights the Next: Customize button.":::
- 1. Select **Create**. The application adds the RuuviTag device template. 1. Select **Device templates** on the left pane. The page displays all device templates included in the application template, and the RuuviTag device template you just added.
iot-central Tutorial In Store Analytics Customize Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-in-store-analytics-customize-dashboard.md
To customize the dashboard, you have to edit the default dashboard in your appli
1. Select **Dashboard settings** and enter **Name** for your dashboard and select **Save**.
- :::image type="content" source="media/tutorial-in-store-analytics-customize-dashboard/dashboard-edit.png" alt-text="Azure IoT Central edit dashboard.":::
- :::image type="content" source="media/tutorial-in-store-analytics-customize-dashboard/new-dashboard.png" alt-text="Azure IoT Central new dashboard.":::
To customize the image tile that displays a brand image on the dashboard:
1. Select **Edit** on the dashboard toolbar.
-1. Select **Edit** on the image tile that displays the Northwind brand image.
-
- :::image type="content" source="media/tutorial-in-store-analytics-customize-dashboard/brand-image-edit.png" alt-text="Azure IoT Central edit brand image.":::
+1. Select **Edit** on the image tile that displays the Northwind brand image.
1. Change the **Title**. The title appears when a user hovers over the image.
To remove tiles that you don't plan to use in your application:
1. Select the **ellipsis** and then **Delete** to remove the following tiles: **Back to all zones**, **Visit store dashboard**, **Occupancy**, **Warm-up checkout zone**, **Cool-down checkout zone**, **Occupancy sensor settings**, **Thermostat sensor settings**, and **Environment conditions** and all three tiles associated with **Checkout 3**. The Contoso store dashboard doesn't use these tiles.
- :::image type="content" source="media/tutorial-in-store-analytics-customize-dashboard/delete-tiles.png" alt-text="Azure IoT Central delete tiles.":::
-- 1. Select **Save**. Removing unused tiles frees up space in the edit page, and simplifies the dashboard view for operators. After you remove unused tiles, rearrange the remaining tiles to create an organized layout. The new layout includes space for tiles you add in a later step.
To add tiles to display environmental data from the RuuviTag sensors:
1. Select `Relative humidity` and `temperature` in the **Telemetry** list. These are the telemetry items that display for each zone on the tile.
-1. Select **Combine**.
-
- :::image type="content" source="media/tutorial-in-store-analytics-customize-dashboard/add-zone1-ruuvi.png" alt-text="Azure IoT Central add RuuviTag tile 1.":::
-
- A new tile appears to display combined humidity and temperature telemetry for the selected sensor.
+1. Select **Combine**. A new tile appears to display combined humidity and temperature telemetry for the selected sensor.
1. Select **Configure** on the new tile for the RuuviTag sensor.
iot-central Tutorial In Store Analytics Export Data Visualize Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-in-store-analytics-export-data-visualize-insights.md
Now you have an event hub, you can configure your **In-store analytics - checkou
1. Select **Create** and then **Save**. 1. On the **Telemetry export** page, wait for the export status to change to **Healthy**.
-The data export may take a few minutes to start sending telemetry to your event hub. You can see the status of the export on the **Data exports** page:
-
+The data export may take a few minutes to start sending telemetry to your event hub. You can see the status of the export on the **Data exports** page.
## Create the Power BI datasets
iot-central Tutorial Iot Central Connected Logistics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-iot-central-connected-logistics.md
An active Azure subscription. If you don't have an Azure subscription, create a
Create the application using the following steps:
-1. Navigate to the [Azure IoT Central Build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account. Select **Build** from the left-hand navigation bar and then select the **Retail** tab:
-
- :::image type="content" source="media/tutorial-iot-central-connected-logistics/iotc-retail-homepage.png" alt-text="Connected logistics template":::
+1. Navigate to the [Azure IoT Central Build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account. Select **Build** from the left-hand navigation bar and then select the **Retail** tab.
1. Select **Create app** under **Connected Logistics**.
Create the application using the following steps:
* **Billing Info**: The directory, Azure subscription, and region details are required to provision the resources. * **Create**: Select create at the bottom of the page to deploy your application.
- :::image type="content" source="media/tutorial-iot-central-connected-logistics/connected-logistics-app-create.png" alt-text="Connected logistics application template":::
-
- :::image type="content" source="media/tutorial-iot-central-connected-logistics/connected-logistics-app-create-billinginfo.png" alt-text="Connected logistics billing info":::
- ## Walk through the application The following sections walk you through the key features of the application.
iot-central Tutorial Iot Central Digital Distribution Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-iot-central-digital-distribution-center.md
An active Azure subscription. If you don't have an Azure subscription, create a
Create the application using the following steps:
-1. Navigate to the [Azure IoT Central Build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account. Select **Build** from the left-hand navigation bar and then select the **Retail** tab:
-
- :::image type="content" source="media/tutorial-iot-central-ddc/iotc-retail-home-page.png" alt-text="Screenshot showing how to create an app.":::
+1. Navigate to the [Azure IoT Central Build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account. Select **Build** from the left-hand navigation bar and then select the **Retail** tab.
1. Select **Create app** under **digital distribution center**.
iot-central Tutorial Iot Central Smart Inventory Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-iot-central-smart-inventory-management.md
An active Azure subscription. If you don't have an Azure subscription, create a
Create the application using the following steps:
-1. Navigate to the [Azure IoT Central Build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account. Select **Build** from the left-hand navigation bar and then select the **Retail** tab:
-
- :::image type="content" source="media/tutorial-iot-central-smart-inventory-management/iotc-retail-home-page.png" alt-text="Screenshot showing how to create an app from the smart inventory management application template":::
+1. Navigate to the [Azure IoT Central Build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account. Select **Build** from the left-hand navigation bar and then select the **Retail** tab.
1. Select **Create app** under **smart inventory management**.
iot-central Tutorial Micro Fulfillment Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-micro-fulfillment-center.md
An active Azure subscription. If you don't have an Azure subscription, create a
Create the application using the following steps:
-1. Navigate to the [Azure IoT Central Build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account. Select **Build** from the navigation bar and then select the **Retail** tab:
-
- :::image type="content" source="media/tutorial-micro-fulfillment-center-app/iotc-retail-homepage-mfc.png" alt-text="Screenshot showing how to create an app.":::
+1. Navigate to the [Azure IoT Central Build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account. Select **Build** from the navigation bar and then select the **Retail** tab.
1. Select **Create app** under **micro-fulfillment center**.
iot-develop About Iot Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/about-iot-sdks.md
The SDKs are available in **multiple languages** providing the flexibility to ch
| Language | Package | Source | Quickstarts | Samples | Reference |
| :-- | :-- | :-- | :-- | :-- | :-- |
| **.NET** | [NuGet](https://www.nuget.org/packages/Microsoft.Azure.Devices.Client) | [GitHub](https://github.com/Azure/azure-iot-sdk-csharp) | [IoT Hub](quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp) / [IoT Central](quickstart-send-telemetry-central.md?pivots=programming-language-csharp) | [Samples](https://github.com/Azure-Samples/azure-iot-samples-csharp) | [Reference](/dotnet/api/microsoft.azure.devices.client) |
-| **Python** | [pip](https://pypi.org/project/azure-iot-device/) | [GitHub](https://github.com/Azure/azure-iot-sdk-python) | [IoT Hub](quickstart-send-telemetry-iot-hub.md?pivots=programming-language-python) / [IoT Central](quickstart-send-telemetry-central.md?pivots=programming-language-python) | [Samples](https://github.com/Azure/azure-iot-sdk-python/tree/main/azure-iot-device/samples) | [Reference](/python/api/azure-iot-device) |
+| **Python** | [pip](https://pypi.org/project/azure-iot-device/) | [GitHub](https://github.com/Azure/azure-iot-sdk-python) | [IoT Hub](quickstart-send-telemetry-iot-hub.md?pivots=programming-language-python) / [IoT Central](quickstart-send-telemetry-central.md?pivots=programming-language-python) | [Samples](https://github.com/Azure/azure-iot-sdk-python/tree/main/samples) | [Reference](/python/api/azure-iot-device) |
| **Node.js** | [npm](https://www.npmjs.com/package/azure-iot-device) | [GitHub](https://github.com/Azure/azure-iot-sdk-node) | [IoT Hub](quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs) / [IoT Central](quickstart-send-telemetry-central.md?pivots=programming-language-nodejs) | [Samples](https://github.com/Azure/azure-iot-sdk-node/tree/main/device/samples) | [Reference](/javascript/api/azure-iot-device/) |
| **Java** | [Maven](https://mvnrepository.com/artifact/com.microsoft.azure.sdk.iot/iot-device-client) | [GitHub](https://github.com/Azure/azure-iot-sdk-java) | [IoT Hub](quickstart-send-telemetry-iot-hub.md?pivots=programming-language-java) / [IoT Central](quickstart-send-telemetry-central.md?pivots=programming-language-java) | [Samples](https://github.com/Azure/azure-iot-sdk-java/tree/master/device/iot-device-samples) | [Reference](/java/api/com.microsoft.azure.sdk.iot.device) |
| **C** | [packages](https://github.com/Azure/azure-iot-sdk-c/blob/master/readme.md#getting-the-sdk) | [GitHub](https://github.com/Azure/azure-iot-sdk-c) | [IoT Hub](quickstart-send-telemetry-iot-hub.md?pivots=programming-language-ansi-c) / [IoT Central](quickstart-send-telemetry-central.md?pivots=programming-language-ansi-c) | [Samples](https://github.com/Azure/azure-iot-sdk-c/tree/master/iothub_client/samples) | [Reference](/azure/iot-hub/iot-c-sdk-ref/) |
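Before committing to one of the SDKs, it can help to smoke-test connectivity from the command line. A minimal sketch, assuming the `azure-iot` CLI extension and placeholder hub and device names:

```azurecli
# Send a one-off device-to-cloud telemetry message without any SDK code (placeholder names)
az iot device send-d2c-message --hub-name my-hub --device-id my-device \
  --data '{"temperature": 21.5}'
```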
iot-edge How To Install Iot Edge Ubuntuvm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-install-iot-edge-ubuntuvm.md
The Azure IoT Edge runtime is what turns a device into an IoT Edge device. The r
To learn more about how the IoT Edge runtime works and what components are included, see [Understand the Azure IoT Edge runtime and its architecture](iot-edge-runtime.md). :::moniker range="iotedge-2018-06"
-This article lists the steps to deploy an Ubuntu 18.04 LTS virtual machine with the Azure IoT Edge runtime installed and configured using a pre-supplied device connection string. The deployment is accomplished using a [cloud-init](../virtual-machines/linux/using-cloud-init.md) based [Azure Resource Manager template](../azure-resource-manager/templates/overview.md) maintained in the [iotedge-vm-deploy](https://github.com/Azure/iotedge-vm-deploy/tree/master) project repository.
+This article lists the steps to deploy an Ubuntu 18.04 LTS virtual machine with the Azure IoT Edge runtime installed and configured using a pre-supplied device connection string. The deployment is accomplished using a [cloud-init](../virtual-machines/linux/using-cloud-init.md) based [Azure Resource Manager template](../azure-resource-manager/templates/overview.md) maintained in the [iotedge-vm-deploy](https://github.com/Azure/iotedge-vm-deploy/tree/1.1) project repository.
+
+On first boot, the virtual machine [installs the latest version of the Azure IoT Edge runtime via cloud-init](https://github.com/Azure/iotedge-vm-deploy/blob/1.1/cloud-init.txt). It also sets a supplied connection string before the runtime starts, allowing you to easily configure and connect the IoT Edge device without the need to start an SSH or remote desktop session.
:::moniker-end
:::moniker range=">=iotedge-2020-11"
This article lists the steps to deploy an Ubuntu 20.04 LTS virtual machine with the Azure IoT Edge runtime installed and configured using a pre-supplied device connection string. The deployment is accomplished using a [cloud-init](../virtual-machines/linux/using-cloud-init.md) based [Azure Resource Manager template](../azure-resource-manager/templates/overview.md) maintained in the [iotedge-vm-deploy](https://github.com/Azure/iotedge-vm-deploy/tree/1.3) project repository.
-On first boot, the virtual machine [installs the latest version of the Azure IoT Edge runtime via cloud-init](https://github.com/Azure/iotedge-vm-deploy/blob/master/cloud-init.txt). It also sets a supplied connection string before the runtime starts, allowing you to easily configure and connect the IoT Edge device without the need to start an SSH or remote desktop session.
+On first boot, the virtual machine [installs the latest version of the Azure IoT Edge runtime via cloud-init](https://github.com/Azure/iotedge-vm-deploy/blob/1.3/cloud-init.txt). It also sets a supplied connection string before the runtime starts, allowing you to easily configure and connect the IoT Edge device without the need to start an SSH or remote desktop session.
## Deploy using Deploy to Azure Button
The [Deploy to Azure Button](../azure-resource-manager/templates/deploy-to-azure
1. We will deploy an Azure IoT Edge enabled Linux VM using the iotedge-vm-deploy Azure Resource Manager template. To begin, click the button below: :::moniker range="iotedge-2018-06"
- [![Deploy to Azure Button for iotedge-vm-deploy](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fazure%2Fiotedge-vm-deploy%2Fmaster%2FedgeDeploy.json)
+ [![Deploy to Azure Button for iotedge-vm-deploy](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fazure%2Fiotedge-vm-deploy%2F1.1%2FedgeDeploy.json)
:::moniker-end :::moniker range=">=iotedge-2020-11" [![Deploy to Azure Button for iotedge-vm-deploy](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fazure%2Fiotedge-vm-deploy%2F1.3%2FedgeDeploy.json)
The [Deploy to Azure Button](../azure-resource-manager/templates/deploy-to-azure
```azurecli-interactive
az deployment group create \
--resource-group IoTEdgeResources \
- --template-uri "https://aka.ms/iotedge-vm-deploy" \
+ --template-uri "https://raw.githubusercontent.com/Azure/iotedge-vm-deploy/1.1/edgeDeploy.json" \
--parameters dnsLabelPrefix='my-edge-vm1' \
--parameters adminUsername='<REPLACE_WITH_USERNAME>' \
--parameters deviceConnectionString=$(az iot hub device-identity connection-string show --device-id <REPLACE_WITH_DEVICE-NAME> --hub-name <REPLACE-WITH-HUB-NAME> -o tsv) \
The [Deploy to Azure Button](../azure-resource-manager/templates/deploy-to-azure
#Create a VM using the iotedge-vm-deploy script
az deployment group create \
--resource-group IoTEdgeResources \
- --template-uri "https://aka.ms/iotedge-vm-deploy" \
+ --template-uri "https://raw.githubusercontent.com/Azure/iotedge-vm-deploy/1.1/edgeDeploy.json" \
--parameters dnsLabelPrefix='my-edge-vm1' \
--parameters adminUsername='<REPLACE_WITH_USERNAME>' \
--parameters deviceConnectionString=$(az iot hub device-identity connection-string show --device-id <REPLACE_WITH_DEVICE-NAME> --hub-name <REPLACE-WITH-HUB-NAME> -o tsv) \
iot-edge Quickstart Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/quickstart-linux.md
This section uses an Azure Resource Manager template to create a new virtual mac
<!-- 1.1 --> :::moniker range="iotedge-2018-06"
-Use the following CLI command to create your IoT Edge device based on the prebuilt [iotedge-vm-deploy](https://github.com/Azure/iotedge-vm-deploy) template.
+Use the following CLI command to create your IoT Edge device based on the prebuilt [iotedge-vm-deploy](https://github.com/Azure/iotedge-vm-deploy/tree/1.1) template.
* For bash or Cloud Shell users, copy the following command into a text editor, replace the placeholder text with your information, then copy it into your bash or Cloud Shell window:

```azurecli-interactive
az deployment group create \
--resource-group IoTEdgeResources \
- --template-uri "https://aka.ms/iotedge-vm-deploy" \
+ --template-uri "https://raw.githubusercontent.com/Azure/iotedge-vm-deploy/1.1/edgeDeploy.json" \
--parameters dnsLabelPrefix='<REPLACE_WITH_VM_NAME>' \
--parameters adminUsername='azureUser' \
--parameters deviceConnectionString=$(az iot hub device-identity connection-string show --device-id myEdgeDevice --hub-name <REPLACE_WITH_HUB_NAME> -o tsv) \
Use the following CLI command to create your IoT Edge device based on the prebui
```azurecli
az deployment group create `
--resource-group IoTEdgeResources `
- --template-uri "https://aka.ms/iotedge-vm-deploy" `
+ --template-uri "https://raw.githubusercontent.com/Azure/iotedge-vm-deploy/1.1/edgeDeploy.json" `
--parameters dnsLabelPrefix='<REPLACE_WITH_VM_NAME>' `
--parameters adminUsername='azureUser' `
--parameters deviceConnectionString=$(az iot hub device-identity connection-string show --device-id myEdgeDevice --hub-name <REPLACE_WITH_HUB_NAME> -o tsv) `
iot-edge Tutorial Machine Learning Edge 05 Configure Edge Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-machine-learning-edge-05-configure-edge-device.md
For this tutorial, we register the new device identity by using Visual Studio Co
## Deploy an Azure virtual machine
-We use an Ubuntu 18.04 LTS virtual machine with the Azure IoT Edge runtime installed and configured. The deployment uses an [Azure Resource Manager template](../azure-resource-manager/templates/overview.md) maintained in the [iotedge-vm-deploy](https://github.com/Azure/iotedge-vm-deploy) project repository. It provisions the IoT Edge device your registered in the previous step using the connection string you supply in the template.
+We use an Ubuntu 18.04 LTS virtual machine with the Azure IoT Edge runtime installed and configured. The deployment uses an [Azure Resource Manager template](../azure-resource-manager/templates/overview.md) maintained in the [iotedge-vm-deploy](https://github.com/Azure/iotedge-vm-deploy/tree/1.1) project repository. It provisions the IoT Edge device you registered in the previous step using the connection string you supply in the template.
You can deploy the virtual machine using the Azure portal or Azure CLI. We will show the Azure portal steps. See [Run Azure IoT Edge on Ubuntu Virtual Machines](how-to-install-iot-edge-ubuntuvm.md) for more information.
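If you prefer scripting over the portal button, a CLI sketch mirroring the template referenced above looks like the following; the resource group and placeholder values are assumptions, and the template may require additional parameters (for example, authentication settings) not shown here:

```azurecli
az deployment group create \
  --resource-group IoTEdgeResources \
  --template-uri "https://raw.githubusercontent.com/Azure/iotedge-vm-deploy/1.1/edgeDeploy.json" \
  --parameters dnsLabelPrefix='<REPLACE_WITH_VM_NAME>' \
  --parameters adminUsername='<REPLACE_WITH_USERNAME>' \
  --parameters deviceConnectionString=$(az iot hub device-identity connection-string show --device-id <REPLACE_WITH_DEVICE-NAME> --hub-name <REPLACE-WITH-HUB-NAME> -o tsv)
```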
You can deploy the virtual machine using the Azure portal or Azure CLI. We will
1. To use the `iotedge-vm-deploy` ARM template to deploy your Ubuntu 18.04 LTS virtual machine, click the button below:
- [![Deploy to Azure Button for iotedge-vm-deploy](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fazure%2Fiotedge-vm-deploy%2Fmaster%2FedgeDeploy.json)
+ [![Deploy to Azure Button for iotedge-vm-deploy](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fazure%2Fiotedge-vm-deploy%2F1.1%2FedgeDeploy.json)
1. On the newly launched window, fill in the available form fields.
iot-fundamentals Howto Use Iot Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-fundamentals/howto-use-iot-explorer.md
Title: Install and use Azure IoT explorer | Microsoft Docs
description: Install the Azure IoT explorer tool and use it to interact with IoT Plug and Play devices connected to IoT hub. Although this article focuses on working with IoT Plug and Play devices, you can use the tool with any device connected to your hub. Previously updated : 11/10/2020 Last updated : 08/23/2022
iot-fundamentals Iot Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-fundamentals/iot-glossary.md
Previously updated : 11/02/2021 Last updated : 08/26/2022 # Generated from YAML source.
iot-fundamentals Iot Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-fundamentals/iot-introduction.md
Previously updated : 01/15/2020 Last updated : 08/23/2022 #Customer intent: As a newcomer to IoT, I want to understand what IoT is, what services are available, and examples of business cases so I can figure out where to start.
iot-fundamentals Iot Phone App How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-fundamentals/iot-phone-app-how-to.md
Previously updated : 05/27/2021 Last updated : 08/24/2022
iot-fundamentals Iot Security Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-fundamentals/iot-security-architecture.md
Previously updated : 11/29/2021 Last updated : 08/26/2022 # Internet of Things (IoT) security architecture
iot-fundamentals Iot Security Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-fundamentals/iot-security-best-practices.md
Previously updated : 10/09/2018 Last updated : 08/26/2022 # Security best practices for Internet of Things (IoT)
iot-fundamentals Iot Security Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-fundamentals/iot-security-deployment.md
Previously updated : 03/08/2019 Last updated : 08/24/2022 # Secure your Internet of Things (IoT) deployment
iot-fundamentals Iot Security Ground Up https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-fundamentals/iot-security-ground-up.md
Previously updated : 10/09/2018 Last updated : 08/24/2022 # Security for Internet of Things (IoT) from the ground up
iot-fundamentals Iot Services And Technologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-fundamentals/iot-services-and-technologies.md
Previously updated : 01/15/2020 Last updated : 08/25/2022
iot-fundamentals Iot Solution Apaas Paas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-fundamentals/iot-solution-apaas-paas.md
Previously updated : 02/03/2022 Last updated : 08/24/2022
iot-fundamentals Iot Solution Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-fundamentals/iot-solution-options.md
Previously updated : 02/03/2022 Last updated : 08/23/2022
iot-fundamentals Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-fundamentals/security-recommendations.md
Previously updated : 11/13/2019 Last updated : 08/24/2022
iot-hub-device-update Create Update Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/create-update-group.md
Tags can also be added or updated in Device twin or Module Twin directly.
1. Log into [Azure portal](https://portal.azure.com) and navigate to your IoT Hub.
-2. From 'IoT Devices' or 'IoT Edge' on the left navigation pane find your IoT device and navigate to the Device Twin, or the Device Update Module and then its Module Twin (this will be available if Device Update agent is set up as a Module Identity).
+2. From **Devices** or **IoT Edge** on the left navigation pane, find your IoT device and navigate to the Device Twin, or the Device Update Module and then its Module Twin (this will be available if the Device Update agent is set up as a Module Identity).
3. In the Device Twin or Module Twin, delete any existing Device Update tag value by setting them to null.
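The same twin edits can be scripted. A minimal sketch, assuming a recent `azure-iot` CLI extension that supports `--tags` on twin updates, the documented `ADUGroup` tag key, and placeholder hub and device names:

```azurecli
# Clear any existing Device Update tag by setting it to null, then assign a new group tag
az iot hub device-twin update --hub-name my-hub --device-id my-device --tags '{"ADUGroup": null}'
az iot hub device-twin update --hub-name my-hub --device-id my-device --tags '{"ADUGroup": "my-group"}'
```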
Tags can also be added or updated in Device twin or Module Twin directly.
2. Select the IoT Hub you previously connected to your Device Update instance.
-3. Select the Updates option under Device Management from the left-hand navigation bar.
+3. Select the **Updates** option under **Device Management** from the left-hand navigation bar.
+
+4. Select the **Groups and Deployments** tab at the top of the page.
-4. Select the Groups and Deployments tab at the top of the page.
:::image type="content" source="media/create-update-group/ungrouped-devices.png" alt-text="Screenshot of ungrouped devices." lightbox="media/create-update-group/ungrouped-devices.png":::
-5. Select the "Add group" button to create a new group.
+5. Select **Add group** to create a new group.
+ :::image type="content" source="media/create-update-group/add-group.png" alt-text="Screenshot of device group addition." lightbox="media/create-update-group/add-group.png":::
-6. Select an IoT Hub tag and Device Class from the list and then select Create group.
+6. Select an IoT Hub tag and Device Class from the list and then select **Create group**.
+ :::image type="content" source="media/create-update-group/select-tag.png" alt-text="Screenshot of tag selection." lightbox="media/create-update-group/select-tag.png"::: 7. Once the group is created, you will see that the update compliance chart and groups list are updated. Update compliance chart shows the count of devices in various states of compliance: On latest update, New updates available, and Updates in Progress. [Learn about update compliance.](device-update-compliance.md)+ :::image type="content" source="media/create-update-group/updated-view.png" alt-text="Screenshot of update compliance view." lightbox="media/create-update-group/updated-view.png"::: 8. You should see your newly created group and any available updates for the devices in the new group. If there are devices that don't meet the device class requirements of the group, they will show up in a corresponding invalid group. You can deploy the best available update to the new user-defined group from this view by clicking on the "Deploy" button next to the group. See Next Step: Deploy Update for more details.
Tags can also be added or updated in Device twin or Module Twin directly.
1. Navigate to your newly created group and click on the group name.
2. A list of devices that are part of the group will be shown along with their device update properties. In this view, you can also see the update compliance information for all devices that are members of the group. The update compliance chart shows the count of devices in various states of compliance: On latest update, New updates available, and Updates in Progress.
+
:::image type="content" source="media/create-update-group/group-details.png" alt-text="Screenshot of device group details view." lightbox="media/create-update-group/group-details.png":::
3. You can also click on each individual device within a group to be redirected to the device details page in IoT Hub.
+
:::image type="content" source="media/create-update-group/device-details.png" alt-text="Screenshot of device details view." lightbox="media/create-update-group/device-details.png":::
## Next Steps
iot-hub-device-update Device Update Azure Real Time Operating System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-azure-real-time-operating-system.md
Learn more about [Azure RTOS](/azure/rtos/).
1. Keep the device application running from the previous step. 1. Sign in to the [Azure portal](https://portal.azure.com) and go to the IoT hub.
-1. On the left pane, under **IoT Devices**, find your IoT device and go to the device twin.
+1. On the left pane, select **Devices**. Find your IoT device and go to the device twin.
1. In the device twin, delete any existing Device Update tag values by setting them to null. 1. Add a new Device Update tag value to the root JSON object, as shown:
iot-hub-device-update Device Update Control Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-control-access.md
A combination of roles can be used to provide the right level of access. For exa
## Configuring access for Azure Device Update service principal in the IoT Hub
-Device Update for IoT Hub communicates with the IoT Hub for deployments and manage updates at scale. In order to enable Device Update to do this, users need to set Contributor access for Azure Device Update Service Principal in the IoT Hub permissions.
+Device Update for IoT Hub communicates with the IoT Hub to deploy and manage updates at scale. To enable Device Update to do this, users need to set IoT Hub Data Contributor access for the Azure Device Update service principal in the IoT Hub permissions.
The following actions will be blocked after 9/28/22 if these permissions are not set:
* Create Deployment
The following actions will be blocked after 9/28/22 if these permissions are not set:
1. Go to the **IoT Hub** connected to your Device Update Instance. Click **Access Control(IAM)** 2. Click **+ Add** -> **Add role assignment**
-3. Under Role tab, select **Contributor**
+3. Under the **Role** tab, select **IoT Hub Data Contributor**
4. Click **Next**. For **Assign access to**, select **User, group, or service principal**. Click **+ Select Members**, search for '**Azure Device Update**' 5. Click **Next** -> **Review + Assign**
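The same assignment can be scripted. In the sketch below, looking up the service principal by the display name "Azure Device Update" is an assumption about how the first-party app appears in your tenant; the hub name is a placeholder:

```azurecli
# Find the Azure Device Update service principal and grant it IoT Hub Data Contributor on the hub
spId=$(az ad sp list --display-name "Azure Device Update" --query "[0].id" -o tsv)
hubId=$(az iot hub show --name my-hub --query id -o tsv)
az role assignment create --assignee-object-id $spId --assignee-principal-type ServicePrincipal \
  --role "IoT Hub Data Contributor" --scope $hubId
```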
iot-hub-device-update Device Update Raspberry Pi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-raspberry-pi.md
Now, add the device to IoT Hub. From within IoT Hub, a connection string is gene
1. From the Azure portal, start IoT Hub. 1. Create a new device.
-1. On the left pane, select **IoT Devices**. Then select **New**.
+1. On the left pane, select **Devices**. Then select **New**.
1. Under **Device ID**, enter a name for the device. Ensure that the **Autogenerate keys** checkbox is selected. 1. Select **Save**. On the **Devices** page, the device you created should be in the list. 1. Get the device connection string by using one of two options:
Here are two examples for the `du-config.json` and the `du-diagnostics-config.js
## Connect the device in Device Update for IoT Hub
-1. On the left pane, select **IoT Devices**.
+1. On the left pane, select **Devices**.
1. Select the link with your device name. 1. At the top of the page, select **Device Twin** if you're connecting directly to Device Update by using the IoT device identity. Otherwise, select the module you created and select its module twin. 1. Under the **reported** section of the **Device Twin** properties, look for the Linux kernel version.
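To check the same reported properties without the portal, a quick sketch with the `azure-iot` CLI extension (hub and device names are placeholders):

```azurecli
# Inspect the reported section of the device twin, where the agent surfaces the kernel version
az iot hub device-twin show --hub-name my-hub --device-id my-device --query properties.reported
```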
Use that version number in the later "Import the update" section.
## Add a tag to your device 1. Sign in to the [Azure portal](https://portal.azure.com) and go to the IoT hub.
-1. On the left pane, under **IoT Devices** or **IoT Edge**, find your IoT device and go to the device twin or module twin.
+1. On the left pane, under **Devices** or **IoT Edge**, find your IoT device and go to the device twin or module twin.
1. In the module twin of the Device Update agent module, delete any existing Device Update tag values by setting them to null. If you're using the device identity with the Device Update agent, make these changes on the device twin. 1. Add a new Device Update tag value, as shown:
iot-hub-device-update Device Update Simulator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-simulator.md
After the Device Update agent is running on an IoT device, you must add the devi
1. From the Azure portal, start the Device Update for IoT Hub. 1. Create a new device.
-1. On the left pane, go to **IoT Devices**. Then select **New**.
+1. On the left pane, go to **Devices**. Then select **New**.
1. Under **Device ID**, enter a name for the device. Ensure that the **Autogenerate keys** checkbox is selected. 1. Select **Save**. 1. Now, you're returned to the **Devices** page and the device you created should be in the list. Select that device.
Read the license terms prior to using the agent. Your installation and use const
## Add a tag to your device 1. Sign in to the [Azure portal](https://portal.azure.com) and go to the IoT hub.
-1. From **IoT Devices** or **IoT Edge** on the left pane, find your IoT device and go to the device twin or module twin.
+1. From **Devices** or **IoT Edge** on the left pane, find your IoT device and go to the device twin or module twin.
1. In the module twin of the Device Update agent module, delete any existing Device Update tag values by setting them to null. If you're using the device identity with a Device Update agent, make these changes on the device twin. 1. Add a new Device Update tag value, as shown:
lab-services Azure Polices For Lab Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/azure-polices-for-lab-services.md
+
+ Title: Azure Policies for Lab Services
+description: This article describes the policies available for Azure Lab Services.
+++ Last updated : 08/15/2022++
+# What's new with Azure Policy for Lab Services?
+
+Azure Policy helps you manage and prevent IT issues by applying policy definitions that enforce rules and effects for your resource. Azure Lab Services has added four built-in Azure policies. This article summarizes the new policies available in the August 2022 Update for Azure Lab Services.
+
+1. Lab Services should enable all options for auto shutdown
+1. Lab Services should not allow template virtual machines for labs
+1. Lab Services should require non-admin user for labs
+1. Lab Services should restrict allowed virtual machine SKU sizes
+
+For a full list of built-in policies, including policies for Lab Services, see [Azure Policy built-in policy definitions](/azure/governance/policy/samples/built-in-policies#lab-services).
++++
+## Lab Services should enable all options for auto shutdown
+
+This policy enforces that all [shutdown options](how-to-configure-auto-shutdown-lab-plans.md) are enabled while creating the lab. During policy assignment, lab administrators can choose the following effects.
+
+|**Effect**|**Behavior**|
+|--|--|
+|**Audit**|Labs will show on the [compliance dashboard](/azure/governance/policy/assign-policy-portal#identify-non-compliant-resources) as non-compliant when one or more shutdown options aren't enabled for a lab. |
+|**Deny**|Lab creation will fail if one or more shutdown options aren't enabled. |
+
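+As an illustration (not from the built-in policy documentation), assigning this policy with the **Deny** effect can be scripted. The definition lookup by display name and the `effect` parameter name are assumptions; verify both against the built-in definition before relying on this sketch:
+
+```azurecli
+# Look up the built-in definition by display name, then assign it with Deny (names are assumptions)
+defId=$(az policy definition list --query "[?displayName=='Lab Services should enable all options for auto shutdown'].id" -o tsv)
+az policy assignment create --name require-auto-shutdown --policy $defId \
+  --scope /subscriptions/<subscription-id> --params '{"effect": {"value": "Deny"}}'
+```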
+## Lab Services should not allow template virtual machines for labs
+
+This policy can be used to restrict [customization of lab templates](tutorial-setup-lab.md). When you create a new lab, you can select *Create a template virtual machine* or *Use virtual machine image without customization*. If this policy is enabled, only *Use virtual machine image without customization* is allowed. During policy assignment, lab administrators can choose the following effects.
+
+|**Effect**|**Behavior**|
+|--|--|
+|**Audit**|Labs will show on the [compliance dashboard](/azure/governance/policy/assign-policy-portal#identify-non-compliant-resources) as non-compliant when a template virtual machine is used for a lab.|
+|**Deny**|Lab creation will fail if the *Create a template virtual machine* option is used for a lab.|
+
+## Lab Services should require non-admin user for labs
+
+This policy is used to enforce using non-admin accounts while creating a lab. With the August 2022 Update, you can choose to add a non-admin account to the VM image. This new feature allows you to keep separate credentials for VM admin and non-admin users. For more information about creating a lab with a non-admin user, see [Tutorial: Create and publish a lab](tutorial-setup-lab.md#create-a-lab), which shows how to give a student a non-administrator account rather than the default administrator account on the *Virtual machine credentials* page of the new lab wizard.
+
+During the policy assignment, the lab administrator can choose the following effects.
+
+|**Effect**|**Behavior**|
+|--|--|
+|**Audit**|Labs show on the [compliance dashboard](/azure/governance/policy/assign-policy-portal#identify-non-compliant-resources) as non-compliant when a non-admin account isn't used while creating the lab.|
+|**Deny**|Lab creation will fail if *Give lab users a non-admin account on their virtual machines* isn't checked while creating a lab.|
+
+## Lab Services should restrict allowed virtual machine SKU sizes
+This policy is used to enforce which SKUs can be used while creating a lab. For example, a lab administrator might want to prevent educators from creating labs with GPU SKUs because they aren't needed for any of the classes being taught.
+During the policy assignment, the lab administrator can choose the following effects.
+
+|**Effect**|**Behavior**|
+|--|--|
+|**Audit**|Labs show on the [compliance dashboard](/azure/governance/policy/assign-policy-portal#identify-non-compliant-resources) as non-compliant when a non-allowed SKU is used while creating the lab.|
+|**Deny**|Lab creation will fail if the SKU chosen while creating a lab isn't allowed by the policy assignment.|
+
+## Next steps
+
+See the following articles:
+- [How to use the Lab Services should restrict allowed virtual machine SKU sizes Azure policy](how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy.md)
+- [Built-in Policies](/azure/governance/policy/samples/built-in-policies#lab-services)
+- [What is Azure policy?](/azure/governance/policy/overview)
lab-services How To Use Restrict Allowed Virtual Machine Sku Sizes Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy.md
+
+ Title: How to restrict the virtual machine sizes allowed for labs
+description: Learn how to use the Lab Services should restrict allowed virtual machine SKU sizes Azure Policy to restrict educators to specified virtual machine sizes for their labs.
+++ Last updated : 08/23/2022++
+# How to restrict the virtual machine sizes allowed for labs
+
+In this how-to, you'll learn how to use the *Lab Services should restrict allowed virtual machine SKU sizes* Azure policy to control the SKUs available to educators when they create labs. In this example, you'll see how a lab administrator can allow only non-GPU SKUs, so that educators can create only non-GPU labs.
++
+## Configure the policy
+
+1. In the [Azure portal](https://portal.azure.com), go to your subscription.
+
+1. From the left menu, under **Settings**, select **Policies**.
+
+1. Under **Authoring**, select **Assignments**.
+
+1. Select **Assign Policy**.
+ :::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/assign-policy.png" alt-text="Screenshot showing the Policy Compliance dashboard with Assign policy highlighted.":::
+
+1. Select the **Scope** to which you'd like to assign the policy, and then select **Select**.
+ You can also select a resource group if you need the policy to apply more granularly.
+ :::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/assign-policy-basics-scope.png" alt-text="Screenshot showing the Scope pane with subscription highlighted.":::
+
+1. Select **Policy definition**. In **Available definitions**, search for *Lab Services*, select **Lab Services should restrict allowed virtual machine SKU sizes**, and then select **Select**.
+ :::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/assign-policy-basics-definitions.png" alt-text="Screenshot showing the Available definitions pane with Lab Services should restrict allowed virtual machine SKU sizes highlighted. ":::
+
+1. On the Basics tab, select **Next**.
+
+1. On the Parameters tab, clear **Only show parameters that need input or review** to show all parameters.
+ :::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/assign-policy-parameters.png" alt-text="Screenshot showing the Parameters tab with Only show parameters that need input or review highlighted. ":::
+
+1. The **Allowed SKU names** parameter shows the SKUs allowed when the policy is applied. By default, all available SKUs are allowed. Clear the check box for any SKU that you don't want educators to use to create labs. In this example, only the following non-GPU SKUs are allowed:
+ - CLASSIC_FSV2_2_4GB_128_S_SSD
+ - CLASSIC_FSV2_4_8GB_128_S_SSD
+ - CLASSIC_FSV2_8_16GB_128_S_SSD
+ - CLASSIC_DSV4_4_16GB_128_P_SSD
+ - CLASSIC_DSV4_8_32GB_128_P_SSD
+
+ :::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/assign-policy-parameters-vms.png" alt-text="Screenshot showing the Allowed SKUs.":::
+
+ Use the table below to determine which SKU names to apply.
+
+ |SKU name|VM size|VM size details|
+ |--|--|--|
+ |CLASSIC_FSV2_2_4GB_128_S_SSD| Small |2 vCPUs, 4 GB RAM, 128 GB, Standard SSD|
+ |CLASSIC_FSV2_4_8GB_128_S_SSD| Medium |4 vCPUs, 8 GB RAM, 128 GB, Standard SSD|
+ |CLASSIC_FSV2_8_16GB_128_S_SSD| Large |8 vCPUs, 16 GB RAM, 128 GB, Standard SSD|
+ |CLASSIC_DSV4_4_16GB_128_P_SSD| Medium (Nested virtualization) |4 vCPUs, 16 GB RAM, 128 GB, Premium SSD|
+ |CLASSIC_DSV4_8_32GB_128_P_SSD| Large (Nested virtualization) |8 vCPUs, 32 GB RAM, 128 GB, Premium SSD|
+ |CLASSIC_NCSV3_6_112GB_128_S_SSD| Small GPU (Compute) |6 vCPUs, 112 GB RAM, 128 GB, Standard SSD|
+ |CLASSIC_NVV4_8_28GB_128_S_SSD| Small GPU (Visualization) |8 vCPUs, 28 GB RAM, 128 GB, Standard SSD|
+ |CLASSIC_NVV3_12_112GB_128_S_SSD| Medium GPU (Visualization) |12 vCPUs, 112 GB RAM, 128 GB, Standard SSD|
+
+1. In **Effect**, select **Deny**. Selecting **Deny** prevents a lab from being created if an educator tries to use a GPU SKU.
+ :::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/assign-policy-parameters-effect.png" alt-text="Screenshot showing the effect list.":::
+
+1. Select **Next**.
+
+1. On the Remediation tab, select **Next**.
+ :::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/assign-policy-remediation.png" alt-text="Screenshot showing the Remediation tab with Next highlighted.":::
+
+1. On the Non-compliance tab, in **Non-compliance messages**, enter a non-compliance message of your choice, such as "Selected SKU is not allowed", and then select **Next**.
+ :::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/assign-policy-message.png" alt-text="Screenshot showing the Non-compliance tab with an example non-compliance message.":::
+
+1. On the Review + Create tab, select **Create** to create the policy assignment.
+ :::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/assign-policy-review-create.png" alt-text="Screenshot showing the Review and Create tab.":::
+
+You've created a policy assignment for *Lab Services should restrict allowed virtual machine SKU sizes* and allowed only the use of non-GPU SKUs for labs. Attempting to create a lab with any other SKU will fail.
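+
+If you prefer to script the assignment, here's a hedged Azure CLI sketch of the same configuration. The `allowedSkuNames` and `effect` parameter names are assumptions based on the portal experience, so verify them against the policy definition before running.
+
+```azurecli
+# Find the built-in definition by its display name.
+definitionId=$(az policy definition list \
+  --query "[?displayName=='Lab Services should restrict allowed virtual machine SKU sizes'].id" \
+  --output tsv)
+
+# Assign it with Deny and the five non-GPU SKUs from this example.
+az policy assignment create \
+  --name 'labs-restrict-sku-sizes' \
+  --scope "/subscriptions/<subscription-id>" \
+  --policy "$definitionId" \
+  --params '{
+    "allowedSkuNames": { "value": [
+      "CLASSIC_FSV2_2_4GB_128_S_SSD",
+      "CLASSIC_FSV2_4_8GB_128_S_SSD",
+      "CLASSIC_FSV2_8_16GB_128_S_SSD",
+      "CLASSIC_DSV4_4_16GB_128_P_SSD",
+      "CLASSIC_DSV4_8_32GB_128_P_SSD"
+    ]},
+    "effect": { "value": "Deny" }
+  }'
+```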
+
+> [!NOTE]
+> New policy assignments can take up to 30 minutes to take effect.
+
+## Exclude resources
+
+When applying a built-in policy, you can choose to exclude certain resources, with the exception of lab plans. For example, if the scope of your policy assignment is a subscription, you can exclude resources in a specified resource group. Exclusions are configured by using the Exclusions property on the Basics tab when you create a policy assignment.
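+
+For example, the following Azure CLI sketch excludes one resource group from a subscription-scoped assignment. The `--not-scopes` argument corresponds to the Exclusions property; the names are placeholders, and `$definitionId` is the definition ID looked up in the earlier sketch.
+
+```azurecli
+az policy assignment create \
+  --name 'labs-restrict-sku-sizes' \
+  --scope "/subscriptions/<subscription-id>" \
+  --policy "$definitionId" \
+  --not-scopes "/subscriptions/<subscription-id>/resourceGroups/<excluded-resource-group>"
+```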
+++
+## Exclude a lab plan
+
+Lab plans can't be excluded by using the Exclusions property on the Basics tab. To exclude a lab plan from a policy assignment, first get the lab plan resource ID, and then use it to specify the lab plan you want to exclude on the Parameters tab.
+
+### Locate and copy lab plan resource ID
+Use the following steps to locate and copy the resource ID so that you can paste it into the exclusion configuration. (A CLI alternative follows the steps.)
+1. In the [Azure portal](https://portal.azure.com), go to the lab plan you want to exclude.
+
+1. Under **Settings**, select **Properties**, and then copy the **Resource ID**.
+ :::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/resource-id.png" alt-text="Screenshot showing the lab plan properties with resource ID highlighted.":::
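+
+As a CLI alternative to these steps, the following sketch returns the same resource ID; the resource group and lab plan names are placeholders.
+
+```azurecli
+az resource show \
+  --resource-group <resource-group-name> \
+  --name <lab-plan-name> \
+  --resource-type "Microsoft.LabServices/labPlans" \
+  --query id --output tsv
+```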
+
+### Enter the lab plan to exclude in the policy
+Now that you have the lab plan resource ID, you can use it to exclude the lab plan as you assign the policy.
+1. On the Parameters tab, clear **Only show parameters that need input or review**.
+1. For **Lab Plan ID to exclude**, enter the lab plan resource ID you copied earlier.
+ :::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/assign-policy-exclude-lab-plan-id.png" alt-text="Screenshot showing the Parameter tab with Lab Plan ID to exclude highlighted.":::
++
+## Next steps
+See the following articles:
+- [What's new with Azure Policy for Lab Services?](azure-polices-for-lab-services.md)
+- [Built-in Policies](/azure/governance/policy/samples/built-in-policies#lab-services)
+- [What is Azure Policy?](/azure/governance/policy/overview)
+
lighthouse Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/architecture.md
Title: Azure Lighthouse architecture description: Learn about the relationship between tenants in Azure Lighthouse, and the resources created in the customer's tenant that enable that relationship. Previously updated : 06/09/2022 Last updated : 08/26/2022
Azure Lighthouse helps service providers simplify customer engagement and onboar
This topic discusses the relationship between tenants in Azure Lighthouse, and the resources created in the customer's tenant that enable that relationship.
+> [!NOTE]
+> Onboarding a customer to Azure Lighthouse requires a deployment by a non-guest account in the customer's tenant that has a role with the `Microsoft.Authorization/roleAssignments/write` permission, such as [Owner](../../role-based-access-control/built-in-roles.md#owner), for the subscription being onboarded (or that contains the resource groups being onboarded).
+ ## Delegation resources created in the customer tenant When a customer's subscription or resource group is onboarded to Azure Lighthouse, two resources are created: the **registration definition** and the **registration assignment**. You can use [APIs and management tools](cross-tenant-management-experience.md#apis-and-management-tool-support) to access these resources, or work with them [in the Azure portal](../how-to/view-manage-customers.md).
load-balancer Quickstart Load Balancer Standard Public Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-bicep.md
+
+ Title: "Quickstart: Create a public load balancer - Bicep"
+
+description: This quickstart shows how to create a load balancer by using a Bicep file.
+
+documentationcenter: na
+++
+ na
+ Last updated : 08/17/2022++
+#Customer intent: I want to create a load balancer by using a Bicep file so that I can load balance internet traffic to VMs.
++
+# Quickstart: Create a public load balancer to load balance VMs by using a Bicep file
+
+Load balancing provides a higher level of availability and scale by spreading incoming requests across multiple virtual machines (VMs).
+
+This quickstart shows you how to deploy a standard load balancer to load balance virtual machines.
+
+Using a Bicep file takes fewer steps compared with other deployment methods.
++
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/load-balancer-standard-create/).
+
+Load balancer and public IP SKUs must match. When you create a standard load balancer, you must also create a new standard public IP address that is configured as the frontend for the standard load balancer. If you want to create a basic load balancer, use [this template](https://azure.microsoft.com/resources/templates/2-vms-loadbalancer-natrules/). Microsoft recommends using the Standard SKU for production workloads.
++
+Multiple Azure resources are defined in the Bicep file:
+
+- [**Microsoft.Network/loadBalancers**](/azure/templates/microsoft.network/loadbalancers)
+- [**Microsoft.Network/publicIPAddresses**](/azure/templates/microsoft.network/publicipaddresses): for the load balancer, the bastion host, and each of the three virtual machines.
+- [**Microsoft.Network/bastionHosts**](/azure/templates/microsoft.network/bastionhosts)
+- [**Microsoft.Network/networkSecurityGroups**](/azure/templates/microsoft.network/networksecuritygroups)
+- [**Microsoft.Network/virtualNetworks**](/azure/templates/microsoft.network/virtualnetworks)
+- [**Microsoft.Compute/virtualMachines**](/azure/templates/microsoft.compute/virtualmachines) (3)
+- [**Microsoft.Network/networkInterfaces**](/azure/templates/microsoft.network/networkinterfaces) (3)
+- [**Microsoft.Compute/virtualMachine/extensions**](/azure/templates/microsoft.compute/virtualmachines/extensions) (3): used to configure Internet Information Services (IIS) and the web pages.
+
+To find more Bicep files or ARM templates that are related to Azure Load Balancer, see [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Network&pageNumber=1&sort=Popular).
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location centralus
+ az deployment group create --resource-group exampleRG --template-file main.bicep
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location centralus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep
+ ```
+
+
+
+ > [!NOTE]
+ > The Bicep file deployment creates three availability zones. Availability zones are supported only in [certain regions](../availability-zones/az-overview.md). Use one of the supported regions. If you aren't sure, enter **centralus**.
+
+ You will be prompted to enter the following values:
+
+ - **projectName**: used for generating resource names.
+ - **adminUsername**: virtual machine administrator username.
+ - **adminPassword**: virtual machine administrator password.
+
+It takes about 10 minutes to deploy the Bicep file.
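+
+To avoid the interactive prompts, you can pass the same values inline. A minimal sketch, assuming the CLI deployment shown earlier; replace the placeholder values with your own:
+
+```azurecli
+az deployment group create \
+  --resource-group exampleRG \
+  --template-file main.bicep \
+  --parameters projectName=<project-name> adminUsername=<admin-username> adminPassword=<admin-password>
+```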
+
+## Review deployed resources
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Select **Resource groups** from the left pane.
+
+1. Select the resource group that you created in the previous section. The default resource group name is **exampleRG**.
+
+1. Select the load balancer. Its default name is the project name with **-lb** appended.
+
+1. Copy only the IP address part of the public IP address, and then paste it into the address bar of your browser.
+
+ :::image type="content" source="./media/quickstart-load-balancer-standard-public-template/azure-standard-load-balancer-resource-manager-template-deployment-public-ip.png" alt-text="Screenshot of Azure standard load balancer Resource Manager template public IP.":::
+
+ The browser displays the default page of the Internet Information Services (IIS) web server.
+
+ :::image type="content" source="./media/quickstart-load-balancer-standard-public-template/load-balancer-test-web-page.png" alt-text="Screenshot of IIS web server.":::
+
+To see the load balancer distribute traffic across all three VMs, you can force a refresh of your web browser from the client machine.
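+
+As an alternative to copying the IP address from the portal, the following Azure CLI sketch lists the public IP addresses in the resource group. The exact resource names depend on the project name you chose, so pick the address attached to the load balancer's frontend:
+
+```azurecli
+az network public-ip list \
+  --resource-group exampleRG \
+  --query "[].{name:name, address:ipAddress}" \
+  --output table
+```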
+
+## Clean up resources
+
+When you no longer need them, delete the following resources:
+
+- Resource group
+- Load balancer
+- Related resources
+
+Go to the Azure portal, select the resource group that contains the load balancer, and then select **Delete resource group**.
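+
+From the Azure CLI, deleting the resource group removes the load balancer and all related resources in one step:
+
+```azurecli
+az group delete --name exampleRG --yes --no-wait
+```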
+
+## Next steps
+
+In this quickstart, you:
+
+- Created a virtual network for the load balancer and virtual machines.
+- Created an Azure Bastion host for management.
+- Created a standard load balancer and attached VMs to it.
+- Configured the load-balancer traffic rule, and the health probe.
+- Tested the load balancer.
+
+To learn more, continue to the tutorials for Azure Load Balancer.
+
+> [!div class="nextstepaction"]
+> [Azure Load Balancer tutorials](./quickstart-load-balancer-standard-public-portal.md)
logic-apps Logic Apps Enterprise Integration Rosettanet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-rosettanet.md
Previously updated : 06/22/2019 Last updated : 08/25/2022 # Exchange RosettaNet messages for B2B enterprise integration in Azure Logic Apps + [RosettaNet](https://resources.gs1us.org) is a non-profit consortium that has established standard processes for sharing business information. These standards are commonly used for supply chain processes and are widespread in the semiconductor, electronics, and logistics industries. The RosettaNet consortium creates and maintains Partner Interface Processes (PIPs), which provide common business process definitions for all RosettaNet message exchanges. RosettaNet is based on XML and defines message guidelines, interfaces for business processes, and implementation frameworks for communication between companies. In [Azure Logic Apps](../logic-apps/logic-apps-overview.md), the RosettaNet connector helps you create integration solutions that support RosettaNet standards. The connector is based on RosettaNet Implementation Framework (RNIF) version 2.0.01. RNIF is an open network application framework that enables business partners to collaboratively run RosettaNet PIPs. This framework defines the message structure, the need for acknowledgments, Multipurpose Internet Mail Extensions (MIME) encoding, and the digital signature.
machine-learning Concept Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automated-ml.md
Using **Azure Machine Learning**, you can design and run your automated ML train
1. **Specify the source and format of the labeled training data**: Numpy arrays or Pandas dataframe
-1. **Configure the compute target for model training**, such as your [local computer, Azure Machine Learning Computes, remote VMs, or Azure Databricks](how-to-set-up-training-targets.md).
- 1. **Configure the automated machine learning parameters** that determine how many iterations over different models, hyperparameter settings, advanced preprocessing/featurization, and what metrics to look at when determining the best model. 1. **Submit the training job.**
machine-learning Concept Compute Target https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-compute-target.md
Azure Machine Learning has varying support across different compute targets. A t
[!INCLUDE [aml-compute-target-train](../../includes/aml-compute-target-train.md)]
-Learn more about how to [submit a training job to a compute target](how-to-set-up-training-targets.md).
## <a name="deploy"></a> Compute targets for inference
For more information, see [set up compute targets for model training and deploym
## Next steps Learn how to:
-* [Use a compute target to train your model](how-to-set-up-training-targets.md)
* [Deploy your model to a compute target](how-to-deploy-managed-online-endpoints.md)
machine-learning Concept Deep Learning Vs Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-deep-learning-vs-machine-learning.md
The following articles show you more options for using open-source deep learning
- [Classify handwritten digits by using a TensorFlow model](./how-to-train-tensorflow.md?WT.mc_id=docs-article-lazzeri) - [Classify handwritten digits by using a TensorFlow estimator and Keras](./how-to-train-keras.md?WT.mc_id=docs-article-lazzeri)--- [Classify handwritten digits by using a Chainer model](./how-to-set-up-training-targets.md?WT.mc_id=docs-article-lazzeri)
machine-learning Concept Distributed Training https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-distributed-training.md
In model parallelism, worker nodes only need to synchronize the shared parameter
## Next steps
-* Learn how to [use compute targets for model training](how-to-set-up-training-targets.md) with the Python SDK.
* For a technical example, see the [reference architecture scenario](/azure/architecture/reference-architectures/ai/training-deep-learning). * Find tips for MPI, TensorFlow, and PyTorch in the [Distributed GPU training guide](how-to-train-distributed-gpu.md)
machine-learning Concept Enterprise Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-enterprise-security.md
Previously updated : 07/28/2022 Last updated : 08/26/2022 # Enterprise security and governance for Azure Machine Learning
Azure Machine Learning uses a variety of compute resources and data stores on th
When deploying models as web services, you can enable transport-layer security (TLS) to encrypt data in transit. For more information, see [Configure a secure web service](./v1/how-to-secure-web-service.md).
+## Data exfiltration prevention (preview)
+
+Azure Machine Learning has several inbound and outbound network dependencies. Some of these dependencies can expose a data exfiltration risk by malicious agents within your organization. These risks are associated with the outbound requirements to Azure Storage, Azure Front Door, and Azure Monitor. For recommendations on mitigating this risk, see the [Azure Machine Learning data exfiltration prevention](how-to-prevent-data-loss-exfiltration.md) article.
+ ## Vulnerability scanning [Microsoft Defender for Cloud](../security-center/security-center-introduction.md) provides unified security management and advanced threat protection across hybrid cloud workloads. For Azure machine learning, you should enable scanning of your [Azure Container Registry](../container-registry/container-registry-intro.md) resource and Azure Kubernetes Service resources. For more information, see [Azure Container Registry image scanning by Defender for Cloud](../security-center/defender-for-container-registries-introduction.md) and [Azure Kubernetes Services integration with Defender for Cloud](../security-center/defender-for-kubernetes-introduction.md).
machine-learning Concept Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-mlflow.md
With MLflow Tracking you can connect Azure Machine Learning as the backend of yo
* [Track ML experiments and models running locally or in the cloud](how-to-use-mlflow-cli-runs.md) with MLflow in Azure Machine Learning. * [Track Azure Databricks ML experiments](how-to-use-mlflow-azure-databricks.md) with MLflow in Azure Machine Learning.
-* [Track Azure Synapse Analytics ML experiments](how-to-use-mlflow-azure-databricks.md) with MLflow in Azure Machine Learning.
+* [Track Azure Synapse Analytics ML experiments](how-to-use-mlflow-azure-synapse.md) with MLflow in Azure Machine Learning.
> [!IMPORTANT] > - MLflow in R support is limited to tracking experiment's metrics, parameters and models on Azure Machine Learning jobs. RStudio or Jupyter Notebooks with R kernels are not supported. Model registries are not supported using the MLflow R SDK. As an alternative, use Azure ML CLI or Azure ML studio for model registration and management. View the following [R example about using the MLflow tracking client with Azure Machine Learning](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/single-step/r).
machine-learning Concept Model Management And Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-model-management-and-deployment.md
Microsoft Power BI supports using machine learning models for data analytics. Fo
Machine Learning gives you the capability to track the end-to-end audit trail of all your machine learning assets by using metadata. For example: -- Machine Learning [integrates with Git](how-to-set-up-training-targets.md#gitintegration) to track information on which repository, branch, and commit your code came from.
+- Machine Learning [integrates with Git](concept-train-model-git-integration.md) to track information on which repository, branch, and commit your code came from.
- [Machine Learning datasets](how-to-create-register-datasets.md) help you track, profile, and version data. - [Interpretability](how-to-machine-learning-interpretability.md) allows you to explain your models, meet regulatory compliance, and understand how models arrive at a result for specific input. - Machine Learning Job history stores a snapshot of the code, data, and computes used to train a model.
machine-learning Concept Train Machine Learning Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-train-machine-learning-model.md
You may start with a run configuration for your local computer, and then switch
* [What is a run configuration?](v1/concept-azure-machine-learning-architecture.md#run-configurations) * [Tutorial: Train your first ML model](tutorial-1st-experiment-sdk-train.md) * [Examples: Jupyter Notebook and Python examples of training models](https://github.com/Azure/azureml-examples)
-* [How to: Configure a training run](how-to-set-up-training-targets.md)
+* [How to: Configure a training run](v1/how-to-set-up-training-targets.md)
### Automated Machine Learning
You can use the VS Code extension to run and manage your training jobs. See the
## Next steps
-Learn how to [Configure a training run](how-to-set-up-training-targets.md).
+Learn how to [Configure a training run](v1/how-to-set-up-training-targets.md).
machine-learning Concept Train Model Git Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-train-model-git-integration.md
run.properties['azureml.git.commit']
## Next steps
-* [Use compute targets for model training](how-to-set-up-training-targets.md)
+* [Use compute targets for model training](v1/how-to-set-up-training-targets.md)
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-azureml-behind-firewall.md
The hosts in the following tables are owned by Microsoft, and provide services r
| Integrated notebook | \<storage\>.blob.core.windows.net | TCP | 443 | | Integrated notebook | graph.microsoft.com | TCP | 443 | | Integrated notebook | \*.aznbcontent.net | TCP | 443 |
+| AutoML NLP | automlresources-prod.azureedge.net | TCP | 443 |
+| AutoML NLP | aka.ms | TCP | 443 |
+
+> [!NOTE]
+> AutoML NLP is currently only supported in Azure public regions.
# [Azure Government](#tab/gov)
machine-learning How To Attach Kubernetes Anywhere https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-attach-kubernetes-anywhere.md
In the above example, replace `<instance_type_name>` with the name of the instan
## Next steps - [Train models with CLI v2](how-to-train-cli.md)-- [Train models with Python SDK](how-to-set-up-training-targets.md)
+- [Train models with Python SDK](how-to-train-sdk.md)
- [Deploy model with an online endpoint (CLI v2)](./how-to-deploy-managed-online-endpoints.md) - [Use batch endpoint for batch scoring (CLI v2)](./how-to-use-batch-endpoint.md)
machine-learning How To Create Attach Compute Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-compute-cluster.md
If your Azure Machine Learning compute cluster appears stuck at resizing (0 -> 0
Use your compute cluster to:
-* [Submit a training run](how-to-set-up-training-targets.md)
+* [Submit a training run](v1/how-to-set-up-training-targets.md)
* [Run batch inference](./tutorial-pipeline-batch-scoring-classification.md).
machine-learning How To Create Attach Compute Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-compute-studio.md
To detach your compute use the following steps:
## Next steps
-* Use the compute resource to [submit a training run](how-to-set-up-training-targets.md).
+* Use the compute resource to [submit a training run](how-to-train-sdk.md).
* Learn how to [efficiently tune hyperparameters](how-to-tune-hyperparameters.md) to build better models. * Once you have a trained model, learn [how and where to deploy models](how-to-deploy-managed-online-endpoints.md). * [Use Azure Machine Learning with Azure Virtual Networks](./how-to-network-security-overview.md)
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-manage-compute-instance.md
To create a compute instance, you'll need permissions for the following actions:
* [Access the compute instance terminal](how-to-access-terminal.md) * [Create and manage files](how-to-manage-files.md) * [Update the compute instance to the latest VM image](concept-vulnerability-management.md#compute-instance)
-* [Submit a training job](how-to-set-up-training-targets.md)
+* [Submit a training job](v1/how-to-set-up-training-targets.md)
machine-learning How To Customize Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-customize-compute-instance.md
Logs from the setup script execution appear in the logs folder in the compute in
* [Access the compute instance terminal](how-to-access-terminal.md) * [Create and manage files](how-to-manage-files.md) * [Update the compute instance to the latest VM image](concept-vulnerability-management.md#compute-instance)
-* [Submit a training run](how-to-set-up-training-targets.md)
machine-learning How To Generate Automl Training Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-generate-automl-training-code.md
For more information about AzureML environments, see [the Environment class docu
Since the generated code isnΓÇÖt driven by automated ML anymore, instead of creating an `AutoMLConfig` and then passing it to `experiment.submit()`, you need to create a [`ScriptRunConfig`](/python/api/azureml-core/azureml.core.scriptrunconfig) and provide the generated code (script.py) to it.
-The following example contains the parameters and regular dependencies needed to run `ScriptRunConfig`, such as compute, environment, etc. For more information on how to use ScriptRunConfig, see [Configure and submit training runs](how-to-set-up-training-targets.md).
+The following example contains the parameters and regular dependencies needed to run `ScriptRunConfig`, such as compute, environment, etc. For more information on how to use ScriptRunConfig, see [Configure and submit training runs](v1/how-to-set-up-training-targets.md).
```python from azureml.core import ScriptRunConfig
machine-learning How To Link Synapse Ml Workspaces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-link-synapse-ml-workspaces.md
ws.compute_targets['Synapse Spark pool alias']
* [How to data wrangle with Azure Synapse (preview)](v1/how-to-data-prep-synapse-spark-pool.md). * [How to use Apache Spark in your machine learning pipeline with Azure Synapse (preview)](v1/how-to-use-synapsesparkstep.md)
-* [Train a model](how-to-set-up-training-targets.md).
+* [Train a model](v1/how-to-set-up-training-targets.md).
* [How to securely integrate Azure Synapse and Azure Machine Learning workspaces](how-to-private-endpoint-integration-synapse.md).
machine-learning How To Manage Optimize Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-optimize-cost.md
Azure Machine Learning Compute supports reserved instances inherently. If you pu
## Train locally
-When prototyping and running training jobs that are small enough to run on your local computer, consider training locally. Using the Python SDK, setting your compute target to `local` executes your script locally. For more information, see [Configure and submit training jobs](how-to-set-up-training-targets.md#select-a-compute-target).
+When prototyping and running training jobs that are small enough to run on your local computer, consider training locally. Using the Python SDK, setting your compute target to `local` executes your script locally.
Visual Studio Code provides a full-featured environment for developing your machine learning applications. Using the Azure Machine Learning visual Visual Studio Code extension and Docker, you can run and debug locally. For more information, see [interactive debugging with Visual Studio Code](how-to-debug-visual-studio-code.md).
machine-learning How To Migrate From Estimators To Scriptrunconfig https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-migrate-from-estimators-to-scriptrunconfig.md
This article covers common considerations when migrating from Estimators to Scri
Azure Machine Learning documentation and samples have been updated to use [ScriptRunConfig](/python/api/azureml-core/azureml.core.script_run_config.scriptrunconfig) for job configuration and submission. For information on using ScriptRunConfig, refer to the following documentation:
-* [Configure and submit training jobs](how-to-set-up-training-targets.md)
+* [Configure and submit training jobs](v1/how-to-set-up-training-targets.md)
* [Configuring PyTorch training jobs](how-to-train-pytorch.md) * [Configuring TensorFlow training jobs](how-to-train-tensorflow.md) * [Configuring scikit-learn training jobs](how-to-train-scikit-learn.md)
src.run_config
## Next steps
-* [Configure and submit training jobs](how-to-set-up-training-targets.md)
+* [Configure and submit training jobs](v1/how-to-set-up-training-targets.md)
machine-learning How To Prevent Data Loss Exfiltration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-prevent-data-loss-exfiltration.md
+
+ Title: Configure data exfiltration prevention
+
+description: 'How to configure data exfiltration prevention for your storage accounts.'
+++++++ Last updated : 08/26/2022++
+# Azure Machine Learning data exfiltration prevention (Preview)
+
+<!-- Learn how to use a [Service Endpoint policy](/azure/virtual-network/virtual-network-service-endpoint-policies-overview) to prevent data exfiltration from storage accounts in your Azure Virtual Network that are used by Azure Machine Learning. -->
+
+Azure Machine Learning has several inbound and outbound dependencies. Some of these dependencies can expose a data exfiltration risk by malicious agents within your organization. This document explains how to minimize data exfiltration risk by limiting inbound and outbound requirements.
+
+* __Inbound__: Azure Machine Learning compute instance and compute cluster have two inbound requirements: the `batchnodemanagement` (ports 29876-29877) and `azuremachinelearning` (port 44224) service tags. You can control this inbound traffic by using a network security group. Because it's difficult to disguise Azure service IPs, this inbound traffic poses a low data exfiltration risk. You can also configure the compute to not use a public IP, which removes the inbound requirements.
+
+* __Outbound__: If malicious agents don't have write access to outbound destination resources, they can't use that outbound for data exfiltration. Azure Active Directory, Azure Resource Manager, Azure Machine Learning, and Microsoft Container Registry belong to this category. On the other hand, Storage and AzureFrontDoor.frontend can be used for data exfiltration.
+
+ * __Storage Outbound__: This requirement comes from compute instance and compute cluster. A malicious agent can use this outbound rule to exfiltrate data by provisioning and saving data in their own storage account. You can remove data exfiltration risk by using an Azure Service Endpoint Policy and Azure Batch's simplified node communication architecture.
+
+ * __AzureFrontDoor.frontend outbound__: Azure Front Door is required by the Azure Machine Learning studio UI and AutoML. To narrow the list of possible outbound destinations to just those required by Azure ML, allowlist the following fully qualified domain names (FQDNs) on your firewall.
+
+ - `ml.azure.com`
+ - `automlresources-prod.azureedge.net`
+
+## Prerequisites
+
+* An Azure subscription
+* An Azure Virtual Network (VNet)
+* An Azure Machine Learning workspace with a private endpoint that connects to the VNet.
+ * The storage account used by the workspace must also connect to the VNet using a private endpoint.
+
+## Limitations
+
+* Data exfiltration prevention isn't supported with an Azure Machine Learning compute cluster or compute instance configured for __no public IP__.
+
+## 1. Opt in to the preview
+
+> [!IMPORTANT]
+> Before opting in to this preview, you must have created a workspace and a compute instance on the subscription you plan to use. You can delete the compute instance and/or workspace after creating them.
+
+Use the form at [https://forms.office.com/r/1TraBek7LV](https://forms.office.com/r/1TraBek7LV) to opt in to this Azure Machine Learning preview. Microsoft will contact you once your subscription has been allowlisted to the preview.
+
+> [!TIP]
+> It may take one to two weeks to allowlist your subscription.
+
+## 2. Allow inbound & outbound network traffic
+
+### Inbound
+
+> [!IMPORTANT]
+> The following information __modifies__ the guidance provided in the [Inbound traffic](how-to-secure-training-vnet.md#inbound-traffic) section of the "Secure training environment with virtual networks" article.
+
+__Inbound__ traffic from the service tag `BatchNodeManagement.<region>` or equivalent IP addresses is __not required__.
+
+### Outbound
+
+> [!IMPORTANT]
+> The following information is __in addition__ to the guidance provided in the [Secure training environment with virtual networks](how-to-secure-training-vnet.md) and [Configure inbound and outbound network traffic](how-to-access-azureml-behind-firewall.md) articles.
+
+Select the configuration that you're using:
+
+# [Network security group](#tab/nsg)
+
+__Allow__ outbound traffic over __TCP port 443__ to the following service tags. Replace `<region>` with the Azure region that contains your compute cluster or instance:
+
+* `BatchNodeManagement.<region>`
+* `Storage.<region>` - A Service Endpoint Policy will be applied in a later step to limit outbound traffic.
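+
+A minimal Azure CLI sketch of the corresponding NSG rules, assuming placeholder resource names and rule priorities; replace `<region>` as described above:
+
+```azurecli
+az network nsg rule create --resource-group <resource-group> --nsg-name <nsg-name> \
+  --name Allow-BatchNodeManagement --priority 100 --direction Outbound --access Allow \
+  --protocol Tcp --destination-port-ranges 443 \
+  --destination-address-prefixes "BatchNodeManagement.<region>"
+
+az network nsg rule create --resource-group <resource-group> --nsg-name <nsg-name> \
+  --name Allow-Storage --priority 110 --direction Outbound --access Allow \
+  --protocol Tcp --destination-port-ranges 443 \
+  --destination-address-prefixes "Storage.<region>"
+```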
+
+# [Firewall](#tab/firewall)
+
+__Allow__ outbound traffic over __TCP port 443__ to the following FQDNs. Replace instances of `<region>` with the Azure region that contains your compute cluster or instance:
+
+* `<region>.batch.azure.com`
+* `<region>.service.batch.com`
+* `*.blob.core.windows.net` - A Service Endpoint Policy will be applied in a later step to limit outbound traffic.
+* `*.queue.core.windows.net` - A Service Endpoint Policy will be applied in a later step to limit outbound traffic.
+* `*.table.core.windows.net` - A Service Endpoint Policy will be applied in a later step to limit outbound traffic.
+
+> [!IMPORTANT]
+> If you use one firewall for multiple Azure services, having outbound storage rules impacts other services. In this case, limit the source IP of the outbound storage rule to the address space of the subnet that contains your compute instance and compute cluster resources. This limits the rule to the compute resources in the subnet.
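+
+A minimal sketch of an equivalent rule, assuming classic Azure Firewall application rules (not Firewall Policy) and placeholder names; the source is limited to the compute subnet's address space, per the note above:
+
+```azurecli
+az network firewall application-rule create \
+  --resource-group <resource-group> --firewall-name <firewall-name> \
+  --collection-name aml-outbound --name allow-batch-and-storage \
+  --priority 200 --action Allow --protocols Https=443 \
+  --source-addresses <compute-subnet-prefix> \
+  --target-fqdns "<region>.batch.azure.com" "<region>.service.batch.com" \
+    "*.blob.core.windows.net" "*.queue.core.windows.net" "*.table.core.windows.net"
+```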
+++
+## 3. Enable storage endpoint for the subnet
+
+1. From the [Azure portal](https://portal.azure.com), select the __Azure Virtual Network__ for your Azure ML workspace.
+1. From the left of the page, select __Subnets__ and then select the subnet that contains your compute cluster/instance resources.
+1. In the form that appears, expand the __Services__ dropdown and then enable __Microsoft.Storage__. Select __Save__ to save these changes. (A CLI sketch follows these steps.)
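+
+The same change can be scripted. A minimal Azure CLI sketch, with placeholder names:
+
+```azurecli
+az network vnet subnet update \
+  --resource-group <resource-group> \
+  --vnet-name <vnet-name> \
+  --name <subnet-name> \
+  --service-endpoints Microsoft.Storage
+```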
+
+## 4. Create the service endpoint policy
+
+1. From the [Azure portal](https://portal.azure.com), add a new __Service Endpoint Policy__. On the __Basics__ tab, provide the required information and then select __Next__.
+1. On the __Policy definitions__ tab, perform the following actions:
+ 1. Select __+ Add a resource__, and then provide the following information:
+
+ > [!TIP]
+ > * At least one storage account resource must be listed in the policy.
+ > * If you are adding multiple storage accounts, and the _default storage account_ for your workspace is configured with a private endpoint, you do not need to include it in the policy.
+
+ * __Service__: Microsoft.Storage
+ * __Scope__: Select the scope. For example, select __Single account__ if you want to limit the network traffic to one storage account.
+ * __Subscription__: The Azure subscription that contains the storage account.
+ * __Resource group__: The resource group that contains the storage account.
+ * __Resource__: The storage account.
+
+ Select __Add__ to add the resource information.
+ 1. Select __+ Add an alias__, and then select `/services/Azure/MachineLearning` as the __Server Alias__ value. Select __Add__ to add the alias.
+1. Select __Review + Create__, and then select __Create__.
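+
+The following Azure CLI sketch creates an equivalent policy. Passing the `/services/Azure/MachineLearning` alias through `--service-resources` is an assumption about how the portal's __Add an alias__ step maps to the CLI, so verify the resulting policy in the portal:
+
+```azurecli
+# Create the service endpoint policy (assumed placeholder names).
+az network service-endpoint policy create \
+  --resource-group <resource-group> --name <policy-name>
+
+# Add a definition that allows your storage account plus the Azure ML alias.
+az network service-endpoint policy-definition create \
+  --resource-group <resource-group> --policy-name <policy-name> \
+  --name allow-workspace-storage --service Microsoft.Storage \
+  --service-resources <storage-account-resource-id> "/services/Azure/MachineLearning"
+```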
++
+## 5. Curated environments
+
+When using Azure ML curated environments, make sure to use the latest environment version. The container registry for the environment must also be `mcr.microsoft.com`. To check the container registry, use the following steps:
+
+1. From [Azure ML studio](https://ml.azure.com), select your workspace and then select __Environments__.
+1. Verify that the __Azure container registry__ begins with a value of `mcr.microsoft.com`.
+
+ > [!IMPORTANT]
+ > If the container registry is `viennaglobal.azurecr.io`, you can't use the curated environment with the data exfiltration preview. Try upgrading to the latest version of the curated environment.
+
+1. When using `mcr.microsoft.com`, you must also allow outbound traffic to the following resources. Select the configuration option that you're using:
+
+ # [Network security group](#tab/nsg)
+
+ __Allow__ outbound traffic over __TCP port 443__ to the following service tags. Replace `<region>` with the Azure region that contains your compute cluster or instance.
+
+ * `MicrosoftContainerRegistry.<region>`
+ * `AzureFrontDoor.FirstParty`
+
+ # [Firewall](#tab/firewall)
+
+ __Allow__ outbound traffic over __TCP port 443__ to the following FQDNs. Replace instances of `<region>` with the Azure region that contains your compute cluster or instance:
+
+ * `mcr.microsoft.com`
+ * `*.data.mcr.microsoft.com`
+
+
+
+## Next steps
+
+For more information, see the following articles:
+
+* [How to configure inbound and outbound network traffic](how-to-access-azureml-behind-firewall.md)
+* [Azure Batch simplified node communication](/azure/batch/simplified-compute-node-communication)
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-training-vnet.md
In this article you learn how to secure the following training compute resources
> [!WARNING] > If you are using a __private endpoint-enabled workspace__, creating the cluster in a different region is __not supported__.
+* An Azure Machine Learning workspace requires outbound access to `storage.<region>/*.blob.core.windows.net` on the public internet, where `<region>` is the Azure region of the workspace. This outbound access is required by Azure Machine Learning compute cluster and compute instance. Both are based on Azure Batch, and need to access a storage account provided by Azure Batch on the public network.
+
+ By using a Service Endpoint Policy, you can mitigate this vulnerability. This feature is currently in preview. For more information, see the [Azure Machine Learning data exfiltration prevention](how-to-prevent-data-loss-exfiltration.md) article.
+ ### Azure Databricks * In addition to the __databricks-private__ and __databricks-public__ subnets used by Azure Databricks, the __default__ subnet created for the virtual network is also required.
except ComputeTargetException:
-When the creation process finishes, you train your model by using the cluster in an experiment. For more information, see [Select and use a compute target for training](how-to-set-up-training-targets.md).
+When the creation process finishes, you train your model by using the cluster in an experiment.
[!INCLUDE [low-pri-note](../../includes/machine-learning-low-pri-vm.md)]
If you don't want to use the default outbound rules and you do want to limit the
### Attach the VM or HDInsight cluster
-Attach the VM or HDInsight cluster to your Azure Machine Learning workspace. For more information, see [Set up compute targets for model training](how-to-set-up-training-targets.md).
+Attach the VM or HDInsight cluster to your Azure Machine Learning workspace. For more information, see [Manage compute resources for model training and deployment in studio](how-to-create-attach-compute-studio.md).
## Next steps
machine-learning How To Train Keras https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-keras.md
src = ScriptRunConfig(source_directory=script_folder,
environment=keras_env) ```
-For more information on configuring jobs with ScriptRunConfig, see [Configure and submit training runs](how-to-set-up-training-targets.md).
+For more information on configuring jobs with ScriptRunConfig, see [Configure and submit training runs](v1/how-to-set-up-training-targets.md).
> [!WARNING] > If you were previously using the TensorFlow estimator to configure your Keras training jobs, please note that Estimators have been deprecated as of the 1.19.0 SDK release. With Azure ML SDK >= 1.15.0, ScriptRunConfig is the recommended way to configure training jobs, including those using deep learning frameworks. For common migration questions, see the [Estimator to ScriptRunConfig migration guide](how-to-migrate-from-estimators-to-scriptrunconfig.md).
machine-learning How To Train Pytorch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-pytorch.md
src = ScriptRunConfig(source_directory=project_folder,
> [!WARNING] > Azure Machine Learning runs training scripts by copying the entire source directory. If you have sensitive data that you don't want to upload, use a [.ignore file](how-to-save-write-experiment-files.md#storage-limits-of-experiment-snapshots) or don't include it in the source directory . Instead, access your data using an Azure ML [dataset](v1/how-to-train-with-datasets.md).
-For more information on configuring jobs with ScriptRunConfig, see [Configure and submit training runs](how-to-set-up-training-targets.md).
+For more information on configuring jobs with ScriptRunConfig, see [Configure and submit training runs](v1/how-to-set-up-training-targets.md).
> [!WARNING] > If you were previously using the PyTorch estimator to configure your PyTorch training jobs, please note that Estimators have been deprecated as of the 1.19.0 SDK release. With Azure ML SDK >= 1.15.0, ScriptRunConfig is the recommended way to configure training jobs, including those using deep learning frameworks. For common migration questions, see the [Estimator to ScriptRunConfig migration guide](how-to-migrate-from-estimators-to-scriptrunconfig.md).
machine-learning How To Train Tensorflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-tensorflow.md
src = ScriptRunConfig(source_directory=script_folder,
> [!WARNING] > Azure Machine Learning runs training scripts by copying the entire source directory. If you have sensitive data that you don't want to upload, use a [.ignore file](how-to-save-write-experiment-files.md#storage-limits-of-experiment-snapshots) or don't include it in the source directory . Instead, access your data using an Azure ML [dataset](v1/how-to-train-with-datasets.md).
-For more information on configuring jobs with ScriptRunConfig, see [Configure and submit training runs](how-to-set-up-training-targets.md).
+For more information on configuring jobs with ScriptRunConfig, see [Configure and submit training runs](v1/how-to-set-up-training-targets.md).
> [!WARNING] > If you were previously using the TensorFlow estimator to configure your TensorFlow training jobs, please note that Estimators have been deprecated as of the 1.19.0 SDK release. With Azure ML SDK >= 1.15.0, ScriptRunConfig is the recommended way to configure training jobs, including those using deep learning frameworks. For common migration questions, see the [Estimator to ScriptRunConfig migration guide](how-to-migrate-from-estimators-to-scriptrunconfig.md).
machine-learning How To Train With Custom Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-with-custom-image.md
print(compute_target.get_status().serialize())
For this tutorial, use the training script *train.py* on [GitHub](https://github.com/Azure/azureml-examples/blob/main/python-sdk/workflows/train/fastai/pets/src/train.py). In practice, you can take any custom training script and run it, as is, with Azure Machine Learning.
-Create a `ScriptRunConfig` resource to configure your job for running on the desired [compute target](how-to-set-up-training-targets.md).
+Create a `ScriptRunConfig` resource to configure your job for running on the desired [compute target](v1/how-to-set-up-training-targets.md).
```python from azureml.core import ScriptRunConfig
machine-learning How To Use Mlflow Azure Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-azure-databricks.md
By default, dual-tracking is configured for you when you linked your Azure Datab
Linking your ADB workspace to your Azure Machine Learning workspace enables you to track your experiment data in the Azure Machine Learning workspace and Azure Databricks workspace at the same time. This is referred to as dual-tracking.
+> [!WARNING]
+> Dual-tracking in a [private link enabled Azure Machine Learning workspace](how-to-configure-private-link.md) isn't supported at the moment. Configure [exclusive tracking with your Azure Machine Learning workspace](#tracking-exclusively-on-azure-machine-learning-workspace) instead.
+>
+> [!WARNING]
+> Dual-tracking isn't supported in Azure China at the moment. Configure [exclusive tracking with your Azure Machine Learning workspace](#tracking-exclusively-on-azure-machine-learning-workspace) instead.
+ To link your ADB workspace to a new or existing Azure Machine Learning workspace, 1. Sign in to [Azure portal](https://portal.azure.com). 1. Navigate to your ADB workspace's **Overview** page.
To link your ADB workspace to a new or existing Azure Machine Learning workspace
![Link Azure DB and Azure Machine Learning workspaces](./media/how-to-use-mlflow-azure-databricks/link-workspaces.png)
-> [!WARNING]
-> Dual-tracking in a [private link enabled Azure Machine Learning workspace](how-to-configure-private-link.md) is not supported by the moment. Configure [exclusive tracking with your Azure Machine Learning workspace](#tracking-exclusively-on-azure-machine-learning-workspace) instead.
- After you link your Azure Databricks workspace with your Azure Machine Learning workspace, MLflow Tracking is automatically set to be tracked in all of the following places: * The linked Azure Machine Learning workspace.
machine-learning How To Use Mlflow Cli Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-cli-runs.md
For details about how to log metrics, parameters and artifacts in a run using ML
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
-Remote runs (jobs) let you train your models in a more robust and repetitive way. They can also leverage more powerful computes, such as Machine Learning Compute clusters. See [Use compute targets for model training](how-to-set-up-training-targets.md) to learn about different compute options.
-
+Remote runs (jobs) let you train your models in a more robust and repetitive way. They can also leverage more powerful computes, such as Machine Learning Compute clusters. See [What are compute targets in Azure Machine Learning?](concept-compute-target.md) to learn about different compute options.
When submitting runs using jobs, Azure Machine Learning automatically configures MLflow to work with the workspace the job is running in. This means that there is no need to configure the MLflow tracking URI. On top of that, experiments are automatically named based on the details of the job.
machine-learning Resource Limits Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/resource-limits-capacity.md
- Title: Service limits-
-description: Service limits used for capacity planning and maximum limits on requests and responses for Azure Machine Learning.
------- Previously updated : 07/13/2022--
-# Service limits in Azure Machine Learning
-
-This section lists basic limits and throttling thresholds in Azure Machine Learning.
-
-To learn how increase resource quotas, see ["Manage and increase quotas for resources"](how-to-manage-quotas.md)
-
-> [!Important]
-> Azure Machine Learning doesn't store or process your data outside of the region where you deploy.
--
-## Workspaces
-| Limit | Value |
-| | |
-| Workspace name | 2-32 characters |
-
-## Runs
-| Limit | Value |
-| | |
-| Runs per workspace | 10 million |
-| RunId/ParentRunId | 256 characters |
-| DataContainerId | 261 characters |
-| DisplayName |256 characters|
-| Description |5,000 characters|
-| Number of properties |50 |
-| Length of property key |100 characters |
-| Length of property value |1,000 characters |
-| Number of tags |50 |
-| Length of tag key |100 |
-| Length of tag value |1,000 characters |
-| CancelUri / CompleteUri / DiagnosticsUri |1,000 characters |
-| Error message length |3,000 characters |
-| Warning message length |300 characters |
-| Number of input datasets |200 |
-| Number of output datasets |20 |
--
-## Metrics
-| Limit | Value |
-| | |
-| Metric names per run |50|
-| Metric rows per metric name |10 million|
-| Columns per metric row |15|
-| Metric column name length |255 characters |
-| Metric column value length |255 characters |
-| Metric rows per batch uploaded | 250 |
-
-> [!NOTE]
-> If you are hitting the limit of metric names per run because you are formatting variables into the metric name, consider instead to use a row metric where one column is the variable value and the second column is the metric value.
-
-## Artifacts
-
-| Limit | Value |
-| | |
-| Number of artifacts per run |10 million|
-| Max length of artifact path |5,000 characters |
-
-## Limit increases
-
-Some limits can be increased for individual workspaces by [contacting support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest/).
-
-## Next steps
--- [Configure your Azure Machine Learning environment](how-to-configure-environment.md)-- Learn how increase resource quotas in ["Manage and increase quotas for resources"](how-to-manage-quotas.md).-
machine-learning Concept Automated Ml V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-automated-ml-v1.md
Using **Azure Machine Learning**, you can design and run your automated ML train
1. **Specify the source and format of the labeled training data**: Numpy arrays or Pandas dataframe
-1. **Configure the compute target for model training**, such as your [local computer, Azure Machine Learning Computes, remote VMs, or Azure Databricks with SDK v1](../how-to-set-up-training-targets.md).
+1. **Configure the compute target for model training**, such as your [local computer, Azure Machine Learning Computes, remote VMs, or Azure Databricks with SDK v1](how-to-set-up-training-targets.md).
1. **Configure the automated machine learning parameters** that determine how many iterations over different models, hyperparameter settings, advanced preprocessing/featurization, and what metrics to look at when determining the best model. 1. **Submit the training job.**
machine-learning Concept Azure Machine Learning Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-azure-machine-learning-architecture.md
A run configuration defines how a script should be run in a specified compute ta
A run configuration can be persisted into a file inside the directory that contains your training script. Or it can be constructed as an in-memory object and used to submit a run.
-For example run configurations, see [Configure a training run](../how-to-set-up-training-targets.md).
+For example run configurations, see [Configure a training run](how-to-set-up-training-targets.md).
### Snapshots
machine-learning Concept Model Management And Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-model-management-and-deployment.md
Microsoft Power BI supports using machine learning models for data analytics. Fo
Azure ML gives you the capability to track the end-to-end audit trail of all of your ML assets by using metadata. -- Azure ML [integrates with Git](../how-to-set-up-training-targets.md#gitintegration) to track information on which repository / branch / commit your code came from.
+- Azure ML [integrates with Git](../concept-train-model-git-integration.md) to track information on which repository / branch / commit your code came from.
- [Azure ML Datasets](how-to-create-register-datasets.md) help you track, profile, and version data. - [Interpretability](../how-to-machine-learning-interpretability.md) allows you to explain your models, meet regulatory compliance, and understand how models arrive at a result for given input. - Azure ML Run history stores a snapshot of the code, data, and computes used to train a model.
machine-learning How To Access Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-access-data.md
Azure Data Factory provides efficient and resilient data transfer with more than
## Next steps * [Create an Azure machine learning dataset](how-to-create-register-datasets.md)
-* [Train a model](../how-to-set-up-training-targets.md)
+* [Train a model](how-to-set-up-training-targets.md)
* [Deploy a model](how-to-deploy-and-where.md)
machine-learning How To Attach Compute Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-attach-compute-targets.md
With Azure Machine Learning, you can train your model on various resources or en
## Local computer
-When you use your local computer for **training**, there is no need to create a compute target. Just [submit the training run](../how-to-set-up-training-targets.md) from your local machine.
+When you use your local computer for **training**, there is no need to create a compute target. Just [submit the training run](how-to-set-up-training-targets.md) from your local machine.
When you use your local computer for **inference**, you must have Docker installed. To perform the deployment, use [LocalWebservice.deploy_configuration()](/python/api/azureml-core/azureml.core.webservice.local.localwebservice#deploy-configuration-port-none-) to define the port that the web service will use. Then use the normal deployment process as described in [Deploy models with Azure Machine Learning](how-to-deploy-and-where.md).
See these notebooks for examples of training with various compute targets:
## Next steps
-* Use the compute resource to [configure and submit a training run](../how-to-set-up-training-targets.md).
+* Use the compute resource to [configure and submit a training run](how-to-set-up-training-targets.md).
* [Tutorial: Train and deploy a model](../tutorial-train-deploy-notebook.md) uses a managed compute target to train a model. * Learn how to [efficiently tune hyperparameters](../how-to-tune-hyperparameters.md) to build better models. * Once you have a trained model, learn [how and where to deploy models](../how-to-deploy-managed-online-endpoints.md).
machine-learning How To Connect Data Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-connect-data-ui.md
Use your datasets in your machine learning experiments for training ML models. [
* [A step-by-step example of training with TabularDatasets and automated machine learning](../tutorial-first-experiment-automated-ml.md).
-* [Train a model](../how-to-set-up-training-targets.md).
+* [Train a model](how-to-set-up-training-targets.md).
* For more dataset training examples, see the [sample notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/work-with-data/).
machine-learning How To Create Attach Compute Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-create-attach-compute-cluster.md
If your Azure Machine Learning compute cluster appears stuck at resizing (0 -> 0
Use your compute cluster to:
-* [Submit a training run](../how-to-set-up-training-targets.md)
+* [Submit a training run](how-to-set-up-training-targets.md)
* [Run batch inference](../tutorial-pipeline-batch-scoring-classification.md).
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-create-manage-compute-instance.md
To create a compute instance, you'll need permissions for the following actions:
* [Access the compute instance terminal](../how-to-access-terminal.md) * [Create and manage files](../how-to-manage-files.md) * [Update the compute instance to the latest VM image](../concept-vulnerability-management.md#compute-instance)
-* [Submit a training run](../how-to-set-up-training-targets.md)
+* [Submit a training run](how-to-set-up-training-targets.md)
machine-learning How To Data Prep Synapse Spark Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-data-prep-synapse-spark-pool.md
See the example notebooks for more concepts and demonstrations of the Azure Syna
## Next steps
-* [Train a model](../how-to-set-up-training-targets.md).
+* [Train a model](how-to-set-up-training-targets.md).
* [Train with Azure Machine Learning dataset](how-to-train-with-datasets.md).
machine-learning How To Log View Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-log-view-metrics.md
# Log & view metrics and log files v1 + > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning Python SDK you are using:"] > * [v1](how-to-log-view-metrics.md) > * [v2 (preview)](../how-to-log-view-metrics.md)
This folder contains information about the user generated logs. This folder is o
#### system_logs folder
-This folder contains the logs generated by Azure Machine Learning and it will be closed by default.The logs generated by the system are grouped into different folders, based on the stage of the job in the runtime.
+This folder contains the logs generated by Azure Machine Learning and it is collapsed by default. The logs generated by the system are grouped into different folders, based on the stage of the job in the runtime.
#### Other folders
machine-learning How To Set Up Training Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-set-up-training-targets.md
+
+Title: Configure a training job
+
+description: Train your machine learning model on various training environments (compute targets). You can easily switch between training environments.
+
+Last updated: 10/21/2021
+# Configure and submit training jobs
++
+In this article, you learn how to configure and submit Azure Machine Learning jobs to train your models. Snippets of code explain the key parts of configuration and submission of a training script. Then use one of the [example notebooks](#notebook-examples) for full, end-to-end working examples.
+
+When training, it is common to start on your local computer, and then later scale out to a cloud-based cluster. With Azure Machine Learning, you can run your script on various compute targets without having to change your training script.
+
+All you need to do is define the environment for each compute target within a **script job configuration**. Then, when you want to run your training experiment on a different compute target, specify the job configuration for that compute.
+
+## Prerequisites
+
+* If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
+* The [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/install) (>= 1.13.0)
+* An [Azure Machine Learning workspace](../how-to-manage-workspace.md), `ws`
+* A compute target, `my_compute_target`. [Create a compute target](../how-to-create-attach-compute-studio.md)
+
+## What's a script run configuration?
+A [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig) is used to configure the information necessary for submitting a training job as part of an experiment.
+
+You submit your training experiment with a ScriptRunConfig object. This object includes the:
+
+* **source_directory**: The source directory that contains your training script
+* **script**: The training script to run
+* **compute_target**: The compute target to run on
+* **environment**: The environment to use when running the script
+* and some additional configurable options (see the [reference documentation](/python/api/azureml-core/azureml.core.scriptrunconfig) for more information)
+
+## Train your model
+
+The code pattern to submit a training job is the same for all types of compute targets (a minimal end-to-end sketch follows the lists below):
+
+1. Create an experiment to run
+1. Create an environment where the script will run
+1. Create a ScriptRunConfig, which specifies the compute target and environment
+1. Submit the job
+1. Wait for the job to complete
+
+Or you can:
+
+* Submit a HyperDrive run for [hyperparameter tuning](../how-to-tune-hyperparameters.md).
+* Submit an experiment via the [VS Code extension](../tutorial-train-deploy-image-classification-model-vscode.md#train-the-model).
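+
+Put together, the basic pattern looks like the following minimal sketch. Each step is explained in the sections below; the source directory, script name, and experiment name are illustrative:
+
+```python
+from azureml.core import Workspace, Experiment, Environment, ScriptRunConfig
+
+ws = Workspace.from_config()
+experiment = Experiment(workspace=ws, name='my_experiment')
+
+# A curated environment; see "Create an environment" below
+myenv = Environment.get(workspace=ws, name="AzureML-Minimal")
+
+src = ScriptRunConfig(source_directory='.',    # folder containing train.py
+                      script='train.py',
+                      compute_target='local',  # or a compute target you created
+                      environment=myenv)
+
+run = experiment.submit(config=src)
+run.wait_for_completion(show_output=True)
+```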
+
+## Create an experiment
+
+Create an [experiment](concept-azure-machine-learning-architecture.md#experiments) in your workspace. An experiment is a lightweight container that helps you organize job submissions and keep track of code.
+
+```python
+from azureml.core import Experiment
+
+experiment_name = 'my_experiment'
+experiment = Experiment(workspace=ws, name=experiment_name)
+```
+
+## Select a compute target
+
+Select the compute target where your training script will run. If no compute target is specified in the ScriptRunConfig, or if `compute_target='local'`, Azure ML will execute your script locally.
+
+The example code in this article assumes that you have already created a compute target `my_compute_target` from the "Prerequisites" section.
+
+>[!Note]
+>Azure Databricks is not supported as a compute target for model training. You can use Azure Databricks for data preparation and deployment tasks.
++
+## Create an environment
+Azure Machine Learning [environments](../concept-environments.md) are an encapsulation of the environment where your machine learning training happens. They specify the Python packages, Docker image, environment variables, and software settings around your training and scoring scripts. They also specify runtimes (Python, Spark, or Docker).
+
+You can either define your own environment, or use an Azure ML curated environment. [Curated environments](../how-to-use-environments.md#use-a-curated-environment) are predefined environments that are available in your workspace by default. These environments are backed by cached Docker images, which reduces the job preparation cost. See [Azure Machine Learning Curated Environments](../resource-curated-environments.md) for the full list of available curated environments.
+
+For a remote compute target, you can use one of these popular curated environments to start with:
+
+```python
+from azureml.core import Workspace, Environment
+
+ws = Workspace.from_config()
+myenv = Environment.get(workspace=ws, name="AzureML-Minimal")
+```
+
+For more information and details about environments, see [Create & use software environments in Azure Machine Learning](how-to-use-environments.md).
+
+### Local compute target
+
+If your compute target is your **local machine**, you are responsible for ensuring that all the necessary packages are available in the Python environment where the script runs. Use `python.user_managed_dependencies` to use your current Python environment (or the Python on the path you specify).
+
+```python
+from azureml.core import Environment
+
+myenv = Environment("user-managed-env")
+myenv.python.user_managed_dependencies = True
+
+# You can choose a specific Python environment by pointing to a Python path
+# myenv.python.interpreter_path = '/home/johndoe/miniconda3/envs/myenv/bin/python'
+```
+
+## Create the script job configuration
+
+Now that you have a compute target (`my_compute_target`, see [Prerequisites](#prerequisites)) and an environment (`myenv`, see [Create an environment](#create-an-environment)), create a script job configuration that runs your training script (`train.py`) located in your `project_folder` directory:
+
+```python
+from azureml.core import ScriptRunConfig
+
+src = ScriptRunConfig(source_directory=project_folder,
+ script='train.py',
+ compute_target=my_compute_target,
+ environment=myenv)
+
+# Set the compute target (skip this if you are running on your local computer)
+src.run_config.target = my_compute_target
+```
+
+If you do not specify an environment, a default environment will be created for you.
+
+If you have command-line arguments you want to pass to your training script, you can specify them via the **`arguments`** parameter of the ScriptRunConfig constructor, e.g. `arguments=['--arg1', arg1_val, '--arg2', arg2_val]`.
+
+If you want to override the default maximum time allowed for the job, you can do so via the **`max_run_duration_seconds`** parameter. The system will attempt to automatically cancel the job if it takes longer than this value.
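+
+For example, a minimal sketch combining both options; the argument names and the one-hour limit are illustrative, and `train.py` must parse the arguments itself:
+
+```python
+from azureml.core import ScriptRunConfig
+
+src = ScriptRunConfig(source_directory=project_folder,
+                      script='train.py',
+                      compute_target=my_compute_target,
+                      environment=myenv,
+                      arguments=['--learning-rate', 0.01, '--epochs', 10],
+                      max_run_duration_seconds=3600)  # cancel after one hour
+```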
+
+### Specify a distributed job configuration
+If you want to run a [distributed training](../how-to-train-distributed-gpu.md) job, provide the distributed job-specific config to the **`distributed_job_config`** parameter. Supported config types include [MpiConfiguration](/python/api/azureml-core/azureml.core.runconfig.mpiconfiguration), [TensorflowConfiguration](/python/api/azureml-core/azureml.core.runconfig.tensorflowconfiguration), and [PyTorchConfiguration](/python/api/azureml-core/azureml.core.runconfig.pytorchconfiguration).
+
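+For example, a minimal sketch using `MpiConfiguration`; the node and process counts are illustrative:
+
+```python
+from azureml.core import ScriptRunConfig
+from azureml.core.runconfig import MpiConfiguration
+
+# Run train.py on 2 nodes with 4 processes per node
+distr_config = MpiConfiguration(process_count_per_node=4, node_count=2)
+
+src = ScriptRunConfig(source_directory=project_folder,
+                      script='train.py',
+                      compute_target=my_compute_target,
+                      environment=myenv,
+                      distributed_job_config=distr_config)
+```
+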
+For more information and examples on running distributed Horovod, TensorFlow and PyTorch jobs, see:
+
+* [Train TensorFlow models](../how-to-train-tensorflow.md#distributed-training)
+* [Train PyTorch models](../how-to-train-pytorch.md#distributed-training)
+
+## Submit the experiment
+
+```python
+run = experiment.submit(config=src)
+run.wait_for_completion(show_output=True)
+```
+
+> [!IMPORTANT]
+> When you submit the training job, a snapshot of the directory that contains your training scripts is created and sent to the compute target. It is also stored as part of the experiment in your workspace. If you change files and submit the job again, only the changed files will be uploaded.
+>
+> [!INCLUDE [amlinclude-info](../../../includes/machine-learning-amlignore-gitignore.md)]
+>
+> For more information about snapshots, see [Snapshots](concept-azure-machine-learning-architecture.md#snapshots).
+
+> [!IMPORTANT]
+> **Special Folders**
+> Two folders, *outputs* and *logs*, receive special treatment by Azure Machine Learning. During training, when you write files to folders named *outputs* and *logs* that are relative to the root directory (`./outputs` and `./logs`, respectively), the files will automatically upload to your job history so that you have access to them once your job is finished.
+>
+> To create artifacts during training (such as model files, checkpoints, data files, or plotted images) write these to the `./outputs` folder.
+>
+> Similarly, you can write any logs from your training job to the `./logs` folder. To utilize Azure Machine Learning's [TensorBoard integration](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/track-and-monitor-experiments/tensorboard/export-run-history-to-tensorboard/export-run-history-to-tensorboard.ipynb), make sure you write your TensorBoard logs to this folder. While your job is in progress, you will be able to launch TensorBoard and stream these logs. Later, you will also be able to restore the logs from any of your previous jobs.
+>
+> For example, to download a file written to the *outputs* folder to your local machine after your remote training job:
+> `run.download_file(name='outputs/my_output_file', output_file_path='my_destination_path')`
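+
+For instance, inside your training script you might persist an artifact like this (a minimal sketch; the file name and contents are illustrative):
+
+```python
+import os
+
+# Anything written under ./outputs (or ./logs) is uploaded to the job
+# history automatically once the job finishes.
+os.makedirs('./outputs', exist_ok=True)
+with open('./outputs/metrics.txt', 'w') as f:
+    f.write('accuracy: 0.90\n')
+```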
+
+## Git tracking and integration
+
+When you start a training job where the source directory is a local Git repository, information about the repository is stored in the job history. For more information, see [Git integration for Azure Machine Learning](../concept-train-model-git-integration.md).
+
+## Notebook examples
+
+See these notebooks for examples of configuring jobs for various training scenarios:
+* [Training on various compute targets](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/training)
+* [Training with ML frameworks](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/ml-frameworks)
+* [tutorials/img-classification-part1-training.ipynb](https://github.com/Azure/MachineLearningNotebooks/blob/master/tutorials/image-classification-mnist-data/img-classification-part1-training.ipynb)
++
+## Troubleshooting
+
+* **AttributeError: 'RoundTripLoader' object has no attribute 'comment_handling'**: This error comes from the new version (v0.17.5) of `ruamel-yaml`, an `azureml-core` dependency, which introduces a breaking change to `azureml-core`. To fix this error, uninstall `ruamel-yaml` by running `pip uninstall ruamel-yaml` and install a supported version; the supported versions are v0.15.35 to v0.17.4 (inclusive). You can do this by running `pip install "ruamel-yaml>=0.15.35,<0.17.5"`.
++
+* **Job fails with `jwt.exceptions.DecodeError`**: Exact error message: `jwt.exceptions.DecodeError: It is required that you pass in a value for the "algorithms" argument when calling decode()`.
+
+ Consider upgrading to the latest version of azureml-core: `pip install -U azureml-core`.
+
+ If you are running into this issue for local jobs, check the version of PyJWT installed in the environment where you are starting jobs. The supported versions of PyJWT are < 2.0.0. Uninstall PyJWT from the environment if the version is >= 2.0.0. You can check the version of PyJWT, and uninstall and install the right version, as follows:
+ 1. Start a command shell and activate the conda environment where azureml-core is installed.
+ 2. Enter `pip freeze` and look for `PyJWT`; if found, the version listed should be < 2.0.0.
+ 3. If the listed version is not a supported version, run `pip uninstall PyJWT` in the command shell and enter y for confirmation.
+ 4. Install using `pip install 'PyJWT<2.0.0'`.
+
+ If you are submitting a user-created environment with your job, consider using the latest version of azureml-core in that environment. Versions >= 1.18.0 of azureml-core already pin PyJWT < 2.0.0. If you need to use a version of azureml-core < 1.18.0 in the environment you submit, make sure to specify PyJWT < 2.0.0 in your pip dependencies.
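+
+ For example, a minimal sketch of pinning PyJWT in a user-created environment; the environment name and the azureml-core version are illustrative:
+
+ ```python
+ from azureml.core import Environment
+ from azureml.core.conda_dependencies import CondaDependencies
+
+ # Pin PyJWT explicitly when using azureml-core < 1.18.0
+ myenv = Environment("pinned-pyjwt-env")
+ myenv.python.conda_dependencies = CondaDependencies.create(
+     pip_packages=["azureml-core==1.17.0", "PyJWT<2.0.0"])
+ ```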
++
+ * **ModuleErrors (No module named)**: If you are running into ModuleErrors while submitting experiments in Azure ML, the training script expects a package that isn't installed in the environment. Once you provide the package name, Azure ML installs the package in the environment used for your training job.
+
+ If you are using Estimators to submit experiments, you can specify a package name via the `pip_packages` or `conda_packages` parameter of the estimator, depending on the source from which you want to install the package. You can also specify a YAML file with all your dependencies using the `conda_dependencies_file` parameter, or list all your pip requirements in a text file using the `pip_requirements_file` parameter. If you have your own Azure ML Environment object that you want to use instead of the default image used by the estimator, specify that environment via the `environment` parameter of the estimator constructor.
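+
+ For example, a minimal sketch of the estimator approach; the package names are illustrative and should list whatever your script imports:
+
+ ```python
+ from azureml.train.estimator import Estimator
+
+ est = Estimator(source_directory=project_folder,
+                 entry_script='train.py',
+                 compute_target=my_compute_target,
+                 pip_packages=['scikit-learn', 'pandas'])
+ ```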
+
+ Azure ML maintained docker images and their contents can be seen in [AzureML Containers](https://github.com/Azure/AzureML-Containers).
+ Framework-specific dependencies are listed in the respective framework documentation:
+ * [Chainer](/python/api/azureml-train-core/azureml.train.dnn.chainer#remarks)
+ * [PyTorch](/python/api/azureml-train-core/azureml.train.dnn.pytorch#remarks)
+ * [TensorFlow](/python/api/azureml-train-core/azureml.train.dnn.tensorflow#remarks)
+ * [SKLearn](/python/api/azureml-train-core/azureml.train.sklearn.sklearn#remarks)
+
+ > [!Note]
+ > If you think a particular package is common enough to be added in Azure ML maintained images and environments please raise a GitHub issue in [AzureML Containers](https://github.com/Azure/AzureML-Containers).
+
+* **NameError (Name not defined), AttributeError (Object has no attribute)**: These exceptions come from your training scripts. You can look at the log files from the Azure portal to get more information about the specific name not defined or attribute error. From the SDK, you can use `run.get_details()` to look at the error message; this also lists all the log files generated for your job. Make sure to fix the error in your training script before resubmitting your job.
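+
+ A minimal sketch, assuming `run` is the submitted run object; `get_details()` returns a dictionary that includes the generated log files and, for failed runs, the error:
+
+ ```python
+ details = run.get_details()
+ print(details.get('error'))     # error details, if the run failed
+ print(details.get('logFiles'))  # names and locations of generated log files
+ ```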
++
+* **Job or experiment deletion**: Experiments can be archived by using the [Experiment.archive](/python/api/azureml-core/azureml.core.experiment%28class%29#archive--)
+method, or from the Experiment tab view in Azure Machine Learning studio client via the "Archive experiment" button. This action hides the experiment from list queries and views, but does not delete it.
+
+ Permanent deletion of individual experiments or jobs is not currently supported. For more information on deleting Workspace assets, see [Export or delete your Machine Learning service workspace data](how-to-export-delete-data.md).
+
+* **Metric Document is too large**: Azure Machine Learning has internal limits on the size of metric objects that can be logged at once from a training job. If you encounter a "Metric Document is too large" error when logging a list-valued metric, try splitting the list into smaller chunks, for example:
+
+ ```python
+ run.log_list("my metric name", my_metric[:N])
+ run.log_list("my metric name", my_metric[N:])
+ ```
+
+ Internally, Azure ML concatenates the blocks with the same metric name into a contiguous list.
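+
+ A sketch of the same idea generalized to fixed-size chunks; the chunk size is illustrative:
+
+ ```python
+ CHUNK = 250
+ for i in range(0, len(my_metric), CHUNK):
+     run.log_list("my metric name", my_metric[i:i + CHUNK])
+ ```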
+
+* **Compute target takes a long time to start**: The Docker images for compute targets are loaded from Azure Container Registry (ACR). By default, Azure Machine Learning creates an ACR that uses the *basic* service tier. Changing the ACR for your workspace to standard or premium tier may reduce the time it takes to build and load images. For more information, see [Azure Container Registry service tiers](../../container-registry/container-registry-skus.md).
+
+## Next steps
+
+* [Tutorial: Train and deploy a model](tutorial-1st-experiment-sdk-train.md) uses a managed compute target to train a model.
+* See how to train models with specific ML frameworks, such as [Scikit-learn](../how-to-train-scikit-learn.md), [TensorFlow](../how-to-train-tensorflow.md), and [PyTorch](../how-to-train-pytorch.md).
+* Learn how to [efficiently tune hyperparameters](../how-to-tune-hyperparameters.md) to build better models.
+* Once you have a trained model, learn [how and where to deploy models](../how-to-deploy-managed-online-endpoints.md).
+* View the [ScriptRunConfig class](/python/api/azureml-core/azureml.core.scriptrunconfig) SDK reference.
+* [Use Azure Machine Learning with Azure Virtual Networks](../how-to-network-security-overview.md)
machine-learning How To Tune Hyperparameters V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-tune-hyperparameters-v1.md
To [configure your hyperparameter tuning](/python/api/azureml-train-core/azureml
The ScriptRunConfig specifies the training script that will run with the sampled hyperparameters. It defines the resources per job (single or multi-node), and the compute target to use. > [!NOTE]
->The compute target used in `script_run_config` must have enough resources to satisfy your concurrency level. For more information on ScriptRunConfig, see [Configure training runs](../how-to-set-up-training-targets.md).
+>The compute target used in `script_run_config` must have enough resources to satisfy your concurrency level. For more information on ScriptRunConfig, see [Configure training runs](how-to-set-up-training-targets.md).
Configure your hyperparameter tuning experiment:
machine-learning How To Use Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-mlflow.md
For details about how to log metrics, parameters and artifacts in a run using ML
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
-Remote runs (jobs) let you train your models in a more robust and repetitive way. They can also leverage more powerful computes, such as Machine Learning Compute clusters. See [Use compute targets for model training](../how-to-set-up-training-targets.md) to learn about different compute options.
+Remote runs (jobs) let you train your models in a more robust and repeatable way. They can also leverage more powerful computes, such as Machine Learning Compute clusters. See [Use compute targets for model training](how-to-set-up-training-targets.md) to learn about different compute options.
When submitting runs, Azure Machine Learning automatically configures MLflow to work with the workspace the run is running in. This means that there is no need to configure the MLflow tracking URI. On top of that, experiments are automatically named based on the details of the experiment submission.
marketplace Azure Vm Plan Pricing And Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-vm-plan-pricing-and-availability.md
Previously updated : 04/15/2022 Last updated : 08/22/2022 # Configure pricing and availability for a virtual machine offer
Private offers aren't supported with Azure subscriptions established through a r
## Hide plan
-If your virtual machine is meant to be used only indirectly when it's referenced through another solution template or managed application, select this check box to publish the virtual machine but hide it from customers who might be searching or browsing for it directly.
+A hidden plan is not visible on Azure Marketplace and can only be deployed through another Solution Template, Managed Application, Azure CLI, or Azure PowerShell. Hiding a plan is useful when you want to limit exposure to customers who would normally search or browse for it directly on Azure Marketplace. By selecting this checkbox, all virtual machine images associated with your plan will be hidden from the Azure Marketplace storefront.
-Any Azure customer can deploy the offer using either PowerShell or CLI. If you wish to make this offer available to a limited set of customers, then set the plan to **Private**.
-
-Hidden plans don't generate preview links. However, you can test them by [following these steps](azure-vm-create-faq.yml#how-do-i-test-a-hidden-preview-image-).
+> [!NOTE]
+> A hidden plan is different from a private plan. When a plan is publicly available but hidden, it is still available for any Azure customer to deploy via Solution Template, Managed Application, Azure CLI, or Azure PowerShell. However, a plan can be both hidden and private, in which case only the customers configured in the private audience can deploy via these methods. If you wish to make the plan available to a limited set of customers, set the plan to **Private**.
-Select **Save draft** before continuing to the next tab in the left-nav Plan menu, **Technical configuration**.
+> [!IMPORTANT]
+> Hidden plans don't generate preview links. However, you can test them by [following these steps](/azure/marketplace/azure-vm-faq).
## Next steps
marketplace Marketplace Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-virtual-machines.md
Previously updated : 06/29/2022 Last updated : 08/22/2022 # Plan a virtual machine offer
Private plans restrict the discovery and deployment of your solution to a specif
For more information, see [Plans and pricing for commercial marketplace offers](plans-pricing.md) and [Private offers in the Microsoft commercial marketplace](private-offers.md).
+### Hidden plans
+
+A hidden plan is not visible on Azure Marketplace and can only be deployed through a Solution Template, Managed Application, Azure CLI, or Azure PowerShell. Hiding a plan is useful when you want to limit exposure to customers who would normally search or browse for it directly on Azure Marketplace.
+
+> [!NOTE]
+> A hidden plan is different from a private plan. When a plan is publicly available but hidden, it is still available for any Azure customer to deploy via Solution Template, Managed Application, Azure CLI, or Azure PowerShell. However, a plan can be both hidden and private, in which case only the customers configured in the private audience can deploy via these methods.
+ ### Licensing models As you prepare to publish a new offer, you need to make pricing-related decisions by selecting the appropriate licensing model.
marketplace Plans Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/plans-pricing.md
Previously updated : 07/27/2022 Last updated : 08/26/2022 # Plans and pricing for commercial marketplace offers
You must associate a pricing model with each plan for the following offer types.
An offer can have only one pricing model. For example, a SaaS offer cannot have one plan that's flat rate and another plan thatΓÇÖs per user. However, a SaaS offer can have some plans with flat rate with metered billing and other flat rate plans without metered billing. See specific offer documentation for detailed information.
-If you have already set prices for your plan in United States Dollars (USD) and add another market location, the price for the new market will be calculated according to the current exchange rates. After saving your changes, you will see an **Export prices (xlsx)** link that you can use to review and change the price for each market before publishing.
- > [!IMPORTANT] > After your offer is published, the pricing model choice cannot be changed.
+If you have already set prices for your plan in United States Dollars (USD) and add another market location, the price for the new market will be calculated according to the current exchange rates. After saving your changes, you will see an **Export prices (xlsx)** link that you can use to review and change the price for each market before publishing.
+
+> [!TIP]
+> The price shown to customers in the online stores doesn't change unless you update the price in Partner Center and then republish your offer. The rate is updated when the scheduled price change goes live, as described in [Changing prices in active commercial marketplace offers](price-changes.md).
+ #### Metered billing Flat-rate SaaS offers and managed application offers support metered billing using the marketplace metering service. This is a usage-based billing model that lets you define non-standard units, such as bandwidth or emails, that your customers will pay on a consumption basis. See related documentation to learn more about metered billing for [managed applications](marketplace-metering-service-apis.md) and [SaaS apps](./partner-center-portal/saas-metered-billing.md).
migrate Migrate Support Matrix Hyper V Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-hyper-v-migration.md
General purpose V2 storage accounts (Hot and Cool tier) | Supported | GPv2 stora
Premium storage | Supported | However, standard storage accounts are recommended to help optimize costs. Region | Same region as virtual machine | Storage account should be in the same region as the virtual machine being protected. Subscription | Can be different from source virtual machines | The Storage account need not be in the same subscription as the source virtual machine(s).
-Azure Storage firewalls for virtual networks | Supported | If you are using firewall enabled replication storage account or target storage account, ensure you [Allow trusted Microsoft services](/azure/storage/common/storage-network-securitytabs=azure-portal#exceptions). Also, ensure that you allow access to at least one subnet of source VNet. **You should allow access from All networks for public endpoint connectivity.**
+Azure Storage firewalls for virtual networks | Supported | If you are using firewall enabled replication storage account or target storage account, ensure you [Allow trusted Microsoft services](/azure/storage/common/storage-network-security#exceptions). Also, ensure that you allow access to at least one subnet of source VNet. **You should allow access from All networks for public endpoint connectivity.**
Soft delete | Not supported | Soft delete is not supported because once it is enabled on replication storage account, it increases cost. Azure Migrate performs very frequent creates/deletes of log files while replicating causing costs to increase. Private endpoint | Supported | Follow the guidance to [set up Azure Migrate with private endpoints](migrate-servers-to-azure-using-private-link.md?pivots=hyperv).
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
One advantage of running your workload in Azure is global reach. The flexible se
| Canada East | :heavy_check_mark: | :x: | :heavy_check_mark: | | Central India | :heavy_check_mark: | :heavy_check_mark: ** | :heavy_check_mark: | | Central US | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| China East 3 | :heavy_check_mark: | :x: | :x:|
+| China North 3 | :heavy_check_mark: | :x: | :x:|
| East Asia | :heavy_check_mark: | :heavy_check_mark: ** | :heavy_check_mark: | | East US | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | East US 2 | :heavy_check_mark: | :x: $ | :heavy_check_mark: | | France Central | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| France South | :heavy_check_mark: | :x: | :x: |
+| France South | :heavy_check_mark: | :x: | :heavy_check_mark: |
| Germany West Central | :x: $$ | :x: $ | :x: | | Japan East | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Japan West | :heavy_check_mark: | :x: | :heavy_check_mark: | | Jio India West | :heavy_check_mark: (v3 only)| :x: | :x: |
-| Korea Central | :heavy_check_mark: | :heavy_check_mark: ** | :x: |
-| Korea South | :heavy_check_mark: | :x: | :x: |
+| Korea Central | :heavy_check_mark: | :heavy_check_mark: ** | :heavy_check_mark: |
+| Korea South | :heavy_check_mark: | :x: | :heavy_check_mark: |
| North Central US | :heavy_check_mark: | :x: | :heavy_check_mark: | | North Europe | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Norway East | :heavy_check_mark: | :x: | :x: | | Qatar Central | :heavy_check_mark: | :x: | :x: |
-| South Africa North | :heavy_check_mark: | :heavy_check_mark: ** | :x: |
+| South Africa North | :heavy_check_mark: | :heavy_check_mark: | :x: |
| South Central US | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | South India | :x: $$ | :x: | :heavy_check_mark: | | Southeast Asia | :heavy_check_mark: | :x: $ | :heavy_check_mark: |
One advantage of running your workload in Azure is global reach. The flexible se
| US Gov Virginia | :heavy_check_mark: | :heavy_check_mark: | :x: | | UK South | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | UK West | :heavy_check_mark: | :x: | :heavy_check_mark: |
-| West Central US | :heavy_check_mark: | :x: | :x: |
+| West Central US | :heavy_check_mark: | :x: | :heavy_check_mark: |
| West Europe | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | West US | :heavy_check_mark: | :x: | :heavy_check_mark: |
-| West US 2 | :x: $$ | :x: $ | :x: |
+| West US 2 | :x: $$ | :x: $ | :heavy_check_mark:|
| West US 3 | :heavy_check_mark: | :heavy_check_mark: ** | :x: | $ New Zone-redundant high availability deployments are temporarily blocked in these regions. Already provisioned HA servers are fully supported.
private-5g-core Collect Required Information For Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/collect-required-information-for-service.md
Read [Policy control](policy-control.md) and make sure you're familiar with Azur
## Collect top-level setting values
-Each service has many top-level settings that determine its name and the QoS characteristics it will offer.
+Each service has top-level settings that determine its name and the QoS characteristics, if any, that it will offer.
Collect each of the values in the table below for your service.
Collect each of the values in the table below for your service.
|--|--|--| | The name of the service. This name must only contain alphanumeric characters, dashes, or underscores. You also must not use any of the following reserved strings: *default*; *requested*; *service*. | **Service name** |Yes| | A precedence value that the packet core instance must use to decide between services when identifying the QoS values to offer. This value must be an integer between 0 and 255 and must be unique among all services configured on the packet core instance. A lower value means a higher priority. | **Service precedence** |Yes|+
+### Collect Quality of Service (QoS) setting values
+
+You can specify a QoS for this service, or inherit the parent SIM policy's QoS. If you want to specify a QoS for this service, collect each of the values in the table below.
+
+| Value | Azure portal field name | Included in example ARM template |
+|--|--|--|
| The maximum bit rate (MBR) for uplink traffic (traveling away from user equipment (UEs)) across all SDFs that match data flow policy rules configured on this service. The MBR must be given in the following form: `<Quantity>` `<Unit>` </br></br>`<Unit>` must be one of the following: </br></br>- *bps* </br>- *Kbps* </br>- *Mbps* </br>- *Gbps* </br>- *Tbps* </br></br>`<Quantity>` is the quantity of your chosen unit. </br></br>For example, `10 Mbps`. | **Maximum bit rate (MBR) - Uplink** | Yes| | The MBR for downlink traffic (traveling towards UEs) across all SDFs that match data flow policy rules configured on this service. The MBR must be given in the following form: `<Quantity>` `<Unit>` </br></br>`<Unit>` must be one of the following: </br></br>- *bps* </br>- *Kbps* </br>- *Mbps* </br>- *Gbps* </br>- *Tbps* </br></br>`<Quantity>` is the quantity of your chosen unit. </br></br>For example, `10 Mbps`. | **Maximum bit rate (MBR) - Downlink** | Yes| | The default Allocation and Retention Policy (ARP) priority level for this service. Flows with a higher ARP priority level preempt flows with a lower ARP priority level. The ARP priority level must be an integer between 1 (highest priority) and 15 (lowest priority). | **Allocation and Retention Priority level** |No. Defaults to 9.|
-| The default 5G QoS Indicator (5QI) or QoS class identifier (QCI) value for this service. The 5QI (for 5G networks) or QCI (for 4G networks) value identifies a set of QoS characteristics that control QoS forwarding treatment for QoS flows or EPS bearers. </br></br>We recommend you choose a 5QI or QCI value that corresponds to a non-GBR QoS flow or EPS bearer. These values are in the following ranges: 5-9; 69-70; 79-80. For more details, see 3GPP TS 23.501 for 5QI or 3GPP TS 23.203 for QCI.</br></br>You can also choose a non-standardized 5QI or QCI value.</p><p>Azure Private 5G Core doesn't support 5QI or QCI values corresponding to GBR or delay-critical GBR QoS flows or EPS bearers. Don't use a value in any of the following ranges: 1-4; 65-67; 71-76; 82-85. | **5G QoS Indicator (5QI)** |No. Defaults to 9.|
+| The default 5G QoS Indicator (5QI) or QoS class identifier (QCI) value for this service. The 5QI (for 5G networks) or QCI (for 4G networks) value identifies a set of QoS characteristics that control QoS forwarding treatment for QoS flows or EPS bearers. </br></br>We recommend you choose a 5QI or QCI value that corresponds to a non-GBR QoS flow or EPS bearer. These values are in the following ranges: 5-9; 69-70; 79-80. For more details, see 3GPP TS 23.501 for 5QI or 3GPP TS 23.203 for QCI.</br></br>You can also choose a non-standardized 5QI or QCI value.</p><p>Azure Private 5G Core doesn't support 5QI or QCI values corresponding to GBR or delay-critical GBR QoS flows or EPS bearers. Don't use a value in any of the following ranges: 1-4; 65-67; 71-76; 82-85. | **5QI/QCI** |No. Defaults to 9.|
| The default preemption capability for QoS flows or EPS bearers for this service. The preemption capability of a QoS flow or EPS bearer controls whether it can preempt another QoS flow or EPS bearer with a lower priority level. You can choose from the following values: </br></br>- **May not preempt** </br>- **May preempt** | **Preemption capability** |No. Defaults to **May not preempt**.| | The default preemption vulnerability for QoS flows or EPS bearers for this service. The preemption vulnerability of a QoS flow or EPS bearer controls whether it can be preempted by another QoS flow or EPS bearer with a higher priority level. You can choose from the following values: </br></br>- **Preemptable** </br>- **Not preemptable** | **Preemption vulnerability** |No. Defaults to **Preemptable**.|
private-5g-core Collect Required Information For Sim Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/collect-required-information-for-sim-policy.md
Collect each of the values in the table below for the network scope.
|The names of the services permitted on the data network. You must have already configured your chosen services. For more information on services, see [Policy control](policy-control.md). | **Service configuration** | No. The SIM policy will only use the service you configure using the same template. | |The maximum bitrate for traffic traveling away from UEs across all non-GBR QoS flows or EPS bearers of a given PDU session or PDN connection. The bitrate must be given in the following form: `<Quantity>` `<Unit>` </br></br>`<Unit>` must be one of the following: </br></br>- *bps* </br>- *Kbps* </br>- *Mbps* </br>- *Gbps* </br>- *Tbps* </br></br>`<Quantity>` is the quantity of your chosen unit. </br></br>For example, `10 Gbps`. | **Session aggregate maximum bit rate - Uplink** | Yes | |The maximum bitrate for traffic traveling towards UEs across all non-GBR QoS flows or EPS bearers of a given PDU session or PDN connection. The bitrate must be given in the following form: `<Quantity>` `<Unit>` </br></br>`<Unit>` must be one of the following: </br></br>- *bps* </br>- *Kbps* </br>- *Mbps* </br>- *Gbps* </br>- *Tbps* </br></br>`<Quantity>` is the quantity of your chosen unit. </br></br>For example, `10 Gbps`. | **Session aggregate maximum bit rate - Downlink** | Yes |
-|The default 5QI (for 5G) or QCI (for 4G) value for this data network. These values identify a set of QoS characteristics that control QoS forwarding treatment for QoS flows or EPS bearers.</br></br>We recommend you choose a 5QI or QCI value that corresponds to a non-GBR QoS flow or EPS bearer. These values are in the following ranges: 5-9; 69-70; 79-80. For more details, see 3GPP TS 23.501 for 5QI or 3GPP TS 23.203 for QCI.</br></br>You can also choose a non-standardized 5QI or QCI value. </br></br>Azure Private 5G Core Preview doesn't support 5QI or QCI values corresponding to GBR or delay-critical GBR QoS flows or EPS bearers. Don't use a value in any of the following ranges: 1-4; 65-67; 71-76; 82-85. | **5G QoS Indicator (5QI)** | No. Defaults to 9. |
+|The default 5QI (for 5G) or QCI (for 4G) value for this data network. These values identify a set of QoS characteristics that control QoS forwarding treatment for QoS flows or EPS bearers.</br></br>We recommend you choose a 5QI or QCI value that corresponds to a non-GBR QoS flow or EPS bearer. These values are in the following ranges: 5-9; 69-70; 79-80. For more details, see 3GPP TS 23.501 for 5QI or 3GPP TS 23.203 for QCI.</br></br>You can also choose a non-standardized 5QI or QCI value. </br></br>Azure Private 5G Core Preview doesn't support 5QI or QCI values corresponding to GBR or delay-critical GBR QoS flows or EPS bearers. Don't use a value in any of the following ranges: 1-4; 65-67; 71-76; 82-85. | **5QI/QCI** | No. Defaults to 9. |
|The default Allocation and Retention Policy (ARP) priority level for this data network. Flows with a higher ARP priority level preempt flows with a lower ARP priority level. The ARP priority level must be an integer between 1 (highest priority) and 15 (lowest priority). | **Allocation and Retention Priority level** | No. Defaults to 1. | |The default preemption capability for QoS flows or EPS bearers on this data network. The preemption capability of a QoS flow or EPS bearer controls whether it can preempt another QoS flow or EPS bearer with a lower priority level. </br></br>You can choose from the following values: </br></br>- **May preempt** </br>- **May not preempt** | **Preemption capability** | No. Defaults to **May not preempt**.| |The default preemption vulnerability for QoS flows or EPS bearers on this data network. The preemption vulnerability of a QoS flow or EPS bearer controls whether it can be preempted by another QoS flow or EPS bearer with a higher priority level. </br></br>You can choose from the following values: </br></br>- **Preemptable** </br>- **Not preemptable** | **Preemption vulnerability** | No. Defaults to **Preemptable**.|
-|The default PDU session or PDN connection type for SIMs using this data network. Azure Private 5G Core will use this type by default if the SIM doesn't request a specific type. </br></br>You can choose from the following values: </br></br>- **IPv4** </br>- **IPv6** | **Default session type** | No. Defaults to **IPv4**.|
-|An additional PDU session or PDN connection type that Azure Private 5G Core supports for this data network. This type must not match the default type mentioned above. </br></br>You can choose from the following values: </br></br>- **IPv4** </br>- **IPv6** | **Additional allowed session types** |No. Defaults to no value.|
+|The default PDU session type for SIMs using this data network. Azure Private 5G Core will use this type by default if the SIM doesn't request a specific type.| **Default session type** | No. Defaults to **IPv4**.|
## Next steps
private-5g-core Configure Service Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/configure-service-azure-portal.md
In this step, you'll configure basic settings for your new service using the Azu
:::image type="content" source="media/configure-service-azure-portal/create-command-bar-option.png" alt-text="Screenshot of the Azure portal. It shows the Create option in the command bar.":::
-1. On the **Basics** configuration tab, use the information you collected in [Collect top-level setting values](collect-required-information-for-service.md#collect-top-level-setting-values) to fill out each of the fields.
+1. On the **Basics** configuration tab, use the information you collected in [Collect top-level setting values](collect-required-information-for-service.md#collect-top-level-setting-values) to fill out each of the fields.
+If you do not want to specify a QoS for this service, turn off the **Configured** toggle. If the toggle is off, the service will inherit the QoS of the parent SIM policy.
:::image type="content" source="media/configure-service-azure-portal/create-service-basics-tab.png" alt-text="Screenshot of the Azure portal showing the Basics configuration tab for a service.":::
private-5g-core Configure Sim Policy Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/configure-sim-policy-azure-portal.md
# Configure a SIM policy for Azure Private 5G Core Preview - Azure portal
-*SIM policies* allow you to define different sets of policies and interoperability settings that can each be assigned to one or more SIMs. You'll need to assign a SIM policy to a SIM before the user equipment (UE) using that SIM can access the private mobile network. In this how-to-guide, you'll learn how to configure a SIM policy.
+*SIM policies* allow you to define different sets of policies and interoperability settings that can each be assigned to a group of SIMs. The SIM policy also defines the default Quality of Service settings for any services that use the policy. You'll need to assign a SIM policy to a SIM before the user equipment (UE) using that SIM can access the private mobile network. In this how-to guide, you'll learn how to configure a SIM policy.
## Prerequisites
:::image type="content" source="media/configure-sim-policy-azure-portal/sim-policy-basics-tab.png" alt-text="Screenshot of the Azure portal. It shows the basics tab for a SIM policy. The Add a network scope button is highlighted.":::
-1. Under **Add a network scope** on the right, fill out each of the fields using the information you collected from [Collect information for the network scope](collect-required-information-for-sim-policy.md#collect-information-for-the-network-scope).
+1. Under **Add a network scope** on the right, fill out each of the fields using the information you collected from [Collect information for the network scope](collect-required-information-for-sim-policy.md#collect-information-for-the-network-scope).
+SIM policies also define the default QoS settings for any services that use the policy. You can override the default SIM policy QoS settings on a per-service basis; see [Configure basic settings for the service](configure-service-azure-portal.md#configure-basic-settings-for-the-service).
1. Select **Add**. :::image type="content" source="media/configure-sim-policy-azure-portal/add-a-network-scope.png" alt-text="Screenshot of the Azure portal. It shows the Add a network scope screen. The Add button is highlighted.":::
## Next steps -- [Learn more about policy control](policy-control.md)
+- [Learn more about policy control](policy-control.md)
private-5g-core Default Service Sim Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/default-service-sim-policy.md
The following tables provide the settings for the default service and its associ
||| |The service name. |*Allow_all_traffic* | |A precedence value that the packet core instance must use to decide between services when identifying the QoS values to offer.|*253* |
-|The Maximum Bit Rate (MBR) for uploads across all service data flows that will be included in data flow policy rules configured on this service.|*2 Gbps* |
-|The Maximum Bit Rate (MBR) for downloads across all service data flows that will be included in data flow policy rules configured on this service. |*2 Gbps* |
-|The default QoS Flow Allocation and Retention Policy (ARP) priority level.| *9* |
-|The default 5G QoS Indicator (5QI) value for this service. The 5QI identifies a set of 5G QoS characteristics that control QoS forwarding treatment for QoS Flows, such as limits for Packet Error Rate. | *9* |
-|The default QoS Flow preemption capability for QoS Flows for this service. The preemption capability of a QoS Flow controls whether it can preempt another QoS Flow with a lower priority level. |*May not preempt* |
-|The default QoS Flow preemption vulnerability for QoS Flows for this service. The preemption vulnerability of a QoS Flow controls whether it can be preempted another QoS Flow with a higher priority level. |*Preemptable* |
### Data flow policy rule settings
The following tables provide the settings for the default SIM policy and its ass
|The names of the services permitted on this data network. | *Allow-all-traffic* | |The maximum bitrate for uplink traffic (traveling away from SIMs) across all Non-GBR QoS Flows of a given PDU session on this data network. | *2 Gbps* | |The maximum bitrate for downlink traffic (traveling towards SIMs) across all Non-GBR QoS Flows of a given PDU session on this data network. | *2 Gbps* |
-|The default 5G QoS Indicator (5QI) value for this data network. The 5QI identifies a set of 5G QoS characteristics that control QoS forwarding treatment for QoS Flows, such as limits for Packet Error Rate. | *9* |
+|The default 5G QoS identifier (5QI) or QoS class identifier (QCI) value for this data network. The 5QI or QCI identifies a set of 5G or 4G QoS characteristics that control QoS forwarding treatment for QoS Flows, such as limits for Packet Error Rate. | *9* |
|The default QoS Flow Allocation and Retention Policy (ARP) priority level for this data network. Flows with a higher ARP priority level preempt those with a lower ARP priority level. | *1* | |The default QoS Flow preemption capability for QoS Flows on this data network. The preemption capability of a QoS Flow controls whether it can preempt another QoS Flow with a lower priority level. | *May not preempt* | |The default QoS Flow preemption vulnerability for QoS Flows on this data network. The preemption vulnerability of a QoS Flow controls whether it can be preempted by another QoS Flow with a higher priority level. | *Preemptable* |
private-5g-core Policy Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/policy-control.md
A *service* is a representation of a set of QoS characteristics that you want to
Each service includes: -- A set of QoS characteristics that should be applied on SDFs matching the service. The packet core instance will use these characteristics to create a QoS flow or EPS bearer to bind to matching SDFs. You can specify the following QoS settings on a service:
+- One or more *data flow policy rules*, which identify the SDFs to which the service should be applied. You can configure each rule with the following to determine when it's applied and the effect it will have:
+
+ - One or more *data flow templates*, which provide the packet filters that identify the SDFs on which to match. You can match on an SDF's direction, protocol, target IP address, and target port. The target IP address and port refer to the component on the data network's end of the connection.
+ - A traffic control setting, which determines whether the packet core instance should allow or block traffic matching the SDF(s).
+ - A precedence value, which the packet core instance can use to rank data flow policy rules by importance.
+
+- Optionally, a set of QoS characteristics that should be applied on SDFs matching the service. The packet core instance will use these characteristics to create a QoS flow or EPS bearer to bind to matching SDFs. If you don't configure QoS characteristics, the default characteristics of the parent SIM policy will be used instead. You can specify the following QoS settings on a service:
- The maximum bit rate (MBR) for uplink traffic (away from the UE) across all matching SDFs. - The MBR for downlink traffic (towards the UE) across all matching SDFs.
Each service includes:
- A preemption capability setting. This setting determines whether the QoS flow or EPS bearer created for this service can preempt another QoS flow or EPS bearer with a lower ARP priority level. - A preemption vulnerability setting. This setting determines whether the QoS flow or EPS bearer created for this service can be preempted by another QoS flow or EPS bearer with a higher ARP priority level. -- One or more *data flow policy rules*, which identify the SDFs to which the service should be applied. You can configure each rule with the following to determine when it's applied and the effect it will have:-
- - One or more *data flow templates*, which provide the packet filters that identify the SDFs on which to match. You can match on an SDF's direction, protocol, target IP address, and target port. The target IP address and port refer to the component on the data network's end of the connection.
- - A traffic control setting, which determines whether the packet core instance should allow or block traffic matching the SDF(s).
- - A precedence value, which the packet core instance can use to rank data flow policy rules by importance.
- ### SIM policies *SIM policies* let you define different sets of policies and interoperability settings that can each be assigned to one or more SIMs. You'll need to assign a SIM policy to a SIM before the UE using that SIM can access the private mobile network.
private-5g-core Tutorial Create Example Set Of Policy Control Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/tutorial-create-example-set-of-policy-control-configuration.md
To create the service:
|**Maximum bit rate (MBR) - Uplink** | `2 Gbps` | |**Maximum bit rate (MBR) - Downlink** | `2 Gbps` | |**Allocation and Retention Priority level** | `2` |
- |**5G QoS Indicator (5QI)** | `9` |
+ |**5QI/QCI** | `9` |
|**Preemption capability** | Select **May not preempt**. | |**Preemption vulnerability** | Select **Not preemptable**. |
To create the service:
|**Maximum bit rate (MBR) - Uplink** | `2 Gbps` | |**Maximum bit rate (MBR) - Downlink** | `2 Gbps` | |**Allocation and Retention Priority level** | `2` |
- |**5G QoS Indicator (5QI)** | `9` |
+ |**5QI/QCI** | `9` |
|**Preemption capability** | Select **May not preempt**. | |**Preemption vulnerability** | Select **Not preemptable**. |
To create the service:
|**Maximum bit rate (MBR) - Uplink** | `10 Mbps` | |**Maximum bit rate (MBR) - Downlink** | `15 Mbps` | |**Allocation and Retention Priority level** | `2` |
- |**5G QoS Indicator (5QI)** | `9` |
+ |**5QI/QCI** | `9` |
|**Preemption capability** | Select **May not preempt**. | |**Preemption vulnerability** | Select **Preemptable**. |
Let's create the SIM policies.
|**Service configuration** | Select **service_restricted_udp_and_icmp** and **service_traffic_limits**. | |**Session aggregate maximum bit rate - Uplink** | `2 Gbps` | |**Session aggregate maximum bit rate - Downlink** | `2 Gbps` |
- |**5G QoS Indicator (5QI)** | `9` |
+ |**5QI/QCI** | `9` |
|**Allocation and Retention Priority level** | `9` | |**Preemption capability** | Select **May not preempt**. | |**Preemption vulnerability** | Select **Preemptable**. | |**Default session type** | Select **IPv4**. |
- |**Additional allowed session types** | Select **IPv6**. |
1. Select **Add**.
Let's create the SIM policies.
|**Service configuration** | Select **service_blocking_udp_from_specific_sources** and **service_traffic_limits**. | |**Session aggregate maximum bit rate - Uplink** | `2 Gbps` | |**Session aggregate maximum bit rate - Downlink** | `2 Gbps` |
- |**5G QoS Indicator (5QI)** | `9` |
+ |**5QI/QCI** | `9` |
|**Allocation and Retention Priority level** | `9` | |**Preemption capability** | Select **May not preempt**. | |**Preemption vulnerability** | Select **Preemptable**. | |**Default session type** | Select **IPv4**. |
- |**Additional allowed session types** | Select **IPv6**. |
1. Select **Add**. 1. On the **Basics** configuration tab, select **Review + create**.
private-link Private Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-overview.md
A private-link resource is the destination target of a specified private endpoin
| Microsoft Purview | Microsoft.Purview/accounts | portal | | Azure Backup | Microsoft.RecoveryServices/vaults | vault | | Azure Relay | Microsoft.Relay/namespaces | namespace |
-| Azure Cognitive Search | Microsoft.Search/searchServices | search service |
+| Azure Cognitive Search | Microsoft.Search/searchServices | searchService |
| Azure Service Bus | Microsoft.ServiceBus/namespaces | namespace |
| Azure SignalR Service | Microsoft.SignalRService/SignalR | signalr |
| Azure SignalR Service | Microsoft.SignalRService/webPubSub | webpubsub |
The following information lists the known limitations to the use of private endp
| | |
| Effective routes and security rules unavailable for private endpoint network interface. | Effective routes and security rules won't be displayed for the private endpoint NIC in the Azure portal. |
| NSG flow logs unsupported. | NSG flow logs unavailable for inbound traffic destined for a private endpoint. |
-| Intermittent drops with zone-redundant storage (ZRS) storage accounts. | Customers that use ZRS storage accounts might see periodic intermittent drops, even with *allow NSG* applied on a storage private-endpoint subnet. |
-| Intermittent drops with Azure Key Vault. | Customers that use Azure Key Vault might see periodic intermittent drops, even with *allow NSG* applied on a Key Vault private-endpoint subnet. |
| The number of address prefixes per NSG is limited. | Having more than 500 address prefixes in an NSG in a single rule is unsupported. |
| AllowVirtualNetworkAccess flag | Customers that set virtual network peering on their virtual network (virtual network A) with the *AllowVirtualNetworkAccess* flag set to *false* on the peering link to another virtual network (virtual network B) can't use the *VirtualNetwork* tag to deny traffic from virtual network B accessing private endpoint resources. The customers need to explicitly place a block for virtual network B's address prefix to deny traffic to the private endpoint. |
| No more than 50 members in an Application Security Group. | Fifty is the number of IP Configurations that can be tied to each respective ASG that's coupled to the NSG on the private endpoint subnet. Connection failures may occur with more than 50 members. |
purview Tutorial Atlas 2 2 Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-atlas-2-2-apis.md
You can send a `POST` request to the following endpoint:
POST {{endpoint}}/api/atlas/v2/types/typedefs
```
+>[!TIP]
+> The **applicableEntityTypes** property specifies which entity types the metadata can be applied to.
+
Sample JSON:

```json
role-based-access-control Check Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/check-access.md
Previously updated : 12/09/2020 Last updated : 08/26/2022 #Customer intent: As a new user, I want to quickly see access for myself, user, group, or application, to make sure they have the appropriate permissions.
Sometimes you need to check what access a user has to a set of Azure resources.
To check the access for a user, you first need to open the Azure resources you want to check access for. Azure resources are organized into levels that are typically called the *scope*. In Azure, you can specify a scope at four levels from broad to narrow: management group, subscription, resource group, and resource.
-![Scope levels for Azure RBAC](../../includes/role-based-access-control/media/scope-levels.png)
+![Diagram that shows scope levels for Azure RBAC.](../../includes/role-based-access-control/media/scope-levels.png)
Follow these steps to open the set of Azure resources that you want to check access for.

1. Open the [Azure portal](https://portal.azure.com).
-1. Open the set of Azure resources, such as **Management groups**, **Subscriptions**, **Resource groups**, or a particular resource.
+1. Open the set of Azure resources you want to check access for, such as **Management groups**, **Subscriptions**, **Resource groups**, or a particular resource.
1. Click the specific resource in that scope. The following shows an example resource group.
- ![Resource group overview](./media/shared/rg-overview.png)
+ ![Screenshot of resource group overview.](./media/shared/rg-overview.png)
## Step 2: Check access for a user
Follow these steps to check the access for a single user, group, service princip
The following shows an example of the Access control (IAM) page for a resource group.
- ![Resource group access control - Check access tab](./media/shared/rg-access-control.png)
+ ![Screenshot of resource group access control and Check access tab.](./media/shared/rg-access-control.png)
-1. On the **Check access** tab, in the **Find** list, select the user, group, service principal, or managed identity you want to check access for.
+1. On the **Check access** tab, click the **Check access** button.
+
+1. In the **Check access** pane, click **User, group, or service principal**.
1. In the search box, enter a string to search the directory for display names, email addresses, or object identifiers.
- ![Check access select list](./media/shared/rg-check-access-select.png)
+ ![Screenshot of Check access select list.](./media/shared/rg-check-access-select.png)
-1. Click the security principal to open the **assignments** pane.
+1. Click the user to open the **assignments** pane.
- On this pane, you can see the access for the selected security principal at this scope and inherited to this scope. Assignments at child scopes are not listed. You see the following assignments:
+ On this pane, you can see the access for the selected user at this scope and inherited to this scope. Assignments at child scopes are not listed. You see the following assignments:
- Role assignments added with Azure RBAC. - Deny assignments added using Azure Blueprints or Azure managed apps. - Classic Service Administrator or Co-Administrator assignments for classic deployments.
- ![Role and deny assignments pane for a user](./media/shared/rg-check-access-assignments-user.png)
+ ![Screenshot of role and deny assignments pane for a user.](./media/shared/rg-check-access-assignments-user.png)
## Step 3: Check your access
Follow these steps to check your access to the previously selected Azure resourc
An assignments pane appears that lists your access at this scope and inherited to this scope. Assignments at child scopes are not listed.
- ![Role and deny assignments pane](./media/check-access/rg-check-access-assignments.png)
+ ![Screenshot of role and deny assignments pane.](./media/check-access/rg-check-access-assignments.png)
## Next steps
role-based-access-control Role Assignments External Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-external-users.md
ms.devlang:
Previously updated : 10/15/2021 Last updated : 08/26/2022
If a guest user has been granted access to a directory, but they do not see the
If a guest user has been granted access to a directory, but they do not see the resources they have been granted access to in the Azure portal, make sure the guest user has selected the correct directory. A guest user might have access to multiple directories. To switch directories, in the upper left, click **Settings** > **Directories**, and then click the appropriate directory.
-![Screenshot of Poral setting Directories section in Azure portal.](./media/role-assignments-external-users/directory-switch.png)
+![Screenshot of Portal setting Directories section in Azure portal.](./media/role-assignments-external-users/directory-switch.png)
## Next steps
role-based-access-control Role Assignments List Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-list-portal.md
Previously updated : 11/12/2021 Last updated : 08/26/2022
A quick way to see the roles assigned to a user or group in a subscription is to
You see a list of roles assigned to the selected user or group at various scopes such as management group, subscription, resource group, or resource. This list includes all role assignments you have permission to read.
- ![Role assignments for a user](./media/role-assignments-list-portal/azure-role-assignments-user.png)
+ ![Screenshot of role assignments for a user.](./media/role-assignments-list-portal/azure-role-assignments-user.png)
1. To change the subscription, click the **Subscriptions** list.
Users that have been assigned the [Owner](built-in-roles.md#owner) role for a su
1. Scroll to the **Owners** section to see all the users that have been assigned the Owner role for this subscription.
- ![Subscription Access control - Role assignments tab](./media/role-assignments-list-portal/sub-access-control-role-assignments-owners.png)
+ ![Screenshot of subscription Access control and Role assignments tab.](./media/role-assignments-list-portal/sub-access-control-role-assignments-owners.png)
## List role assignments at a scope
Users that have been assigned the [Owner](built-in-roles.md#owner) role for a su
1. Click the **Role assignments** tab to view all the role assignments at this scope.
- ![Access control - Role assignments tab](./media/role-assignments-list-portal/rg-access-control-role-assignments.png)
+ ![Screenshot of Access control and Role assignments tab.](./media/role-assignments-list-portal/rg-access-control-role-assignments.png)
On the Role assignments tab, you can see who has access at this scope. Notice that some roles are scoped to **This resource** while others are **(Inherited)** from another scope. Access is either assigned specifically to this resource or inherited from an assignment to the parent scope.
To list access for a user, group, service principal, or managed identity, you li
1. Click **Access control (IAM)**.
-1. Click the **Check access** tab.
+ ![Screenshot of resource group access control and Check access tab.](./media/shared/rg-access-control.png)
- ![Resource group access control - Check access tab](./media/role-assignments-list-portal/rg-access-control-check-access.png)
+1. On the **Check access** tab, click the **Check access** button.
-1. In the **Find** list, select the user, group, service principal, or managed identity you want to check access for.
+1. In the **Check access** pane, click **User, group, or service principal** or **Managed identity**.
1. In the search box, enter a string to search the directory for display names, email addresses, or object identifiers.
- ![Check access select list](./media/shared/rg-check-access-select.png)
+ ![Screenshot of Check access select list.](./media/shared/rg-check-access-select.png)
1. Click the security principal to open the **assignments** pane.
To list access for a user, group, service principal, or managed identity, you li
- Deny assignments added using Azure Blueprints or Azure managed apps. - Classic Service Administrator or Co-Administrator assignments for classic deployments.
- ![assignments pane](./media/shared/rg-check-access-assignments-user.png)
+ ![Screenshot of assignments pane.](./media/shared/rg-check-access-assignments-user.png)
## List role assignments for a managed identity
You can list role assignments for system-assigned and user-assigned managed iden
1. In the left menu, click **Identity**.
- ![System-assigned managed identity](./media/shared/identity-system-assigned.png)
+ ![Screenshot of system-assigned managed identity.](./media/shared/identity-system-assigned.png)
1. Under **Permissions**, click **Azure role assignments**. You see a list of roles assigned to the selected system-assigned managed identity at various scopes such as management group, subscription, resource group, or resource. This list includes all role assignments you have permission to read.
- ![Role assignments for a system-assigned managed identity](./media/shared/role-assignments-system-assigned.png)
+ ![Screenshot of role assignments for a system-assigned managed identity.](./media/shared/role-assignments-system-assigned.png)
1. To change the subscription, click the **Subscription** list.
You can list role assignments for system-assigned and user-assigned managed iden
You see a list of roles assigned to the selected user-assigned managed identity at various scopes such as management group, subscription, resource group, or resource. This list includes all role assignments you have permission to read.
- ![Screenshot that shows role assignments for a user-assigned managed identity.](./media/shared/role-assignments-user-assigned.png)
+ ![Screenshot of role assignments for a user-assigned managed identity.](./media/shared/role-assignments-user-assigned.png)
1. To change the subscription, click the **Subscription** list.
You can have up to **2000** role assignments in each subscription. This limit in
The role assignments limit for a subscription is currently being increased. For more information, see [Troubleshoot Azure RBAC](troubleshooting.md#limits).
-![Access control - Number of role assignments chart](./media/role-assignments-list-portal/access-control-role-assignments-chart.png)
+![Screenshot of Access control and number of role assignments chart.](./media/role-assignments-list-portal/access-control-role-assignments-chart.png)
If you are getting close to the maximum number and you try to add more role assignments, you'll see a warning in the **Add role assignment** pane. For ways that you can reduce the number of role assignments, see [Troubleshoot Azure RBAC](troubleshooting.md#limits).
-![Access control - Add role assignment warning](./media/role-assignments-list-portal/add-role-assignment-warning.png)
+![Screenshot of Access control and Add role assignment warning.](./media/role-assignments-list-portal/add-role-assignment-warning.png)
## Download role assignments
Follow these steps to download role assignments at a scope.
1. Click **Download role assignments** to open the Download role assignments pane.
- ![Access control - Download role assignments](./media/role-assignments-list-portal/download-role-assignments.png)
+ ![Screenshot of Access control and Download role assignments.](./media/role-assignments-list-portal/download-role-assignments.png)
1. Use the check boxes to select the role assignments you want to include in the downloaded file.
Follow these steps to download role assignments at a scope.
The following examples show the output for each file format.
- ![Download role assignments as CSV](./media/role-assignments-list-portal/download-role-assignments-csv.png)
+ ![Screenshot of download role assignments as CSV.](./media/role-assignments-list-portal/download-role-assignments-csv.png)
![Screenshot of the downloaded role assignments in JSON format.](./media/role-assignments-list-portal/download-role-assignments-json.png)
role-based-access-control Role Assignments Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-portal.md
Previously updated : 10/15/2021 Last updated : 08/26/2022
If you need to assign administrator roles in Azure Active Directory, see [Assign
[!INCLUDE [Scope for Azure RBAC introduction](../../includes/role-based-access-control/scope-intro.md)] For more information, see [Understand scope](scope-overview.md).
-![Diagram showing the scope levels for Azure RBAC.](../../includes/role-based-access-control/media/scope-levels.png)
+![Diagram that shows the scope levels for Azure RBAC.](../../includes/role-based-access-control/media/scope-levels.png)
1. Sign in to the [Azure portal](https://portal.azure.com).
Currently, conditions can be added to built-in or custom role assignments that h
[!INCLUDE [Scope for Azure RBAC introduction](../../includes/role-based-access-control/scope-intro.md)] For more information, see [Understand scope](scope-overview.md).
-![Diagram showing the scope levels for Azure RBAC for classic experience.](../../includes/role-based-access-control/media/scope-levels.png)
+![Diagram that shows the scope levels for Azure RBAC for classic experience.](../../includes/role-based-access-control/media/scope-levels.png)
1. Sign in to the [Azure portal](https://portal.azure.com).
search Cognitive Search Tutorial Blob Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-tutorial-blob-python.md
If you have unstructured text or images in Azure Blob Storage, an [AI enrichment pipeline](cognitive-search-concept-intro.md) can extract information and create new content for full-text search or knowledge mining scenarios.
-In this Python tutorial, you will learn how to:
+In this Python tutorial, you'll learn how to:
> [!div class="checklist"] > * Set up a development environment
If you don't have an Azure subscription, open a [free account](https://azure.mic
## Overview
-This tutorial uses Python and the [**azure-search-documents** client library](/python/api/overview/azure/search-documents-readme) to create a data source, index, indexer, and skillset.
+This tutorial uses Python and the [Search REST APIs](/rest/api/searchservice/) to create a data source, index, indexer, and skillset.
-The indexer connects to sample data in a blob container that's specified in the data source object, and sends all enriched content to a search index.
+The indexer retrieves sample data in a blob container that's specified in the data source object, and sends all enriched content to a search index.
The skillset is attached to the indexer. It uses built-in skills from Microsoft to find and extract information. Steps in the pipeline include Optical Character Recognition (OCR) on images, language detection on text, key phrase extraction, and entity recognition (organizations). New information created by the pipeline is stored in new fields in an index. Once the index is populated, you can use the fields in queries, facets, and filters.

## Prerequisites
-<!-- * [Visual Studio Code](https://code.visualstudio.com/download) with the [Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) -->
-* [Anaconda 3.7](https://www.anaconda.com/distribution/#download-section)
+* [Visual Studio Code](https://code.visualstudio.com/download) with the [Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) and Python 3.7 or later
* [Azure Storage](https://azure.microsoft.com/services/storage/)
* [Azure Cognitive Search](https://azure.microsoft.com/services/search/)

> [!NOTE]
-> You can use the free service for this tutorial. A free search service limits you to three indexes, three indexers, and three data sources. This tutorial creates one of each. Before starting, make sure you have room on your service to accept the new resources.
+> You can use the free search service for this tutorial. A free search service limits you to three indexes, three indexers, and three data sources. This tutorial creates one of each. Before starting, make sure you have room on your service to accept the new resources.
## Download files
-The sample data consists of 14 files of mixed content type that you will upload to Azure Blob Storage in a later step.
+The sample data consists of 14 files of mixed content type that you'll upload to Azure Blob Storage in a later step.
-1. Open this [OneDrive folder](https://1drv.ms/f/s!As7Oy81M_gVPa-LCb5lC_3hbS-4) and on the top-left corner, click **Download** to copy the files to your computer.
+1. Open this [OneDrive folder](https://1drv.ms/f/s!As7Oy81M_gVPa-LCb5lC_3hbS-4) and on the top-left corner, select **Download** to copy the files to your computer.
-1. Right-click the zip file and select **Extract All**. There are 14 files of various types. You'll use 7 for this exercise.
+1. Right-click the zip file and select **Extract All**. There are 14 files of various types.
Optionally, you can also download the source code for this tutorial. Source code can be found at [https://github.com/Azure-Samples/azure-search-python-samples/tree/master/Tutorial-AI-Enrichment](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/Tutorial-AI-Enrichment).

## 1 - Create services
-This tutorial uses Azure Cognitive Search for indexing and queries, Cognitive Services on the backend for AI enrichment, and Azure Blob Storage to provide the data. This tutorial stays under the free allocation of 20 transactions per indexer per day on Cognitive Services, so the only services you need to create are search and storage.
+This tutorial uses Azure Cognitive Search for indexing and queries, Cognitive Services on the backend for AI enrichment, and Azure Blob Storage to provide the data. This tutorial stays under the Cognitive Search free allocation of 20 transactions per indexer per day on Cognitive Services, so the only services you need to create are search and storage.
If possible, create both in the same region and resource group for proximity and manageability. In practice, your Azure Storage account can be in any region.

### Start with Azure Storage
-1. [Sign in to the Azure portal](https://portal.azure.com/) and click **+ Create Resource**.
+1. [Sign in to the Azure portal](https://portal.azure.com/) and select **+ Create Resource**.
1. Search for *storage account* and select Microsoft's Storage Account offering.
If possible, create both in the same region and resource group for proximity and
+ **Account Kind**. Choose the default, *StorageV2 (general purpose v2)*.
-1. Click **Review + Create** to create the service.
+1. Select **Review + Create** to create the service.
-1. Once it's created, click **Go to the resource** to open the Overview page.
+1. Once it's created, select **Go to the resource** to open the Overview page.
-1. Click **Blobs** service.
+1. Select **Blobs** service.
-1. Click **+ Container** to create a container and name it *cog-search-demo*.
+1. Select **+ Container** to create a container and name it *cog-search-demo*.
-1. Select *cog-search-demo* and then click **Upload** to open the folder where you saved the download files. Select all of the non-image files. You should have 7 files. Click **OK** to upload.
+1. Select *cog-search-demo* and then select **Upload** to open the folder where you saved the download files. Select all of the files. You should have 14 files. Select **OK** to upload.
:::image type="content" source="media/cognitive-search-tutorial-blob/sample-files.png" alt-text="Upload sample files" border="false":::
If possible, create both in the same region and resource group for proximity and
AI enrichment is backed by Cognitive Services, including Language service and Computer Vision for natural language and image processing. If your objective was to complete an actual prototype or project, you would at this point provision Cognitive Services (in the same region as Azure Cognitive Search) so that you can attach it to indexing operations.
-Since this tutorial only uses 7 transactions, you can skip resource provisioning because Azure Cognitive Search can connect to Cognitive Services for 20 free transactions per indexer run. The free allocation is sufficient. For larger projects, plan on provisioning Cognitive Services at the pay-as-you-go S0 tier. For more information, see [Attach Cognitive Services](cognitive-search-attach-cognitive-services.md).
+Since this tutorial only uses 14 transactions, you can skip resource provisioning because Azure Cognitive Search can connect to Cognitive Services for 20 free transactions per indexer run. For larger projects, plan on provisioning Cognitive Services at the pay-as-you-go S0 tier. For more information, see [Attach Cognitive Services](cognitive-search-attach-cognitive-services.md).
### Azure Cognitive Search

The third component is Azure Cognitive Search, which you can [create in the portal](search-create-service-portal.md) or [find an existing search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) in your subscription.
-You can use the Free tier to complete this walkthrough.
+You can use the Free tier to complete this tutorial.
### Copy an admin api-key and URL for Azure Cognitive Search
-To interact with your Azure Cognitive Search service you will need the service URL and an access key.
+To send requests to your Azure Cognitive Search service, you'll need the service URL and an access key.
-1. [Sign in to the Azure portal](https://portal.azure.com/), and in your search service **Overview** page, get the name of your search service. You can confirm your service name by reviewing the endpoint URL. If your endpoint URL were `https://mydemo.search.windows.net`, your service name would be `mydemo`.
+1. [Sign in to the Azure portal](https://portal.azure.com/), and in your search service **Overview** page, get the name of your search service. You can confirm your service name by reviewing the endpoint URL. If your endpoint URL is `https://mydemo.search.windows.net`, your service name would be `mydemo`.
2. In **Settings** > **Keys**, get an admin key for full rights on the service. There are two interchangeable admin keys, provided for business continuity in case you need to roll one over. You can use either the primary or secondary key on requests for adding, modifying, and deleting objects.
All requests require an api-key in the header of every request sent to your serv
## 2 - Start a notebook
-Create the notebook using the following instructions, or download a finished notebook from [Azure-Search-python-samples repo](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/Tutorial-AI-Enrichment).
+Use Visual Studio Code with the Python extension to create a new notebook. Press F1 to open the command palette and then search for "Create: New Jupyter Notebook".
-Use Anaconda Navigator to launch Jupyter Notebook and create a new Python 3 notebook.
+Alternatively, if you downloaded the notebook from [Azure-Search-python-samples repo](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/Tutorial-AI-Enrichment), you can open it in Visual Studio Code.
-In your notebook, run this script to load the libraries used for working with JSON and formulating HTTP requests.
+In your notebook, create a new cell and add this script. It loads the libraries used for working with JSON and formulating HTTP requests.
```python
import json
import requests
from pprint import pprint
```
-In the same notebook, define the names for the data source, index, indexer, and skillset. Run this script to set up the names for this tutorial.
+In the next cell, define the names for the data source, index, indexer, and skillset. Run this script to set up the names for this tutorial.
```python
# Define the names for the data source, skillset, index and indexer
datasource_name = "cogsrch-py-datasource"
skillset_name = "cogsrch-py-skillset"  # assumed: follows the same naming convention as the other objects
index_name = "cogsrch-py-index"
indexer_name = "cogsrch-py-indexer"
```
-In the following script, replace the placeholders for your search service (YOUR-SEARCH-SERVICE-NAME) and admin API key (YOUR-ADMIN-API-KEY), and then run it to set up the search service endpoint.
+In a third cell, paste the following script, replacing the placeholders for your search service (YOUR-SEARCH-SERVICE-NAME) and admin API key (YOUR-ADMIN-API-KEY), and then run it to set up the search service endpoint.
```python
# Setup the endpoint
params = {
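    # Assumption: a stable GA REST API version; substitute the version you're targeting.
    "api-version": "2020-06-30"
}
# Hedged sketch of the rest of this cell: the service URL and the admin-key header
# that the later requests in this tutorial rely on.
endpoint = "https://<YOUR-SEARCH-SERVICE-NAME>.search.windows.net"
headers = {
    "Content-Type": "application/json",
    "api-key": "<YOUR-ADMIN-API-KEY>"
}
```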
## 3 - Create the pipeline
-In Azure Cognitive Search, AI processing occurs during indexing (or data ingestion). This part of the walk through creates four objects: data source, index definition, skillset, indexer.
+In Azure Cognitive Search, AI processing occurs during indexing (or data ingestion). This part of the tutorial creates four objects: data source, index definition, skillset, indexer.
### Step 1: Create a data source

A [data source object](/rest/api/searchservice/create-data-source) provides the connection string to the Blob container containing the sample data files.
-In the following script, replace the placeholder YOUR-BLOB-RESOURCE-CONNECTION-STRING with the connection string for the blob you created in the previous step. Replace the placeholder text for the container. Then, run the script to create a data source named `cogsrch-py-datasource`.
+In the following script, replace the placeholder YOUR-BLOB-RESOURCE-CONNECTION-STRING with the connection string for the blob you created in the previous step. Replace the placeholder YOUR-BLOB-CONTAINER-NAME with the name of your container. Then, run the script to create a data source named `cogsrch-py-datasource`.
```python
# Create a data source
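# Hedged sketch of the request body, assuming the object names defined earlier
# (datasource_name) plus the endpoint, headers, and params set up above. The
# placeholders are yours to replace.
datasource_payload = {
    "name": datasource_name,
    "type": "azureblob",
    "credentials": {"connectionString": "<YOUR-BLOB-RESOURCE-CONNECTION-STRING>"},
    "container": {"name": "<YOUR-BLOB-CONTAINER-NAME>"}
}
r = requests.put(endpoint + "/datasources/" + datasource_name,
                 data=json.dumps(datasource_payload),
                 headers=headers, params=params)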
print(r.status_code)
```
The request should return a status code of 201 confirming success.
-In the Azure portal, on the search service dashboard page, verify that the cogsrch-py-datasource appears in the **Data sources** list. Click **Refresh** to update the page.
+In the Azure portal, on the search service dashboard page, verify that the cogsrch-py-datasource appears in the **Data sources** list. Select **Refresh** to update the page.
:::image type="content" source="media/cognitive-search-tutorial-blob-python/py-data-source-tile.png" alt-text="Data sources tile in the portal" border="false":::

### Step 2: Create a skillset
-In this step, you will define a set of enrichment steps to apply to your data. You call each enrichment step a *skill*, and the set of enrichment steps a *skillset*. This tutorial uses [built-in cognitive skills](cognitive-search-predefined-skills.md) for the skillset:
+In this step, you'll define a set of enrichment steps using [built-in cognitive skills](cognitive-search-predefined-skills.md) from Microsoft:
+ [Entity Recognition](cognitive-search-skill-entity-recognition-v3.md) for extracting the names of organizations from content in the blob container.
For more information about skillset fundamentals, see [How to define a skillset]
### Step 3: Create an index
-In this section, you define the index schema by specifying the fields to include in the searchable index, and setting the search attributes for each field. Fields have a type and can take attributes that determine how the field is used (searchable, sortable, and so forth). Field names in an index are not required to identically match the field names in the source. In a later step, you add field mappings in an indexer to connect source-destination fields. For this step, define the index using field naming conventions pertinent to your search application.
+In this section, you define the index schema by specifying the fields to include in the searchable index, and setting the search attributes for each field. Fields have a type and can take attributes that determine how the field is used (searchable, sortable, and so forth). Field names in an index aren't required to identically match the field names in the source. In a later step, you add field mappings in an indexer to connect source-destination fields. For this step, define the index using field naming conventions pertinent to your search application.
This exercise uses the following fields and field types:
print(r.status_code)
The request should return a status code of 201 confirming success.
-To learn more about defining an index, see [Create Index (Azure Cognitive Search REST API)](/rest/api/searchservice/create-index).
+To learn more about defining an index, see [Create Index (REST API)](/rest/api/searchservice/create-index).
### Step 4: Create and run an indexer
-An [Indexer](/rest/api/searchservice/create-indexer) drives the pipeline. The three components you have created thus far (data source, skillset, index) are inputs to an indexer. Creating the indexer on Azure Cognitive Search is the event that puts the entire pipeline into motion.
+An [Indexer](/rest/api/searchservice/create-indexer) drives the pipeline. The three components you've created thus far (data source, skillset, index) are inputs to an indexer. Creating the indexer on Azure Cognitive Search is the event that puts the entire pipeline into motion.
To tie these objects together in an indexer, you must define field mappings.
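The general shape, as a hedged sketch rather than the tutorial's exact definition, looks like the following; the `outputFieldMappings` source path is an illustrative assumption that depends on your skillset outputs.

```python
# Hedged sketch of an indexer definition; the outputFieldMappings path is illustrative.
indexer_payload = {
    "name": indexer_name,
    "dataSourceName": datasource_name,
    "targetIndexName": index_name,
    "skillsetName": skillset_name,
    "outputFieldMappings": [
        {
            "sourceFieldName": "/document/content/organizations",
            "targetFieldName": "organizations"
        }
    ]
}
r = requests.put(endpoint + "/indexers/" + indexer_name,
                 data=json.dumps(indexer_payload),
                 headers=headers, params=params)
print(r.status_code)
```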
In the response, monitor the `"lastResult"` for its `"status"` and `"endTime"` v
:::image type="content" source="media/cognitive-search-tutorial-blob-python/py-indexer-is-created.png" alt-text="Indexer is created" border="false":::
-Warnings are common with some source file and skill combinations and do not always indicate a problem. Many warnings are benign. For example, if you index a JPEG file that does not have text, you will see the warning in this screenshot.
+Warnings are common with some source file and skill combinations and don't always indicate a problem. Many warnings are benign. For example, if you index a JPEG file that doesn't have text, you'll see the warning in this screenshot.
:::image type="content" source="media/cognitive-search-tutorial-blob-python/py-indexer-warning-example.png" alt-text="Example indexer warning" border="false":::

## 5 - Search
-After indexing is finished, run queries that return the contents of individual fields. By default, Azure Cognitive Search returns the top 50 results. The sample data is small so the default works fine. However, when working with larger data sets, you might need to include parameters in the query string to return more results. For instructions, see [How to page results in Azure Cognitive Search](search-pagination-page-layout.md).
+After indexing is finished, run queries that return the contents of the index or individual fields.
-As a verification step, get the index definition showing all of the fields.
+First, get the index definition showing all of the fields. Visual Studio Code limits the output to 30 lines by default, but provides an option to open the output in a text editor. Use that option to view the full output. The output is the index schema, with the name, type, and attributes of each field.
```python
# Query the service for the index definition
r = requests.get(endpoint + "/indexes/" + index_name,
                 headers=headers, params=params)
pprint(json.dumps(r.json(), indent=1))
```
-The results should look similar to the following example. The screenshot only shows a part of the response.
--
-The output is the index schema, with the name, type, and attributes of each field.
-
-Submit a second query for `"*"` to return all contents of a single field, such as `organizations`.
+Next, submit a second query for `"*"` to return all contents of a single field, such as `organizations`. See [Search Documents (REST API)](/rest/api/searchservice/search-documents) for more information about the request.
```python
# Query the index to return the contents of organizations
r = requests.get(endpoint + "/indexes/" + index_name +
                 "/docs?search=*&$select=organizations",
                 headers=headers, params=params)
pprint(json.dumps(r.json(), indent=1))
```
-The results should look similar to the following example. The screenshot only shows a part of the response.
+If you'd like to continue testing from this notebook, repeat the above commands using other fields: `content`, `languageCode`, `keyPhrases`, and `organizations` in this exercise.
-
-Repeat for additional fields: `content`, `languageCode`, `keyPhrases`, and `organizations` in this exercise. You can return multiple fields via `$select` using a comma-delimited list.
-
-You can use GET or POST, depending on query string complexity and length. For more information, see [Query using the REST API](/rest/api/searchservice/search-documents).
+> [!TIP]
+> A better search experience might be switching to [Search Explorer](search-explorer.md) in the Azure portal or [creating a demo search app](search-create-app-portal.md) from the index you just created.
<a name="reset"></a>

## Reset and rerun
-In the early experimental stages of development, the most practical approach for design iteration is to delete the objects from Azure Cognitive Search and allow your code to rebuild them. Resource names are unique. Deleting an object lets you recreate it using the same name.
+In the early stages of development, it's practical to delete the objects from Azure Cognitive Search and allow your code to rebuild them. Resource names are unique. Deleting an object lets you recreate it using the same name.
You can use the portal to delete indexes, indexers, data sources, and skillsets. When you delete the indexer, you can optionally delete the index, skillset, and data source at the same time.

:::image type="content" source="media/cognitive-search-tutorial-blob-python/py-delete-indexer-delete-all.png" alt-text="Delete search objects in the portal" border="false":::
-You can also delete them using a script. The following script shows how to delete a skillset.
+You can also delete them using a script. The following script deletes a skillset.
```python # delete the skillset
This tutorial demonstrates the basic steps for building an enriched indexing pip
[Built-in skills](cognitive-search-predefined-skills.md) were introduced, along with skillset definitions and a way to chain skills together through inputs and outputs. You also learned that `outputFieldMappings` in the indexer definition is required for routing enriched values from the pipeline into a searchable index on an Azure Cognitive Search service.
-Finally, you learned how to test the results and reset the system for further iterations. You learned that issuing queries against the index returns the output created by the enriched indexing pipeline. In this release, there is a mechanism for viewing internal constructs (enriched documents created by the system). You also learned how to check the indexer status and what objects must be deleted before rerunning a pipeline.
+Finally, you learned how to test the results and reset the system for further iterations. You learned that issuing queries against the index returns the output created by the enriched indexing pipeline. In this release, there's a mechanism for viewing internal constructs (enriched documents created by the system). You also learned how to check the indexer status and what objects must be deleted before rerunning a pipeline.
## Clean up resources
search Search Howto Large Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-large-index.md
Previously updated : 02/28/2022 Last updated : 08/25/2022 # Index large data sets in Azure Cognitive Search
Because the optimal batch size depends on your index and your data, the best app
### Add threads and a retry strategy
-Indexers have built-in thread management, but when you're using the push APIs, your application code will have to manage threads. Make sure there are sufficient threads to make full use of the available capacity.
+Indexers have built-in thread management, but when you're using the push APIs, your application code will have to manage threads. Make sure there are sufficient threads to make full use of the available capacity, especially if you've recently increased partitions or have upgraded to a higher tier search service.
-1. [Increase the number of threads](tutorial-optimize-indexing-push-api.md#use-multiple-threadsworkers) in your client code. As you increase the tier of your search service or increase the partitions, you should also increase the number of concurrent threads so that you can take full advantage of the new capacity.
+1. [Increase the number of concurrent threads](tutorial-optimize-indexing-push-api.md#use-multiple-threadsworkers) in your client code.
-1. As you ramp up the requests hitting the search service, you may encounter [HTTP status codes](/rest/api/searchservice/http-status-codes) indicating the request didn't fully succeed. During indexing, two common HTTP status codes are:
+1. As you ramp up the requests hitting the search service, you might encounter [HTTP status codes](/rest/api/searchservice/http-status-codes) indicating the request didn't fully succeed. During indexing, two common HTTP status codes are:
+ **503 Service Unavailable** - This error means that the system is under heavy load and your request can't be processed at this time.
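The following is a minimal sketch of such a retry strategy, assuming the REST push API and placeholder service details; tune the thread count and backoff to your own workload:

```python
import json
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://<service>.search.windows.net/indexes/<index>/docs/index"
HEADERS = {"Content-Type": "application/json", "api-key": "<admin-key>"}
PARAMS = {"api-version": "2020-06-30"}  # assumption: a GA REST API version

def push_batch_with_retry(batch, max_retries=5):
    """Push one batch of documents, retrying on 503 with exponential backoff."""
    for attempt in range(max_retries):
        r = requests.post(URL, data=json.dumps(batch), headers=HEADERS, params=PARAMS)
        if r.status_code != 503:  # 503 = service under heavy load; retry later
            return r
        time.sleep(2 ** attempt)  # back off 1s, 2s, 4s, ...
    return r

# Fan batches out across concurrent threads to use the available capacity.
batches = []  # fill with {"value": [ ...documents... ]} payloads
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(push_batch_with_retry, batches))
```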
Default batch sizes are data source specific. Azure SQL Database and Azure Cosmo
### Schedule indexers for long-running processes
-Indexer scheduling is an important mechanism for processing large data sets, and slow-running processes like image analysis in a cognitive search pipeline. Indexer processing operates within a 24-hour window. If processing fails to finish within 24 hours, the behaviors of indexer scheduling can work to your advantage.
+Indexer scheduling is an important mechanism for processing large data sets and for accommodating slow-running processes like image analysis in an enrichment pipeline. Indexer processing operates within a 24-hour window. If processing fails to finish within 24 hours, the behaviors of indexer scheduling can work to your advantage.
By design, scheduled indexing starts at specific intervals, with a job typically completing before resuming at the next scheduled interval. However, if processing does not complete within the interval, the indexer stops (because it ran out of time). At the next interval, processing resumes where it last left off, with the system keeping track of where that occurs.
-In practical terms, for index loads spanning several days, you can put the indexer on a 24-hour schedule. When indexing resumes for the next 24-hour cycle, it restarts at the last known good document. In this way, an indexer can work its way through a document backlog over a series of days until all unprocessed documents are processed. For more information about setting schedules in general, see [Create Indexer REST API](/rest/api/searchservice/Create-Indexer) or see [How to schedule indexers for Azure Cognitive Search](search-howto-schedule-indexers.md).
+In practical terms, for index loads spanning several days, you can put the indexer on a 24-hour schedule. When indexing resumes for the next 24-hour cycle, it restarts at the last known good document. In this way, an indexer can work its way through a document backlog over a series of days until all unprocessed documents are processed. For more information about setting schedules, see [Create Indexer REST API](/rest/api/searchservice/Create-Indexer) or see [How to schedule indexers for Azure Cognitive Search](search-howto-schedule-indexers.md).
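As an illustration, a hedged sketch of putting an indexer on a 24-hour schedule through the REST API might look like the following; the names, key, and API version are placeholders, and a real request would carry your full indexer definition.

```python
import json
import requests

# Hedged sketch: run the indexer once every 24 hours (ISO 8601 duration).
indexer = {
    "name": "my-indexer",
    "dataSourceName": "my-datasource",
    "targetIndexName": "my-index",
    "schedule": {"interval": "PT24H"}
}
r = requests.put("https://<service>.search.windows.net/indexers/my-indexer",
                 data=json.dumps(indexer),
                 headers={"Content-Type": "application/json", "api-key": "<admin-key>"},
                 params={"api-version": "2020-06-30"})
print(r.status_code)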
<a name="parallel-indexing"></a>
search Search Performance Tips https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-performance-tips.md
Previously updated : 10/04/2021 Last updated : 08/25/2022 # Tips for better performance in Azure Cognitive Search
Last updated 10/04/2021
This article is a collection of tips and best practices that are often recommended for boosting performance. Knowing which factors are most likely to impact search performance can help you avoid inefficiencies and get the most out of your search service. Some key factors include:

+ Index composition (schema and size)
-+ Query types
++ Query design
+ Service capacity (tier, and the number of replicas and partitions)
+> [!NOTE]
+> Looking for strategies on high volume indexing? See [Index large data sets in Azure Cognitive Search](search-howto-large-index.md).
+
## Index size and schema
-Queries run faster on smaller indexes. This is partly a function of having fewer fields to scan, but it's also due to how the system caches content for future queries. After the first query, some content remains in memory where it's searched more efficiently. Because index size tends to grow over time, one best practice is to periodically revisit index composition, both schema and documents, to look for content reduction opportunities. However, if the index is right-sized, the only other calibration you can make is to increase capacity: either by [adding replicas](search-capacity-planning.md#adjust-capacity) or upgrading the service tier. The section ["Tip: Upgrade to a Standard S2 tier"](#tip-upgrade-to-a-standard-s2-tier) shows you how to evaluate the scale up versus scale out decision.
+Queries run faster on smaller indexes. This is partly a function of having fewer fields to scan, but it's also due to how the system caches content for future queries. After the first query, some content remains in memory where it's searched more efficiently. Because index size tends to grow over time, one best practice is to periodically revisit index composition, both schema and documents, to look for content reduction opportunities. However, if the index is right-sized, the only other calibration you can make is to increase capacity: either by [adding replicas](search-capacity-planning.md#adjust-capacity) or upgrading the service tier. The section ["Tip: Upgrade to a Standard S2 tier"](#tip-upgrade-to-a-standard-s2-tier) discusses the scale up versus scale out decision.
-Schema complexity can also adversely effect indexing and query performance. Excessive field attribution builds in limitations and processing requirements. [Complex types](search-howto-complex-data-types.md) take longer to index and query. The next few sections explore each condition.
+Schema complexity can also adversely affect indexing and query performance. Excessive field attribution builds in limitations and processing requirements. [Complex types](search-howto-complex-data-types.md) take longer to index and query. The next few sections explore each condition.
### Tip: Be selective in field attribution
-A common mistake that administrators and developer make when creating a search index is selecting all available properties for the fields, as opposed to only selecting just the properties that are needed. For example, if a field doesn't need to be full text searchable, skip that field when setting the searchable attribute.
+A common mistake that administrators and developers make when creating a search index is selecting all available properties for the fields, as opposed to only selecting just the properties that are needed. For example, if a field doesn't need to be full text searchable, skip that field when setting the searchable attribute.
:::image type="content" source="media/search-performance/perf-selective-field-attributes.png" alt-text="Selective attribution" border="true":::
In some cases, you can avoid these tradeoffs by mapping a complex data structure
:::image type="content" source="media/search-performance/perf-flattened-field-hierarchy.png" alt-text="flattened field structure" border="true":::
-## Types of queries
+## Query design
-The types of queries you send are one of the most important factors for performance, and query optimization can drastically improve performance. When designing queries, think about the following points:
+Query composition and complexity are one of the most important factors for performance, and query optimization can drastically improve performance. When designing queries, think about the following points:
-+ **Number of searchable fields.** Each additional searchable field requires additional work by the search service. You can limit the fields being searched at query time using the "searchFields" parameter. It's best to specify only the fields that you care about to improve performance.
++ **Number of searchable fields.** Each additional searchable field results in more work for the search service. You can limit the fields being searched at query time using the "searchFields" parameter. It's best to specify only the fields that you care about to improve performance.
-+ **Amount of data being returned.** Retrieving a lot of content can make queries slower. When structuring a query, return only those fields that you need to render the results page, and then retrieve remaining fields using the [Lookup API](/rest/api/searchservice/lookup-document) once a user selects a match.
++ **Amount of data being returned.** Retrieving a large amount of content can make queries slower. When structuring a query, return only those fields that you need to render the results page, and then retrieve remaining fields using the [Lookup API](/rest/api/searchservice/lookup-document) once a user selects a match.

+ **Use of partial term searches.** [Partial term searches](search-query-partial-matching.md), such as prefix search, fuzzy search, and regular expression search, are more computationally expensive than typical keyword searches, as they require full index scans to produce results.
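As a small illustration of these points (service, index, and field names are placeholders), a query can scope the search with `searchFields` and trim the response with `$select`:

```python
import requests

r = requests.get(
    "https://<service>.search.windows.net/indexes/<index>/docs",
    headers={"api-key": "<query-key>"},
    params={
        "api-version": "2020-06-30",
        "search": "ocean view",
        "searchFields": "description,title",  # search only these fields
        "$select": "id,title",                # return only these fields
        "$top": "10",
    },
)
print(r.json())
```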
As a query uses increasingly [complex filter criteria](search-query-odata-filter
$filter= userid eq 123 or userid eq 234 or userid eq 345 or userid eq 456 or userid eq 567
```
-In this case, the filter expressions are used to check whether a single field in each document is equal to one of many possible values of a user identity. You are most likely to find this pattern in applications that implement [security trimming](search-security-trimming-for-azure-search.md) (checking a field containing one or more principal IDs against a list of principal IDs representing the user issuing the query).
+In this case, the filter expressions are used to check whether a single field in each document is equal to one of many possible values of a user identity. You're most likely to find this pattern in applications that implement [security trimming](search-security-trimming-for-azure-search.md) (checking a field containing one or more principal IDs against a list of principal IDs representing the user issuing the query).
A more efficient way to execute filters that contain a large number of values is to use [`search.in` function](search-query-odata-search-in-function.md), as shown in this example:
search.in(userid, '123,234,345,456,567', ',')
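A hedged sketch of that filter in a full query request (placeholder names):

```python
import requests

r = requests.get(
    "https://<service>.search.windows.net/indexes/<index>/docs",
    headers={"api-key": "<query-key>"},
    params={
        "api-version": "2020-06-30",
        "search": "*",
        # search.in scales better than a long chain of 'or' equality comparisons
        "$filter": "search.in(userid, '123,234,345,456,567', ',')",
    },
)
print(r.json())
```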
### Tip: Add partitions for slow individual queries
-When query performance is slowing down in general, adding more replicas frequently solves the issue. But what if the problem is a single query that takes too long to complete? In this scenario, adding replicas will not help, but additional partitions might. A partition splits data across extra computing resources. Two partitions split data in half, a third partition splits it into thirds, and so forth.
+When query performance is slowing down in general, adding more replicas frequently solves the issue. But what if the problem is a single query that takes too long to complete? In this scenario, adding replicas won't help, but more partitions might. A partition splits data across extra computing resources. Two partitions split data in half, a third partition splits it into thirds, and so forth.
-One positive side-effect of adding partitions is that slower queries sometimes perform faster due to parallel computing. We have noted parallelization on low selectivity queries, such as queries that match many documents, or facets providing counts over a large number of documents. Since significant computation is required to score the relevancy of the documents, or to count the numbers of documents, adding extra partitions helps queries complete faster.
+One positive side-effect of adding partitions is that slower queries sometimes perform faster due to parallel computing. We've noted parallelization on low selectivity queries, such as queries that match many documents, or facets providing counts over a large number of documents. Since significant computation is required to score the relevancy of the documents, or to count the numbers of documents, adding extra partitions helps queries complete faster.
To add partitions, use [Azure portal](search-create-service-portal.md), [PowerShell](search-manage-powershell.md), [Azure CLI](search-manage-azure-cli.md), or a management SDK.
To add partitions, use [Azure portal](search-create-service-portal.md), [PowerSh
A service is overburdened when queries take too long or when the service starts dropping requests. If this happens, you can address the problem by upgrading the service or by adding capacity.
-The tier of your search service and the number of replicas/partitions also have a big impact on performance. Each higher tier provides faster CPUs and more memory, both of which have a positive impact on performance.
+The tier of your search service and the number of replicas/partitions also have a large impact on performance. Each progressively higher tier provides faster CPUs and more memory, both of which have a positive impact on performance.
### Tip: Upgrade to a Standard S2 tier
An important benefit of added memory is that more of the index can be cached, re
## Next steps
-Review these additional articles related to service performance.
+Review these other articles related to service performance:
+ [Analyze performance](search-performance-analysis.md)
++ [Index large data sets in Azure Cognitive Search](search-howto-large-index.md)
+ [Choose a service tier](search-sku-tier.md)
-+ [Add capacity (replicas and partitions)](search-capacity-planning.md#adjust-capacity)
++ [Plan or add capacity](search-capacity-planning.md#adjust-capacity)
+ [Case Study: Use Cognitive Search to Support Complex AI Scenarios](https://techcommunity.microsoft.com/t5/azure-ai/case-study-effectively-using-cognitive-search-to-support-complex/ba-p/2804078)
sentinel Sentinel Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-solutions.md
Both Microsoft and other organizations author Microsoft Sentinel out-of-the-box
|**Partner-supported** | Applies to content/solutions authored by parties other than Microsoft. <br><br> The partner company provides support or maintenance for these pieces of content/solutions. The partner company can be an Independent Software Vendor, a Managed Service Provider (MSP/MSSP), a Systems Integrator (SI), or any organization whose contact information is provided on the Microsoft Sentinel page for the selected content/solutions.<br><br> For any issues with a partner-supported solution, contact the specified support contact.|
|**Community-supported** |Applies to content/solutions authored by Microsoft or partner developers that don't have listed contacts for support and maintenance in Microsoft Sentinel.<br><br> For questions or issues with these solutions, [file an issue](https://github.com/Azure/Azure-Sentinel/issues/new/choose) in the [Microsoft Sentinel GitHub community](https://aka.ms/threathunters). |
-## Content sources for Microsoft Sentinel out-of-the-box content and solutions
+## Content sources for Microsoft Sentinel content and solutions
-Each piece of out-of-the-box content or solution has one of the following content sources:
+Each piece of content or solution has one of the following content sources:
|Content source |Description |
|||
|**Content hub** |Content or solutions deployed by the content hub that support lifecycle management |
|**Custom** | Content or solutions you've customized in your workspace |
|**Gallery content** | Content or solutions from the gallery that don't support lifecycle management |
-|**Repository** | Content or solutions from a repository connected to your workspace |
+|**Repositories** | Content or solutions from a repository connected to your workspace |
## Next steps
service-bus-messaging Private Link Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/private-link-service.md
Title: Integrate Azure Service Bus with Azure Private Link Service
description: Learn how to integrate Azure Service Bus with Azure Private Link Service Previously updated : 01/04/2022 Last updated : 08/26/2022
If you already have an existing namespace, you can create a private endpoint by
1. Select the **Azure subscription** in which you want to create the private endpoint.
2. Select the **resource group** for the private endpoint resource.
3. Enter a **name** for the private endpoint.
- 5. Select a **region** for the private endpoint. Your private endpoint must be in the same region as your virtual network, but can be in a different region from the private link resource that you are connecting to.
- 6. Select **Next: Resource >** button at the bottom of the page.
-
- ![Create Private Endpoint - Basics page](./media/private-link-service/create-private-endpoint-basics-page.png)
-8. On the **Resource** page, follow these steps:
- 1. For connection method, if you select **Connect to an Azure resource in my directory**, follow these steps:
- 1. Select the **Azure subscription** in which your **Service Bus namespace** exists.
- 2. For **Resource type**, Select **Microsoft.ServiceBus/namespaces** for the **Resource type**.
- 3. For **Resource**, select a Service Bus namespace from the drop-down list.
- 4. Confirm that the **Target subresource** is set to **namespace**.
- 5. Select **Next: Configuration >** button at the bottom of the page.
-
- ![Create Private Endpoint - Resource page](./media/private-link-service/create-private-endpoint-resource-page.png)
- 2. If you select **Connect to an Azure resource by resource ID or alias**, follow these steps:
- 1. Enter the **resource ID** or **alias**. It can be the resource ID or alias that someone has shared with you. The easiest way to get the resource ID is to navigate to the Service Bus namespace in the Azure portal and copy the portion of URI starting from `/subscriptions/`. See the following image for an example.
- 2. For **Target sub-resource**, enter **namespace**. It's the type of the sub-resource that your private endpoint can access.
- 3. (optional) Enter a **request message**. The resource owner sees this message while managing private endpoint connection.
- 4. Then, select **Next: Configuration >** button at the bottom of the page.
-
- ![Create Private Endpoint - connect using resource ID](./media/private-link-service/connect-resource-id.png)
-9. On the **Configuration** page, you select the subnet in a virtual network to where you want to deploy the private endpoint.
+ 1. Enter a **name for the network interface**.
+ 1. Select a **region** for the private endpoint. Your private endpoint must be in the same region as your virtual network, but can be in a different region from the private link resource that you are connecting to.
+ 1. Select **Next: Resource >** button at the bottom of the page.
+
+ :::image type="content" source="./media/private-link-service/create-private-endpoint-basics-page.png" alt-text="Screenshot showing the Basics page of the Create private endpoint wizard.":::
+8. On the **Resource** page, review settings, and select **Next: Virtual Network** at the bottom of the page.
+
+ :::image type="content" source="./media/private-link-service/create-private-endpoint-resource-page.png" alt-text="Screenshot showing the Resource page of the Create private endpoint wizard.":::
+9. On the **Virtual Network** page, select the subnet in a virtual network where you want to deploy the private endpoint.
1. Select a **virtual network**. Only virtual networks in the currently selected subscription and location are listed in the drop-down list. 2. Select a **subnet** in the virtual network you selected.
- 3. Select **Next: Tags >** button at the bottom of the page.
+ 1. Notice that the **network policy for private endpoints** is disabled. If you want to enable it, select **edit**, update the setting, and select **Save**.
+ 1. For **Private IP configuration**, the **Dynamically allocate IP address** option is selected by default. If you want to assign a static IP address, select **Statically allocate IP address**.
+ 1. For **Application security group**, select an existing application security group, or create a new one to associate with the private endpoint.
+ 1. Select **Next: DNS >** button at the bottom of the page.
+
+ :::image type="content" source="./media/private-link-service/create-private-endpoint-configuration-page.png" alt-text="Screenshot showing the Virtual Network page of the Create private endpoint wizard.":::
+10. On the **DNS** page, select whether you want the private endpoint to be integrated with a private DNS zone, and then select **Next: Tags**.
- ![Create Private Endpoint - Configuration page](./media/private-link-service/create-private-endpoint-configuration-page.png)
-10. On the **Tags** page, create any tags (names and values) that you want to associate with the private endpoint resource. Then, select **Review + create** button at the bottom of the page.
-11. On the **Review + create**, review all the settings, and select **Create** to create the private endpoint.
+ :::image type="content" source="./media/private-link-service/create-private-endpoint-dns-page.png" alt-text="Screenshot showing the DNS page of the Create private endpoint wizard.":::
+1. On the **Tags** page, create any tags (names and values) that you want to associate with the private endpoint resource. Then, select **Review + create** button at the bottom of the page.
+1. On the **Review + create**, review all the settings, and select **Create** to create the private endpoint.
- ![Create Private Endpoint - Review and Create page](./media/private-link-service/create-private-endpoint-review-create-page.png)
+ :::image type="content" source="./media/private-link-service/create-private-endpoint-review-create-page.png" alt-text="Screenshot showing the Review and Create page of the Create private endpoint wizard.":::
12. Confirm that the private endpoint is created. If you are the owner of the resource and had selected **Connect to an Azure resource in my directory** option for the **Connection method**, the endpoint connection should be **auto-approved**. If it's in the **pending** state, see the [Manage private endpoints using Azure portal](#manage-private-endpoints-using-azure-portal) section. ![Private endpoint created](./media/private-link-service/private-endpoint-created.png)
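If you prefer scripting over the portal, you can create an equivalent private endpoint with the Azure CLI. The following is a minimal sketch, with placeholder resource names, that targets the `namespace` subresource of a Service Bus namespace:

```azurecli
# Look up the Service Bus namespace resource ID (all names here are placeholders).
sbNamespaceId=$(az servicebus namespace show \
    --resource-group <resource-group> \
    --name <namespace-name> \
    --query id \
    --output tsv)

# Create the private endpoint in an existing virtual network and subnet.
az network private-endpoint create \
    --resource-group <resource-group> \
    --name <private-endpoint-name> \
    --vnet-name <vnet-name> \
    --subnet <subnet-name> \
    --private-connection-resource-id $sbNamespaceId \
    --group-id namespace \
    --connection-name <connection-name>
```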
There are four provisioning states:
2. On the **Delete connection** page, select **Yes** to confirm the deletion of the private endpoint. If you select **No**, nothing happens. ![Delete connection page](./media/private-link-service/delete-connection-page.png)
-3. You should see the status changed to **Disconnected**. Then, you will see the endpoint disappear from the list.
+3. You should see the status changed to **Disconnected**. Then, the endpoint will disappear from the list.
## Validate that the private link connection works
site-recovery Azure To Azure Network Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-network-mapping.md
The same holds for the Secondary IP Configurations as well.
## IP address assignment during failover
+>[!Note]
+>The following approach is used to assign an IP address to the target VM, irrespective of the NIC settings.
+ **Source and target subnets** | **Details** | Same address space | IP address of the source VM NIC is set as the target VM NIC IP address.<br/><br/> If the address isn't available, the next available IP address is set as the target.
Different address space | The next available IP address in the target subnet is
**Target network** | **Details** | Target network is the failover VNet | - Target IP address will be static with the same IP address. <br/><br/> - If the same IP address is already assigned, then the IP address is the next one available at the end of the subnet range. For example: If the source IP address is 10.0.0.19 and failover network uses range 10.0.0.0/24, then the next IP address assigned to the target VM is 10.0.0.254.
-Target network isn't the failover VNet | - Target IP address will be static with the same IP address.<br/><br/> - If the same IP address is already assigned, then the IP address is the next one available at the end of the subnet range.<br/><br/> For example: If the source static IP address is 10.0.0.19 and failover is on an network that isn't the failover network, with the range 10.0.0.0/24, then the target static IP address will be 10.0.0.19 if available, and otherwise it will be 10.0.0.254.
+Target network isn't the failover VNet | - Target IP address will be static with the same IP address, but only if it's available in the target virtual network. <br/><br/> - If the same IP address is already assigned, then the IP address is the next one available at the end of the subnet range.<br/><br/> For example: If the source static IP address is 10.0.0.19 and failover is on a network that isn't the failover network, with the range 10.0.0.0/24, then the target static IP address will be 10.0.0.19 if available, and otherwise 10.0.0.254.
- The failover VNet is the target network that you select when you set up disaster recovery. - We recommend that you always use a non-production network for test failover.
static-web-apps Branch Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/branch-environments.md
In this example, the preview environments are defined for the `dev` and `staging
## Next Steps > [!div class="nextstepaction"]
-> [Review pull requests in pre-production environments](./review-publish-pull-requests.md)
+> [Create named preview environments](./named-environments.md)
storage Archive Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-blob.md
To create a lifecycle management policy to archive blobs in the Azure portal, fo
#### Step 3: Ensure that the rule excludes rehydrated blobs
-If you rehydrate a blob by changing it's tier, this rule will move the blob back to the archive tier if the last modified time, creation time, or last access time is beyond the threshold set for the policy.
+If you rehydrate a blob by changing its tier, this rule will move the blob back to the archive tier if the last modified time, creation time, or last access time is beyond the threshold set for the policy.
If you selected the **Last modified** rule condition, you can prevent this from happening by selecting **Skip blobs that have been rehydrated in the last**, and then entering the number of days you want a rehydrated blob to be excluded from this rule.
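In the policy JSON, this portal option maps to the `daysAfterLastTierChangeGreaterThan` condition. The following Azure CLI sketch, with placeholder account and rule names, archives block blobs 90 days after modification while skipping blobs whose tier changed within the last 7 days:

```azurecli
# Account and rule names are placeholders; adjust the day thresholds as needed.
az storage account management-policy create \
    --account-name <storage-account> \
    --resource-group <resource-group> \
    --policy '{
      "rules": [{
        "enabled": true,
        "name": "archive-skip-rehydrated",
        "type": "Lifecycle",
        "definition": {
          "actions": {
            "baseBlob": {
              "tierToArchive": {
                "daysAfterModificationGreaterThan": 90,
                "daysAfterLastTierChangeGreaterThan": 7
              }
            }
          },
          "filters": { "blobTypes": [ "blockBlob" ] }
        }
      }]
    }'
```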
storage Lifecycle Management Policy Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/lifecycle-management-policy-configure.md
A lifecycle management policy must be read or written in full. Partial updates a
> [!NOTE] > If you enable firewall rules for your storage account, lifecycle management requests may be blocked. You can unblock these requests by providing exceptions for trusted Microsoft services. For more information, see the **Exceptions** section in [Configure firewalls and virtual networks](../common/storage-network-security.md#exceptions).
+> [!NOTE]
+> A lifecycle management policy can't change the tier of a blob that uses an encryption scope.
+ ## See also - [Optimize costs by automatically managing the data lifecycle](lifecycle-management-overview.md)
storage Storage Quickstart Blobs Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-go.md
if err != nil {
Blob storage supports block blobs, append blobs, and page blobs. Block blobs are the most commonly used, and that is what is used in this quickstart.
-The SDK offers [high-level APIs](https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/storage/azblob/highlevel.go) that are built on top of the low-level REST APIs. As an example, ***UploadBufferToBlockBlob*** function uses StageBlock (PutBlock) operations to concurrently upload a file in chunks to optimize the throughput. If the file is less than 256 MB, it uses Upload (PutBlob) instead to complete the transfer in a single transaction.
+The SDK offers [high-level APIs](https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/storage/azblob/zt_highlevel_test.go) that are built on top of the low-level REST APIs. As an example, the ***UploadBufferToBlockBlob*** function uses StageBlock (PutBlock) operations to concurrently upload a file in chunks to optimize throughput. If the file is less than 256 MB, it uses Upload (PutBlob) instead to complete the transfer in a single transaction.
The following example uploads the file to your container called **quickstartblob-[randomstring]**.
See these other resources for Go development with Blob storage:
## Next steps
-In this quickstart, you learned how to transfer files between a local disk and Azure blob storage using Go. For more information about the Azure Storage Blob SDK, view the [Source Code](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/storage/azblob) and [API Reference](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob).
+In this quickstart, you learned how to transfer files between a local disk and Azure blob storage using Go. For more information about the Azure Storage Blob SDK, view the [Source Code](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/storage/azblob) and [API Reference](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob).
storage Customer Managed Keys Configure Existing Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-existing-account.md
+
+ Title: Configure customer-managed keys for an existing storage account
+
+description: Learn how to configure Azure Storage encryption with customer-managed keys for an existing storage account by using the Azure portal, PowerShell, or Azure CLI. Customer-managed keys are stored in an Azure key vault.
+++++ Last updated : 08/24/2022++++++
+# Configure customer-managed keys in an Azure key vault for an existing storage account
+
+Azure Storage encrypts all data in a storage account at rest. By default, data is encrypted with Microsoft-managed keys. For additional control over encryption keys, you can manage your own keys. Customer-managed keys must be stored in Azure Key Vault or Key Vault Managed Hardware Security Model (HSM).
+
+This article shows how to configure encryption with customer-managed keys for an existing storage account. The customer-managed keys are stored in a key vault.
+
+To learn how to configure customer-managed keys for a new storage account, see [Configure customer-managed keys in an Azure key vault for a new storage account](customer-managed-keys-configure-new-account.md).
+
+To learn how to configure encryption with customer-managed keys stored in a managed HSM, see [Configure encryption with customer-managed keys stored in Azure Key Vault Managed HSM](customer-managed-keys-configure-key-vault-hsm.md).
+
+> [!NOTE]
+> Azure Key Vault and Azure Key Vault Managed HSM support the same APIs and management interfaces for configuration.
+++
+## Choose a managed identity to authorize access to the key vault
+
+When you enable customer-managed keys for an existing storage account, you must specify a managed identity that will be used to authorize access to the key vault that contains the key. The managed identity must have permissions to access the key in the key vault.
+
+The managed identity that authorizes access to the key vault may be either a user-assigned or system-assigned managed identity. To learn more about system-assigned versus user-assigned managed identities, see [Managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types).
+
+### Use a user-assigned managed identity to authorize access
+
+A user-assigned managed identity is a standalone Azure resource. You must create the user-assigned identity before you configure customer-managed keys. To learn how to create and manage a user-assigned managed identity, see [Manage user-assigned managed identities](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
+
+#### [Azure portal](#tab/portal)
+
+When you configure customer-managed keys with the Azure portal, you can select an existing user-assigned identity through the portal user interface. For details, see [Configure customer-managed keys for an existing account](#configure-customer-managed-keys-for-an-existing-account).
+
+#### [PowerShell](#tab/powershell)
+
+To authorize access to the key vault with a user-assigned managed identity, you will need the resource ID and principal ID of the user-assigned managed identity. Call [Get-AzUserAssignedIdentity](/powershell/module/az.managedserviceidentity/get-azuserassignedidentity) to get the user-assigned managed identity and assign it to a variable that you will reference in subsequent steps:
+
+```azurepowershell
+$userIdentity = Get-AzUserAssignedIdentity -Name <user-assigned-identity> -ResourceGroupName <resource-group>
+$principalId = $userIdentity.PrincipalId
+```
+
+#### [Azure CLI](#tab/azure-cli)
+
+To authorize access to the key vault with a user-assigned managed identity, you will need the resource ID and principal ID of the user-assigned managed identity. Call the [az identity show](/cli/azure/identity#az-identity-show) command to get the user-assigned managed identity, then save the resource ID and principal ID to variables. You will need these values in subsequent steps:
+
+```azurecli
+userIdentityId=$(az identity show --name sample-user-assigned-identity --resource-group storagesamples-rg --query id --output tsv)
+principalId=$(az identity show --name sample-user-assigned-identity --resource-group storagesamples-rg --query principalId --output tsv)
+```
+++
+### Use a system-assigned managed identity to authorize access
+
+A system-assigned managed identity is associated with an instance of an Azure service, in this case an Azure Storage account. You must explicitly assign a system-assigned managed identity to a storage account before you can use the system-assigned managed identity to authorize access to the key vault that contains your customer-managed key.
+
+Only existing storage accounts can use a system-assigned identity to authorize access to the key vault. New storage accounts must use a user-assigned identity if customer-managed keys are configured on account creation.
+
+#### [Azure portal](#tab/portal)
+
+When you use the Azure portal to configure customer-managed keys with a system-assigned managed identity, the system-assigned managed identity is assigned to the storage account for you under the covers. For details, see [Configure customer-managed keys for an existing account](#configure-customer-managed-keys-for-an-existing-account).
+
+#### [PowerShell](#tab/powershell)
+
+To assign a system-assigned managed identity to your storage account, call [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount):
+
+```azurepowershell
+$storageAccount = Set-AzStorageAccount -ResourceGroupName <resource_group> `
+ -Name <storage-account> `
+ -AssignIdentity
+```
+
+Next, get the principal ID for the system-assigned managed identity, and save it to a variable. You will need this value in the next step to create the key vault access policy:
+
+```azurepowershell
+$principalId = $storageAccount.Identity.PrincipalId
+```
+
+#### [Azure CLI](#tab/azure-cli)
+
+To authenticate access to the key vault with a system-assigned managed identity, assign the system-assigned managed identity to the storage account by calling [az storage account update](/cli/azure/storage/account#az-storage-account-update):
+
+```azurecli
+az storage account update \
+ --name <storage-account> \
+ --resource-group <resource_group> \
+ --assign-identity
+```
+
+Next, get the principal ID for the system-assigned managed identity, and save it to a variable. You will need this value in the next step to create the key vault access policy:
+
+```azurecli
+principalId=$(az storage account show --name <storage-account> --resource-group <resource_group> --query identity.principalId --output tsv)
+```
+++
+## Configure the key vault access policy
+
+The next step is to configure the key vault access policy. The key vault access policy grants permissions to the managed identity that will be used to authorize access to the key vault. To learn more about key vault access policies, see [Azure Key Vault Overview](../../key-vault/general/overview.md#securely-store-secrets-and-keys) and [Azure Key Vault security overview](../../key-vault/general/security-features.md#key-vault-authentication-options).
+
+### [Azure portal](#tab/portal)
+
+To learn how to configure the key vault access policy with the Azure portal, see [Assign an Azure Key Vault access policy](../../key-vault/general/assign-access-policy.md).
+
+### [PowerShell](#tab/powershell)
+
+To configure the key vault access policy with PowerShell, call [Set-AzKeyVaultAccessPolicy](/powershell/module/az.keyvault/set-azkeyvaultaccesspolicy), providing the variable for the principal ID that you previously retrieved for the managed identity.
+
+```azurepowershell
+Set-AzKeyVaultAccessPolicy `
+ -VaultName $keyVault.VaultName `
+ -ObjectId $principalId `
+ -PermissionsToKeys wrapkey,unwrapkey,get
+```
+
+To learn more about assigning the key vault access policy with PowerShell, see [Assign an Azure Key Vault access policy](../../key-vault/general/assign-access-policy.md).
+
+### [Azure CLI](#tab/azure-cli)
+
+To configure the key vault access policy with Azure CLI, call [az keyvault set-policy](/cli/azure/keyvault#az-keyvault-set-policy), providing the variable for the principal ID that you previously retrieved for the managed identity.
+
+```azurecli
+az keyvault set-policy \
+ --name <key-vault> \
+ --resource-group <resource_group> \
+ --object-id $principalId \
+ --key-permissions get unwrapKey wrapKey
+```
+
+To learn more about assigning the key vault access policy with Azure CLI, see [Assign an Azure Key Vault access policy](../../key-vault/general/assign-access-policy.md).
+++
+## Configure customer-managed keys for an existing account
+
+When you configure encryption with customer-managed keys for an existing storage account, you can choose to automatically update the key version used for Azure Storage encryption whenever a new version is available in the associated key vault. Alternately, you can explicitly specify a key version to be used for encryption until the key version is manually updated.
+
+You can use either a system-assigned or user-assigned managed identity to authorize access to the key vault when you configure customer-managed keys for an existing storage account.
+
+> [!NOTE]
+> To rotate a key, create a new version of the key in Azure Key Vault. Azure Storage does not handle key rotation, so you will need to manage rotation of the key in the key vault. You can [configure key auto-rotation in Azure Key Vault](../../key-vault/keys/how-to-configure-key-rotation.md) or rotate your key manually.
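+
+For example, a manual rotation can be as simple as creating a new version of the existing key. A minimal Azure CLI sketch, with placeholder names:
+
+```azurecli
+# Creating a key under an existing name adds a new version of that key.
+az keyvault key create --vault-name <key-vault> --name <key>
+```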
+
+### Configure encryption for automatic updating of key versions
+
+Azure Storage can automatically update the customer-managed key that is used for encryption to use the latest key version from the key vault. Azure Storage checks the key vault daily for a new version of the key. When a new version becomes available, Azure Storage automatically begins using it for encryption.
+
+> [!IMPORTANT]
+> Azure Storage checks the key vault for a new key version only once daily. When you rotate a key, be sure to wait 24 hours before disabling the older version.
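+
+When you're ready to retire an older version after that window, one option, sketched here with placeholder names, is to disable the old version in the key vault:
+
+```azurecli
+# Disable a specific, older key version; vault, key, and version are placeholders.
+az keyvault key set-attributes \
+    --vault-name <key-vault> \
+    --name <key> \
+    --version <old-key-version> \
+    --enabled false
+```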
+
+### [Azure portal](#tab/portal)
+
+To configure customer-managed keys for an existing account with automatic updating of the key version in the Azure portal, follow these steps:
+
+1. Navigate to your storage account.
+1. On the **Settings** blade for the storage account, click **Encryption**. By default, key management is set to **Microsoft Managed Keys**, as shown in the following image.
+
+ :::image type="content" source="media/customer-managed-keys-configure-existing-account/portal-configure-encryption-keys.png" alt-text="Screenshot showing encryption options in Azure portal." lightbox="media/customer-managed-keys-configure-existing-account/portal-configure-encryption-keys.png":::
+
+1. Select the **Customer Managed Keys** option.
+1. Choose the **Select from Key Vault** option.
+1. Select **Select a key vault and key**.
+1. Select the key vault containing the key you want to use. You can also create a new key vault.
+1. Select the key from the key vault. You can also create a new key.
+
+ :::image type="content" source="media/customer-managed-keys-configure-existing-account/portal-select-key-from-key-vault.png" alt-text="Screenshot showing how to select key vault and key in Azure portal.":::
+
+1. Select the type of identity to use to authenticate access to the key vault. The options include **System-assigned** (the default) or **User-assigned**. To learn more about each type of managed identity, see [Managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types).
+
+ 1. If you select **System-assigned**, the system-assigned managed identity for the storage account is created under the covers, if it does not already exist.
+ 1. If you select **User-assigned**, then you must select an existing user-assigned identity that has permissions to access the key vault. To learn how to create a user-assigned identity, see [Manage user-assigned managed identities](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
+
+ :::image type="content" source="media/customer-managed-keys-configure-existing-account/select-user-assigned-managed-identity-portal.png" alt-text="Screenshot showing how to select a user-assigned managed identity for key vault authentication.":::
+
+1. Save your changes.
+
+After you've specified the key, the Azure portal indicates that automatic updating of the key version is enabled and displays the key version currently in use for encryption. The portal also displays the type of managed identity used to authorize access to the key vault and the principal ID for the managed identity.
++
+### [PowerShell](#tab/powershell)
+
+To configure customer-managed keys for an existing account with automatic updating of the key version with PowerShell, install the [Az.Storage](https://www.powershellgallery.com/packages/Az.Storage) module, version 2.0.0 or later.
+
+Next, call [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount) to update the storage account's encryption settings, omitting the key version. Include the **-KeyvaultEncryption** option to enable customer-managed keys for the storage account.
+
+```azurepowershell
+Set-AzStorageAccount -ResourceGroupName <resource-group> `
+ -AccountName <storage-account> `
+ -KeyvaultEncryption `
+ -KeyName $key.Name `
+ -KeyVaultUri $keyVault.VaultUri
+```
+
+### [Azure CLI](#tab/azure-cli)
+
+To configure customer-managed keys for an existing account with automatic updating of the key version with Azure CLI, install [Azure CLI version 2.4.0](/cli/azure/release-notes-azure-cli#april-21-2020) or later. For more information, see [Install the Azure CLI](/cli/azure/install-azure-cli).
+
+Next, call [az storage account update](/cli/azure/storage/account#az-storage-account-update) to update the storage account's encryption settings, omitting the key version. Include the `--encryption-key-source` parameter and set it to `Microsoft.Keyvault` to enable customer-managed keys for the account.
+
+```azurecli
+key_vault_uri=$(az keyvault show \
+ --name <key-vault> \
+ --resource-group <resource_group> \
+ --query properties.vaultUri \
+ --output tsv)
+az storage account update \
+ --name <storage-account> \
+ --resource-group <resource_group> \
+ --encryption-key-name <key> \
+ --encryption-key-source Microsoft.Keyvault \
+ --encryption-key-vault $key_vault_uri
+```
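+
+After the update completes, you can optionally confirm the key vault properties now associated with the account. This query is a sketch using the same placeholder names:
+
+```azurecli
+az storage account show \
+    --name <storage-account> \
+    --resource-group <resource_group> \
+    --query encryption.keyVaultProperties
+```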
+++
+### Configure encryption for manual updating of key versions
+
+If you prefer to manually update the key version, then explicitly specify the version at the time that you configure encryption with customer-managed keys. In this case, Azure Storage will not automatically update the key version when a new version is created in the key vault. To use a new key version, you must manually update the version used for Azure Storage encryption.
+
+# [Azure portal](#tab/portal)
+
+To configure customer-managed keys with manual updating of the key version in the Azure portal, specify the key URI, including the version. To specify a key as a URI, follow these steps:
+
+1. To locate the key URI in the Azure portal, navigate to your key vault, and select the **Keys** setting. Select the desired key to view its versions, and then select a key version to view the settings for that version.
+1. Copy the value of the **Key Identifier** field, which provides the URI.
+
+ :::image type="content" source="media/customer-managed-keys-configure-existing-account/portal-copy-key-identifier.png" alt-text="Screenshot showing key vault key URI in Azure portal.":::
+
+1. In the **Encryption key** settings for your storage account, choose the **Enter key URI** option.
+1. Paste the URI that you copied into the **Key URI** field. Include the key version in the URI to configure manual updating of the key version.
+
+ :::image type="content" source="media/customer-managed-keys-configure-existing-account/portal-specify-key-uri.png" alt-text="Screenshot showing how to enter key URI in Azure portal.":::
+
+1. Specify the subscription that contains the key vault.
+1. Specify either a system-assigned or user-assigned managed identity.
+1. Save your changes.
+
+# [PowerShell](#tab/powershell)
+
+To configure customer-managed keys with manual updating of the key version, explicitly provide the key version when you configure encryption for the storage account. Call [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount) to update the storage account's encryption settings, as shown in the following example, and include the **-KeyvaultEncryption** option to enable customer-managed keys for the storage account.
+
+Remember to replace the placeholder values in brackets with your own values and to use the variables defined in the previous examples.
+
+```azurepowershell
+Set-AzStorageAccount -ResourceGroupName <resource-group> `
+ -AccountName <storage-account> `
+ -KeyvaultEncryption `
+ -KeyName $key.Name `
+ -KeyVersion $key.Version `
+ -KeyVaultUri $keyVault.VaultUri
+```
+
+When you manually update the key version, you will need to update the storage account's encryption settings to use the new version. First, call [Get-AzKeyVaultKey](/powershell/module/az.keyvault/get-azkeyvaultkey) to get the latest version of the key. Then call [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount) to update the storage account's encryption settings to use the new version of the key, as shown in the previous example.
+
+# [Azure CLI](#tab/azure-cli)
+
+To configure customer-managed keys with manual updating of the key version, explicitly provide the key version when you configure encryption for the storage account. Call [az storage account update](/cli/azure/storage/account#az-storage-account-update) to update the storage account's encryption settings, as shown in the following example. Include the `--encryption-key-source` parameter and set it to `Microsoft.Keyvault` to enable customer-managed keys for the account.
+
+Remember to replace the placeholder values in brackets with your own values.
+
+```azurecli
+key_vault_uri=$(az keyvault show \
+ --name <key-vault> \
+ --resource-group <resource_group> \
+ --query properties.vaultUri \
+ --output tsv)
+key_version=$(az keyvault key list-versions \
+ --name <key> \
+ --vault-name <key-vault> \
+ --query [-1].kid \
+ --output tsv | cut -d '/' -f 6)
+az storage account update \
+ --name <storage-account> \
+ --resource-group <resource_group> \
+ --encryption-key-name <key> \
+ --encryption-key-version $key_version \
+ --encryption-key-source Microsoft.Keyvault \
+ --encryption-key-vault $key_vault_uri
+```
+
+When you manually update the key version, you will need to update the storage account's encryption settings to use the new version. First, query for the key vault URI by calling [az keyvault show](/cli/azure/keyvault#az-keyvault-show), and for the key version by calling [az keyvault key list-versions](/cli/azure/keyvault/key#az-keyvault-key-list-versions). Then call [az storage account update](/cli/azure/storage/account#az-storage-account-update) to update the storage account's encryption settings to use the new version of the key, as shown in the previous example.
++++++
+## Next steps
+
+- [Azure Storage encryption for data at rest](storage-service-encryption.md)
+- [Customer-managed keys for Azure Storage encryption](customer-managed-keys-overview.md)
+- [Configure customer-managed keys in an Azure key vault for a new storage account](customer-managed-keys-configure-new-account.md)
storage Customer Managed Keys Configure New Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-new-account.md
+
+ Title: Configure customer-managed keys for a new storage account
+
+description: Learn how to configure Azure Storage encryption with customer-managed keys for a new storage account by using the Azure portal, PowerShell, or Azure CLI. Customer-managed keys are stored in an Azure key vault.
+++++ Last updated : 08/24/2022++++++
+# Configure customer-managed keys in an Azure key vault for a new storage account
+
+Azure Storage encrypts all data in a storage account at rest. By default, data is encrypted with Microsoft-managed keys. For additional control over encryption keys, you can manage your own keys. Customer-managed keys must be stored in an Azure Key Vault or in an Azure Key Vault Managed Hardware Security Model (HSM).
+
+This article shows how to configure encryption with customer-managed keys at the time that you create a new storage account. The customer-managed keys are stored in a key vault.
+
+To learn how to configure customer-managed keys for an existing storage account, see [Configure customer-managed keys in an Azure key vault for an existing storage account](customer-managed-keys-configure-existing-account.md).
+
+To learn how to configure encryption with customer-managed keys stored in a managed HSM, see [Configure encryption with customer-managed keys stored in Azure Key Vault Managed HSM](customer-managed-keys-configure-key-vault-hsm.md).
+
+> [!NOTE]
+> Azure Key Vault and Azure Key Vault Managed HSM support the same APIs and management interfaces for configuration.
+++
+## Use a user-assigned managed identity to authorize access to the key vault
+
+When you enable customer-managed keys for a new storage account, you must specify a user-assigned managed identity. The user-assigned managed identity will be used to authorize access to the key vault that contains the key. The user-assigned managed identity must have permissions to access the key in the key vault.
+
+A user-assigned managed identity is a standalone Azure resource. To learn more about user-assigned managed identities, see [Managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types). To learn how to create and manage a user-assigned managed identity, see [Manage user-assigned managed identities](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
+
+Both new and existing storage accounts can use a user-assigned identity to authorize access to the key vault. You must create the user-assigned identity before you configure customer-managed keys.
+
+### [Azure portal](#tab/portal)
+
+When you configure customer-managed keys with the Azure portal, you can select an existing user-assigned identity through the portal user interface. For details, see [Configure customer-managed keys for a new storage account](#configure-customer-managed-keys-for-a-new-storage-account).
+
+### [PowerShell](#tab/powershell)
+
+To authorize access to the key vault with a user-assigned managed identity, you will need the resource ID and principal ID of the user-assigned managed identity. Call [Get-AzUserAssignedIdentity](/powershell/module/az.managedserviceidentity/get-azuserassignedidentity) to get the user-assigned managed identity and assign it to a variable that you will reference in subsequent steps:
+
+```azurepowershell
+$userIdentity = Get-AzUserAssignedIdentity -Name <user-assigned-identity> -ResourceGroupName <resource-group>
+```
+
+### [Azure CLI](#tab/azure-cli)
+
+To authorize access to the key vault with a user-assigned managed identity, you will need the resource ID and principal ID of the user-assigned managed identity. Call the [az identity show](/cli/azure/identity#az-identity-show) command to get the user-assigned managed identity, then save the resource ID and principal ID to variables. You will need these values in subsequent steps:
+
+```azurecli
+userIdentityId=$(az identity show --name sample-user-assigned-identity --resource-group storagesamples-rg --query id --output tsv)
+principalId=$(az identity show --name sample-user-assigned-identity --resource-group storagesamples-rg --query principalId --output tsv)
+```
+++
+## Configure the key vault access policy
+
+The next step is to configure the key vault access policy. The key vault access policy grants permissions to the user-assigned managed identity that will be used to authorize access to the key vault. To learn more about key vault access policies, see [Azure Key Vault Overview](../../key-vault/general/overview.md#securely-store-secrets-and-keys) and [Azure Key Vault security overview](../../key-vault/general/security-features.md#key-vault-authentication-options).
+
+### [Azure portal](#tab/portal)
+
+To learn how to configure the key vault access policy with the Azure portal, see [Assign an Azure Key Vault access policy](../../key-vault/general/assign-access-policy.md).
+
+### [PowerShell](#tab/powershell)
+
+To configure the key vault access policy with PowerShell, call [Set-AzKeyVaultAccessPolicy](/powershell/module/az.keyvault/set-azkeyvaultaccesspolicy), providing the variable for the principal ID that you previously retrieved for the user-assigned managed identity.
+
+```azurepowershell
+Set-AzKeyVaultAccessPolicy `
+ -VaultName $keyVault.VaultName `
+ -ObjectId $userIdentity.PrincipalId `
+ -PermissionsToKeys wrapkey,unwrapkey,get
+```
+
+To learn more about assigning the key vault access policy with PowerShell, see [Assign an Azure Key Vault access policy](../../key-vault/general/assign-access-policy.md).
+
+### [Azure CLI](#tab/azure-cli)
+
+To configure the key vault access policy with Azure CLI, call [az keyvault set-policy](/cli/azure/keyvault#az-keyvault-set-policy), providing the variable for the principal ID that you previously retrieved for the user-assigned managed identity.
+
+```azurecli
+az keyvault set-policy \
+ --name <key-vault> \
+ --resource-group <resource_group> \
+ --object-id $principalId \
+ --key-permissions get unwrapKey wrapKey
+```
+
+To learn more about assigning the key vault access policy with Azure CLI, see [Assign an Azure Key Vault access policy](../../key-vault/general/assign-access-policy.md).
+++
+## Configure customer-managed keys for a new storage account
+
+When you configure encryption with customer-managed keys for a new storage account, you can choose to automatically update the key version used for Azure Storage encryption whenever a new version is available in the associated key vault. Alternately, you can explicitly specify a key version to be used for encryption until the key version is manually updated.
+
+You must use an existing user-assigned managed identity to authorize access to the key vault when you configure customer-managed keys while creating the storage account. The user-assigned managed identity must have appropriate permissions to access the key vault. For more information, see [Authenticate to Azure Key Vault](../../key-vault/general/authentication.md).
+
+### Configure encryption for automatic updating of key versions
+
+Azure Storage can automatically update the customer-managed key that is used for encryption to use the latest key version from the key vault. Azure Storage checks the key vault daily for a new version of the key. When a new version becomes available, Azure Storage automatically begins using it for encryption.
+
+> [!IMPORTANT]
+> Azure Storage checks the key vault for a new key version only once daily. When you rotate a key, be sure to wait 24 hours before disabling the older version.
+
+### [Azure portal](#tab/portal)
+
+To configure customer-managed keys for a new storage account with automatic updating of the key version, follow these steps:
+
+1. In the Azure portal, navigate to the **Storage accounts** page, and select the **Create** button to create a new account.
+1. Follow the steps outlined in [Create a storage account](storage-account-create.md) to fill out the fields on the **Basics**, **Advanced**, **Networking**, and **Data Protection** tabs.
+1. On the **Encryption** tab, indicate for which services you want to enable support for customer-managed keys in the **Enable support for customer-managed keys** field.
+1. In the **Encryption type** field, select **Customer-managed keys (CMK)**.
+1. In the **Encryption key** field, choose **Select a key vault and key**, and specify the key vault and key.
+1. For the **User-assigned identity** field, select an existing user-assigned managed identity.
+
+ :::image type="content" source="media/customer-managed-keys-configure-new-account/portal-new-account-configure-cmk.png" alt-text="Screenshot showing how to configure customer-managed keys for a new storage account in Azure portal.":::
+
+1. Select the **Review** button to validate and create the account.
+
+You can also configure customer-managed keys with manual updating of the key version when you create a new storage account. Follow the steps described in [Configure encryption for manual updating of key versions](#configure-encryption-for-manual-updating-of-key-versions).
+
+### [PowerShell](#tab/powershell)
+
+To configure customer-managed keys for a new storage account with automatic updating of the key version, call [New-AzStorageAccount](/powershell/module/az.storage/new-azstorageaccount), as shown in the following example. Use the variable that you created previously for the resource ID of the user-assigned managed identity. You will also need the key vault URI and key name:
+
+```azurepowershell
+New-AzStorageAccount -ResourceGroupName <resource-group> `
+ -Name <storage-account> `
+ -Kind StorageV2 `
+ -SkuName Standard_LRS `
+ -Location $location `
+ -IdentityType SystemAssignedUserAssigned `
+ -UserAssignedIdentityId $userIdentity.Id `
+ -KeyVaultUri $keyVault.VaultUri `
+ -KeyName $key.Name `
+ -KeyVaultUserAssignedIdentityId $userIdentity.Id
+```
+
+### [Azure CLI](#tab/azure-cli)
+
+To configure customer-managed keys for a new storage account with automatic updating of the key version, call [az storage account create](/cli/azure/storage/account#az-storage-account-create), as shown in the following example. Use the variable that you created previously for the resource ID of the user-assigned managed identity. You will also need the key vault URI and key name:
+
+```azurecli
+az storage account create \
+ --name <storage-account> \
+ --resource-group <resource-group> \
+ --location <location> \
+ --sku Standard_LRS \
+ --kind StorageV2 \
+ --identity-type SystemAssigned,UserAssigned \
+ --user-identity-id <user-assigned-managed-identity> \
+ --encryption-key-vault <key-vault-uri> \
+ --encryption-key-name <key-name> \
+ --encryption-key-source Microsoft.Keyvault \
+ --key-vault-user-identity-id <user-assigned-managed-identity>
+```
+++
+### Configure encryption for manual updating of key versions
+
+If you prefer to manually update the key version, then explicitly specify the version when you configure encryption with customer-managed keys while creating the storage account. In this case, Azure Storage will not automatically update the key version when a new version is created in the key vault. To use a new key version, you must manually update the version used for Azure Storage encryption.
+
+You must use an existing user-assigned managed identity to authorize access to the key vault when you configure customer-managed keys while creating the storage account. The user-assigned managed identity must have appropriate permissions to access the key vault. For more information, see [Authenticate to Azure Key Vault](../../key-vault/general/authentication.md).
+
+# [Azure portal](#tab/portal)
+
+To configure customer-managed keys with manual updating of the key version in the Azure portal, specify the key URI, including the version, while creating the storage account. To specify a key as a URI, follow these steps:
+
+1. In the Azure portal, navigate to the **Storage accounts** page, and select the **Create** button to create a new account.
+1. Follow the steps outlined in [Create a storage account](storage-account-create.md) to fill out the fields on the **Basics**, **Advanced**, **Networking**, and **Data Protection** tabs.
+1. On the **Encryption** tab, indicate for which services you want to enable support for customer-managed keys in the **Enable support for customer-managed keys** field.
+1. In the **Encryption type** field, select **Customer-managed keys (CMK)**.
+1. To locate the key URI in the Azure portal, navigate to your key vault, and select the **Keys** setting. Select the desired key to view its versions, and then select a key version to view the settings for that version.
+1. Copy the value of the **Key Identifier** field, which provides the URI.
+
+ :::image type="content" source="media/customer-managed-keys-configure-new-account/portal-copy-key-identifier.png" alt-text="Screenshot showing key vault key URI in Azure portal.":::
+
+1. In the **Encryption key** settings for your storage account, choose the **Enter key URI** option.
+1. Paste the URI that you copied into the **Key URI** field. Include the key version in the URI to configure manual updating of the key version.
+1. Specify a user-assigned managed identity by choosing the **Select an identity** link.
+
+ :::image type="content" source="media/customer-managed-keys-configure-new-account/portal-specify-key-uri.png" alt-text="Screenshot showing how to enter key URI in Azure portal.":::
+
+1. Select the **Review** button to validate and create the account.
+
+# [PowerShell](#tab/powershell)
+
+To configure customer-managed keys with manual updating of the key version, explicitly provide the key version when you configure encryption while creating the storage account. Call [New-AzStorageAccount](/powershell/module/az.storage/new-azstorageaccount) to create the account, as shown in the following example, and include the **-KeyVaultUri**, **-KeyName**, and **-KeyVersion** parameters to enable customer-managed keys for the storage account.
+
+Remember to replace the placeholder values in brackets with your own values and to use the variables defined in the previous examples.
+
+```azurepowershell
+New-AzStorageAccount -ResourceGroupName <resource-group> `
+ -Name <storage-account> `
+ -Kind StorageV2 `
+ -SkuName Standard_LRS `
+ -Location $location `
+ -IdentityType SystemAssignedUserAssigned `
+ -UserAssignedIdentityId $userIdentity.Id `
+ -KeyVaultUri $keyVault.VaultUri `
+ -KeyName $key.Name `
+ -KeyVersion $key.Version `
+ -KeyVaultUserAssignedIdentityId $userIdentity.Id
+```
++
+When you manually update the key version, you will need to update the storage account's encryption settings to use the new version. First, call [Get-AzKeyVaultKey](/powershell/module/az.keyvault/get-azkeyvaultkey) to get the latest version of the key. Then call [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount) to update the storage account's encryption settings to use the new version of the key, as shown in the previous example.
+
+# [Azure CLI](#tab/azure-cli)
+
+To configure customer-managed keys with manual updating of the key version, explicitly provide the key version when you configure encryption while creating the storage account. Call [az storage account create](/cli/azure/storage/account#az-storage-account-create) to create the account, as shown in the following example. Include the `--encryption-key-source` parameter set to `Microsoft.Keyvault` and an explicit `--encryption-key-version` to enable customer-managed keys for the account.
+
+Remember to replace the placeholder values in brackets with your own values.
+
+```azurecli
+key_vault_uri=$(az keyvault show \
+ --name <key-vault> \
+ --resource-group <resource_group> \
+ --query properties.vaultUri \
+ --output tsv)
+key_version=$(az keyvault key list-versions \
+ --name <key> \
+ --vault-name <key-vault> \
+ --query [-1].kid \
+ --output tsv | cut -d '/' -f 6)
+az storage account create \
+ --name <storage-account> \
+ --resource-group <resource-group> \
+ --location <location> \
+ --sku Standard_LRS \
+ --kind StorageV2 \
+ --identity-type SystemAssigned,UserAssigned \
+ --user-identity-id <user-assigned-managed-identity> \
+ --encryption-key-vault $key_vault_uri \
+ --encryption-key-name <key-name> \
+ --encryption-key-source Microsoft.Keyvault \
+ --encryption-key-version $key_version \
+ --key-vault-user-identity-id <user-assigned-managed-identity>
+```
+
+When you manually update the key version, you will need to update the storage account's encryption settings to use the new version. First, query for the key vault URI by calling [az keyvault show](/cli/azure/keyvault#az-keyvault-show), and for the key version by calling [az keyvault key list-versions](/cli/azure/keyvault/key#az-keyvault-key-list-versions). Then call [az storage account update](/cli/azure/storage/account#az-storage-account-update) to update the storage account's encryption settings to use the new version of the key, as shown in the previous example.
++++++
+## Next steps
+
+- [Azure Storage encryption for data at rest](storage-service-encryption.md)
+- [Customer-managed keys for Azure Storage encryption](customer-managed-keys-overview.md)
+- [Configure customer-managed keys in an Azure key vault for an existing storage account](customer-managed-keys-configure-existing-account.md)
storage Customer Managed Keys Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-overview.md
Previously updated : 07/11/2022 Last updated : 08/12/2022
storage Geo Redundant Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/geo-redundant-design.md
Another consideration is how to handle multiple instances of an application, and
You have three main options for monitoring the frequency of retries in the primary region in order to determine when to switch over to the secondary region and change the application to run in read-only mode. -- Add a handler for the [**Retrying**](/dotnet/api/microsoft.azure.cosmos.table.operationcontext.retrying) event on the [**OperationContext**](/java/api/com.microsoft.applicationinsights.extensibility.context.operationcontext) object you pass to your storage requests ΓÇô this is the method displayed in this article and used in the accompanying sample. These events fire whenever the client retries a request, enabling you to track how often the client encounters retryable errors on a primary endpoint.
+- Add a handler for the [**Retrying**](/dotnet/api/microsoft.azure.cosmos.table.operationcontext.retrying) event on the [**OperationContext**](/java/api/com.microsoft.azure.storage.operationcontext) object you pass to your storage requests. This is the method displayed in this article and used in the accompanying sample. These events fire whenever the client retries a request, enabling you to track how often the client encounters retryable errors on a primary endpoint.
# [.NET v12 SDK](#tab/current)
storage Storage Explorer Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-explorer-troubleshooting.md
When you report an issue to GitHub, you might be asked to gather certain logs to
### Storage Explorer logs
-Starting with version 1.16.0, Storage Explorer logs various things to its own application logs. You can easily get to these logs by selecting **Help** > **Open Logs Directory**. By default, Storage Explorer logs at a low level of verbosity. To change the verbosity level, add an environment variable with the name of `STG_EX_LOG_LEVEL`, and any of the following values:
--- `silent`-- `critical`-- `error`-- `warning`-- `info` (default level)-- `verbose`-- `debug`
+Storage Explorer writes various events to its own application logs. You can easily get to these logs by selecting **Help** > **Open Logs Directory**. By default, Storage Explorer logs at a low level of verbosity. To change the verbosity level, go to **Settings** (the **gear** symbol on the left) > **Application** > **Logging** > **Log Level**, and set the log level as needed. For troubleshooting, we recommend the `debug` log level.
Logs are split into folders for each session of Storage Explorer that you run. For whatever log files you need to share, place them in a zip archive, with files from different sessions in different folders.
If you're having trouble transferring data, you might need to get the AzCopy log
### Network logs
-For some issues, you'll need to provide logs of the network calls made by Storage Explorer. On Windows, you can do this step by using Fiddler.
+For some issues, you'll need to provide logs of the network calls made by Storage Explorer. On Windows, you can do this by using Fiddler.
> [!NOTE] > Fiddler traces might contain passwords you entered or sent in your browser during the gathering of the trace. Make sure to read the instructions on how to sanitize a Fiddler trace. Don't upload Fiddler traces to GitHub. You'll be told where you can securely send your Fiddler trace.
For some issues, you'll need to provide logs of the network calls made by Storag
1. Make sure **Capture CONNECTs** and **Decrypt HTTPS traffic** are selected. 1. Select **Actions**. 1. Select **Trust Root Certificate** and then select **Yes** in the next dialog.
-1. Select **Actions** again.
-1. Select **Export Root Certificate to Desktop**.
-1. Go to your desktop, find the *FiddlerRoot.cer* file, and double-click it.
-1. Go to the **Details** tab.
-1. Select **Copy to File**.
-1. In the export wizard, choose the following options:
-
- - Base-64 encoded X.509.
- - For file name, browse to *C:\Users\\<your user dir\>\AppData\Roaming\StorageExplorer\certs*. Then you can save it as any file name.
-
-1. Close the certificate window.
1. Start Storage Explorer.
-1. Go to **Edit** > **Configure Proxy**.
-1. In the dialog, select **Use app proxy settings**. Set the URL to http://localhost and the port to **8888**.
-1. Select **OK**.
+1. Go to **Settings** (the **gear** symbol on the left) > **Application** > **Proxy**.
+1. Change the proxy source dropdown to be **Use system proxy (preview)**.
1. Restart Storage Explorer. 1. You should start seeing network calls from a `storageexplorer:` process show up in Fiddler.
For some issues, you'll need to provide logs of the network calls made by Storag
1. Close all apps other than Fiddler. 1. Clear the Fiddler log by using the **X** in the top left, near the **View** menu.
-1. Optional/recommended: Let Fiddler set for a few minutes. If you see network calls appear that aren't related to Storage Explorer, right-click them and select **Filter Now** > **Hide (process name)**.
-1. Start Storage Explorer.
+1. Optional/recommended: Let Fiddler run for a few minutes. If you see network calls appear that aren't related to Storage Explorer, right-click them and select **Filter Now** > **Hide \<process name\>**.
+1. Start/restart Storage Explorer.
1. Reproduce the issue. 1. Select **File** > **Save** > **All Sessions**. Save it somewhere you won't forget. 1. Close Fiddler and Storage Explorer.
stream-analytics Migrate To Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/migrate-to-vscode.md
+
+ Title: How to Migrate ASA projects to Visual Studio Code
+description: This article provides guidance for Visual Studio users migrating ASA projects to Visual Studio Code.
++ Last updated : 08/23/2022++++
+# How to Migrate ASA projects to Visual Studio Code
+
+This article provides guidance for Visual Studio users migrating Azure Stream Analytics (ASA) projects to Visual Studio Code (VSCode). The ASA Tools extension for Visual Studio is no longer maintained. We recommend that you use the ASA Tools extension in VSCode for local testing before you submit and start an ASA job.
+
+If you have a local ASA project in Visual Studio, follow [these steps](#faqs) to submit your ASA project to Azure.
+
+## Install VSCode and ASA Tools extension
+
+1. Install [Visual Studio Code](https://code.visualstudio.com/)
+
+2. Open Visual Studio Code, select **Extensions** on the left pane, search for **Stream Analytics** and select **Install** on the **Azure Stream Analytics Tools** extension.
+
+ ![Search for Stream Analytics](./media/stream-analytics-migrate-to-vscode/search-for-stream-analytics.png)
+
+3. After the extension is installed, verify that **Azure Stream Analytics Tools** is visible in **Enabled Extensions**.
+
+4. Select the Azure icon on the Visual Studio Code activity bar. Under Stream Analytics on the side bar, select **Sign in to Azure**.
+
+ ![Sign in to Azure in Visual Studio Code](./media/stream-analytics-migrate-to-vscode/azure-sign-in.png)
+
+5. When you're signed in, your Azure account name appears on the status bar in the lower-left corner of the Visual Studio Code window.
++
+## Export an ASA Job and open in VSCode
+
+If you've created an ASA job in the Azure portal, you can export it to VSCode on your local machine. There are two ways to export an ASA job:
+
+### Option 1 - Export from the Azure portal
+
+1. Sign in to the Azure portal and open your ASA job. On the **Query** page, select **Open in VS Code** to export the job.
+
+ :::image type="content" source="./media/stream-analytics-migrate-to-vscode/portal-open-in-vscode.png" alt-text="Screenshot of the Azure portal using the Open in VSCode to launch VSCode in the local machine." lightbox= "./media/stream-analytics-migrate-to-vscode/portal-open-in-vscode.png" :::
+
+2. Select a folder where you want to export the ASA project.
+3. An ASA project is then automatically created and added to your workspace in VSCode. You should see a folder with the same name as your ASA job.
+
+ ![VSCode export ASA project](./media/stream-analytics-migrate-to-vscode/vscode-export-asa-project.jpg)
+
+4. A Stream Analytics project consists of three folders: **Inputs**, **Outputs**, and **Functions**. It also has the query script **(\*.asaql)**, a **JobConfig.json** file, and an **asaproj.json** configuration file. If you've configured multiple input and output sources for the job, JSON files are created for each source under the respective folders. A rough sketch of the **asaproj.json** file follows this step.
+
+ ![VSCode Inputs and Outputs folders](./media/stream-analytics-migrate-to-vscode/vscode-folders.jpg)
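+
+    The generated **asaproj.json** ties these files together. Here's a rough sketch of the kind of content it holds; the field names are assumptions based on the extension's scaffolding rather than an authoritative schema:
+
+    ```json
+    {
+      "name": "your-asa-job-name",
+      "startFile": "your-asa-job-name.asaql",
+      "configurations": [
+        { "filePath": "Inputs/Input.json", "subType": "Input" },
+        { "filePath": "Outputs/Output.json", "subType": "Output" },
+        { "filePath": "JobConfig.json", "subType": "JobConfig" }
+      ]
+    }
+    ```
+
+    The extension keeps this file up to date as you add sources, so you rarely need to edit it by hand.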
+
+### Option 2 - Export an ASA job in VSCode
+
+1. Select the **Azure** icon on the VSCode activity bar. Find the **Subscription** where your ASA job is created, then select **Export** to export the ASA job.
+
+ ![Export an ASA job in VSCode](./media/stream-analytics-migrate-to-vscode/vscode-export-job.jpg)
+
+2. Once the export is completed, you'll see an ASA project created in your workspace.
+
+ ![ASA job in VSCode workspace](./media/stream-analytics-migrate-to-vscode/vscode-workspace.jpg)
+
+3. If your ASA job has multiple input and output sources configured, JSON files are created for each source under the **Inputs** and **Outputs** folders, respectively.
+
+## Run an ASA job in VSCode
+
+After an ASA job is exported, you can run your query on the local machine. For input, data can be ingested from local files or live sources. Output results are either sent as files to a local folder or to live sinks. For more details, see [Run jobs locally with VS Code](./visual-studio-code-local-run-all.md).
+
+Follow these steps to run your job with live input and save output results locally:
+1. Before you begin, install [.NET core SDK](https://dotnet.microsoft.com/download) and restart Visual Studio Code.
+
+2. Go to the **\*.asaql** file and select **Run Locally**.
+
+ :::image type="content" source="media/stream-analytics-migrate-to-vscode/run-locally-vscode.png" alt-text="Screenshot of Visual Studio Code using Run Locally to run an ASA job." lightbox= "media/stream-analytics-migrate-to-vscode/run-locally-vscode.png" :::
+
+3. Then select **Use Live Input and Local Output** in the Command Palette.
+
+ ![vscode command palette](./media/stream-analytics-migrate-to-vscode/local-run-command-palette.png)
+
+4. If your job starts successfully, you can view the output results, job diagram, and metrics for your ASA job.
+
+ :::image type="content" source="./media/stream-analytics-migrate-to-vscode/vscode-job-diagram-metrics.png" alt-text="Screenshot of the Visual Studio Code using Job Diagram and Metric features. " lightbox= "./media/stream-analytics-migrate-to-vscode/vscode-job-diagram-metrics.png" :::
+
+For more details about debugging, see [Debug ASA queries locally using job diagram](./debug-locally-using-job-diagram-vs-code.md).
+
+## FAQs
+
+### How do I migrate a local ASA project from Visual Studio to VSCode?
+
+If you have a local ASA project in Visual Studio that you haven't yet submitted, follow these steps to submit it to Azure.
+
+1. Open your ASA project in Visual Studio. You should see the **Functions**, **Inputs**, and **Outputs** folders in **Solution Explorer**.
+
+ ![VS Solution Explorer](./media/stream-analytics-migrate-to-vscode/vs-soluton-explorer.png)
+
+2. Open the script **(\*.asaql)** and select **Submit to Azure** in the editor.
+
+ ![VS Submit to Azure](./media/stream-analytics-migrate-to-vscode/vs-submit-to-azure.png)
+
+3. Select **Create a New Azure Stream Analytics job** and enter a **Job Name**. Choose the **Subscription**, **Resource Group**, and **Location** for the ASA project.
+
+ ![VS save project](./media/stream-analytics-migrate-to-vscode/vs-save-project.jpg)
+
+4. Then you can go to the Azure portal and find the ASA job under your **Resource Group**.
+
+5. To learn how to export an ASA job in VSCode, see [Export an ASA job and open in VSCode](#export-an-asa-job-and-open-in-vscode).
+
+### Do I need to configure the input and output sources after an ASA job is exported?
+
+No. If your ASA job has multiple input and output sources configured in the Azure portal, JSON files are created for each source under the respective folders when the job is exported.
++
+### How do I add a new input source in VSCode?
+
+1. Right-click the Inputs folder in your Stream Analytics project. Then select **ASA: Add Input** from the context menu.
+
+ ![vscode add input](./media/stream-analytics-migrate-to-vscode/vscode-add-input.png)
+
+2. Choose the input type and follow the instructions to edit your input JSON files. (A sketch of a local-file input configuration follows these steps.)
+
+ ![vscode add input codelens](./media/stream-analytics-migrate-to-vscode/vscode-add-input-codelens.png)
+
+3. Then you can preview data and verify if the new input source is added.
+
+ :::image type="content" source="./media/stream-analytics-migrate-to-vscode/preview-data.png" alt-text="Screenshot of the Visual Studio Code using Preview Data." lightbox= "./media/stream-analytics-migrate-to-vscode/preview-data.png" :::
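+
+As an illustration of what step 2's JSON looks like for a local-file input, here's a hedged sketch; the field names are assumptions drawn from the local-testing workflow, and the extension scaffolds the actual file for you:
+
+```json
+{
+  "InputAlias": "Input",
+  "Type": "Data Stream",
+  "Format": "Json",
+  "FilePath": "local-sample-data.json",
+  "ScriptType": "InputMock"
+}
+```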
+
+## Next steps
+
+To learn about Azure Stream Analytics Tools for Visual Studio Code, continue to the following articles:
+* [Test Stream Analytics queries locally with sample data using Visual Studio Code](visual-studio-code-local-run.md)
+* [Test Azure Stream Analytics jobs locally against live input with Visual Studio Code](visual-studio-code-local-run-live-input.md)
+* [Use Visual Studio Code to view Azure Stream Analytics jobs](visual-studio-code-explore-jobs.md)
+* [Set up CI/CD pipelines by using the npm package](./cicd-overview.md)
synapse-analytics Continuous Integration Delivery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/cicd/continuous-integration-delivery.md
Use the [Synapse workspace deployment](https://marketplace.visualstudio.com/item
The deployment task supports three types of operations: validate only, deploy, and validate and deploy.

> [!NOTE]
- > This workspace deployment extension in is not backward compatible. Please make sure that the latest version is installed and used. You can read the release note in [overview] (https://marketplace.visualstudio.com/items?itemName=AzureSynapseWorkspace.synapsecicd-deploy&ssr=false#overview) in Azure DevOps and the [latest version](https://github.com/marketplace/actions/synapse-workspace-deployment) in GitHub action.
+ > This workspace deployment extension is not backward compatible. Make sure that the latest version is installed and used. You can read the release notes in the [overview](https://marketplace.visualstudio.com/items?itemName=AzureSynapseWorkspace.synapsecicd-deploy&ssr=false#overview) in Azure DevOps and see the [latest version](https://github.com/marketplace/actions/synapse-workspace-deployment) in the GitHub Action.
**Validate** validates the Synapse artifacts in a non-publish branch with the task and generates the workspace template and parameter template files. The validation operation works only in the YAML pipeline. A sample YAML file is shown below:
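Here's a minimal sketch of such a validate-only step; the task reference and input names (`operation`, `ArtifactsFolder`, `TargetWorkspaceName`) are assumptions to verify against the extension's overview page:

```yaml
# Hypothetical validate-only pipeline step; task reference and input names are assumptions
pool:
  vmImage: 'ubuntu-latest'

steps:
  - task: Synapse workspace deployment@2                    # marketplace task from the extension
    inputs:
      operation: 'validate'                                 # validate only; nothing is deployed
      ArtifactsFolder: '$(System.DefaultWorkingDirectory)'  # root folder of the Synapse artifacts
      TargetWorkspaceName: 'my-synapse-workspace'           # placeholder workspace name
```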
virtual-desktop Sandbox https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/sandbox.md
To publish Windows Sandbox to your host pool using PowerShell:
1. Connect to Azure using one of the following methods:
- - Open a PowerShell prompt on your local device. Run the `Connect-AzAccount` cmdlet to sign in to your Azure account. For more information, see [Sign in with Azure PowerShell](https://github.com/powershell/azure/authenticate-azureps).
+ - Open a PowerShell prompt on your local device. Run the `Connect-AzAccount` cmdlet to sign in to your Azure account. For more information, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
- Sign in to [the Azure portal](https://portal.azure.com/) and open [Azure Cloud Shell](../cloud-shell/overview.md) with PowerShell as the shell type.
2. Run the following cmdlet to get a list of all the Azure tenants your account has access to:
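   As a minimal sketch, assuming the Az PowerShell module is installed (`Get-AzTenant` is the standard cmdlet for listing tenants; the tenant ID below is a placeholder):

   ```azurepowershell
   # List all Azure AD tenants the signed-in account can access
   Get-AzTenant

   # Optionally switch the session to a specific tenant (placeholder ID)
   Set-AzContext -TenantId "00000000-0000-0000-0000-000000000000"
   ```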
That's it! Leave the rest of the options default. You should now have Windows Sandbox.
## Next steps
-Learn more about sandboxes and how to use them to test Windows environments at [Windows Sandbox](/windows/security/threat-protection/windows-sandbox/windows-sandbox-overview).
+Learn more about sandboxes and how to use them to test Windows environments at [Windows Sandbox](/windows/security/threat-protection/windows-sandbox/windows-sandbox-overview).
virtual-machine-scale-sets Virtual Machine Scale Sets Automatic Instance Repairs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-instance-repairs.md
Before enabling automatic instance repairs policy, ensure that the scale set ins
For instances marked as "Unhealthy", the scale set triggers automatic repairs. Ensure the application endpoint is correctly configured before enabling the automatic repairs policy, to avoid unintended instance repairs while the endpoint is being configured.
-**Maximum number of instances in the scale set**
-
-This feature is currently available only for scale sets that have a maximum of 500 instances. The scale set can be deployed as either a single placement group or a multi-placement group, however the instance count cannot be above 500 if automatic instance repairs is enabled for the scale set.
-
**API version**

Automatic repairs policy is supported for compute API version 2018-10-01 or higher.
virtual-machines Azure Compute Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/azure-compute-gallery.md
An Azure Compute Gallery helps you build structure and organization around your Azure resources, like images and [applications](vm-applications.md). An Azure Compute Gallery provides:

-- Global replication.
+- Global replication.<sup>1</sup>
- Versioning and grouping of resources for easier management.
- Highly available resources with Zone Redundant Storage (ZRS) accounts in regions that support Availability Zones. ZRS offers better resilience against zonal failures.
- Premium storage support (Premium_LRS).
An Azure Compute Gallery helps you build structure and organization around your
With a gallery, you can share your resources with everyone, or limit sharing to different users, service principals, or AD groups within your organization. Resources can be replicated to multiple regions, for quicker scaling of your deployments.
+<sup>1</sup> The Azure Compute Gallery service is not a global resource. For disaster recovery scenarios, it's a best practice to have at least two galleries, in different regions.
## Images
virtual-machines Dplsv5 Dpldsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dplsv5-dpldsv5-series.md
+
+ Title: Overview of the Dplsv5 and Dpldsv5-series sizes
+description: Overview of the Dplsv5 and Dpldsv5 series of ARM64-based Azure Virtual Machines featuring the 80 core, 3.0 GHz Ampere Altra processor.
+Last updated: 08/26/2022
+# Dplsv5 and Dpldsv5-series
+
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
+
+The Dplsv5-series and Dpldsv5-series virtual machines are based on the Arm architecture, delivering outstanding price-performance for general-purpose workloads. These virtual machines feature the Ampere® Altra® Arm-based processor operating at 3.0 GHz, which provides an entire physical core for each virtual machine vCPU. These virtual machines offer a range of vCPU sizes, up to 2 GiB of memory per vCPU, and temporary storage options able to meet the requirements of most non-memory-intensive and scale-out workloads such as microservices, small databases, caches, gaming servers, and more.
+
+## Dplsv5-series
+
+Dplsv5-series virtual machines feature the Ampere® Altra® Arm-based processor operating at 3.0 GHz, which provides an entire physical core for each virtual machine vCPU. These virtual machines offer up to 64 vCPU and 128 GiB of RAM and offer a better value proposition for non-memory-intensive scale-out workloads. Dplsv5-series virtual machines support Standard SSD, Standard HDD, and Premium SSD disk types with no local-SSD support. You can also attach Ultra Disk storage based on its regional availability. Disk storage is billed separately from virtual machines. [See pricing for disks](https://azure.microsoft.com/pricing/details/managed-disks/).
+
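+Because availability varies by region, it can help to check where a given size is offered before deploying. Here's a minimal sketch with the Azure CLI (`az vm list-skus` is the standard command; the size and location below are placeholders):
+
+```azurecli
+# Check where a given Dplsv5 size is offered; size and location are placeholders
+az vm list-skus --location westus2 --size Standard_D2pls_v5 --output table
+```
+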
+- [Premium Storage](premium-storage-performance.md): Supported
+- [Premium Storage caching](premium-storage-performance.md): Supported
+- [Live Migration](maintenance-and-updates.md): Supported
+- [Memory Preserving Updates](maintenance-and-updates.md): Supported
+- [VM Generation Support](generation-2.md): Generation 2
+- [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported
+- [Ephemeral OS Disks](ephemeral-os-disks.md): Supported
+- [Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not supported
+
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps | Max NICs | Max network bandwidth (Mbps) |
+|---|---|---|---|---|---|---|---|---|
+| Standard_D2pls_v5 | 2 | 4 | Remote Storage Only | 4 | 3750/85 | 10000/1200 | 2 | 12500 |
+| Standard_D4pls_v5 | 4 | 8 | Remote Storage Only | 8 | 6400/145 | 20000/1200 | 2 | 12500 |
+| Standard_D8pls_v5 | 8 | 16 | Remote Storage Only | 16 | 12800/290 | 20000/1200 | 4 | 12500 |
+| Standard_D16pls_v5 | 16 | 32 | Remote Storage Only | 32 | 25600/600 | 40000/1200 | 4 | 12500 |
+| Standard_D32pls_v5 | 32 | 64 | Remote Storage Only | 32 | 51200/865 | 80000/2000 | 8 | 16000 |
+| Standard_D48pls_v5 | 48 | 96 | Remote Storage Only | 32 | 76800/1315 | 80000/3000 | 8 | 24000 |
+| Standard_D64pls_v5 | 64 | 128 | Remote Storage Only | 32 | 80000/1735 | 80000/3000 | 8 | 40000 |
+
+> [!NOTE]
+> Accelerated networking is required and turned on by default on all Dplsv5 machines.
+
+## Dpldsv5-series
+
+Dpldsv5-series virtual machines feature the Ampere® Altra® Arm-based processor operating at 3.0 GHz, which provides an entire physical core for each virtual machine vCPU. These virtual machines offer up to 64 vCPU, 128 GiB of RAM, and fast local SSD storage up to 2,400 GiB built for scale-out, non-memory-intensive workloads that require local disk. Dpldsv5-series virtual machines support Standard SSD, Standard HDD, and Premium SSD disk types. You can also attach Ultra Disk storage based on its regional availability. Disk storage is billed separately from virtual machines. [See pricing for disks](https://azure.microsoft.com/pricing/details/managed-disks/).
+
+- [Premium Storage](premium-storage-performance.md): Supported
+- [Premium Storage caching](premium-storage-performance.md): Supported
+- [Live Migration](maintenance-and-updates.md): Supported
+- [Memory Preserving Updates](maintenance-and-updates.md): Supported
+- [VM Generation Support](generation-2.md): Generation 2
+- [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported
+- [Ephemeral OS Disks](ephemeral-os-disks.md): Supported
+- [Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not supported
+
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS/MBps | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps | Max NICs | Max network bandwidth (Mbps) |
+|---|---|---|---|---|---|---|---|---|---|
+| Standard_D2plds_v5 | 2 | 4 | 75 | 4 | 9375/125 | 3750/85 | 10000/1200 | 2 | 12500 |
+| Standard_D4plds_v5 | 4 | 8 | 150 | 8 | 19000/250 | 6400/145 | 20000/1200 | 2 | 12500 |
+| Standard_D8plds_v5 | 8 | 16 | 300 | 16 | 38000/500 | 12800/290 | 20000/1200 | 4 | 12500 |
+| Standard_D16plds_v5 | 16 | 32 | 600 | 32 | 75000/1000 | 25600/600 | 40000/1200 | 4 | 12500 |
+| Standard_D32plds_v5 | 32 | 64 | 1200 | 32 | 150000/2000 | 51200/865 | 80000/2000 | 8 | 16000 |
+| Standard_D48plds_v5 | 48 | 96 | 1800 | 32 | 225000/3000 | 76800/1315 | 80000/3000 | 8 | 24000 |
+| Standard_D64plds_v5 | 64 | 128 | 2400 | 32 | 300000/4000 | 80000/1735 | 80000/3000 | 8 | 40000 |
+
+> [!NOTE]
+> Accelerated networking is required and turned on by default on all Dpldsv5 machines.
++
+## Other sizes and information
+
+- [General purpose](sizes-general.md)
+- [Memory optimized](sizes-memory.md)
+- [Storage optimized](sizes-storage.md)
+- [GPU optimized](sizes-gpu.md)
+- [High performance compute](sizes-hpc.md)
+- [Previous generations](sizes-previous-gen.md)
+
+To estimate costs, use the [Pricing Calculator](https://azure.microsoft.com/pricing/calculator/).
+
+## Next steps
+
+Learn more about how [Azure compute units (ACU)](acu.md) can help you compare compute performance across Azure SKUs.
virtual-machines Dpsv5 Dpdsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dpsv5-dpdsv5-series.md
+
+ Title: Overview of the Dpsv5 and Dpdsv5-series sizes
+description: Overview of the Dpsv5 and Dpdsv5 series of ARM64-based Azure Virtual Machines featuring the 80 core, 3.0 GHz Ampere Altra processor.
+Last updated: 08/26/2022
+# Dpsv5 and Dpdsv5-series
+
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
+
+The Dpsv5-series and Dpdsv5-series virtual machines are based on the Arm architecture, delivering outstanding price-performance for general-purpose workloads. These virtual machines feature the Ampere® Altra® Arm-based processor operating at 3.0 GHz, which provides an entire physical core for each virtual machine vCPU. These virtual machines offer a range of vCPU sizes, up to 4 GiB of memory per vCPU, and temporary storage options able to meet the requirements of scale-out and most enterprise workloads such as web and application servers, small to medium databases, caches, and more.
+
+## Dpsv5-series
+
+Dpsv5-series virtual machines feature the Ampere® Altra® Arm-based processor operating at 3.0 GHz, which provides an entire physical core for each virtual machine vCPU. These virtual machines offer up to 64 vCPU and 208 GiB of RAM and are optimized for scale-out and most enterprise workloads. Dpsv5-series virtual machines support Standard SSD, Standard HDD, and Premium SSD disk types with no local-SSD support. You can also attach Ultra Disk storage based on its regional availability. Disk storage is billed separately from virtual machines. [See pricing for disks](https://azure.microsoft.com/pricing/details/managed-disks/).
+
+- [Premium Storage](premium-storage-performance.md): Supported
+- [Premium Storage caching](premium-storage-performance.md): Supported
+- [Live Migration](maintenance-and-updates.md): Supported
+- [Memory Preserving Updates](maintenance-and-updates.md): Supported
+- [VM Generation Support](generation-2.md): Generation 2
+- [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported
+- [Ephemeral OS Disks](ephemeral-os-disks.md): Supported
+- [Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not supported
+
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps | Max NICs | Max network bandwidth (Mbps) |
+|---|---|---|---|---|---|---|---|---|
+| Standard_D2ps_v5 | 2 | 8 | Remote Storage Only | 4 | 3750/85 | 10000/1200 | 2 | 12500 |
+| Standard_D4ps_v5 | 4 | 16 | Remote Storage Only | 8 | 6400/145 | 20000/1200 | 2 | 12500 |
+| Standard_D8ps_v5 | 8 | 32 | Remote Storage Only | 16 | 12800/290 | 20000/1200 | 4 | 12500 |
+| Standard_D16ps_v5 | 16 | 64 | Remote Storage Only | 32 | 25600/600 | 40000/1200 | 4 | 12500 |
+| Standard_D32ps_v5 | 32 | 128 | Remote Storage Only | 32 | 51200/865 | 80000/2000 | 8 | 16000 |
+| Standard_D48ps_v5 | 48 | 192 | Remote Storage Only | 32 | 76800/1315 | 80000/3000 | 8 | 24000 |
+| Standard_D64ps_v5 | 64 | 208 | Remote Storage Only | 32 | 80000/1735 | 80000/3000 | 8 | 40000 |
+
+> [!NOTE]
+> Accelerated networking is required and turned on by default on all Dpsv5 machines.
+
+## Dpdsv5-series
+
+Dpdsv5-series virtual machines feature the Ampere® Altra® Arm-based processor operating at 3.0 GHz, which provides an entire physical core for each virtual machine vCPU. These virtual machines offer up to 64 vCPU, 208 GiB of RAM, and fast local SSD storage with up to 2,400 GiB in capacity and are optimized for scale-out and most enterprise workloads. Dpdsv5-series virtual machines support Standard SSD, Standard HDD, and Premium SSD disk types. You can also attach Ultra Disk storage based on its regional availability. Disk storage is billed separately from virtual machines. [See pricing for disks](https://azure.microsoft.com/pricing/details/managed-disks/).
+
+- [Premium Storage](premium-storage-performance.md): Supported
+- [Premium Storage caching](premium-storage-performance.md): Supported
+- [Live Migration](maintenance-and-updates.md): Supported
+- [Memory Preserving Updates](maintenance-and-updates.md): Supported
+- [VM Generation Support](generation-2.md): Generation 2
+- [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported
+- [Ephemeral OS Disks](ephemeral-os-disks.md): Supported
+- [Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not supported
+
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS/MBps | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps | Max NICs | Max network bandwidth (Mbps) |
+|---|---|---|---|---|---|---|---|---|---|
+| Standard_D2pds_v5 | 2 | 8 | 75 | 4 | 9375/125 | 3750/85 | 10000/1200 | 2 | 12500 |
+| Standard_D4pds_v5 | 4 | 16 | 150 | 8 | 19000/250 | 6400/145 | 20000/1200 | 2 | 12500 |
+| Standard_D8pds_v5 | 8 | 32 | 300 | 16 | 38000/500 | 12800/290 | 20000/1200 | 4 | 12500 |
+| Standard_D16pds_v5 | 16 | 64 | 600 | 32 | 75000/1000 | 25600/600 | 40000/1200 | 4 | 12500 |
+| Standard_D32pds_v5 | 32 | 128 | 1200 | 32 | 150000/2000 | 51200/865 | 80000/2000 | 8 | 16000 |
+| Standard_D48pds_v5 | 48 | 192 | 1800 | 32 | 225000/3000 | 76800/1315 | 80000/3000 | 8 | 24000 |
+| Standard_D64pds_v5 | 64 | 208 | 2400 | 32 | 300000/4000 | 80000/1735 |80000/3000 | 8 | 40000 |
+
+> [!NOTE]
+> Accelerated networking is required and turned on by default on all Dpdsv5 machines.
++
+## Other sizes and information
+
+- [General purpose](sizes-general.md)
+- [Memory optimized](sizes-memory.md)
+- [Storage optimized](sizes-storage.md)
+- [GPU optimized](sizes-gpu.md)
+- [High performance compute](sizes-hpc.md)
+- [Previous generations](sizes-previous-gen.md)
+
+To estimate costs, use the [Pricing Calculator](https://azure.microsoft.com/pricing/calculator/).
+
+## Next steps
+
+Learn more about how [Azure compute units (ACU)](acu.md) can help you compare compute performance across Azure SKUs.
virtual-machines Epsv5 Epdsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/epsv5-epdsv5-series.md
+
+ Title: Overview of the Epsv5 and Epdsv5-series sizes
+description: Overview of memory-optimized Epsv5 and Epdsv5-series of ARM64-based Azure Virtual Machines featuring the 80 core, 3.0 GHz Ampere Altra processor.
+Last updated: 08/26/2022
+# Epsv5 and Epdsv5-series
+
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
+
+The Epsv5-series and Epdsv5-series virtual machines are based on the Arm architecture, delivering outstanding price-performance for memory-intensive workloads. These virtual machines feature the Ampere® Altra® Arm-based processor operating at 3.0 GHz, which provides an entire physical core for each virtual machine vCPU. These virtual machines offer a range of vCPU sizes, up to 8 GiB of memory per vCPU, and are best suited for memory-intensive scale-out and enterprise workloads, such as relational database servers, large databases, data analytics engines, in-memory caches, and more.
+
+## Epsv5-series
+
+Epsv5-series virtual machines feature the Ampere® Altra® Arm-based processor operating at 3.0 GHz, which provides an entire physical core for each virtual machine vCPU. These virtual machines offer up to 32 vCPU and 208 GiB of RAM and are ideal for memory-intensive scale-out and most Enterprise workloads. Epsv5-series virtual machines support Standard SSD, Standard HDD, and Premium SSD disk types with no local-SSD support. You can also attach Ultra Disk storage based on its regional availability. Disk storage is billed separately from virtual machines. [See pricing for disks](https://azure.microsoft.com/pricing/details/managed-disks/).
+
+- [Premium Storage](premium-storage-performance.md): Supported
+- [Premium Storage caching](premium-storage-performance.md): Supported
+- [Live Migration](maintenance-and-updates.md): Supported
+- [Memory Preserving Updates](maintenance-and-updates.md): Supported
+- [VM Generation Support](generation-2.md): Generation 2
+- [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported
+- [Ephemeral OS Disks](ephemeral-os-disks.md): Supported
+- [Nested virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not supported
+
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps | Max NICs | Max network bandwidth (Mbps) |
+|---|---|---|---|---|---|---|---|---|
+| Standard_E2ps_v5 | 2 | 16 | Remote Storage Only | 4 | 3750/85 | 10000/1200 | 2 | 12500 |
+| Standard_E4ps_v5 | 4 | 32 | Remote Storage Only | 8 | 6400/145 | 10000/1200 | 2 | 12500 |
+| Standard_E8ps_v5 | 8 | 64 | Remote Storage Only | 16 | 12800/290 | 20000/1200 | 4 | 12500 |
+| Standard_E16ps_v5 | 16 | 128 | Remote Storage Only | 32 | 25600/600 | 40000/1200 | 4 | 12500 |
+| Standard_E20ps_v5 | 20 | 160 | Remote Storage Only | 32 | 32000/750 | 64000/1600 | 8 | 12500 |
+| Standard_E32ps_v5 | 32 | 208 | Remote Storage Only | 32 | 51200/865 | 80000/2000 | 8 | 16000 |
+
+> [!NOTE]
+> Accelerated networking is required and turned on by default on all Epsv5 machines.
+
+## Epdsv5-series
+
+Epdsv5-series virtual machines feature the Ampere® Altra® Arm-based processor operating at 3.0 GHz, which provides an entire physical core for each virtual machine vCPU. These virtual machines offer up to 32 vCPU, 208 GiB of RAM, and fast local SSD storage up to 1,200 GiB and are ideal for memory-intensive scale-out and most Enterprise workloads. Epdsv5-series virtual machines support Standard SSD, Standard HDD, and premium SSD disk types. You can also attach Ultra Disk storage based on its regional availability. Disk storage is billed separately from virtual machines. [See pricing for disks](https://azure.microsoft.com/pricing/details/managed-disks/).
+
+- [Premium Storage](premium-storage-performance.md): Supported
+- [Premium Storage caching](premium-storage-performance.md): Supported
+- [Live Migration](maintenance-and-updates.md): Supported
+- [Memory Preserving Updates](maintenance-and-updates.md): Supported
+- [VM Generation Support](generation-2.md): Generation 2
+- [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported
+- [Ephemeral OS Disks](ephemeral-os-disks.md): Supported
+- [Nested virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not supported
+
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS/MBps | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps | Max NICs | Max network bandwidth (Mbps) |
+|---|---|---|---|---|---|---|---|---|---|
+| Standard_E2pds_v5 | 2 | 16 | 75 | 4 | 9375/125 | 3750/85 | 10000/1200 | 2 | 12500 |
+| Standard_E4pds_v5 | 4 | 32 | 150 | 8 | 19000/250 | 6400/145 | 20000/1200 | 2 | 12500 |
+| Standard_E8pds_v5 | 8 | 64 | 300 | 16 | 38000/500 | 12800/290 | 20000/1200 | 4 | 12500 |
+| Standard_E16pds_v5 | 16 | 128 | 600 | 32 | 75000/1000 | 25600/600 | 40000/1200 | 4 | 12500 |
+| Standard_E20pds_v5 | 20 | 160 | 750 | 32 | 95000/1250 | 32000/750 | 64000/1600 | 8 | 12500 |
+| Standard_E32pds_v5 | 32 | 208 | 1200 | 32 | 150000/2000 | 51200/865 | 80000/2000 | 8 | 16000 |
+
+> [!NOTE]
+> Accelerated networking is required and turned on by default on all Epdsv5 machines.
++
+## Other sizes and information
+
+- [General purpose](sizes-general.md)
+- [Memory optimized](sizes-memory.md)
+- [Storage optimized](sizes-storage.md)
+- [GPU optimized](sizes-gpu.md)
+- [High performance compute](sizes-hpc.md)
+- [Previous generations](sizes-previous-gen.md)
+
+To estimate costs, use the [Pricing Calculator](https://azure.microsoft.com/pricing/calculator/).
+
+## Next steps
+
+Learn more about how [Azure compute units (ACU)](acu.md) can help you compare compute performance across Azure SKUs.
virtual-machines Generation 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/generation-2.md
Previously updated : 02/26/2021 Last updated : 08/26/2022
Azure now offers generation 2 support for the following selected VM series:
|[Dadsv5-series](dasv5-dadsv5-series.md) | :heavy_check_mark: | :heavy_check_mark: |
|[DCasv5-series](dcasv5-dcadsv5-series.md) | :x: | :heavy_check_mark: |
|[DCadsv5-series](dcasv5-dcadsv5-series.md) | :x: | :heavy_check_mark: |
+|[Dpsv5-series](dpsv5-dpdsv5-series.md) | :x: | :heavy_check_mark: |
+|[Dpdsv5-series](dpsv5-dpdsv5-series.md) | :x: | :heavy_check_mark: |
|[Dv5-series](dv5-dsv5-series.md) | :heavy_check_mark: | :heavy_check_mark: |
|[Dsv5-series](dv5-dsv5-series.md) | :heavy_check_mark: | :heavy_check_mark: |
|[Ddv5-series](ddv5-ddsv5-series.md) | :heavy_check_mark: | :heavy_check_mark: |
Azure now offers generation 2 support for the following selected VM series:
|[Eadsv5-series](easv5-eadsv5-series.md) | :heavy_check_mark: | :heavy_check_mark: |
|[ECasv5-series](ecasv5-ecadsv5-series.md) | :x: | :heavy_check_mark: |
|[ECadsv5-series](ecasv5-ecadsv5-series.md) | :x: | :heavy_check_mark: |
+|[Epsv5-series](epsv5-epdsv5-series.md) | :x: | :heavy_check_mark: |
+|[Epdsv5-series](epsv5-epdsv5-series.md) | :x: | :heavy_check_mark: |
|[Edv5-series](edv5-edsv5-series.md) | :heavy_check_mark: | :heavy_check_mark: |
|[Edsv5-series](edv5-edsv5-series.md) | :heavy_check_mark: | :heavy_check_mark: |
|[Ev5-series](ev5-esv5-series.md) | :heavy_check_mark: | :heavy_check_mark: |
virtual-machines Share Gallery Direct https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/share-gallery-direct.md
During the preview:
- You need to create a new gallery, with the property `sharingProfile.permissions` set to `Groups`. When using the CLI to create a gallery, use the `--permissions groups` parameter. You can't use an existing gallery; the property can't currently be updated.
- PowerShell, Ansible, and Terraform aren't supported at this time.
- Not available in Government clouds.
+- In the target subscription, direct shared images can be found only from the VM and Virtual Machine Scale Set creation pages.
- **Known issue**: When creating a VM from a direct shared image using the Azure portal, if you select a region, select an image, then change the region, you'll get an error message: "You can only create VM in the replication regions of this image" even when the image is replicated to that region. To clear the error, select a different region, then switch back to the region you want. If the image is available, the error message should clear.

## Prerequisites
virtual-machines Sizes General https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes-general.md
Previously updated : 10/20/2021 Last updated : 08/26/2022
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
+General purpose VM sizes provide a balanced CPU-to-memory ratio. They're ideal for testing and development, small to medium databases, and low to medium traffic web servers. This article provides information about the offerings for general purpose computing.
+ > [!TIP] > Try the **[Virtual machines selector tool](https://aka.ms/vm-selector)** to find other sizes that best fit your workload.
-General purpose VM sizes provide balanced CPU-to-memory ratio. Ideal for testing and development, small to medium databases, and low to medium traffic web servers. This article provides information about the offerings for general purpose computing.
+- The [Av2-series](av2-series.md) VMs can be deployed on various hardware types and processors. A-series VMs have CPU performance and memory configurations best suited for entry level workloads like development and test. This size is throttled, based on the hardware. The size offers consistent processor performance for the running instance, regardless of the hardware it's deployed on. To determine the physical hardware on which this size is deployed, query the virtual hardware from within the Virtual Machine. Example use cases include development and test servers, low traffic web servers, small to medium databases, proof-of-concepts, and code repositories.
-- The [Av2-series](av2-series.md) VMs can be deployed on a variety of hardware types and processors. A-series VMs have CPU performance and memory configurations best suited for entry level workloads like development and test. The size is throttled, based upon the hardware, to offer consistent processor performance for the running instance, regardless of the hardware it is deployed on. To determine the physical hardware on which this size is deployed, query the virtual hardware from within the Virtual Machine. Example use cases include development and test servers, low traffic web servers, small to medium databases, proof-of-concepts, and code repositories.
+- [B-series burstable](sizes-b-series-burstable.md) VMs are ideal for workloads that don't need the full performance of the CPU continuously, like web servers, small databases and development and test environments. These workloads typically have burstable performance requirements. The B-Series provides these customers the ability to purchase a VM size with a price conscious baseline performance that allows the VM instance to build up credits when the VM is utilizing less than its base performance. When the VM has accumulated credit, the VM can burst above the VM's baseline using up to 100% of the CPU when your application requires the higher CPU performance.
-- [B-series burstable](sizes-b-series-burstable.md) VMs are ideal for workloads that do not need the full performance of the CPU continuously, like web servers, small databases and development and test environments. These workloads typically have burstable performance requirements. The B-Series provides these customers the ability to purchase a VM size with a price conscious baseline performance that allows the VM instance to build up credits when the VM is utilizing less than its base performance. When the VM has accumulated credit, the VM can burst above the VMΓÇÖs baseline using up to 100% of the CPU when your application requires the higher CPU performance.
+- The [DCv2-series](dcv2-series.md) can help protect the confidentiality and integrity of your data and code while it's processed in the public cloud. These machines are backed by the latest generation of Intel XEON E-2288G Processor with SGX technology. With the Intel Turbo Boost Technology, these machines can go up to 5.0 GHz. DCv2 series instances enable customers to build secure enclave-based applications to protect their code and data while it's in use.
-- The [DCv2-series](dcv2-series.md) can help protect the confidentiality and integrity of your data and code while itΓÇÖs processed in the public cloud. These machines are backed by the latest generation of Intel XEON E-2288G Processor with SGX technology. With the Intel Turbo Boost Technology these machines can go up to 5.0GHz. DCv2 series instances enable customers to build secure enclave-based applications to protect their code and data while itΓÇÖs in use.
+- The [Dpsv5 and Dpdsv5-series](dpsv5-dpdsv5-series.md) and [Dplsv5 and Dpldsv5-series](dplsv5-dpldsv5-series.md) are ARM64-based VMs featuring the 80 core, 3.0 GHz Ampere Altra processor. These series are designed for common enterprise workloads. They're optimized for database, in-memory caching, analytics, gaming, web, and application servers running on Linux.
-- [Dv2 and Dsv2-series](dv2-dsv2-series.md) VMs, a follow-on to the original D-series, features a more powerful CPU and optimal CPU-to-memory configuration making them suitable for most production workloads. The Dv2-series is about 35% faster than the D-series. Dv2-series run on 2nd Generation Intel® Xeon® Platinum 8272CL (Cascade Lake), Intel® Xeon® 8171M 2.1GHz (Skylake), Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell), or the Intel® Xeon® E5-2673 v3 2.4 GHz (Haswell) processors with the Intel Turbo Boost Technology 2.0. The Dv2-series has the same memory and disk configurations as the D-series.
+- [Dv2 and Dsv2-series](dv2-dsv2-series.md) VMs, a follow-on to the original D-series, features a more powerful CPU and optimal CPU-to-memory configuration making them suitable for most production workloads. The Dv2-series is about 35% faster than the D-series. Dv2-series run on 2nd Generation Intel® Xeon® Platinum 8272CL (Cascade Lake), Intel® Xeon® 8171M 2.1 GHz (Skylake), Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell), or the Intel® Xeon® E5-2673 v3 2.4 GHz (Haswell) processors with the Intel Turbo Boost Technology 2.0. The Dv2-series has the same memory and disk configurations as the D-series.
-- The [Dv3 and Dsv3-series](dv3-dsv3-series.md) runs on 2nd Generation Intel® Xeon® Platinum 8272CL (Cascade Lake), Intel® Xeon® 8171M 2.1GHz (Skylake), Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell), or the Intel® Xeon® E5-2673 v3 2.4 GHz (Haswell) processors in a hyper-threaded configuration, providing a better value proposition for most general purpose workloads. Memory has been expanded (from ~3.5 GiB/vCPU to 4 GiB/vCPU) while disk and network limits have been adjusted on a per core basis to align with the move to hyperthreading. The Dv3-series no longer has the high memory VM sizes of the D/Dv2-series, those have been moved to the memory optimized [Ev3 and Esv3-series](ev3-esv3-series.md).
+- The [Dv3 and Dsv3-series](dv3-dsv3-series.md) runs on 2nd Generation Intel® Xeon® Platinum 8272CL (Cascade Lake), Intel® Xeon® 8171M 2.1 GHz (Skylake), Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell), or the Intel® Xeon® E5-2673 v3 2.4 GHz (Haswell) processors. These series run in a hyper-threaded configuration, providing a better value proposition for most general purpose workloads. Memory has been expanded (from ~3.5 GiB/vCPU to 4 GiB/vCPU) while disk and network limits have been adjusted on a per core basis to align with the move to hyperthreading. The Dv3-series no longer has the high memory VM sizes of the D/Dv2-series. Those sizes have been moved to the memory optimized [Ev3 and Esv3-series](ev3-esv3-series.md).
-- [Dav4 and Dasv4-series](dav4-dasv4-series.md) are new sizes utilizing AMDΓÇÖs 2.35Ghz EPYC<sup>TM</sup> 7452 processor in a multi-threaded configuration with up to 256 MB L3 cache dedicating 8 MB of that L3 cache to every 8 cores increasing customer options for running their general purpose workloads. The Dav4-series and Dasv4-series have the same memory and disk configurations as the D & Dsv3-series.
+- [Dav4 and Dasv4-series](dav4-dasv4-series.md) are new sizes utilizing AMD's 2.35 GHz EPYC<sup>TM</sup> 7452 processor in a multi-threaded configuration with up to 256 MB L3 cache dedicating 8 MB of that L3 cache to every eight cores increasing customer options for running their general purpose workloads. The Dav4-series and Dasv4-series have the same memory and disk configurations as the D & Dsv3-series.
- The [Dv4 and Dsv4-series](dv4-dsv4-series.md) runs on the Intel® Xeon® Platinum 8272CL (Cascade Lake) processors in a hyper-threaded configuration, providing a better value proposition for most general-purpose workloads. It features an all core Turbo clock speed of 3.4 GHz. -- The [Ddv4 and Ddsv4-series](ddv4-ddsv4-series.md) runs on the Intel&reg; Xeon&reg; Platinum 8272CL (Cascade Lake) processors in a hyper-threaded configuration, providing a better value proposition for most general-purpose workloads. It features an all core Turbo clock speed of 3.4 GHz, [Intel&reg; Turbo Boost Technology 2.0](https://www.intel.com/content/www/us/en/architecture-and-technology/turbo-boost/turbo-boost-technology.html), [Intel&reg; Hyper-Threading Technology](https://www.intel.com/content/www/us/en/architecture-and-technology/hyper-threading/hyper-threading-technology.html) and [Intel&reg; Advanced Vector Extensions 512 (Intel&reg; AVX-512)](https://www.intel.com/content/www/us/en/architecture-and-technology/avx-512-overview.html). They also support [Intel&reg; Deep Learning Boost](https://software.intel.com/content/www/us/en/develop/topics/ai/deep-learning-boost.html). These new VM sizes will have 50% larger local storage, as well as better local disk IOPS for both read and write compared to the [Dv3/Dsv3](./dv3-dsv3-series.md) sizes with [Gen2 VMs](./generation-2.md).
+- The [Ddv4 and Ddsv4-series](ddv4-ddsv4-series.md) runs on the Intel&reg; Xeon&reg; Platinum 8272CL (Cascade Lake) processors in a hyper-threaded configuration, providing a better value proposition for most general-purpose workloads. It features an all core Turbo clock speed of 3.4 GHz, [Intel&reg; Turbo Boost Technology 2.0](https://www.intel.com/content/www/us/en/architecture-and-technology/turbo-boost/turbo-boost-technology.html), [Intel&reg; Hyper-Threading Technology](https://www.intel.com/content/www/us/en/architecture-and-technology/hyper-threading/hyper-threading-technology.html) and [Intel&reg; Advanced Vector Extensions 512 (Intel&reg; AVX-512)](https://www.intel.com/content/www/us/en/architecture-and-technology/avx-512-overview.html). They also support [Intel&reg; Deep Learning Boost](https://software.intel.com/content/www/us/en/develop/topics/ai/deep-learning-boost.html). These new VM sizes will have 50% larger local storage, and better local disk IOPS for both read and write compared to the [Dv3/Dsv3](./dv3-dsv3-series.md) sizes with [Gen2 VMs](./generation-2.md).
-- The [Dasv5 and Dadsv5-series](dasv5-dadsv5-series.md) utilize AMD's 3rd Generation EPYC<sup>TM</sup> 7763v processor in a multi-threaded configuration with up to 256 MB L3 cache, increasing customer options for running their general purpose workloads. These virtual machines offer a combination of vCPUs and memory to meet the requirements associated with most enterprise workloads, such as small-to-medium databases, low-to-medium traffic web servers, application servers and more.
+- The [Dasv5 and Dadsv5-series](dasv5-dadsv5-series.md) utilize AMD's 3rd Generation EPYC<sup>TM</sup> 7763v processor in a multi-threaded configuration with up to 256 MB L3 cache, increasing customer options for running their general purpose workloads. These virtual machines offer a combination of vCPUs and memory to meet the requirements associated with most enterprise workloads. For example, you can use these series with small-to-medium databases, low-to-medium traffic web servers, application servers, and more.
-- The [Dv5 and Dsv5-series](dv5-dsv5-series.md) run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake) processor in a hyper-threaded configuration. The Dv5 and Dsv5 virtual machine sizes do not have any temporary storage thus lowering the price of entry. The Dv5 VM sizes offer a combination of vCPUs and memory able to meet the requirements associated with most enterprise workloads, such as small-to-medium databases, low-to-medium traffic web servers, application servers and more.
+- The [Dv5 and Dsv5-series](dv5-dsv5-series.md) run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake) processor in a hyper-threaded configuration. The Dv5 and Dsv5 virtual machine sizes don't have any temporary storage thus lowering the price of entry. The Dv5 VM sizes offer a combination of vCPUs and memory to meet the requirements associated with most enterprise workloads. For example, you can use these series with small-to-medium databases, low-to-medium traffic web servers, application servers, and more.
- The [Ddv5 and Ddsv5-series](ddv5-ddsv5-series.md) run on the 3rd Generation Intel&reg; Xeon&reg; Platinum 8370C (Ice Lake) processors in a hyper-threaded configuration, providing a better value proposition for most general-purpose workloads. This new processor features an all core Turbo clock speed of 3.5 GHz, [Intel&reg; Hyper-Threading Technology](https://www.intel.com/content/www/us/en/architecture-and-technology/hyper-threading/hyper-threading-technology.html), [Intel&reg; Turbo Boost Technology](https://www.intel.com/content/www/us/en/architecture-and-technology/turbo-boost/turbo-boost-technology.html), [Intel&reg; Advanced-Vector Extensions 512 (Intel&reg; AVX-512)](https://www.intel.com/content/www/us/en/architecture-and-technology/avx-512-overview.html) and [Intel&reg; Deep Learning Boost](https://software.intel.com/content/www/us/en/develop/topics/ai/deep-learning-boost.html).
virtual-machines Sizes Memory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes-memory.md
Previously updated : 04/04/2022 Last updated : 08/26/2022
Last updated 04/04/2022
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
+Memory optimized VM sizes offer a high memory-to-CPU ratio that is great for relational database servers, medium to large caches, and in-memory analytics. This article provides information about the number of vCPUs, data disks and NICs. You can also learn about storage throughput and network bandwidth for each size in this grouping.
+ > [!TIP] > Try the **[Virtual machines selector tool](https://aka.ms/vm-selector)** to find other sizes that best fit your workload.
-Memory optimized VM sizes offer a high memory-to-CPU ratio that is great for relational database servers, medium to large caches, and in-memory analytics. This article provides information about the number of vCPUs, data disks and NICs as well as storage throughput and network bandwidth for each size in this grouping.
- [Dv2 and DSv2-series](dv2-dsv2-series-memory.md), a follow-on to the original D-series, features a more powerful CPU. The Dv2-series is about 35% faster than the D-series. It runs on the Intel&reg; Xeon&reg; 8171M 2.1 GHz (Skylake) or the Intel&reg; Xeon&reg; E5-2673 v4 2.3 GHz (Broadwell) or the Intel&reg; Xeon&reg; E5-2673 v3 2.4 GHz (Haswell) processors, and with the Intel Turbo Boost Technology 2.0. The Dv2-series has the same memory and disk configurations as the D-series. Dv2 and DSv2-series are ideal for applications that demand faster vCPUs, better temporary storage performance, or have higher memory demands. They offer a powerful combination for many enterprise-grade applications.

-- The [Eav4 and Easv4-series](eav4-easv4-series.md) utilize AMD's 2.35Ghz EPYC<sup>TM</sup> 7452 processor in a multi-threaded configuration with up to 256MB L3 cache, increasing options for running most memory optimized workloads. The Eav4-series and Easv4-series have the same memory and disk configurations as the Ev3 & Esv3-series.
+- The [Eav4 and Easv4-series](eav4-easv4-series.md) utilize AMD's 2.35 GHz EPYC<sup>TM</sup> 7452 processor in a multi-threaded configuration with up to 256 MB L3 cache, increasing options for running most memory optimized workloads. The Eav4-series and Easv4-series have the same memory and disk configurations as the Ev3 & Esv3-series.
- The [Ebsv5 and Ebdsv5 series](ebdsv5-ebsv5-series.md) deliver higher remote storage performance in each VM size than the Ev4 series. The increased remote storage performance of the Ebsv5 and Ebdsv5 VMs is ideal for storage throughput-intensive workloads, such as relational databases and data analytics applications.

-- The [Ev3 and Esv3-series](ev3-esv3-series.md) Intel&reg; Xeon&reg; 8171M 2.1 GHz (Skylake) or the Intel&reg; Xeon&reg; E5-2673 v4 2.3 GHz (Broadwell) processor in a hyper-threaded configuration, providing a better value proposition for most general purpose workloads, and bringing the Ev3 into alignment with the general purpose VMs of most other clouds. Memory has been expanded (from 7 GiB/vCPU to 8 GiB/vCPU) while disk and network limits have been adjusted on a per core basis to align with the move to hyper-threading. The Ev3 is the follow up to the high memory VM sizes of the D/Dv2 families.
+- The [Ev3 and Esv3-series](ev3-esv3-series.md) feature the Intel&reg; Xeon&reg; 8171M 2.1 GHz (Skylake) or the Intel&reg; Xeon&reg; E5-2673 v4 2.3 GHz (Broadwell) processor in a hyper-threaded configuration. This configuration provides a better value proposition for most general purpose workloads, and brings the Ev3 into alignment with the general purpose VMs of most other clouds. Memory has been expanded (from 7 GiB/vCPU to 8 GiB/vCPU) while disk and network limits have been adjusted on a per core basis to align with the move to hyper-threading. The Ev3 is the follow up to the high memory VM sizes of the D/Dv2 families.
-- The [Ev4 and Esv4-series](ev4-esv4-series.md) runs on 2nd Generation Intel&reg; Xeon&reg; Platinum 8272CL (Cascade Lake) processors in a hyper-threaded configuration, are ideal for various memory-intensive enterprise applications and feature up to 504 GiB of RAM. It features the [Intel&reg; Turbo Boost Technology 2.0](https://www.intel.com/content/www/us/en/architecture-and-technology/turbo-boost/turbo-boost-technology.html), [Intel&reg; Hyper-Threading Technology](https://www.intel.com/content/www/us/en/architecture-and-technology/hyper-threading/hyper-threading-technology.html) and [Intel&reg; Advanced Vector Extensions 512 (Intel AVX-512)](https://www.intel.com/content/www/us/en/architecture-and-technology/avx-512-overview.html). The Ev4 and Esv4-series do not include a local temp disk. For more information, refer to [Azure VM sizes with no local temp disk](azure-vms-no-temp-disk.yml).
+- The [Ev4 and Esv4-series](ev4-esv4-series.md) run on 2nd Generation Intel&reg; Xeon&reg; Platinum 8272CL (Cascade Lake) processors in a hyper-threaded configuration, are ideal for various memory-intensive enterprise applications, and feature up to 504 GiB of RAM. They feature the [Intel&reg; Turbo Boost Technology 2.0](https://www.intel.com/content/www/us/en/architecture-and-technology/turbo-boost/turbo-boost-technology.html), [Intel&reg; Hyper-Threading Technology](https://www.intel.com/content/www/us/en/architecture-and-technology/hyper-threading/hyper-threading-technology.html) and [Intel&reg; Advanced Vector Extensions 512 (Intel AVX-512)](https://www.intel.com/content/www/us/en/architecture-and-technology/avx-512-overview.html). The Ev4 and Esv4-series don't include a local temp disk. For more information, see [Azure VM sizes with no local temp disk](azure-vms-no-temp-disk.yml).
- The [Edv4 and Edsv4-series](edv4-edsv4-series.md) runs on 2nd Generation Intel&reg; Xeon&reg; Platinum 8272CL (Cascade Lake) processors, ideal for extremely large databases or other applications that benefit from high vCPU counts and large amounts of memory. Additionally, these VM sizes include fast, larger local SSD storage for applications that benefit from low latency, high-speed local storage. It features an all core Turbo clock speed of 3.4 GHz, [Intel&reg; Turbo Boost Technology 2.0](https://www.intel.com/content/www/us/en/architecture-and-technology/turbo-boost/turbo-boost-technology.html), [Intel&reg; Hyper-Threading Technology](https://www.intel.com/content/www/us/en/architecture-and-technology/hyper-threading/hyper-threading-technology.html) and [Intel&reg; Advanced Vector Extensions 512 (Intel AVX-512)](https://www.intel.com/content/www/us/en/architecture-and-technology/avx-512-overview.html).

- The [Easv5 and Eadsv5-series](easv5-eadsv5-series.md) utilize AMD's 3rd Generation EPYC<sup>TM</sup> 7763v processor in a multi-threaded configuration with up to 256 MB L3 cache, increasing customer options for running most memory optimized workloads. These virtual machines offer a combination of vCPUs and memory to meet the requirements associated with most memory-intensive enterprise applications, such as relational database servers and in-memory analytics workloads.

-- The [Edv5 and Edsv5-series](edv5-edsv5-series.md) runs on the 3rd Generation Intel&reg; Xeon&reg; Platinum 8370C (Ice Lake) processors in a hyper-threaded configuration, and are ideal for various memory-intensive enterprise applications and feature up to 672 GiB of RAM, [Intel&reg; Turbo Boost Technology 2.0](https://www.intel.com/content/www/us/en/architecture-and-technology/turbo-boost/turbo-boost-technology.html), [Intel&reg; Hyper-Threading Technology](https://www.intel.com/content/www/us/en/architecture-and-technology/hyper-threading/hyper-threading-technology.html) and [Intel&reg; Advanced Vector Extensions 512 (Intel&reg; AVX-512)](https://www.intel.com/content/www/us/en/architecture-and-technology/avx-512-overview.html). They also support [Intel&reg; Deep Learning Boost](https://software.intel.com/content/www/us/en/develop/topics/ai/deep-learning-boost.html). These new VM sizes will have 50% larger local storage, as well as better local disk IOPS for both read and write compared to the [Ev3/Esv3](./ev3-esv3-series.md) sizes with [Gen2 VMs](./generation-2.md). It features an all core Turbo clock speed of 3.4 GHz.
+- The [Edv5 and Edsv5-series](edv5-edsv5-series.md) run on the 3rd Generation Intel&reg; Xeon&reg; Platinum 8370C (Ice Lake) processors in a hyper-threaded configuration. These series are ideal for various memory-intensive enterprise applications. They feature up to 672 GiB of RAM, [Intel&reg; Turbo Boost Technology 2.0](https://www.intel.com/content/www/us/en/architecture-and-technology/turbo-boost/turbo-boost-technology.html), [Intel&reg; Hyper-Threading Technology](https://www.intel.com/content/www/us/en/architecture-and-technology/hyper-threading/hyper-threading-technology.html) and [Intel&reg; Advanced Vector Extensions 512 (Intel&reg; AVX-512)](https://www.intel.com/content/www/us/en/architecture-and-technology/avx-512-overview.html). The series also support [Intel&reg; Deep Learning Boost](https://software.intel.com/content/www/us/en/develop/topics/ai/deep-learning-boost.html). These new VM sizes have 50% larger local storage, and better local disk IOPS for both read and write compared to the [Ev3/Esv3](./ev3-esv3-series.md) sizes with [Gen2 VMs](./generation-2.md). It features an all core Turbo clock speed of 3.4 GHz.
+
+- The [Epsv5 and Epdsv5-series](epsv5-epdsv5-series.md) are ARM64-based VMs featuring the 80 core, 3.0 GHz Ampere Altra processor. These series are designed for common enterprise workloads. They're optimized for database, in-memory caching, analytics, gaming, web, and application servers running on Linux.
- The [Ev5 and Esv5-series](ev5-esv5-series.md) run on the Intel&reg; Xeon&reg; Platinum 8370C (Ice Lake) processors in a hyper-threaded configuration, are ideal for various memory-intensive enterprise applications, and feature up to 512 GiB of RAM. They feature an all core Turbo clock speed of 3.4 GHz.
virtual-machines Sizes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes.md
Previously updated : 06/01/2022 Last updated : 08/26/2022
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
+This article describes the available sizes and options for the Azure virtual machines you can use to run your apps and workloads. It also provides deployment considerations to be aware of when you're planning to use these resources.
+ > [!TIP] > Try the **[Virtual machines selector tool](https://aka.ms/vm-selector)** to find other sizes that best fit your workload.
-This article describes the available sizes and options for the Azure virtual machines you can use to run your apps and workloads. It also provides deployment considerations to be aware of when you're planning to use these resources.
:::image type="content" source="media/sizes/azurevmsthumb.jpg" alt-text="YouTube video for selecting the right size for your VM." link="https://youtu.be/zOSvnJFd3ZM":::

| Type | Sizes | Description |
|---|---|---|
-| [General purpose](sizes-general.md) | B, Dsv3, Dv3, Dasv4, Dav4, DSv2, Dv2, Av2, DC, DCv2, Dv4, Dsv4, Ddv4, Ddsv4, Dv5, Dsv5, Ddv5, Ddsv5, Dasv5, Dadsv5 | Balanced CPU-to-memory ratio. Ideal for testing and development, small to medium databases, and low to medium traffic web servers. |
+| [General purpose](sizes-general.md) | B, Dsv3, Dv3, Dasv4, Dav4, DSv2, Dv2, Av2, DC, DCv2, Dpdsv5, Dpldsv5, Dpsv5, Dplsv5, Dv4, Dsv4, Ddv4, Ddsv4, Dv5, Dsv5, Ddv5, Ddsv5, Dasv5, Dadsv5 | Balanced CPU-to-memory ratio. Ideal for testing and development, small to medium databases, and low to medium traffic web servers. |
| [Compute optimized](sizes-compute.md) | F, Fs, Fsv2, FX | High CPU-to-memory ratio. Good for medium traffic web servers, network appliances, batch processes, and application servers. |
-| [Memory optimized](sizes-memory.md) | Esv3, Ev3, Easv4, Eav4, Ebdsv5, Ebsv5, Ev4, Esv4, Edv4, Edsv4, Ev5, Esv5, Edv5, Edsv5, Easv5, Eadsv5, Mv2, M, DSv2, Dv2 | High memory-to-CPU ratio. Great for relational database servers, medium to large caches, and in-memory analytics. |
+| [Memory optimized](sizes-memory.md) | Esv3, Ev3, Easv4, Eav4, Epdsv5, Epsv5, Ev4, Esv4, Edv4, Edsv4, Ev5, Esv5, Edv5, Edsv5, Easv5, Eadsv5, Mv2, M, DSv2, Dv2 | High memory-to-CPU ratio. Great for relational database servers, medium to large caches, and in-memory analytics. |
| [Storage optimized](sizes-storage.md) | Lsv2, Lsv3, Lasv3 | High disk throughput and IO, ideal for Big Data, SQL, NoSQL databases, data warehousing, and large transactional databases. | | [GPU](sizes-gpu.md) | NC, NCv2, NCv3, NCasT4_v3, ND, NDv2, NV, NVv3, NVv4, NDasrA100_v4, NDm_A100_v4 | Specialized virtual machines targeted for heavy graphic rendering and video editing, as well as model training and inferencing (ND) with deep learning. Available with single or multiple GPUs. | | [High performance compute](sizes-hpc.md) | HB, HBv2, HBv3, HC, H | Our fastest and most powerful CPU virtual machines with optional high-throughput network interfaces (RDMA). |
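Before you commit to a size from this table, it can help to confirm it's offered in your target region. A minimal sketch with the Azure CLI; the region and the name filter are illustrative:

```bash
# Minimal sketch: enumerate VM sizes available in a region.
az vm list-sizes --location eastus --output table

# Narrow the list to one family, for example the Arm64 Epsv5-series.
az vm list-sizes --location eastus --query "[?contains(name, 'ps_v5')].name" --output tsv
```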
virtual-machines Automation Configure Control Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-control-plane.md
The table below defines the parameters used for defining the Key Vault informati
> | `firewall_deployment` | Boolean flag controlling if an Azure firewall is to be deployed | Optional | | > | `bastion_deployment` | Boolean flag controlling if Azure Bastion host is to be deployed | Optional | | > | `enable_purge_control_for_keyvaults` | Boolean flag controlling if purge control is enabled on the Key Vault. | Optional | Use only for test deployments |
-> | `use_private_endpoint` | Boolean flag controlling if private endpoints are used. | Optional | Recommended |
+> | `use_private_endpoint` | Controls whether private endpoints are created for the storage accounts and key vaults. | Optional | |
+> | `use_service_endpoint` | Controls whether service endpoints are defined for the subnets. | Optional | |
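For orientation, a minimal sketch of how these two flags might be appended to a deployer parameter file. The file name follows the framework's naming pattern but is illustrative here, and the values shown simply enable both endpoint types:

```bash
# Minimal sketch: append the endpoint flags to a deployer tfvars file.
# File name is illustrative; adjust the values for your environment.
cat <<EOF >> MGMT-WEEU-DEP00-INFRASTRUCTURE.tfvars
use_private_endpoint = true
use_service_endpoint = true
EOF
```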
### Example parameters file for deployer (required parameters only)
virtual-machines Automation Configure Workload Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-workload-zone.md
ANF_service_level = "Ultra"
> [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | Notes | > | | - | -- | - |
-> | `enable_purge_control_for_keyvaults` | Boolean flag controlling if purge control is enabled on the Key Vault. | Optional | Use only for test deployments |
-> | `use_private_endpoint` | Boolean flag controlling if private endpoints are used for storage accounts and key vaults. | Optional | |
-> | `diagnostics_storage_account_arm_id` | The Azure resource identifier for the diagnostics storage account | Required | For brown field deployments. |
-> | `witness_storage_account_arm_id` | The Azure resource identifier for the witness storage account | Required | For brown field deployments. |
+> | `enable_purge_control_for_keyvaults` | Controls whether purge control is enabled on the Key Vault. | Optional | Use only for test deployments |
+> | `use_private_endpoint` | Controls whether private endpoints are created for the storage accounts and key vaults. | Optional | |
+> | `use_service_endpoint` | Controls whether service endpoints are defined for the subnets. | Optional | |
+> | `diagnostics_storage_account_arm_id` | The Azure resource identifier for the diagnostics storage account | Required | For brown field deployments. |
+> | `witness_storage_account_arm_id` | The Azure resource identifier for the witness storage account | Required | For brown field deployments. |
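As a rough sketch of how these parameters come together for a brown field workload zone, the following appends the endpoint flags and the two required storage account identifiers to a parameter file. The file name and every resource ID below are placeholders:

```bash
# Minimal sketch: brown field workload zone parameters.
# File name and Azure resource IDs are placeholders.
cat <<EOF >> DEV-WEEU-SAP01-INFRASTRUCTURE.tfvars
use_private_endpoint               = true
use_service_endpoint               = true
diagnostics_storage_account_arm_id = "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.Storage/storageAccounts/<diagnostics-account>"
witness_storage_account_arm_id     = "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.Storage/storageAccounts/<witness-account>"
EOF
```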
## ISCSI Parameters
virtual-machines Sap Hana High Availability Netapp Files Red Hat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-high-availability-netapp-files-red-hat.md
Follow the steps in [Setting up Pacemaker on Red Hat Enterprise Linux](./high-av
### Implement the Python system replication hook SAPHanaSR
-This is important step to optimize the integration with the cluster and improve the detection when a cluster failover is needed. It is highly recommended to configure the SAPHanaSR python hook.
+This is an important step to optimize the integration with the cluster and improve the detection when a cluster failover is needed. It is highly recommended to configure the SAPHanaSR Python hook.
1. **[A]** Install the HANA "system replication hook". The hook needs to be installed on both HANA DB nodes. > [!TIP]
- > The python hook can only be implemented for HANA 2.0.
+ > The Python hook can only be implemented for HANA 2.0.
1. Prepare the hook as `root`.
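The hunk above is truncated, but for orientation: registering the hook comes down to pointing HANA at the provider in `global.ini`. A minimal sketch, assuming SID `HN1` and the hook files copied to `/hana/shared/myHooks` (both illustrative; stop the HANA system before editing):

```bash
# Minimal sketch: register the SAPHanaSR provider in global.ini.
# SID (HN1) and the hook path are illustrative; run as root with HANA stopped.
cat <<'EOF' >> /hana/shared/HN1/global/hdb/custom/config/global.ini

[ha_dr_provider_SAPHanaSR]
provider = SAPHanaSR
path = /hana/shared/myHooks
execution_order = 1

[trace]
ha_dr_saphanasr = info
EOF
```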
virtual-machines Sap Hana High Availability Netapp Files Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-high-availability-netapp-files-suse.md
Follow the steps in, [Setting up Pacemaker on SUSE Enterprise Linux](./high-avai
### Implement the Python system replication hook SAPHanaSR
-This is an important step to optimize the integration with the cluster and improve the detection, when a cluster failover is needed. It is highly recommended to configure the SAPHanaSR python hook. Follow the steps mentioned in, [Implement the Python System Replication hook SAPHanaSR](./sap-hana-high-availability.md#implement-the-python-system-replication-hook-saphanasr)
+This is an important step to optimize the integration with the cluster and improve the detection when a cluster failover is needed. It is highly recommended to configure the SAPHanaSR Python hook. Follow the steps in [Implement the Python System Replication hook SAPHanaSR](./sap-hana-high-availability.md#implement-the-python-system-replication-hook-saphanasr).
## Configure SAP HANA cluster resources
virtual-machines Sap Hana High Availability Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-high-availability-rhel.md
Follow the steps in [Setting up Pacemaker on Red Hat Enterprise Linux in Azure](
## Implement the Python system replication hook SAPHanaSR
-This is important step to optimize the integration with the cluster and improve the detection when a cluster failover is needed. It is highly recommended to configure the SAPHanaSR python hook.
+This is an important step to optimize the integration with the cluster and improve the detection when a cluster failover is needed. It is highly recommended to configure the SAPHanaSR Python hook.
1. **[A]** Install the HANA "system replication hook". The hook needs to be installed on both HANA DB nodes. > [!TIP]
- > The python hook can only be implemented for HANA 2.0.
+ > The Python hook can only be implemented for HANA 2.0.
1. Prepare the hook as `root`.
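Once the hook is installed and HANA is restarted, you can check that it's firing by searching the nameserver trace on the active node. A minimal sketch, assuming SID `HN1`; adjust the `<sid>adm` user for your system:

```bash
# Minimal sketch: confirm the hook is updating the srHook cluster attribute.
# SID HN1 is illustrative; cdtrace is the standard <sid>adm alias for the trace directory.
su - hn1adm
cdtrace
awk '/ha_dr_SAPHanaSR.*crm_attribute/ { printf "%s %s %s %s\n",$2,$3,$5,$16 }' nameserver_*
```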
virtual-machines Sap Hana High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-high-availability.md
The steps in this section use the following prefixes:
## Implement the Python system replication hook SAPHanaSR
-This is important step to optimize the integration with the cluster and improve the detection when a cluster failover is needed. It is highly recommended to configure the SAPHanaSR python hook.
+This is an important step to optimize the integration with the cluster and improve the detection when a cluster failover is needed. It is highly recommended to configure the SAPHanaSR Python hook.
1. **[A]** Install the HANA "system replication hook". The hook needs to be installed on both HANA DB nodes. > [!TIP] > Verify that package SAPHanaSR is at least version 0.153 to be able to use the SAPHanaSR Python hook functionality.
- > The python hook can only be implemented for HANA 2.0.
+ > The Python hook can only be implemented for HANA 2.0.
1. Prepare the hook as `root`.
This is important step to optimize the integration with the cluster and improve
2. **[A]** The cluster requires sudoers configuration on each cluster node for <sid\>adm. In this example that is achieved by creating a new file. Execute the commands as `root`.
```bash
cat << EOF > /etc/sudoers.d/20-saphana
- # Needed for SAPHanaSR python hook
+ # Needed for SAPHanaSR Python hook
hn1adm ALL=(ALL) NOPASSWD: /usr/sbin/crm_attribute -n hana_hn1_site_srHook_*
EOF
```
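To sanity check the result, you can list the privileges sudo now grants the `<sid>adm` user; a quick sketch, run as `root`:

```bash
# Minimal sketch: verify hn1adm may run crm_attribute without a password.
sudo -l -U hn1adm | grep crm_attribute
```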
virtual-network Virtual Network Tap Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-tap-overview.md
The accounts you use to apply TAP configuration on network interfaces must be as
### Security analytics, network/application performance management -- [Awake Security](https://awakesecurity.com/technology-partners/microsoft-azure/)
+- [Awake Security](https://www.arista.com/partner/technology-partners)
- [Cisco Stealthwatch Cloud](https://blogs.cisco.com/security/cisco-stealthwatch-cloud-and-microsoft-azure-reliable-cloud-infrastructure-meets-comprehensive-cloud-security) - [Darktrace](https://www.darktrace.com) - [ExtraHop Reveal(x)](https://www.extrahop.com/partners/tech-partners/microsoft/)