Updates from: 03/09/2023 02:24:45
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Configure Authentication Sample Ios App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-ios-app.md
-+ Last updated 01/06/2023
active-directory-b2c Custom Policy Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policy-overview.md
-+ Last updated 01/10/2023
active-directory-b2c Enable Authentication Web Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-web-api.md
-+ Last updated 01/10/2023
Add two endpoints to your web API:
# [ASP.NET Core](#tab/csharpclient)
-Under the */Controllers* folder, add a *PublicController.cs* file, and then add to it the following code snippet:
+Under the */Controllers* folder, add a *PublicController.cs* file, and then add the following code snippet to it:
```csharp using System;
app.get('/public', (req, res) => res.send( {'date': new Date() } ));
# [ASP.NET Core](#tab/csharpclient)
-Under the */Controllers* folder, add a *HelloController.cs* file, and then add to it the following code:
+Under the */Controllers* folder, add a *HelloController.cs* file, and then add the following code to it:
```csharp using Microsoft.AspNetCore.Authorization;
In the *appsettings.json* file, update the following properties:
# [Node.js](#tab/nodejsgeneric)
-Under the project root folder, create a *config.json* file, and then add to it the following JSON snippet:
+Under the project root folder, create a *config.json* file, and then add the following JSON snippet to it:
```json {
active-directory-b2c Partner Azure Web Application Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-azure-web-application-firewall.md
Title: Tutorial to configure Azure Active Directory B2C with Azure Web Application Firewall
-description: Tutorial to configure Azure Active Directory B2C with Azure Web application firewall to protect your applications from malicious attacks
+description: Learn to configure Azure AD B2C with Azure Web Application Firewall to protect applications from malicious attacks
-+ - Previously updated : 08/17/2021 Last updated : 03/08/2023
-# Tutorial: Configure Azure Web Application Firewall with Azure Active Directory B2C
+# Tutorial: Configure Azure Active Directory B2C with Azure Web Application Firewall
-In this sample tutorial, learn how to enable [Azure Web Application Firewall (WAF)](https://azure.microsoft.com/services/web-application-firewall/#overview) solution for Azure Active Directory (AD) B2C tenant with custom domain. Azure WAF provides centralized protection of your web applications from common exploits and vulnerabilities.
+Learn how to enable the Azure Web Application Firewall (WAF) service for an Azure Active Directory B2C (Azure AD B2C) tenant, with a custom domain. WAF protects web applications from common exploits and vulnerabilities.
->[!NOTE]
->This feature is in public preview.
+See [What is Azure Web Application Firewall?](../web-application-firewall/overview.md)
## Prerequisites
-To get started, you'll need:
-- An Azure subscription – If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-- [An Azure AD B2C tenant](tutorial-create-tenant.md) – The authorization server, responsible for verifying the user's credentials using the custom policies defined in the tenant. It's also known as the identity provider.
+To get started, you need:
-- [Azure Front Door (AFD)](../frontdoor/index.yml) – Responsible for enabling custom domains for Azure AD B2C tenant.
+* An Azure subscription
+ * If you don't have one, get an [Azure free account](https://azure.microsoft.com/free/)
+* **An Azure AD B2C tenant** – authorization server that verifies user credentials using custom policies defined in the tenant
+ * Also known as the identity provider (IdP)
+ * See [Tutorial: Create an Azure Active Directory B2C tenant](tutorial-create-tenant.md)
+* **Azure Front Door (AFD)** – enables custom domains for the Azure AD B2C tenant
+ * See [Azure Front Door and CDN documentation](../frontdoor/index.yml)
+* **WAF** – manages traffic sent to the authorization server
+ * [Azure Web Application Firewall](https://azure.microsoft.com/services/web-application-firewall/#overview)
-- [Azure WAF](https://azure.microsoft.com/services/web-application-firewall/#overview) – Manages all traffic that is sent to the authorization server.
+## Custom domains in Azure AD B2C
-## Azure AD B2C setup
+To use custom domains in Azure AD B2C, use the custom domain features in AFD. See [Enable custom domains for Azure AD B2C](./custom-domain.md?pivots=b2c-user-flow).
-To use custom domains in Azure AD B2C, it's required to use custom domain feature provided by AFD. Learn how to [enable Azure AD B2C custom domains](./custom-domain.md?pivots=b2c-user-flow).
+ > [!IMPORTANT]
+ > After you configure the custom domain, see [Test your custom domain](./custom-domain.md?pivots=b2c-custom-policy#test-your-custom-domain).
-After custom domain for Azure AD B2C is successfully configured using AFD, [test the custom domain](./custom-domain.md?pivots=b2c-custom-policy#test-your-custom-domain) before proceeding further.
+## Enable WAF
-## Onboard with Azure WAF
-
-To enable Azure WAF, configure a WAF policy and associate that policy to the AFD for protection.
+To enable WAF, configure a WAF policy and associate it with the AFD for protection.
### Create a WAF policy
-Create a basic WAF policy with managed Default Rule Set (DRS) in the [Azure portal](https://portal.azure.com).
-
-1. Go to the [Azure portal](https://portal.azure.com). Select **Create a resource** and then search for Azure WAF. Select **Azure Web Application Firewall (WAF)** > **Create**.
+Create a WAF policy with the Azure-managed Default Rule Set (DRS). See [Web Application Firewall DRS rule groups and rules](../web-application-firewall/afds/waf-front-door-drs.md).
-2. Go to the **Create a WAF policy** page, select the **Basics** tab. Enter the following information, accept the defaults for the remaining settings.
+1. Go to the [Azure portal](https://portal.azure.com).
+2. Select **Create a resource**.
+3. Search for Azure WAF.
+4. Select **Azure Web Application Firewall (WAF)**.
+5. Select **Create**.
+6. Go to the **Create a WAF policy** page.
+7. Select the **Basics** tab.
+8. For **Policy for**, select **Global WAF (Front Door)**.
+9. For **Front Door SKU**, select the **Basic**, **Standard**, or **Premium** SKU.
+10. For **Subscription**, select your Front Door subscription name.
+11. For **Resource group**, select your Front Door resource group name.
+12. For **Policy name**, enter a unique name for your WAF policy.
+13. For **Policy state**, select **Enabled**.
+14. For **Policy mode**, select **Detection**.
+15. Select **Review + create**.
+16. Go to the **Association** tab of the Create a WAF policy page.
+17. Select **+ Associate a Front Door profile**.
+18. For **Front Door**, select the Front Door profile associated with your Azure AD B2C custom domain.
+19. For **Domains**, select the Azure AD B2C custom domains to associate with the WAF policy.
+20. Select **Add**.
+21. Select **Review + create**.
+22. Select **Create**.
-| Value | Description |
-|:--|:-|
-| Policy for | Global WAF (Front Door)|
-| Front Door SKU | Select between Basic, Standard, or Premium SKU |
-|Subscription | Select your Front Door subscription name |
-| Resource group | Select your Front Door resource group name |
-| Policy name | Enter a unique name for your WAF policy |
-| Policy state | Set as Enabled |
-| Policy mode | Set as Detection |
+### Detection and Prevention modes
-3. Select **Review + create**
+When you create a WAF policy, it's in Detection mode by default. We recommend you keep the policy in Detection mode while you tune it. In this mode, WAF doesn't block requests. Instead, requests that match the WAF rules are logged in the WAF logs.
-4. Go to the **Association** tab of the Create a WAF policy page, select + **Associate a Front Door profile**, enter the following settings
+Learn more: [Azure Web Application Firewall monitoring and logging](../web-application-firewall/afds/waf-front-door-monitor.md)
-| Value | Description |
-|:-|:|
-| Front Door | Select your Front Door name associated with Azure AD B2C custom domain |
-| Domains | Select the Azure AD B2C custom domains you want to associate the WAF policy to|
+The following query shows the requests blocked by the WAF policy in the past 24 hours. The details include the rule name, request data, action taken by the policy, and the policy mode.
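
A sketch of such a Log Analytics query (Kusto) appears below; the table and column names assume the classic Azure Front Door WAF diagnostic log category and may differ for your deployment:

```kusto
AzureDiagnostics
| where Category == "FrontdoorWebApplicationFirewallLog"
| where TimeGenerated > ago(24h)
| where action_s == "Block"
| project TimeGenerated, ruleName_s, requestUri_s, action_s, policyMode_s
```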
+
+ ![Screenshot of blocked requests.](./media/partner-azure-web-application-firewall/blocked-requests-query.png)
-5. Select **Add**.
+ ![Screenshot of blocked requests details, such as Rule ID, Action, Mode, etc.](./media/partner-azure-web-application-firewall/blocked-requests-details.png)
-6. Select **Review + create**, then select **Create**.
+Review the WAF logs to determine whether policy rules cause false positives. Then, exclude the relevant WAF rules based on the WAF logs.
-### Change policy mode from detection to prevention
+Learn more: [Define exclusion rules based on Web Application Firewall logs](../web-application-firewall/afds/waf-front-door-exclusion.md#define-exclusion-based-on-web-application-firewall-logs)
-When a WAF policy is created, by default the policy is in Detection mode. In Detection mode, WAF doesn't block any requests, instead, requests matching the WAF rules are logged in the WAF logs. For more information about WAF logging, see [Azure WAF monitoring and logging](../web-application-firewall/afds/waf-front-door-monitor.md).
+#### Switching modes
-The sample query shows all the requests that were blocked by the WAF policy in the past 24 hours. The details include, rule name, request data, action taken by the policy, and the policy mode.
+To see WAF in action, select **Switch to prevention mode**, which changes the mode from Detection to Prevention. Requests that match the rules in the DRS are blocked and logged in the WAF logs.
-![Image shows the blocked requests](./media/partner-azure-web-application-firewall/blocked-requests-query.png)
+ ![Screenshot of options and selections for DefaultRuleSet under Web Application Firewall policies.](./media/partner-azure-web-application-firewall/switch-to-prevention-mode.png)
-![Image shows the blocked requests details](./media/partner-azure-web-application-firewall/blocked-requests-details.png)
+To revert to Detection mode, select **Switch to detection mode**.
-It's recommended that you let the WAF capture requests in Detection mode. Review the WAF logs to determine if there are any rules in the policy that are causing false positive results. Then after [exclude the WAF rules based on the WAF logs](../web-application-firewall/afds/waf-front-door-exclusion.md#define-exclusion-based-on-web-application-firewall-logs).
-
-To see WAF in action, use Switch to prevention mode to change from Detection to Prevention mode. All requests that match the rules defined in the Default Rule Set (DRS) are blocked and logged in the WAF logs.
-
-![Image shows the switch to prevention mode](./media/partner-azure-web-application-firewall/switch-to-prevention-mode.png)
-
-In case you want to switch back to the detection mode, you can do so by using Switch to detection mode option.
-
-![Image shows the switch to detection mode](./media/partner-azure-web-application-firewall/switch-to-detection-mode.png)
+ ![Screenshot of DefaultRuleSet with Switch to detection mode.](./media/partner-azure-web-application-firewall/switch-to-detection-mode.png)
## Next steps
-- [Azure WAF monitoring and logging](../web-application-firewall/afds/waf-front-door-monitor.md)
-- [WAF with Front Door service exclusion lists](../web-application-firewall/afds/waf-front-door-exclusion.md)
+* [Azure Web Application Firewall monitoring and logging](../web-application-firewall/afds/waf-front-door-monitor.md)
+* [Web Application Firewall (WAF) with Front Door exclusion lists](../web-application-firewall/afds/waf-front-door-exclusion.md)
active-directory Use Scim To Provision Users And Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md
Previously updated : 03/07/2023 Last updated : 03/08/2023
Use the general guidelines when implementing a SCIM endpoint to ensure compatibi
* If a value isn't present, don't send null values.
* Property values should be camel cased (for example, readWrite).
* Must return a list response.
-* The Azure AD Provisioning Service makes the /schemas request every time someone saves the provisioning configuration in the Azure portal or every time a user lands on the edit provisioning page in the Azure portal. Other attributes discovered are surfaced to customers in the attribute mappings under the target attribute list. Schema discovery only leads to more target attributes being added. Attributes aren't removed.
+* The Azure AD Provisioning Service makes the /schemas request when you save the provisioning configuration in the Azure portal. The request is also made when you open the edit provisioning page in the Azure portal. Other attributes discovered are surfaced to customers in the attribute mappings under the target attribute list. Schema discovery only leads to more target attributes being added. Attributes aren't removed.
### User provisioning and deprovisioning
This article provides example SCIM requests emitted by the Azure Active Director
### User Operations
-* Users can be queried by `userName` or `emails[type eq "work"]` attributes.
+* Use `userName` or `emails[type eq "work"]` attributes to query users.
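
As an illustration of those filter attributes, the sketch below builds the GET URL a client would use to query a user; the helper function and endpoint base URL are hypothetical, not part of the reference code:

```javascript
// Build a SCIM 2.0 user query URL using the filter attributes Azure AD uses.
// The base URL is a hypothetical placeholder for your SCIM endpoint.
function scimUserQuery(baseUrl, attribute, value) {
  const filter = `${attribute} eq "${value}"`;
  return `${baseUrl}/Users?filter=${encodeURIComponent(filter)}`;
}

const byUserName = scimUserQuery('https://scim.example.com', 'userName', 'Test_User@contoso.com');
const byEmail = scimUserQuery('https://scim.example.com', 'emails[type eq "work"].value', 'alias@contoso.com');
console.log(byUserName);
console.log(byEmail);
```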
#### Create User
This article provides example SCIM requests emitted by the Azure Active Director
### Group Operations
-* Groups shall always be created with an empty members list.
-* Groups can be queried by the `displayName` attribute.
+* Groups are created with an empty members list.
+* Use the `displayName` attribute to query groups.
* Update to the group PATCH request should yield an *HTTP 204 No Content* in the response. Returning a body with a list of all the members isn't advisable.
* It isn't necessary to support returning all the members of the group.
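
For example, a PATCH that adds a member follows the standard SCIM 2.0 PatchOp shape from RFC 7644; the sketch below (a hypothetical helper with a placeholder member ID) constructs such a request body:

```javascript
// Construct a SCIM 2.0 PatchOp body that adds a member to a group (RFC 7644).
// The member ID passed in is a placeholder for a real user object ID.
function addMemberPatch(memberId) {
  return {
    schemas: ['urn:ietf:params:scim:api:messages:2.0:PatchOp'],
    Operations: [
      {
        op: 'add',
        path: 'members',
        value: [{ value: memberId }]
      }
    ]
  };
}

// A compliant endpoint applies this and replies with HTTP 204 No Content,
// without echoing back the full member list.
console.log(JSON.stringify(addMemberPatch('6c75de36-30fa-4d2d-a196-6bdcdb6b6539'), null, 2));
```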
Now that you've designed your schema and understood the Azure AD SCIM implementa
For guidance on how to build a SCIM endpoint including examples, see [Develop a sample SCIM endpoint](use-scim-to-build-users-and-groups-endpoints.md).
-The open source .NET Core [reference code example](https://aka.ms/SCIMReferenceCode) published by the Azure AD provisioning team is one such resource that can jump start your development. Once you have built your SCIM endpoint, you'll want to test it out. You can use the collection of [Postman tests](https://github.com/AzureAD/SCIMReferenceCode/wiki/Test-Your-SCIM-Endpoint) provided as part of the reference code or run through the sample requests / responses provided [above](#user-operations).
+The open source .NET Core [reference code example](https://aka.ms/SCIMReferenceCode) published by the Azure AD provisioning team is one such resource that can jump start your development. Build a SCIM endpoint, then test it out. Use the collection of [Postman tests](https://github.com/AzureAD/SCIMReferenceCode/wiki/Test-Your-SCIM-Endpoint) provided as part of the reference code or run through the sample requests / responses [provided](#user-operations).
> [!Note]
> The reference code is intended to help you get started building your SCIM endpoint and is provided "AS IS." Contributions from the community are welcome to help build and maintain the code.
The SCIM endpoint must have an HTTP address and server authentication certificat
* WoSign * DST Root CA X3
-The .NET Core SDK includes an HTTPS development certificate that can be used during development, the certificate is installed as part of the first-run experience. Depending on how you run the ASP.NET Core Web Application it will listen to a different port:
+The .NET Core SDK includes an HTTPS development certificate that is used during development. The certificate is installed as part of the first-run experience. Depending on how you run the ASP.NET Core web application, it listens on a different port:
* Microsoft.SCIM.WebHostSample: `https://localhost:5001` * IIS Express: `https://localhost:44359`
Once the initial cycle has started, you can select **Provisioning logs** in the
## Publish your application to the Azure AD application gallery
-If you're building an application that will be used by more than one tenant, you can make it available in the Azure AD application gallery. It's easy for organizations to discover the application and configure provisioning. Publishing your app in the Azure AD gallery and making provisioning available to others is easy. Check out the steps [here](../manage-apps/v2-howto-app-gallery-listing.md). Microsoft will work with you to integrate your application into our gallery, test your endpoint, and release onboarding [documentation](../saas-apps/tutorial-list.md) for customers to use.
+If you're building an application used by more than one tenant, make it available in the Azure AD application gallery. It's easy for organizations to discover the application and configure provisioning. Publishing your app in the Azure AD gallery and making provisioning available to others is easy. Check out the steps [here](../manage-apps/v2-howto-app-gallery-listing.md). Microsoft works with you to integrate your application into the gallery, test your endpoint, and release onboarding [documentation](../saas-apps/tutorial-list.md) for customers.
### Gallery onboarding checklist
-Use the checklist to onboard your application quickly and customers have a smooth deployment experience. The information will be gathered from you when onboarding to the gallery.
+Use the checklist to onboard your application quickly and give customers a smooth deployment experience. The information is gathered from you when onboarding to the gallery.
> [!div class="checklist"]
> * Support a [SCIM 2.0](#understand-the-azure-ad-scim-implementation) user and group endpoint (Only one is required but both are recommended)
> * Support at least 25 requests per second per tenant to ensure that users and groups are provisioned and deprovisioned without delay (Required)
Best practices (recommended, but not required):
> [!NOTE]
> While it's not possible to set up OAuth on non-gallery applications, you can manually generate an access token from your authorization server and input it as the secret token for a non-gallery application. This allows you to verify the compatibility of your SCIM server with the Azure AD Provisioning Service before onboarding to the app gallery, which does support the OAuth code grant.
-**Long-lived OAuth bearer tokens:** If your application doesn't support the OAuth authorization code grant flow, instead generate a long lived OAuth bearer token that an administrator can use to set up the provisioning integration. The token should be perpetual, or else the provisioning job will be [quarantined](application-provisioning-quarantine-status.md) when the token expires.
+**Long-lived OAuth bearer tokens:** If your application doesn't support the OAuth authorization code grant flow, instead generate a long-lived OAuth bearer token that an administrator can use to set up the provisioning integration. The token should be perpetual, or else the provisioning job is [quarantined](application-provisioning-quarantine-status.md) when the token expires.
For more authentication and authorization methods, let us know on [UserVoice](https://aka.ms/appprovisioningfeaturerequest).
active-directory Active Directory Configurable Token Lifetimes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-configurable-token-lifetimes.md
Previously updated : 09/07/2022 Last updated : 03/07/2023

# Configurable token lifetimes in the Microsoft identity platform (preview)
-You can specify the lifetime of a access, ID, or SAML token issued by the Microsoft identity platform. You can set token lifetimes for all apps in your organization, for a multi-tenant (multi-organization) application, or for a specific service principal in your organization. However, we currently do not support configuring the token lifetimes for [managed identity service principals](../managed-identities-azure-resources/overview.md).
+You can specify the lifetime of an access, ID, or SAML token issued by the Microsoft identity platform. You can set token lifetimes for all apps in your organization, for a multi-tenant (multi-organization) application, or for a specific service principal in your organization. However, we currently don't support configuring the token lifetimes for [managed identity service principals](../managed-identities-azure-resources/overview.md).
-In Azure AD, a policy object represents a set of rules that are enforced on individual applications or on all applications in an organization. Each policy type has a unique structure, with a set of properties that are applied to objects to which they are assigned.
+In Azure AD, a policy object represents a set of rules that are enforced on individual applications or on all applications in an organization. Each policy type has a unique structure, with a set of properties that are applied to objects to which they're assigned.
-You can designate a policy as the default policy for your organization. The policy is applied to any application in the organization, as long as it is not overridden by a policy with a higher priority. You also can assign a policy to specific applications. The order of priority varies by policy type.
+You can designate a policy as the default policy for your organization. The policy is applied to any application in the organization, as long as it isn't overridden by a policy with a higher priority. You also can assign a policy to specific applications. The order of priority varies by policy type.
For examples, read [examples of how to configure token lifetimes](configure-token-lifetimes.md).
Refresh and session token configuration are affected by the following properties
|Single-Factor Session Token Max Age |MaxAgeSessionSingleFactor |Session tokens (persistent and nonpersistent) |Until-revoked |
|Multi-Factor Session Token Max Age |MaxAgeSessionMultiFactor |Session tokens (persistent and nonpersistent) |Until-revoked |
-Non-persistent session tokens have a Max Inactive Time of 24 hours whereas persistent session tokens have a Max Inactive Time of 90 days. Any time the SSO session token is used within its validity period, the validity period is extended another 24 hours or 90 days. If the SSO session token is not used within its Max Inactive Time period, it is considered expired and will no longer be accepted. Any changes to this default periods should be change using [Conditional Access](../conditional-access/howto-conditional-access-session-lifetime.md).
+Non-persistent session tokens have a Max Inactive Time of 24 hours, whereas persistent session tokens have a Max Inactive Time of 90 days. Anytime the SSO session token is used within its validity period, the validity period is extended by another 24 hours or 90 days. If the SSO session token isn't used within its Max Inactive Time period, it's considered expired and is no longer accepted. Any changes to these default periods should be made using [Conditional Access](../conditional-access/howto-conditional-access-session-lifetime.md).
You can use PowerShell to find the policies that will be affected by the retirement. Use the [PowerShell cmdlets](configure-token-lifetimes.md#get-started) to see all the policies created in your organization, or to find which apps and service principals are linked to a specific policy.

## Policy evaluation and prioritization

You can create and then assign a token lifetime policy to a specific application, to your organization, and to service principals. Multiple policies might apply to a specific application. The token lifetime policy that takes effect follows these rules:
-* If a policy is explicitly assigned to the service principal, it is enforced.
+* If a policy is explicitly assigned to the service principal, it's enforced.
* If no policy is explicitly assigned to the service principal, a policy explicitly assigned to the parent organization of the service principal is enforced.
* If no policy is explicitly assigned to the service principal or to the organization, the policy assigned to the application is enforced.
* If no policy has been assigned to the service principal, the organization, or the application object, the default values are enforced. (See the table in [Configurable token lifetime properties](#configurable-token-lifetime-properties).)
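
The precedence rules above can be sketched as a small resolver; the policy objects here are simplified stand-ins for illustration, not actual Graph API policy resources:

```javascript
// Resolve which token lifetime policy applies, following the precedence rules:
// service principal > organization > application object > built-in defaults.
// Each argument is a (simplified) policy object or null; `defaults` always exists.
function effectivePolicy(servicePrincipalPolicy, organizationPolicy, applicationPolicy, defaults) {
  if (servicePrincipalPolicy) return servicePrincipalPolicy; // explicitly assigned to the SP
  if (organizationPolicy) return organizationPolicy;         // assigned to the parent organization
  if (applicationPolicy) return applicationPolicy;           // assigned to the application object
  return defaults;                                           // no policy assigned anywhere
}

const defaults = { accessTokenLifetime: '1h' };
const orgPolicy = { accessTokenLifetime: '4h' };
console.log(effectivePolicy(null, orgPolicy, null, defaults).accessTokenLifetime);
```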
active-directory Custom Claims Provider Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/custom-claims-provider-overview.md
+
+ Title: Custom claims provider overview
+
+description: Conceptual article describing the custom claims provider as part of the custom authentication extension framework.
+ Last updated : 03/06/2023
+#Customer intent: As a developer, I want to learn about custom claims provider so that I can augment tokens with claims from an external identity system or role management system.
++
+# Custom claims provider (preview)
+
+This article provides an overview of the Azure Active Directory (Azure AD) custom claims provider.
+When a user authenticates to an application, a custom claims provider can be used to add claims into the token. A custom claims provider is made up of a custom extension that calls an external REST API to fetch claims from external systems. A custom claims provider can be assigned to one or many applications in your directory.
+
+Key data about a user is often stored in systems external to Azure AD. For example, secondary email, billing tier, or sensitive information. Some applications may rely on these attributes for the application to function as designed. For example, the application may block access to certain features based on a claim in the token.
+
+Use a custom claims provider for the following scenarios:
+
+- **Migration of legacy systems** - You may have legacy identity systems such as Active Directory Federation Services (AD FS) or data stores (such as LDAP directory) that hold information about users. You'd like to migrate these applications, but can't fully migrate the identity data into Azure AD. Your apps may depend on certain information on the token, and can't be rearchitected.
+- **Integration with other data stores that can't be synced to the directory** - You may have third-party systems, or your own systems that store user data. Ideally this information could be consolidated, either through [synchronization](../cloud-sync/what-is-cloud-sync.md) or direct migration, in the Azure AD directory. However, that isn't always feasible. The restriction may be because of data residency, regulations, or other requirements.
+
+## Token issuance start event listener
+
+An event listener is a procedure that waits for an event to occur. The custom extension uses the **token issuance start** event listener. The event is triggered when a token is about to be issued to your application. When the event is triggered, the custom extension REST API is called to fetch attributes from external systems.
+
+For an example using a custom claims provider with the **token issuance start** event listener, check out the [get started with custom claims providers](custom-extension-get-started.md) article.
+
+## Next steps
+
+- Learn how to [create and register a custom claims provider](custom-extension-get-started.md) with a sample Open ID Connect application.
+- If you already have a custom claims provider registered, you can configure a [SAML application](custom-extension-configure-saml-app.md) to receive tokens with claims sourced from an external store.
active-directory Custom Claims Provider Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/custom-claims-provider-reference.md
+
+ Title: Custom claims provider reference
+
+description: Reference documentation for custom claims providers
+ Last updated : 03/06/2023
+#Customer intent: As a developer, I want to learn about custom authentication extensions so that I can augment tokens with claims from an external identity system or role management system.
++
+# Custom claims providers
+
+In this reference article, you can learn about the REST API schema and claims mapping policy structure for custom claims provider events.
+
+## Token issuance start event
+
+The custom claims provider token issuance event allows you to enrich or customize application tokens with information from external systems. This information can't be stored as part of the user profile in the Azure AD directory.
+
+### Component overview
+
+Setting up and integrating a custom extension with your application requires multiple components to be connected. The following diagram shows a high-level view of the configuration points and relationships that are created to implement a custom extension.
++
+- You should have a **REST API endpoint** that's publicly available. In this diagram, it's represented by an Azure Function. The REST API generates and returns custom claims to the custom extension. It's associated with an Azure AD application registration.
+- You need to configure a **custom extension** in Azure AD that connects to your API.
+- You need an **application** that receives the customized tokens. For example, <https://jwt.ms>, a Microsoft-owned web application that displays the decoded contents of a token.
+- The application, such as <https://jwt.ms>, must be registered in Azure AD using **app registration**.
+- You must create an association between your application and your custom extension.
+- You can optionally secure the Azure Function with an authentication provider. In this article, Azure AD is used.
+
+### REST API
+
+Your REST API endpoint is responsible for interfacing with downstream services. For example, databases, other REST APIs, LDAP directories, or any other stores that contain the attributes you'd like to add to the token configuration.
+
+The REST API returns an HTTP response to Azure AD containing the attributes. Attributes returned by your REST API aren't automatically added to a token. Instead, an application's claims mapping policy must be configured for any attribute to be included in the token. In Azure AD, a claims mapping policy modifies the claims emitted in tokens issued for specific applications.
+
+### REST API schema
+
+To develop your own REST API for the token issuance start event, use the following REST API data contract. The schema describes the contract to design the request and response handler.
+
+Your custom extension in Azure AD makes an HTTP call to your REST API with a JSON payload. The JSON payload contains user profile data, authentication context attributes, and information about the application the user wants to sign in to. The JSON attributes can be used to perform extra logic by your API. The request to your API is in the following format:
+
+```json
+{
+ "type": "microsoft.graph.authenticationEvent.tokenIssuanceStart",
+ "source": "/tenants/<Your tenant GUID>/applications/<Your Test Application App Id>",
+ "data": {
+ "@odata.type": "microsoft.graph.onTokenIssuanceStartCalloutData",
+ "tenantId": "<Your tenant GUID>",
+ "authenticationEventListenerId": "<GUID>",
+ "customAuthenticationExtensionId": "<Your custom extension ID>",
+ "authenticationContext": {
+ "correlationId": "<GUID>",
+ "client": {
+ "ip": "30.51.176.110",
+ "locale": "en-us",
+ "market": "en-us"
+ },
+ "protocol": "OAUTH2.0",
+ "clientServicePrincipal": {
+ "id": "<Your Test Applications servicePrincipal objectId>",
+ "appId": "<Your Test Application App Id>",
+ "appDisplayName": "My Test application",
+ "displayName": "My Test application"
+ },
+ "resourceServicePrincipal": {
+ "id": "<Your Test Applications servicePrincipal objectId>",
+ "appId": "<Your Test Application App Id>",
+ "appDisplayName": "My Test application",
+ "displayName": "My Test application"
+ },
+ "user": {
+ "createdDateTime": "2016-03-01T15:23:40Z",
+ "displayName": "Bob",
+ "givenName": "Bob Smith",
+ "id": "90847c2a-e29d-4d2f-9f54-c5b4d3f26471",
+ "mail": "bob@contoso.com",
+ "preferredLanguage": "en-us",
+ "surname": "Smith",
+ "userPrincipalName": "bob@contoso.com",
+ "userType": "Member"
+ }
+ }
+ }
+}
+```
+
+Azure AD expects the REST API response in the following format, where the claims `DateOfBirth` and `CustomRoles` are returned to Azure AD:
+
+```json
+{
+ "data": {
+ "@odata.type": "microsoft.graph.onTokenIssuanceStartResponseData",
+ "actions": [
+ {
+ "@odata.type": "microsoft.graph.tokenIssuanceStart.provideClaimsForToken",
+ "claims": {
+ "DateOfBirth": "01/01/2000",
+ "CustomRoles": [
+ "Writer",
+ "Editor"
+ ]
+ }
+ }
+ ]
+ }
+}
+```
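A minimal handler that produces such a response might be sketched as follows in Node.js. This is an illustration only: the `buildTokenIssuanceResponse` helper is a made-up name, and real deployments should use the Authentication Events trigger samples rather than hand-rolling the wiring.

```javascript
// Sketch: build the tokenIssuanceStart response body with extra claims.
// The shape mirrors the JSON contract above; the helper name is illustrative.
function buildTokenIssuanceResponse(extraClaims) {
  return {
    data: {
      "@odata.type": "microsoft.graph.onTokenIssuanceStartResponseData",
      actions: [
        {
          "@odata.type": "microsoft.graph.tokenIssuanceStart.provideClaimsForToken",
          claims: extraClaims
        }
      ]
    }
  };
}

const response = buildTokenIssuanceResponse({
  DateOfBirth: "01/01/2000",
  CustomRoles: ["Writer", "Editor"]
});
console.log(JSON.stringify(response, null, 2));
```

Your API would serialize this object as the HTTP response body; Azure AD then matches each claim against the application's claims mapping policy.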
+
+When a B2B user from the Fabrikam organization authenticates to the Contoso organization, the request payload sent to the REST API has the `user` element in the following format:
+
+```json
+"user": {
+ "companyName": "Fabrikam",
+ "createdDateTime": "2022-07-15T00:00:00Z",
+ "displayName": "John Wright",
+ "id": "12345678-0000-0000-0000-000000000000",
+ "mail": "johnwright@fabrikam.com",
+ "preferredDataLocation": "EUR",
+ "userPrincipalName": "johnwright_fabrikam.com#EXT#@contoso.onmicrosoft.com",
+ "userType": "Guest"
+}
+```
+
+### Supported data types
+
+The following table shows the data types supported by custom claims providers for the token issuance start event:
+
+| Data type | Supported |
+|--|--|
+| String | True |
+| String array | True |
+| Boolean | False |
+| JSON | False |
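In practice, this means every claim value your REST API returns must be a string or an array of strings. The following sketch illustrates the rule with a hypothetical validator (not part of any Azure SDK):

```javascript
// Hypothetical validator: checks that a claim value is a string or an
// array of strings, the only data types supported for this event.
function isSupportedClaimValue(value) {
  if (typeof value === "string") return true;
  return Array.isArray(value) && value.every(v => typeof v === "string");
}

console.log(isSupportedClaimValue("01/01/2000"));         // true
console.log(isSupportedClaimValue(["Writer", "Editor"])); // true
console.log(isSupportedClaimValue(true));                 // false (Boolean not supported)
console.log(isSupportedClaimValue({ role: "Writer" }));   // false (JSON object not supported)
```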
+
+### Claims mapping policy
+
+In Azure AD, a claims mapping policy modifies the claims emitted in tokens issued for specific applications. The following policy includes claims from your custom claims provider and issues them into the token.
+
+```json
+{
+ "ClaimsMappingPolicy": {
+ "Version": 1,
+ "IncludeBasicClaimSet": "true",
+ "ClaimsSchema": [{
+ "Source": "CustomClaimsProvider",
+ "ID": "dateOfBirth",
+ "JwtClaimType": "birthdate"
+ },
+ {
+ "Source": "CustomClaimsProvider",
+ "ID": "customRoles",
+ "JwtClaimType": "my_roles"
+ },
+ {
+ "Source": "CustomClaimsProvider",
+ "ID": "correlationId",
+ "JwtClaimType": "correlation_Id"
+ },
+ {
+ "Source": "CustomClaimsProvider",
+ "ID": "apiVersion",
+ "JwtClaimType": "apiVersion"
+ },
+ {
+ "Value": "tokenaug_V2",
+ "JwtClaimType": "policy_version"
+ }]
+ }
+}
+```
+
+The `ClaimsSchema` element contains the list of claims to be mapped with the following attributes:
+
+- **Source** describes the source of the attribute, which is `CustomClaimsProvider`. The last element contains a fixed value with the policy version for testing purposes, so its `Source` attribute is omitted.
+
+- **ID** is the name of the claim as it's returned from the Azure Function you created.
+
+ > [!IMPORTANT]
+  > The ID attribute's value is case-sensitive. Make sure you type the claim name exactly as it's returned by the Azure Function.
+- **JwtClaimType** is an optional name for the claim in the token emitted for OpenID Connect apps. It lets you emit the claim under a different name in the JWT. For example, if the API response has an `ID` value of `dateOfBirth`, it can be emitted as `birthdate` in the token.
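Conceptually, the mapping works like the following sketch. This is an illustration of the mapping rules only, not Azure AD's actual implementation:

```javascript
// Conceptual sketch: how a ClaimsSchema maps claims returned by the API
// (and fixed values) into the claims emitted in the token.
const claimsSchema = [
  { Source: "CustomClaimsProvider", ID: "dateOfBirth", JwtClaimType: "birthdate" },
  { Source: "CustomClaimsProvider", ID: "customRoles", JwtClaimType: "my_roles" },
  { Value: "tokenaug_V2", JwtClaimType: "policy_version" } // fixed value, no Source
];

const apiResponseClaims = { dateOfBirth: "01/01/2000", customRoles: ["Writer", "Editor"] };

const tokenClaims = {};
for (const entry of claimsSchema) {
  if (entry.Value !== undefined) {
    tokenClaims[entry.JwtClaimType] = entry.Value;                  // fixed value claim
  } else if (entry.ID in apiResponseClaims) {
    tokenClaims[entry.JwtClaimType] = apiResponseClaims[entry.ID];  // mapped claim
  }
}
console.log(tokenClaims);
```

Here the API's `dateOfBirth` value is emitted as `birthdate`, `customRoles` as `my_roles`, and the fixed `tokenaug_V2` value as `policy_version`.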
+
+Once you create your claims mapping policy, the next step is to upload it to your Azure AD tenant. Use the [claimsMappingPolicy](/graph/api/claimsmappingpolicy-post-claimsmappingpolicies) Microsoft Graph API to create the policy in your tenant.
+
+> [!IMPORTANT]
+> The **definition** element should be an array with a single string value. The string should be the stringified and escaped version of your claims mapping policy. You can use tools like [https://jsontostring.com/](https://jsontostring.com/) to stringify your claims mapping policy.
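For example, in Node.js you can produce a correctly escaped `definition` entry with `JSON.stringify`. The policy below is a minimal sample; replace it with your full claims mapping policy:

```javascript
// Minimal sample policy; replace with your full claims mapping policy.
const policy = {
  ClaimsMappingPolicy: {
    Version: 1,
    IncludeBasicClaimSet: "true",
    ClaimsSchema: [
      { Source: "CustomClaimsProvider", ID: "dateOfBirth", JwtClaimType: "birthdate" }
    ]
  }
};

// The "definition" array holds ONE string: the stringified policy.
// Stringifying the whole request body escapes that inner string correctly.
const requestBody = JSON.stringify({
  definition: [JSON.stringify(policy)],
  displayName: "MyClaimsMappingPolicy",
  isOrganizationDefault: false
});

console.log(requestBody);
```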
+
+## Next steps
+
+- Learn how to [create and register a custom extension and API endpoint](custom-extension-get-started.md).
+- To learn how to customize the claims emitted in tokens for a specific application in their tenant using PowerShell, see [How to: Customize claims emitted in tokens for a specific app in a tenant](active-directory-claims-mapping.md)
+- To learn how to customize claims issued in the SAML token through the Azure portal, see [How to: Customize claims issued in the SAML token for enterprise applications](active-directory-saml-claims-customization.md)
+- To learn more about extension attributes, see [Using directory extension attributes in claims](active-directory-schema-extensions.md).
active-directory Custom Extension Configure Saml App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/custom-extension-configure-saml-app.md
+
+ Title: Source claims from an external store (SAML app)
+
+description: Use a custom claims provider to augment tokens with claims from an external identity system. Configure a SAML app to receive tokens with external claims.
++++++++ Last updated : 03/06/2023+++
+#Customer intent: As an application developer, I want to source claims from a data store that is external to Azure Active Directory.
++
+# Configure a SAML app to receive tokens with claims from an external store (preview)
+
+This article describes how to configure a SAML application to receive tokens with external claims from your custom claims provider.
+
+## Prerequisites
+
+Before configuring a SAML application to receive tokens with external claims, first follow these sections:
+
+- [Create a custom claims provider API](custom-extension-get-started.md#step-1-create-an-azure-function-app)
+- [Register a custom claims extension](custom-extension-get-started.md#step-2-register-a-custom-extension)
+
+## Configure a SAML application that receives enriched tokens
+
+Individual app administrators or owners can use a custom claims provider to enrich tokens for existing or new applications. These apps can receive tokens in either [JWT (for OpenID Connect)](./custom-extension-get-started.md) or SAML format.
+
+The following steps are for registering a demo [XRayClaims](https://adfshelp.microsoft.com/ClaimsXray/TokenRequest) application so you can test whether it can receive a token with enriched claims.
+
+### Add a new SAML application
+
+Add a new, non-gallery SAML application in your tenant:
+
+1. In the [Azure portal](https://portal.azure.com), go to **Azure Active Directory** and then **Enterprise applications**. Select **New application** and then **Create your own application**.
+
+1. Add a name for the app. For example, **AzureADClaimsXRay**. Select the **Integrate any other application you don't find in the gallery (Non-gallery)** option and select **Create**.
+
+### Configure single sign-on with SAML
+
+Set up single sign-on for the app:
+
+1. In the **Overview** page, select **Set up Single Sign-On** and then **SAML**. Select **Edit** in **Basic SAML Configuration**.
+
+1. Select **Add identifier** and add **urn:microsoft:adfs:claimsxray** as the identifier. If that identifier is already used by another application in your organization, you can use an alternative like **urn:microsoft:adfs:claimsxray12**.
+
+1. Select **Add reply URL** and add `https://adfshelp.microsoft.com/ClaimsXray/TokenResponse` as the reply URL.
+
+1. Select **Save**.
+
+### Configure claims
+
+Attributes returned by your custom claims provider API aren't automatically included in tokens returned by Azure AD. You need to configure your application to reference attributes returned by the custom claims provider and return them as claims in tokens.
+
+1. On the **Enterprise applications** configuration page for that new app, go to the **Single sign-on** pane.
+
+1. Select **Edit** in the **Attributes & Claims** section.
+
+1. Expand the **Advanced settings** section.
+
+1. Select **Configure** for **Custom claims provider**.
+
+1. Select the custom extension you [registered previously](custom-extension-get-started.md#step-2-register-a-custom-extension) in the **Custom claims provider** dropdown. Select **Save**.
+
+1. Select **Add new claim** to add a new claim.
+
+1. Provide a name to the claim you want to be issued, for example "DoB". Optionally set a namespace URI.
+
+1. For **Source**, select **Attribute** and pick the attribute provided by the custom claims provider from the **Source attribute** dropdown. The attributes shown are those the custom claims provider makes available in your custom claims provider configuration. Attributes provided by the custom claims provider are prefixed with **customclaimsprovider**. For example, **customclaimsprovider.DateOfBirth** and **customclaimsprovider.CustomRoles**. These claims can be single or multi-valued, depending on your API response.
+
+1. Select **Save** to add the claim to the SAML token configuration.
+
+1. Close the **Manage claim** and **Attributes & Claims** windows.
+
+### Assign a user or group to the app
+
+Before testing the user sign-in, you must assign a user or group of users to the app. If you don't, the `AADSTS50105 - The signed in user is not assigned to a role for the application` error is returned when signing in.
+
+1. In the application **Overview** page, select **Assign users and groups** under **Getting started**.
+
+1. In the **Users and groups** page, select **Add user/group**.
+
+1. Search for and select the user to sign into the app. Select the **Assign** button.
+
+### Test the application
+
+Test that the token is being enriched for users signing in to the application:
+
+1. In the app overview page, select **Single sign-on** in the left nav bar.
+
+1. Scroll down and select **Test** under **Test single sign-on with {app name}**.
+
+1. Select **Test sign in** and sign in. At the end of your sign-in, you should see the Token response Claims X-ray tool. The claims you configured to appear in the token should all be listed if they have non-null values, including any that use the custom claims provider as a source.
++
+## Next steps
+
+[Troubleshoot your custom claims provider API](custom-extension-troubleshoot.md).
+
+View the [Authentication Events Trigger for Azure Functions sample app](https://github.com/Azure/microsoft-azure-webJobs-extensions-authentication-events).
+
+<!-- For information on the HTTP request and response formats, read the [protocol reference](custom-claims-provider-protocol-reference.md). -->
active-directory Custom Extension Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/custom-extension-get-started.md
+
+ Title: Get started with custom claims providers (preview)
+
+description: Learn how to develop and register an Azure Active Directory custom extensions REST API. The custom extension allows you to source claims from a data store that is external to Azure Active Directory.
++++++++ Last updated : 03/06/2023+++
+#Customer intent: As an application developer, I want to create and register a custom authentication extensions API so I can source claims from a data store that is external to Azure Active Directory.
++
+# Configure a custom claim provider token issuance event (preview)
+
+This article describes how to configure and set up a custom claims provider with the [token issuance start event](custom-claims-provider-overview.md#token-issuance-start-event-listener) type. This event is triggered right before the token is issued, and allows you to call a REST API to add claims to the token.
+
+This how-to guide demonstrates the token issuance start event with a REST API running in Azure Functions and a sample OpenID Connect application.
+
+## Prerequisites
+
+- Before following this article, read the [custom authentication extensions](custom-extension-overview.md) overview.
+
+- To use Azure services, including Azure Functions, you need an Azure subscription. If you don't have an existing Azure account, you may sign up for a [free trial](https://azure.microsoft.com/free/dotnet/) or use your [Visual Studio Subscription](https://visualstudio.microsoft.com/subscriptions/) benefits when you [create an account](https://account.windowsazure.com/Home/Index).
+
+## Step 1. Create an Azure Function app
+
+In this step, you create an HTTP trigger function API in the Azure portal. The function API is the source of extra claims for your token. Follow these steps to create an Azure Function:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) with your administrator account.
+1. From the Azure portal menu or the **Home** page, select **Create a resource**.
+1. In the **New** page, select **Compute** > **Function App**.
+1. On the **Basics** page, use the function app settings as specified in the following table:
+
+ | Setting | Suggested value | Description |
+ | | - | -- |
+    | **Subscription** | Your subscription | The subscription under which the new function app is created. |
+    | **[Resource Group](/azure/azure-resource-manager/management/overview)** | *myResourceGroup* | Select an existing resource group, or name a new one in which to create your function app. |
+ | **Function App name** | Globally unique name | A name that identifies the new function app. Valid characters are `a-z` (case insensitive), `0-9`, and `-`. |
+ |**Publish**| Code | Option to publish code files or a Docker container. For this tutorial, select **Code**. |
+ | **Runtime stack** | .NET | Your preferred programming language. For this tutorial, select **.NET**. |
+ |**Version**| 6 | Version of the .NET runtime. |
+ |**Region**| Preferred region | Select a [region](https://azure.microsoft.com/regions/) that's near you or near other services that your functions can access. |
+ | **Operating System** | Windows | The operating system is pre-selected for you based on your runtime stack selection. |
+ | **Plan type** | Consumption (Serverless) | Hosting plan that defines how resources are allocated to your function app. |
+
+1. Select **Review + create** to review the app configuration selections and then select **Create**.
+
+1. Select the **Notifications** icon in the upper-right corner of the portal and watch for the **Deployment succeeded** message. Then, select **Go to resource** to view your new function app.
+
+### 1.1 Create an HTTP trigger function
+
+After the Azure Function app is created, create an HTTP trigger function. The HTTP trigger lets you invoke a function with an HTTP request. This HTTP trigger will be referenced and called by your Azure AD custom extension.
+
+1. Within your **Function App**, from the menu select **Functions**.
+1. From the top menu, select **+ Create**.
+1. In the **Create Function** window, leave the **Development environment** property as **Develop in portal**, and then select the **HTTP trigger** template.
+1. Under **Template details**, enter *CustomExtensionsAPI* for the **New Function** property.
+1. For the **Authorization level**, select **Function**.
+1. Select **Create**.
+
+The following screenshot demonstrates how to configure the Azure HTTP trigger function.
++
+### 1.2 Edit the function
+
+1. From the menu, select **Code + Test**
+1. Replace the entire code with the following code snippet.
+
+ ```csharp
+ #r "Newtonsoft.Json"
+ using System.Net;
+ using Microsoft.AspNetCore.Mvc;
+ using Microsoft.Extensions.Primitives;
+ using Newtonsoft.Json;
+ public static async Task<IActionResult> Run(HttpRequest req, ILogger log)
+ {
+ log.LogInformation("C# HTTP trigger function processed a request.");
+ string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
+ dynamic data = JsonConvert.DeserializeObject(requestBody);
+
+ // Read the correlation ID from the Azure AD request
+ string correlationId = data?.data.authenticationContext.correlationId;
+
+ // Claims to return to Azure AD
+ ResponseContent r = new ResponseContent();
+ r.data.actions[0].claims.CorrelationId = correlationId;
+ r.data.actions[0].claims.ApiVersion = "1.0.0";
+ r.data.actions[0].claims.DateOfBirth = "01/01/2000";
+ r.data.actions[0].claims.CustomRoles.Add("Writer");
+ r.data.actions[0].claims.CustomRoles.Add("Editor");
+ return new OkObjectResult(r);
+ }
+
+ public class ResponseContent{
+ [JsonProperty("data")]
+ public Data data { get; set; }
+ public ResponseContent()
+ {
+ data = new Data();
+ }
+ }
+
+ public class Data{
+ [JsonProperty("@odata.type")]
+ public string odatatype { get; set; }
+ public List<Action> actions { get; set; }
+ public Data()
+ {
+ odatatype = "microsoft.graph.onTokenIssuanceStartResponseData";
+ actions = new List<Action>();
+ actions.Add(new Action());
+ }
+ }
+
+ public class Action{
+ [JsonProperty("@odata.type")]
+ public string odatatype { get; set; }
+ public Claims claims { get; set; }
+ public Action()
+ {
+            odatatype = "microsoft.graph.tokenIssuanceStart.provideClaimsForToken";
+ claims = new Claims();
+ }
+ }
+
+ public class Claims{
+ [JsonProperty(NullValueHandling = NullValueHandling.Ignore)]
+ public string CorrelationId { get; set; }
+ [JsonProperty(NullValueHandling = NullValueHandling.Ignore)]
+ public string DateOfBirth { get; set; }
+ public string ApiVersion { get; set; }
+ public List<string> CustomRoles { get; set; }
+ public Claims()
+ {
+ CustomRoles = new List<string>();
+ }
+ }
+ ```
+
+    The code starts by reading the incoming JSON object that Azure AD sends to your API. In this example, it reads the correlation ID value. Then, the code returns a collection of claims to Azure AD, including the original correlation ID, the version of your Azure Function, the date of birth, and custom roles.
+
+1. From the top menu, select **Get Function Url**, and copy the URL. In the next step, the function URL will be used and referred to as `{Function_Url}`.
+
+## Step 2. Register a custom extension
+
+In this step, you configure a custom extension, which will be used by Azure AD to call your Azure function. The custom extension contains information about your REST API endpoint, the claims that it parses from your REST API, and how to authenticate to your REST API. Follow these steps to register a custom extension:
+
+# [Azure portal](#tab/azure-portal)
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Under **Azure services**, select **Azure Active Directory**.
+1. Ensure your user account has the Global Administrator role, or both the Application Administrator and Authentication Extensibility Administrator roles. Otherwise, learn how to [assign a role](../roles/manage-roles-portal.md).
+1. From the menu, select **Enterprise applications**.
+1. Under **Manage**, select **Custom authentication extensions**.
+1. Select **Create a custom extension**.
+1. In **Basics**, select the **tokenIssuanceStart** event and select **Next**.
+1. In **Endpoint Configuration**, fill in the following properties:
+
+ - **Name** - A name for your custom extension. For example, *Token issuance event*.
+ - **Target Url** - The `{Function_Url}` of your Azure Function URL.
+ - **Description** - A description for your custom extensions.
+
+1. Select **Next**.
+
+1. In **API Authentication**, select the **Create new app registration** option to create an app registration that represents your *function app*.
+
+1. Give the app a name, for example **Azure Functions authentication events API**.
+
+1. Select **Next**.
+
+1. In **Claims**, enter the attributes that you expect your custom extension to parse from your REST API and that will be merged into the token. Add the following claims:
+
+ - dateOfBirth
+ - customRoles
+ - apiVersion
+ - correlationId
+
+1. Select **Next** and **Create**, which registers the custom extension and the associated application registration.
+
+# [Microsoft Graph](#tab/microsoft-graph)
+
+Create an Application Registration to authenticate your custom extension to your Azure Function.
+
+1. Sign in to the [Microsoft Graph Explorer](https://aka.ms/ge) using an account whose home tenant is the tenant you wish to manage your custom extension in.
+1. Set the HTTP method to **POST**.
+1. Paste the URL: `https://graph.microsoft.com/v1.0/applications`
+1. Select **Request Body** and paste the following JSON:
+
+ ```json
+ {
+ "displayName": "authenticationeventsAPI"
+ }
+ ```
+
+1. Select **Run Query** to submit the request.
+
+1. Copy the **Application ID** value (*appId*) from the response; it's referred to later as `{authenticationeventsAPI_AppId}`. Also record the object ID of the app (*id*) from the response; it's referred to as `{authenticationeventsAPI_ObjectId}`.
+
+Create a service principal in the tenant for the authenticationeventsAPI app registration:
+
+1. Set the HTTP method to **POST**.
+1. Paste the URL: `https://graph.microsoft.com/v1.0/servicePrincipals`
+1. Select **Request Body** and paste the following JSON:
+
+ ```json
+ {
+ "appId": "{authenticationeventsAPI_AppId}"
+ }
+ ```
+
+1. Select **Run Query** to submit the request.
+
+### Set the App ID URI, access token version, and required resource access
+
+Update the newly created application to set the application ID URI value, the access token version, and the required resource access.
+
+1. Set the HTTP method to **PATCH**.
+1. Paste the URL: `https://graph.microsoft.com/v1.0/applications/{authenticationeventsAPI_ObjectId}`
+1. Select **Request Body** and paste the following JSON:
+
+ Set the application ID URI value in the *identifierUris* property. Replace `{Function_Url_Hostname}` with the hostname of the `{Function_Url}` you recorded earlier.
+
+ Set the `{authenticationeventsAPI_AppId}` value with the App ID generated from the app registration created in the previous step.
+
+    An example value would be `api://authenticationeventsAPI.azurewebsites.net/f4a70782-3191-45b4-b7e5-dd415885dd80`. Take note of this value; it's used in the following steps and is referred to as `{functionApp_IdentifierUri}`.
+
+ ```json
+ {
+ "identifierUris": [
+ "api://{Function_Url_Hostname}/{authenticationeventsAPI_AppId}"
+ ],
+ "api": {
+ "requestedAccessTokenVersion": 2,
+ "acceptMappedClaims": null,
+ "knownClientApplications": [],
+ "oauth2PermissionScopes": [],
+ "preAuthorizedApplications": []
+ },
+ "requiredResourceAccess": [
+ {
+ "resourceAppId": "00000003-0000-0000-c000-000000000000",
+ "resourceAccess": [
+ {
+ "id": "214e810f-fda8-4fd7-a475-29461495eb00",
+ "type": "Role"
+ }
+ ]
+ }
+ ]
+ }
+ ```
+
+1. Select **Run Query** to submit the request.
+
+### Register a custom extension
+
+Next, you register the custom extension by associating it with the app registration for the Azure Function and with your Azure Function endpoint `{Function_Url}`.
+
+1. Set the HTTP method to **POST**.
+1. Paste the URL: `https://graph.microsoft.com/beta/identity/customAuthenticationExtensions`
+1. Select **Request Body** and paste the following JSON:
+
+    Replace `{Function_Url}` with the URL of your Azure Function app. Replace `{functionApp_IdentifierUri}` with the identifier URI used in the previous step.
+
+ ```json
+ {
+ "@odata.type": "#microsoft.graph.onTokenIssuanceStartCustomExtension",
+ "displayName": "onTokenIssuanceStartCustomExtension",
+ "description": "Fetch additional claims from custom user store",
+ "endpointConfiguration": {
+ "@odata.type": "#microsoft.graph.httpRequestEndpoint",
+ "targetUrl": "{Function_Url}"
+ },
+ "authenticationConfiguration": {
+ "@odata.type": "#microsoft.graph.azureAdTokenAuthentication",
+ "resourceId": "{functionApp_IdentifierUri}"
+ },
+ "clientConfiguration": {
+ "timeoutInMilliseconds": 2000,
+ "maximumRetries": 1
+ },
+ "claimsForTokenConfiguration": [
+ {
+ "claimIdInApiResponse": "DateOfBirth"
+ },
+ {
+ "claimIdInApiResponse": "CustomRoles"
+ }
+ ]
+ }
+ ```
+
+1. Select **Run Query** to submit the request.
+
+Record the ID value of the created custom claims provider object. The ID is needed in a later step and is referred to as the `{customExtensionObjectId}`.
+++
+### 2.2 Grant admin consent
+
+After your custom extension is created, you'll be taken to the **Overview** tab of the new custom extension.
+
+From the **Overview** page, select the **Grant permission** button to give admin consent to the registered app, which allows the custom extension to authenticate to your API. The custom extension uses `client_credentials` to authenticate to the Azure Function App using the `Receive custom authentication extension HTTP requests` permission.
+
+The following screenshot shows how to grant permissions.
++
+## Step 3. Configure an OpenID Connect app to receive enriched tokens
+
+To get a token and test the custom extension, you can use the <https://jwt.ms> app. It's a Microsoft-owned web application that displays the decoded contents of a token (the contents of the token never leave your browser).
+
+Follow these steps to register the **jwt.ms** web application:
+
+### 3.1 Register a test web application
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to **Azure Active Directory**.
+1. Select **App registrations**, and then select **New registration**.
+1. Enter a **Name** for the application. For example, **My Test application**.
+1. Under **Supported account types**, select **Accounts in this organizational directory only**.
+1. In the **Select a platform** dropdown in **Redirect URI**, select **Web** and then enter `https://jwt.ms` in the URL text box.
+1. Select **Register** to complete the app registration.
+
+The following screenshot shows how to register the *My Test application*.
++
+### 3.2 Get the application ID
+
+In your app registration, under **Overview**, copy the **Application (client) ID**. The app ID is referred to as the `{App_to_enrich_ID}` in later steps.
++
+### 3.3 Enable implicit flow
+
+The **jwt.ms** test application uses the implicit flow. Enable implicit flow in your *My Test application* registration:
+
+1. Under **Manage**, select **Authentication**.
+1. Under **Implicit grant and hybrid flows**, select the **ID tokens (used for implicit and hybrid flows)** checkbox.
+1. Select **Save**.
+
+### 3.4 Enable your app for a claims mapping policy
+
+A claims mapping policy is used to select which attributes returned from the custom extension are mapped into the token. To allow tokens to be augmented, you must explicitly enable the application registration to accept mapped claims:
+
+1. In your *My Test application* registration, under **Manage**, select **Manifest**.
+1. In the manifest, locate the `acceptMappedClaims` attribute, and set the value to `true`.
+1. Set the `accessTokenAcceptedVersion` to `2`.
+1. Select **Save** to save the changes.
+
+The following JSON snippet demonstrates how to configure these properties.
+
+```json
+{
+ "acceptMappedClaims": true,
+ "accessTokenAcceptedVersion": 2,
+ "appId": "22222222-0000-0000-0000-000000000000",
+}
+```
+
+> [!WARNING]
+> Do not set `acceptMappedClaims` property to `true` for multi-tenant apps, which can allow malicious actors to create claims-mapping policies for your app. Instead [configure a custom signing key](active-directory-claims-mapping.md#configure-a-custom-signing-key).
+
+## Step 4. Assign a custom claims provider to your app
+
+For tokens to be issued with claims incoming from the custom extension, you must assign a custom claims provider to your application. The custom claims provider relies on the custom extension configured with the **token issuance start** event listener. You can choose whether all claims from the custom claims provider, or only a subset, are mapped into the token.
+
+Follow these steps to connect the *My Test application* with your custom extension:
+
+# [Azure portal](#tab/azure-portal)
+
+First assign the custom extension as a custom claims provider source:
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to **Azure Active Directory**.
+1. Select **App registrations**, and find the *My Test application* registration you created.
+1. In the **Overview** page, under **Managed application in local directory**, select **My Test application**.
+1. Under **Manage**, select **Single sign-on**.
+1. Under **Attributes & Claims**, select **Edit**.
+
+ :::image type="content" border="false" source="./media/custom-extension-get-started/open-id-connect-based-sign-on.png" alt-text="Screenshot that shows how to configure app claims." lightbox="./media/custom-extension-get-started/open-id-connect-based-sign-on.png":::
+
+1. Expand the **Advanced settings** menu.
+1. Select **Configure** against **Custom claims provider**.
+1. Expand the **Custom claims provider** drop-down box, and select the *Token issuance event* you created earlier.
+1. Select **Save**.
+
+Next, assign the attributes from the custom claims provider, which should be issued into the token as claims:
+
+1. Select **Add new claim** to add a new claim. Provide a name to the claim you want to be issued, for example **dateOfBirth**.
+1. Under **Source**, select `Attribute`, and choose `customClaimsProvider.DateOfBirth` from the **Source attribute** drop-down box.
+
+ :::image type="content" border="false" source="media/custom-extension-get-started/manage-claim.png" alt-text="Screenshot that shows how to add a claim mapping to your app." lightbox="media/custom-extension-get-started/manage-claim.png":::
+
+1. Select **Save**.
+1. You can repeat this process to add the `customClaimsProvider.customRoles`, `customClaimsProvider.apiVersion` and `customClaimsProvider.correlationId` attributes.
+
+# [Microsoft Graph](#tab/microsoft-graph)
+
+First create an event listener to trigger a custom extension using the token issuance start event:
+
+1. Sign in to the [Microsoft Graph Explorer](https://aka.ms/ge) using an account whose home tenant is the tenant you wish to manage your custom extension in.
+1. Set the HTTP method to **POST**.
+1. Paste the URL: `https://graph.microsoft.com/beta/identity/authenticationEventListeners`
+1. Select **Request Body** and paste the following JSON:
+
+ Replace `{App_to_enrich_ID}` with the app ID of *My Test application* recorded earlier. Replace `{customExtensionObjectId}` with the custom extension ID recorded earlier.
+
+ ```json
+ {
+ "@odata.type": "#microsoft.graph.onTokenIssuanceStartListener",
+ "conditions": {
+ "applications": {
+ "includeAllApplications": false,
+ "includeApplications": [
+ {
+ "appId": "{App_to_enrich_ID}"
+ }
+ ]
+ }
+ },
+ "priority": 500,
+ "handler": {
+ "@odata.type": "#microsoft.graph.onTokenIssuanceStartCustomExtensionHandler",
+ "customExtension": {
+ "id": "{customExtensionObjectId}"
+ }
+ }
+ }
+ ```
+
+1. Select **Run Query** to submit the request.
+
+Next, create the claims mapping policy, which describes which claims can be issued to an application from a custom claims provider:
+
+1. Set the HTTP method to **POST**.
+1. Paste the URL: `https://graph.microsoft.com/v1.0/policies/claimsmappingpolicies`
+1. Select **Request Body** and paste the following JSON:
+
+ ```json
+ {
+ "definition": [
+            "{\"ClaimsMappingPolicy\":{\"Version\":1,\"IncludeBasicClaimSet\":\"true\",\"ClaimsSchema\":[{\"Source\":\"CustomClaimsProvider\",\"ID\":\"DateOfBirth\",\"JwtClaimType\":\"dob\"},{\"Source\":\"CustomClaimsProvider\",\"ID\":\"CustomRoles\",\"JwtClaimType\":\"my_roles\"},{\"Source\":\"CustomClaimsProvider\",\"ID\":\"CorrelationId\",\"JwtClaimType\":\"correlationId\"},{\"Source\":\"CustomClaimsProvider\",\"ID\":\"ApiVersion\",\"JwtClaimType\":\"apiVersion\"},{\"Value\":\"tokenaug_V2\",\"JwtClaimType\":\"policy_version\"}]}}"
+ ],
+ "displayName": "MyClaimsMappingPolicy",
+ "isOrganizationDefault": false
+ }
+ ```
+
+1. Select **Run Query** to submit the request.
+1. Record the `id` value generated in the response; it's referred to later as `{claims_mapping_policy_ID}`.
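For readability, the escaped `definition` string above corresponds to the following unescaped policy. The request itself must use the escaped, single-string form shown in the JSON snippet:

```json
{
  "ClaimsMappingPolicy": {
    "Version": 1,
    "IncludeBasicClaimSet": "true",
    "ClaimsSchema": [
      { "Source": "CustomClaimsProvider", "ID": "DateOfBirth", "JwtClaimType": "dob" },
      { "Source": "CustomClaimsProvider", "ID": "CustomRoles", "JwtClaimType": "my_roles" },
      { "Source": "CustomClaimsProvider", "ID": "CorrelationId", "JwtClaimType": "correlationId" },
      { "Source": "CustomClaimsProvider", "ID": "ApiVersion", "JwtClaimType": "apiVersion" },
      { "Value": "tokenaug_V2", "JwtClaimType": "policy_version" }
    ]
  }
}
```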
+
+Get the `servicePrincipal` object ID:
+
+1. Set the HTTP method to **GET**.
+1. Paste the URL: `https://graph.microsoft.com/v1.0/servicePrincipals(appId='{App_to_enrich_ID}')`. Replace `{App_to_enrich_ID}` with the *My Test Application* app ID.
+1. Select **Run Query** to submit the request.
+1. Record the `id` value; it's referred to later as `{test_App_Service_Principal_ObjectId}`.
+
+Assign the claims mapping policy to the `servicePrincipal` of *My Test Application*:
+
+1. Set the HTTP method to **POST**.
+1. Paste the URL: `https://graph.microsoft.com/v1.0/servicePrincipals/{test_App_Service_Principal_ObjectId}/claimsMappingPolicies/$ref`
+1. Select **Request Body** and paste the following JSON:
+
+ ```json
+ {
+ "@odata.id": "https://graph.microsoft.com/v1.0/policies/claimsMappingPolicies/{claims_mapping_policy_ID}"
+ }
+ ```
+
+1. Select **Run Query** to submit the request.
+++
+## Step 5. Protect your Azure Function
+
+The Azure AD custom extension uses a server-to-server flow to obtain an access token, which is sent in the HTTP `Authorization` header to your Azure function. When publishing your function to Azure, especially in a production environment, you need to validate the token sent in the authorization header.
+
+To protect your Azure function, follow these steps to integrate Azure AD authentication and validate incoming tokens against your *Azure Functions authentication events API* application registration.
+
+> [!NOTE]
+> If the Azure function app is hosted in a different Azure tenant than the tenant in which your custom extension is registered, skip to the [using OpenID Connect identity provider](#51-using-openid-connect-identity-provider) step.
+
+1. In the [Azure portal](https://portal.azure.com), navigate to and select the function app you previously published.
+1. Select **Authentication** in the menu on the left.
+1. Select **Add Identity provider**.
+1. Select **Microsoft** as the identity provider.
+1. Under **App registration**->**App registration type**, select **Pick an existing app registration in this directory** and pick the *Azure Functions authentication events API* app registration you [previously created](#step-2-register-a-custom-extension) when registering the custom claims provider.
+1. Under **Unauthenticated requests**, select **HTTP 401 Unauthorized** as the response.
+1. Unselect the **Token store** option.
+1. Select **Add** to add authentication to your Azure Function.
+
+ :::image type="content" border="true" source="media/custom-extension-get-started/configure-auth-function-app.png" alt-text="Screenshot that shows how to add authentication to your function app." lightbox="media/custom-extension-get-started/configure-auth-function-app.png":::
+
+### 5.1 Using OpenID Connect identity provider
+
+If you configured the [Microsoft identity provider](#step-5-protect-your-azure-function), skip this step. Otherwise, if the Azure Function is hosted under a different tenant than the tenant in which your custom extension is registered, follow these steps to protect your function:
+
+1. In the [Azure portal](https://portal.azure.com), navigate to and select the function app you previously published.
+1. Select **Authentication** in the menu on the left.
+1. Select **Add Identity provider**.
+1. Select **OpenID Connect** as the identity provider.
+1. Provide a name, such as *Contoso Azure AD*.
+1. Under **Metadata entry**, enter the following URL in the **Document URL** field. Replace `{tenantId}` with your Azure AD tenant ID.
+
+ ```http
+ https://login.microsoftonline.com/{tenantId}/v2.0/.well-known/openid-configuration
+ ```
+
+1. Under the **App registration**, enter the application ID (client ID) of the *Azure Functions authentication events API* app registration [you created previously](#step-2-register-a-custom-extension).
+
+1. Go to your Azure AD tenant in which your custom extension is registered, and select **Azure Active Directory** > **App registrations**.
+ 1. Select the *Azure Functions authentication events API* app registration [you created previously](#step-2-register-a-custom-extension).
+ 1. Select **Certificates & secrets** > **Client secrets** > **New client secret**.
+ 1. Add a description for your client secret.
+ 1. Select an expiration for the secret or specify a custom lifetime.
+ 1. Select **Add**.
+ 1. Record the **secret's value** for use in your client application code. This secret value is never displayed again after you leave this page.
+1. Back in the Azure Function, under **App registration**, enter the **Client secret**.
+1. Unselect the **Token store** option.
+1. Select **Add** to add the OpenID Connect identity provider.
+
+## Step 6. Test the application
+
+To test your custom claim provider, follow these steps:
+
+1. Open a new private browser window, then navigate and sign in through the following URL.
+
+ ```http
+ https://login.microsoftonline.com/{tenant-id}/oauth2/v2.0/authorize?client_id={App_to_enrich_ID}&response_type=id_token&redirect_uri=https://jwt.ms&scope=openid&state=12345&nonce=12345
+ ```
+
+1. Replace `{tenant-id}` with your tenant ID, tenant name, or one of your verified domain names. For example, `contoso.onmicrosoft.com`.
+1. Replace `{App_to_enrich_ID}` with the [My Test application registration ID](#31-get-the-application-id).
+1. After logging in, you're presented with your decoded token at `https://jwt.ms`. Validate that the claims from the Azure Function are included in the decoded token, for example, `dateOfBirth`.
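The test URL in the steps above can also be built programmatically. Here's a minimal Python sketch (the tenant and app ID values are placeholders you replace with your own):

```python
from urllib.parse import urlencode

def build_authorize_url(tenant: str, client_id: str) -> str:
    """Build the test sign-in URL from step 6: an id_token request
    that redirects the decoded token to https://jwt.ms."""
    params = {
        "client_id": client_id,
        "response_type": "id_token",
        "redirect_uri": "https://jwt.ms",
        "scope": "openid",
        "state": "12345",
        "nonce": "12345",
    }
    return (
        f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize?"
        + urlencode(params)
    )

# Example with placeholder values:
url = build_authorize_url("contoso.onmicrosoft.com", "11111111-0000-0000-0000-000000000000")
```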
+
+## Next steps
+
+- Learn how to configure a [SAML application](custom-extension-configure-saml-app.md) to receive tokens with claims sourced from an external store.
+
+- Learn more about custom claims providers with the [custom claims provider reference](custom-claims-provider-reference.md) article.
+
+- Learn how to [troubleshoot your custom extensions API](custom-extension-troubleshoot.md).
++
active-directory Custom Extension Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/custom-extension-overview.md
+
+ Title: Custom authentication extension
+
+description: Use Azure Active Directory custom extensions to customize your user's sign-in experience by using REST APIs or outbound webhooks.
+Last updated: 03/06/2023
+#Customer intent: As a developer, I want to learn about custom authentication extensions so that I can augment tokens with claims from an external identity system or role management system.
++
+# Custom authentication extensions (preview)
+
+This article provides an overview of custom authentication extensions for Azure Active Directory (Azure AD). Custom authentication extensions allow you to customize the Azure AD authentication experience by integrating with external systems.
+
+The following diagram depicts the sign-in flow integrated with a custom extension.
++
+1. A user attempts to sign in to an app and is redirected to the Azure AD sign-in page.
+1. Once a user completes a certain step in the authentication, an **event listener** is triggered.
+1. The Azure AD **event listener** service (custom extension) sends an HTTP request to your **REST API endpoint**. The request contains information about the event, the user profile, session data, and other context information.
+1. The **REST API** performs a custom workflow.
+1. The **REST API** returns an HTTP response to Azure AD.
+1. The Azure AD **custom extension** processes the response and customizes the authentication based on the event type and the HTTP response payload.
+1. A **token** is returned to the **app**.
+
+## Custom extension REST API endpoint
+
+When an event fires, Azure AD calls a REST API endpoint you own. The request to the REST API contains information about the event, the user profile, authentication request data, and other context information.
+
+You can use any programming language, framework, and hosting environment to create and host your custom extensions REST API. For a quick way to get started, use a C# Azure Function. Azure Functions lets you run your code in a serverless environment without having to first create a virtual machine (VM) or publish a web application.
+
+Your REST API must handle:
+
+- Token validation for securing the REST API calls.
+- Business logic.
+- Incoming and outgoing validation of HTTP request and response schemas.
+- Auditing and logging.
+- Availability, performance and security controls.
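As an illustration of the business-logic piece, the following is a minimal sketch of how a REST API might build its claims response. It's written in Python for brevity (the documented samples use C# or Node.js), the claim names are illustrative, and the response shape follows the token issuance start event schema used in the get-started guide:

```python
import json

def build_claims_response(claims: dict) -> dict:
    """Build the response body a custom claims provider returns to
    Azure AD for a token issuance start event."""
    return {
        "data": {
            "@odata.type": "microsoft.graph.onTokenIssuanceStartResponseData",
            "actions": [
                {
                    "@odata.type": "microsoft.graph.tokenIssuanceStart.provideClaimsForToken",
                    # Claims fetched from your external system go here.
                    "claims": claims,
                }
            ],
        }
    }

# Example: serialize a response carrying two illustrative claims.
body = json.dumps(build_claims_response({
    "dateOfBirth": "01/01/2000",
    "customRoles": ["Writer", "Editor"],
}))
```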
+
+### Protect your REST API
+
+To ensure the communications between the custom extension and your REST API are secured appropriately, multiple security controls must be applied.
+
+1. When the custom extension calls your REST API, it sends an HTTP `Authorization` header with a bearer token issued by Azure AD.
+1. The bearer token contains an `appid` or `azp` claim. Validate that the respective claim contains the `99045fe1-7639-4a75-9d4a-577b6ca3810f` value. This value ensures that Azure AD is the caller of the REST API.
+ 1. For **V1** Applications, validate the `appid` claim.
+ 1. For **V2** Applications, validate the `azp` claim.
+1. The bearer token `aud` audience claim contains the ID of the associated application registration. Your REST API endpoint needs to validate that the bearer token is issued for that specific audience.
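The caller and audience checks above can be sketched as follows, assuming a JWT library has already validated the token's signature and decoded its claims into a dictionary (Python for illustration; the function name is ours):

```python
# App ID of the Azure AD service that calls custom extension REST APIs.
EXPECTED_CALLER_APP_ID = "99045fe1-7639-4a75-9d4a-577b6ca3810f"

def validate_caller_claims(claims: dict, expected_audience: str) -> bool:
    """Check the caller and audience claims of an already
    signature-validated bearer token payload.

    V1 tokens carry the caller app ID in `appid`; V2 tokens use `azp`.
    """
    caller = claims.get("appid") or claims.get("azp")
    if caller != EXPECTED_CALLER_APP_ID:
        return False
    # The audience must be the app registration associated with this API.
    return claims.get("aud") == expected_audience
```

A real endpoint would combine this with full signature, issuer, and expiry validation from a JWT library before trusting any claim.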
+
+## Custom claims provider
+
+A custom claims provider is a type of custom extension that calls a REST API to fetch claims from external systems. A custom claims provider can be assigned to one or many applications in your directory and maps claims from external systems into tokens.
+
+Learn more about [custom claims providers](custom-claims-provider-overview.md).
+
+## Next steps
+
+- Learn more about [custom claim providers](custom-claims-provider-overview.md).
+- Learn how to [create and register a custom claims provider](custom-extension-get-started.md) with a sample Open ID Connect application.
+- If you already have a custom claims provider registered, you can configure a [SAML application](custom-extension-configure-saml-app.md) to receive tokens with claims sourced from an external store.
active-directory Custom Extension Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/custom-extension-troubleshoot.md
+
+ Title: Troubleshoot a custom claims provider
+
+description: Troubleshoot and monitor your custom claims provider API. Learn how to use logging and Azure AD sign-in logs to find errors and issues in your custom claims provider API.
+Last updated: 03/06/2023
+#Customer intent: As an application developer, I want to find errors and issues in my custom claims provider API.
++
+# Troubleshoot your custom claims provider API (preview)
+
+Authentication events and [custom claims providers](custom-claims-provider-overview.md) allow you to customize the Azure Active Directory (Azure AD) authentication experience by integrating with external systems. For example, you can create a custom claims provider API and configure an [OpenID Connect app](./custom-extension-get-started.md) or [SAML app](custom-extension-configure-saml-app.md) to receive tokens with claims from an external store.
+
+## Error behavior
+
+When an API call fails, the error behavior is as follows:
+
+- For OpenID Connect apps - Azure AD redirects the user back to the client application with an error. A token isn't minted.
+- For SAML apps - Azure AD shows the user an error screen in the authentication experience. The user isn't redirected back to the client application.
+
+The error code sent back to the application or the user is generic. To troubleshoot, check the [sign-in logs](#azure-ad-sign-in-logs) for the [error codes](#error-codes-reference).
+
+## Logging
+
+In order to troubleshoot issues with your custom claims provider REST API endpoint, the REST API must handle logging. Azure Functions and other API development platforms provide in-depth logging solutions. Use those solutions to get detailed information on your API's behavior and troubleshoot your API logic.
+
+## Azure AD sign-in logs
+
+You can also use [Azure AD sign-in logs](/azure/active-directory/reports-monitoring/concept-sign-ins) in addition to your REST API logs and hosting environment diagnostics solutions. Using Azure AD sign-in logs, you can find errors that may affect users' sign-ins. The Azure AD sign-in logs provide information about the HTTP status, error code, execution duration, and number of retries that occurred when the API was called by Azure AD.
+
+Azure AD sign-in logs also integrate with [Azure Monitor](/azure/azure-monitor/). You can set up alerts and monitoring, visualize the data, and integrate with security information and event management (SIEM) tools. For example, you can set up notifications if the number of errors exceeds a certain threshold that you choose.
+
+To access the Azure AD sign-in logs:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. In the **Enterprise apps** experience for your given application, select the **Sign-in logs** tab.
+1. Select the latest sign-in log.
+1. For more details, select the **Authentication Events** tab. Information related to the custom extension REST API call is displayed, including any [error codes](#error-codes-reference).
+
+ :::image type="content" source="media/custom-extension-troubleshoot/authentication-events.png" alt-text="Screenshot that shows the authentication events information." :::
+
+## Error codes reference
+
+Use the following table to diagnose an error code.
+
+|Error code |Error name |Description |
+|-|-|-|
+|1003000 | EventHandlerUnexpectedError | There was an unexpected error when processing an event handler.|
+|1003001 | CustomExtenstionUnexpectedError | There was an unexpected error while calling a custom extension API.|
+|1003002 | CustomExtensionInvalidHTTPStatus | The custom extension API returned an invalid HTTP status code. Check that the API returns an accepted status code defined for that custom extension type.|
+|1003003 | CustomExtensionInvalidResponseBody | There was a problem parsing the custom extension's response body. Check that the API response body is in an acceptable schema for that custom extension type.|
+|1003004 | CustomExtensionThrottlingError | There are too many custom extension requests. This exception is thrown for custom extension API calls when throttling limits are reached.|
+|1003005 | CustomExtensionTimedOut | The custom extension didn't respond within the allowed timeout. Check that your API is responding within the configured timeout for the custom extension. It can also indicate that the access token is invalid. Follow the steps to [call your REST API directly](#call-your-rest-api-directly). |
+|1003006 | CustomExtensionInvalidResponseContentType | The custom extension's response content-type isn't 'application/json'.|
+|1003007 | CustomExtensionNullClaimsResponse | The custom extension API responded with a null claims bag.|
+|1003008 | CustomExtensionInvalidResponseApiSchemaVersion | The custom extension API didn't respond with the same apiSchemaVersion that it was called for.|
+|1003009 | CustomExtensionEmptyResponse | The custom extension API response body was null when that wasn't expected.|
+|1003010 | CustomExtensionInvalidNumberOfActions | The custom extension API response included a different number of actions than those supported for that custom extension type.|
+|1003011 | CustomExtensionNotFound | The custom extension associated with an event listener couldn't be found.|
+|1003012 | CustomExtensionInvalidActionType | The custom extension returned an invalid action type defined for that custom extension type.|
+|1003014 | CustomExtensionIncorrectResourceIdFormat | The _identifierUris_ property in the manifest for the application registration for the custom extension, should be in the format of "api://{fully qualified domain name}/{appid}".|
+|1003015 | CustomExtensionDomainNameDoesNotMatch | The targetUrl and resourceId of the custom extension should have the same fully qualified domain name.|
+|1003016 | CustomExtensionResourceServicePrincipalNotFound | The appId of the custom extension resourceId should correspond to a real service principal in the tenant.|
+|1003017 | CustomExtensionClientServicePrincipalNotFound | The custom extension client service principal isn't found in the tenant.|
+|1003018 | CustomExtensionClientServiceDisabled | The custom extension client service principal is disabled in this tenant.|
+|1003019 | CustomExtensionResourceServicePrincipalDisabled | The custom extension resource service principal is disabled in this tenant.|
+|1003020 | CustomExtensionIncorrectTargetUrlFormat | The target URL is in an improper format. It must be a valid URL that starts with https.|
+|1003021 | CustomExtensionPermissionNotGrantedToServicePrincipal | The service principal doesn't have admin consent for the Microsoft Graph CustomAuthenticationExtensions.Receive.Payload app role (also known as application permission) which is required for the app to receive custom authentication extension HTTP requests.|
+|1003022 | CustomExtensionMsGraphServicePrincipalDisabledOrNotFound |The MS Graph service principal is disabled or not found in this tenant.|
+|1003023 | CustomExtensionBlocked | The endpoint used for the custom extension is blocked by the service.|
+|1003024 | CustomExtensionResponseSizeExceeded | The custom extension response size exceeded the maximum limit.|
+|1003025 | CustomExtensionResponseClaimsSizeExceeded | The total size of claims in the custom extension response exceeded the maximum limit.|
+|1003026 | CustomExtensionNullOrEmptyClaimKeyNotSupported | The custom extension API responded with claims containing a null or empty key.|
+|1003027 | CustomExtensionConnectionError | Error connecting to the custom extension API.|
+
+## Call your REST API directly
+
+Your REST API is protected by an Azure AD access token. You can test your API by obtaining an access token with the [application registration](custom-extension-get-started.md#22-grant-admin-consent) associated with the custom extensions. After you acquire an access token, pass it in the HTTP `Authorization` header. To obtain an access token, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) with your Azure administrator account.
+1. Select **Azure Active Directory** > **App registrations**.
+1. Select the *Azure Functions authentication events API* app registration [you created previously](custom-extension-get-started.md#step-2-register-a-custom-extension).
+1. Copy the [application ID](custom-extension-get-started.md#22-grant-admin-consent).
+1. If you haven't created an app secret, follow these steps:
+ 1. Select **Certificates & secrets** > **Client secrets** > **New client secret**.
+ 1. Add a description for your client secret.
+ 1. Select an expiration for the secret or specify a custom lifetime.
+ 1. Select **Add**.
+ 1. Record the **secret's value** for use in your client application code. This secret value is never displayed again after you leave this page.
+1. From the menu, select **Expose an API** and copy the value of the **Application ID URI**. For example, `api://contoso.azurewebsites.net/11111111-0000-0000-0000-000000000000`.
+1. Open Postman and create a new HTTP query.
+1. Change the **HTTP method** to `POST`.
+1. Enter the following URL. Replace the `{tenantID}` with your tenant ID.
+
+ ```http
+ https://login.microsoftonline.com/{tenantID}/oauth2/v2.0/token
+ ```
+
+1. Under the **Body**, select **form-data** and add the following keys:
+
+ |Key |Value |
+ |||
+ |`grant_type`| `client_credentials`|
+ |`client_id`| The **Client ID** of your application.|
+ |`client_secret`|The **Client Secret** of your application.|
+ |`scope`| The **Application ID URI** of your application, then add `.default`. For example, `api://contoso.azurewebsites.net/11111111-0000-0000-0000-000000000000/.default`|
+
+1. Run the HTTP query and copy the `access_token` into the <https://jwt.ms> web app.
+1. Compare the `iss` with the issuer name you [configured in your API](custom-extension-get-started.md#step-5-protect-your-azure-function).
+1. Compare the `aud` with the client ID you [configured in your API](custom-extension-get-started.md#step-5-protect-your-azure-function).
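The same client-credentials token request can be scripted instead of using Postman. Here's a minimal Python sketch that builds the request URL and form body (the IDs are placeholders; send the body with any HTTP client):

```python
import urllib.parse

def build_token_request(tenant_id: str, client_id: str,
                        client_secret: str, app_id_uri: str):
    """Return the token endpoint URL and URL-encoded form body for a
    client-credentials request, mirroring the Postman keys above."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    form = {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        # The Application ID URI plus `/.default`.
        "scope": app_id_uri + "/.default",
    }
    return url, urllib.parse.urlencode(form)
```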
+
+To test your API directly from Postman, follow these steps:
+
+1. In your REST API, disable the `appid` or `azp` [claim validation](custom-extension-overview.md#protect-your-rest-api). Check out how to [edit the function API](custom-extension-get-started.md#12-edit-the-function) you created earlier.
+1. In Postman, create a new HTTP request.
+1. Set the **HTTP method** to `POST`.
+1. In the **Body**, select **Raw** and then select **JSON**.
+1. Paste the following JSON that imitates the request Azure AD sends to your REST API.
+
+ ```json
+ {
+ "type": "microsoft.graph.authenticationEvent.tokenIssuanceStart",
+ "source": "/tenants/<Your tenant GUID>/applications/<Your Test Application App Id>",
+ "data": {
+ "@odata.type": "microsoft.graph.onTokenIssuanceStartCalloutData",
+ "tenantId": "<Your tenant GUID>",
+ "authenticationEventListenerId": "<GUID>",
+ "customAuthenticationExtensionId": "<Your custom extension ID>",
+ "authenticationContext": {
+ "correlationId": "fcef74ef-29ea-42ca-b150-8f45c8f31ee6",
+ "client": {
+ "ip": "127.0.0.1",
+ "locale": "en-us",
+ "market": "en-us"
+ },
+ "protocol": "OAUTH2.0",
+ "clientServicePrincipal": {
+ "id": "<Your Test Applications servicePrincipal objectId>",
+ "appId": "<Your Test Application App Id>",
+ "appDisplayName": "My Test application",
+ "displayName": "My Test application"
+ },
+ "resourceServicePrincipal": {
+ "id": "<Your Test Applications servicePrincipal objectId>",
+ "appId": "<Your Test Application App Id>",
+ "appDisplayName": "My Test application",
+ "displayName": "My Test application"
+ },
+ "user": {
+ "createdDateTime": "2016-03-01T15:23:40Z",
+ "displayName": "John Smith",
+ "givenName": "John",
+ "id": "90847c2a-e29d-4d2f-9f54-c5b4d3f26471",
+ "mail": "john@contoso.com",
+ "preferredLanguage": "en-us",
+ "surname": "Smith",
+ "userPrincipalName": "john@contoso.com",
+ "userType": "Member"
+ }
+ }
+ }
+ }
+ ```
+
+1. Select **Authorization** and then select **Bearer token**.
+1. Paste the access token you received from Azure AD, and run the query.
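The Postman steps above can also be scripted. A minimal Python sketch, assuming `function_url` points at your published function endpoint and `payload` is the imitation JSON shown above (the helper names are ours):

```python
import json
import urllib.request

def build_headers(access_token: str) -> dict:
    """Request headers, including the bearer token acquired from Azure AD."""
    return {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {access_token}",
    }

def call_custom_extension_api(function_url: str, access_token: str,
                              payload: dict) -> dict:
    """POST the imitation token-issuance-start payload to the REST API
    and return the parsed JSON response."""
    request = urllib.request.Request(
        function_url,
        data=json.dumps(payload).encode("utf-8"),
        headers=build_headers(access_token),
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)
```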
++
+## Common performance improvements
+
+One of the most common issues is that your custom claims provider API doesn't respond within the two-second timeout. If your REST API doesn't respond in subsequent retries, the authentication fails. To improve the performance of your REST API, follow these suggestions:
+
+1. If your API accesses any downstream APIs, cache the access token used to call these APIs, so a new token doesn't have to be acquired on every execution.
+1. Performance issues are often related to downstream services. Add logging that records the processing time of calls to any downstream services.
+1. If you use a cloud provider to host your API, use a hosting plan that keeps the API always "warm". For Azure Functions, it can be either [the Premium plan or Dedicated plan](../../azure-functions/functions-scale.md).
+1. [Run automated integration tests](test-automate-integration-testing.md) for your authentications. You can also use Postman or other tools to test just your API performance.
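The first suggestion can be sketched as a small cache. This sketch assumes an `acquire` callable that returns a token and its lifetime in seconds (illustrative; not a specific MSAL API, which has its own built-in caching):

```python
import time

class TokenCache:
    """Cache a downstream-API access token until shortly before it expires,
    so a new token isn't acquired on every function execution."""

    def __init__(self, acquire, skew_seconds=300):
        self._acquire = acquire     # callable returning (token, expires_in_seconds)
        self._skew = skew_seconds   # refresh this long before actual expiry
        self._token = None
        self._expires_at = 0.0

    def get(self) -> str:
        now = time.monotonic()
        if self._token is None or now >= self._expires_at:
            token, expires_in = self._acquire()
            self._token = token
            self._expires_at = now + expires_in - self._skew
        return self._token
```

Each invocation calls `cache.get()`; the (assumed) `acquire` callable only runs when the cached token is missing or near expiry.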
+
+## Next steps
+
+- Learn how to [create and register a custom claims provider](custom-extension-get-started.md) with a sample Open ID Connect application.
+- If you already have a custom claims provider registered, you can configure a [SAML application](custom-extension-configure-saml-app.md) to receive tokens with claims sourced from an external store.
+- Learn more about custom claims providers with the [custom claims provider reference](custom-claims-provider-reference.md) article.
active-directory Howto Add Terms Of Service Privacy Statement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-add-terms-of-service-privacy-statement.md
Previously updated : 09/27/2021 Last updated : 03/07/2023
Follow these steps in the Azure portal.
1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a> and select the correct Azure AD tenant (not B2C). 2. Navigate to the **App registrations** section and select your app.
-3. Under **Manage**, select **Branding**.
+3. Under **Manage**, select **Branding & properties**.
4. Fill out the **Terms of service URL** and **Privacy statement URL** fields. 5. Select **Save**.
Follow these steps in the Azure portal.
If you prefer to modify the app object JSON directly, you can use the manifest editor in the Azure portal or Application Registration Portal to include links to your app's terms of service and privacy statement.
-1. Navigating to the **App Registrations** section and select your app.
+1. Navigate to the **App Registrations** section and select your app.
2. Open the **Manifest** pane. 3. Press Ctrl+F and search for "informationalUrls", then fill in the information.
-4. Save your changes.
+4. Save your changes by downloading the app manifest, modifying it, and uploading it.
```json "informationalUrls": {
If you prefer to modify the app object JSON directly, you can use the manifest e
### <a name="msgraph-rest-api"></a>Using the Microsoft Graph API
-To programmatically update all your apps, you can use the Microsoft Graph API to update all your apps to include links to the terms of service and privacy statement documents.
+To programmatically [update your app](/graph/api/application-update?view=graph-rest-1.0&tabs=http), you can use the Microsoft Graph API to update all your apps to include links to the terms of service and privacy statement documents.
```
-PATCH https://graph.microsoft.com/v1.0/applications/{application id}
+PATCH https://graph.microsoft.com/v1.0/applications/{applicationObjectId}
{
-    "appId": "{your application id}",
+    "appId": "{your application object id}",
    "info": {         "termsOfServiceUrl": "<your_terms_of_service_url>",         "supportUrl": null,
active-directory Howto Authenticate Service Principal Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-authenticate-service-principal-powershell.md
multiple Previously updated : 11/09/2022 Last updated : 03/07/2023
Sleep 20
New-AzRoleAssignment -RoleDefinitionName Reader -ServicePrincipalName $sp.AppId ```
-The example sleeps for 20 seconds to allow some time for the new service principal to propagate throughout Azure AD. If your script doesn't wait long enough, you'll see an error stating: "Principal {ID} does not exist in the directory {DIR-ID}." To resolve this error, wait a moment then run the **New-AzRoleAssignment** command again.
+The example sleeps for 20 seconds to allow some time for the new service principal to propagate throughout Azure AD. If your script doesn't wait long enough, you'll see an error stating: "Principal {ID} doesn't exist in the directory {DIR-ID}." To resolve this error, wait a moment then run the **New-AzRoleAssignment** command again.
You can scope the role assignment to a specific resource group by using the **ResourceGroupName** parameter. You can scope to a specific resource by also using the **ResourceType** and **ResourceName** parameters.
active-directory Reply Url https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reply-url.md
To add a redirect URI that uses the `http` scheme with the `127.0.0.1` loopback
## Restrictions on wildcards in redirect URIs
-Wildcard URIs like `https://*.contoso.com` may seem convenient, but should be avoided due to security implications. According to the OAuth 2.0 specification ([section 3.1.2 of RFC 6749](https://tools.ietf.org/html/rfc6749#section-3.1.2)), a redirection endpoint URI must be an absolute URI.
+Wildcard URIs like `https://*.contoso.com` may seem convenient, but should be avoided due to security implications. According to the OAuth 2.0 specification ([section 3.1.2 of RFC 6749](https://tools.ietf.org/html/rfc6749#section-3.1.2)), a redirection endpoint URI must be an absolute URI. As such, when a configured wildcard URI matches a redirect URI, query strings and fragments in the redirect URI are stripped.
Wildcard URIs are currently unsupported in app registrations configured to sign in personal Microsoft accounts and work or school accounts. Wildcard URIs are allowed, however, for apps that are configured to sign in only work or school accounts in an organization's Azure AD tenant.
active-directory Test Throttle Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/test-throttle-service-limits.md
Previously updated : 11/09/2022 Last updated : 03/07/2023 #Customer intent: As a developer, I want to understand the throttling and service limits I might hit so that I can test my app without interruption.
# Throttling and service limits to consider for testing As a developer, you want to test your application before releasing it to production. When testing applications protected by the Microsoft identity platform, you should set up an Azure Active Directory (Azure AD) environment and tenant to be used for testing.
-Applications that integrate with Microsoft identity platform require directory objects (such as app registrations, service principals, groups, and users) to be created and managed in an Azure AD tenant. Any production tenant settings that affect your app's behavior should be replicated in the test tenant. Populate your test tenant with the needed conditional access, permission grant, claims mapping, token lifetime, and token issuance policies. Your application may also use Azure resources such as compute or storage, which need to be added to the test environment. Your test environment may require a lot of resources, depending on the app to be tested.
+Applications that integrate with Microsoft identity platform require directory objects (such as app registrations, service principals, groups, and users) to be created and managed in an Azure AD tenant. Any production tenant settings that affect your app's behavior should be replicated in the test tenant. Populate your test tenant with the needed conditional access, permission grant, claims mapping, token lifetime, and token issuance policies. Your application may also use Azure resources such as compute or storage, which need to be added to the test environment. Your test environment may require numerous resources, depending on the app to be tested.
In order to ensure reliable usage of services by all customers, Azure AD and other services limit the number of resources that can be created per customer and per tenant. When setting up a test environment and deploying directory objects and Azure resources, you may hit some of these service limits and quotas.
The following table lists Azure AD service limits to consider when setting up a
| Category | Limit | |-|-| | Tenants | A single user can create a maximum of 200 directories.|
-| Resources | <ul><li>A maximum of 50,000 Azure AD resources can be created in a single tenant by users of the Free edition of Azure Active Directory by default. If you have at least one verified domain, the default Azure AD service quota for your organization is extended to 300,000 Azure AD resources. Azure AD service quota for organizations created by self-service sign-up remains 50,000 Azure AD resources even after you performed an internal admin takeover and the organization is converted to a managed tenant with at least one verified domain. This service limit is unrelated to the pricing tier limit of 500,000 resources on the Azure AD pricing page. To go beyond the default quota, you must contact Microsoft Support.</li><li>A non-admin user can create no more than 250 Azure AD resources. Both active resources and deleted resources that are available to restore count toward this quota. Only deleted Azure AD resources that were deleted fewer than 30 days ago are available to restore. Deleted Azure AD resources that are no longer available to restore count toward this quota at a value of one-quarter for 30 days. If you have developers who are likely to repeatedly exceed this quota in the course of their regular duties, you can create and assign a custom role with permission to create a limitless number of app registrations.</li></ul>|
+| Resources | <ul><li>A maximum of 50,000 Azure AD resources can be created in a single tenant by users of the Free edition of Azure Active Directory by default. If you've at least one verified domain, the default Azure AD service quota for your organization is extended to 300,000 Azure AD resources. Azure AD service quota for organizations created by self-service sign-up remains 50,000 Azure AD resources even after you performed an internal admin takeover and the organization is converted to a managed tenant with at least one verified domain. This service limit is unrelated to the pricing tier limit of 500,000 resources on the Azure AD pricing page. To go beyond the default quota, you must contact Microsoft Support.</li><li>A non-admin user can create no more than 250 Azure AD resources. Both active resources and deleted resources that are available to restore count toward this quota. Only deleted Azure AD resources that were deleted fewer than 30 days ago are available to restore. Deleted Azure AD resources that are no longer available to restore count toward this quota at a value of one-quarter for 30 days. If you have developers who are likely to repeatedly exceed this quota in the course of their regular duties, you can create and assign a custom role with permission to create a limitless number of app registrations.</li></ul>|
| Applications| <ul><li>A user, group, or service principal can have a maximum of 1,500 app role assignments.</li><li>A user can only have a maximum of 48 apps where they have username and password credentials configured.</li></ul>|
| Application manifest| A maximum of 1200 entries can be added in the Application Manifest. |
| Groups | <ul><li>A non-admin user can create a maximum of 250 groups in an Azure AD organization. Any Azure AD admin who can manage groups in the organization can also create an unlimited number of groups (up to the Azure AD object limit). If you assign a role to remove the limit for a user, assign them to a less privileged built-in role such as User Administrator or Groups Administrator.</li><li>An Azure AD organization can have a maximum of 5000 dynamic groups.</li><li>A maximum of 300 role-assignable groups can be created in a single Azure AD organization (tenant).</li><li>Any number of Azure AD resources can be members of a single group.</li><li>A user can be a member of any number of groups.</li></ul>|
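The non-admin resource quota arithmetic in the table above can be sketched as follows. This is an illustrative Python helper, not part of any Azure SDK; the weights come directly from the table (active and restorable deleted resources count fully, deleted resources past the 30-day restore window count at one-quarter for a further 30 days).

```python
# Sketch of the non-admin Azure AD resource quota described in the table.
# Function names are ours, for illustration only.

NON_ADMIN_QUOTA = 250  # per the table: max Azure AD resources a non-admin can create

def deleted_resource_weight(days_since_deletion):
    """Quota weight of a deleted Azure AD resource, per the table's rules."""
    if days_since_deletion < 30:   # still restorable: counts fully
        return 1.0
    if days_since_deletion < 60:   # no longer restorable: one-quarter for 30 more days
        return 0.25
    return 0.0                     # no longer counts toward the quota

def quota_usage(active_count, deletion_ages_in_days):
    """Active resources plus weighted deleted resources."""
    return active_count + sum(deleted_resource_weight(d) for d in deletion_ages_in_days)

usage = quota_usage(200, [5, 40, 90])   # 200 + 1.0 + 0.25 + 0.0
print(usage, usage <= NON_ADMIN_QUOTA)  # 201.25 True
```

A developer repeatedly near this limit is exactly the case where the table suggests a custom role with unlimited app-registration permission.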
active-directory Security Operations Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-introduction.md
Microsoft has many products and services that enable you to customize your IT environment
* [Monitor sign-ins with the Azure AD sign-in log](../reports-monitoring/concept-all-sign-ins.md) * [Audit activity reports in the Azure Active Directory portal](../reports-monitoring/concept-audit-logs.md) * [Investigate risk with Azure Active Directory Identity Protection](../identity-protection/howto-identity-protection-investigate-risk.md)
- * [Connect Azure AD Identity Protection data to Microsoft Sentinel](../../sentinel/data-connectors-reference.md#azure-active-directory-identity-protection)
+ * [Connect Azure AD Identity Protection data to Microsoft Sentinel](../../sentinel/data-connectors/azure-active-directory-identity-protection.md)
* Active Directory Domain Services (AD DS)
active-directory Manage Workflow Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/manage-workflow-properties.md
Previously updated : 01/31/2022 Last updated : 03/07/2023
You can update the following basic information without creating a new workflow.
If you change any other parameters, a new version is required to be created as outlined in the [Managing workflow versions](manage-workflow-tasks.md) article.
-If done via the Azure portal, the new version is created automatically. If done using Microsoft Graph, you will have to manually create a new version of the workflow. For more information, see [Edit the properties of a workflow using Microsoft Graph](#edit-the-properties-of-a-workflow-using-microsoft-graph).
+If done via the Azure portal, the new version is created automatically. If done using Microsoft Graph, you must manually create a new version of the workflow. For more information, see [Edit the properties of a workflow using Microsoft Graph](#edit-the-properties-of-a-workflow-using-microsoft-graph).
## Edit the properties of a workflow using the Azure portal
-To edit the properties of a workflow using the Azure portal, you'll do the following steps:
+To edit the properties of a workflow using the Azure portal, you do the following steps:
1. Sign in to the [Azure portal](https://portal.azure.com).
To edit the properties of a workflow using the Azure portal, you'll do the following
1. On the left menu, select **Workflows (Preview)**.
-1. Here you'll see a list of all of your current workflows. Select the workflow that you want to edit.
+1. Here you see a list of all of your current workflows. Select the workflow that you want to edit.
:::image type="content" source="media/manage-workflow-properties/manage-list.png" alt-text="Screenshot of the manage workflow list.":::
active-directory Howto Export Risk Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-export-risk-data.md
Azure Event Hubs can look at incoming data from sources like Azure AD Identity P
## Other options
-Organizations can choose to [connect Azure AD data to Microsoft Sentinel](../../sentinel/data-connectors-reference.md#azure-active-directory-identity-protection) as well for further processing.
+Organizations can choose to [connect Azure AD data to Microsoft Sentinel](../../sentinel/data-connectors/azure-active-directory-identity-protection.md) as well for further processing.
Organizations can use the [Microsoft Graph API to programmatically interact with risk events](howto-identity-protection-graph-api.md).
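As a sketch of the programmatic access mentioned above, the following builds a Microsoft Graph query URL for risk detections. No request is sent here, and the endpoint path should be verified against the current Graph reference; the helper name is ours.

```python
# Build (but do not send) a Microsoft Graph risk-detections query URL.
# An HTTP client and an OAuth bearer token would be needed to actually call it.
from urllib.parse import urlencode

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def risk_detections_url(risk_level, top=50):
    # OData $filter narrows by risk level; $top caps the page size.
    params = {"$top": str(top), "$filter": f"riskLevel eq '{risk_level}'"}
    return f"{GRAPH_BASE}/identityProtection/riskDetections?{urlencode(params)}"

url = risk_detections_url("high")
print(url)
```

The caller would send this URL with an `Authorization: Bearer <token>` header using any HTTP client.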
Organizations can use the [Microsoft Graph API to programmatically interact with
- [What is Azure Active Directory monitoring?](../reports-monitoring/overview-monitoring.md) - [Install and use the log analytics views for Azure Active Directory](../reports-monitoring/howto-install-use-log-analytics-views.md)-- [Connect data from Azure Active Directory (Azure AD) Identity Protection](../../sentinel/data-connectors-reference.md#azure-active-directory-identity-protection)
+- [Connect data from Azure Active Directory (Azure AD) Identity Protection](../../sentinel/data-connectors/azure-active-directory-identity-protection.md)
- [Azure Active Directory Identity Protection and the Microsoft Graph PowerShell SDK](howto-identity-protection-graph-api.md) - [Tutorial: Stream Azure Active Directory logs to an Azure event hub](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md)
active-directory Overview Identity Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/overview-identity-protection.md
Microsoft doesn't provide specific details about how risk is calculated. Each le
Data from Identity Protection can be exported to other tools for archive and further investigation and correlation. The Microsoft Graph based APIs allow organizations to collect this data for further processing in a tool such as their SIEM. Information about how to access the Identity Protection API can be found in the article, [Get started with Azure Active Directory Identity Protection and Microsoft Graph](howto-identity-protection-graph-api.md)
-Information about integrating Identity Protection information with Microsoft Sentinel can be found in the article, [Connect data from Azure AD Identity Protection](../../sentinel/data-connectors-reference.md#azure-active-directory-identity-protection).
+Information about integrating Identity Protection information with Microsoft Sentinel can be found in the article, [Connect data from Azure AD Identity Protection](../../sentinel/data-connectors/azure-active-directory-identity-protection.md).
Organizations can choose to store data for longer periods by changing diagnostic settings in Azure AD. They can choose to send data to a Log Analytics workspace, archive data to a storage account, stream data to Event Hubs, or send data to a partner solution. Detailed information about how to do so can be found in the article, [How To: Export risk data](howto-export-risk-data.md).
active-directory Cross Tenant Synchronization Configure Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-configure-graph.md
Previously updated : 02/27/2023 Last updated : 03/08/2023
This article describes the key steps to configure cross-tenant synchronization u
![Icon for the source tenant.](./media/common/icon-tenant-source.png)<br/>**Source tenant** -- Azure AD Premium P1 or P2 license-- [Security Administrator](../roles/permissions-reference.md#security-administrator) role to configure cross-tenant access settings-- [Hybrid Identity Administrator](../roles/permissions-reference.md#hybrid-identity-administrator) role to configure cross-tenant synchronization-- [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator) or [Application Administrator](../roles/permissions-reference.md#application-administrator) role to assign users to a configuration and to delete a configuration-- [Global Administrator](../roles/permissions-reference.md#global-administrator) role to consent to required permissions
+- Azure AD Premium P1 or P2 license. For more information, see [License requirements](cross-tenant-synchronization-overview.md#license-requirements).
+- [Security Administrator](../roles/permissions-reference.md#security-administrator) role to configure cross-tenant access settings.
+- [Hybrid Identity Administrator](../roles/permissions-reference.md#hybrid-identity-administrator) role to configure cross-tenant synchronization.
+- [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator) or [Application Administrator](../roles/permissions-reference.md#application-administrator) role to assign users to a configuration and to delete a configuration.
+- [Global Administrator](../roles/permissions-reference.md#global-administrator) role to consent to required permissions.
![Icon for the target tenant.](./media/common/icon-tenant-target.png)<br/>**Target tenant** -- Azure AD Premium P1 or P2 license-- [Security Administrator](../roles/permissions-reference.md#security-administrator) role to configure cross-tenant access settings-- [Global Administrator](../roles/permissions-reference.md#global-administrator) role to consent to required permissions
+- Azure AD Premium P1 or P2 license. For more information, see [License requirements](cross-tenant-synchronization-overview.md#license-requirements).
+- [Security Administrator](../roles/permissions-reference.md#security-administrator) role to configure cross-tenant access settings.
+- [Global Administrator](../roles/permissions-reference.md#global-administrator) role to consent to required permissions.
## Step 1: Sign in to tenants and consent to permissions
active-directory Cross Tenant Synchronization Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-configure.md
Previously updated : 02/06/2023 Last updated : 03/08/2023
By the end of this article, you'll be able to:
![Icon for the source tenant.](./media/common/icon-tenant-source.png)<br/>**Source tenant** -- Azure AD Premium P1 or P2 license-- [Security Administrator](../roles/permissions-reference.md#security-administrator) role to configure cross-tenant access settings-- [Hybrid Identity Administrator](../roles/permissions-reference.md#hybrid-identity-administrator) role to configure cross-tenant synchronization-- [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator) or [Application Administrator](../roles/permissions-reference.md#application-administrator) role to assign users to a configuration and to delete a configuration
+- Azure AD Premium P1 or P2 license. For more information, see [License requirements](cross-tenant-synchronization-overview.md#license-requirements).
+- [Security Administrator](../roles/permissions-reference.md#security-administrator) role to configure cross-tenant access settings.
+- [Hybrid Identity Administrator](../roles/permissions-reference.md#hybrid-identity-administrator) role to configure cross-tenant synchronization.
+- [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator) or [Application Administrator](../roles/permissions-reference.md#application-administrator) role to assign users to a configuration and to delete a configuration.
![Icon for the target tenant.](./media/common/icon-tenant-target.png)<br/>**Target tenant** -- Azure AD Premium P1 or P2 license-- [Security Administrator](../roles/permissions-reference.md#security-administrator) role to configure cross-tenant access settings
+- Azure AD Premium P1 or P2 license. For more information, see [License requirements](cross-tenant-synchronization-overview.md#license-requirements).
+- [Security Administrator](../roles/permissions-reference.md#security-administrator) role to configure cross-tenant access settings.
## Step 1: Plan your provisioning deployment
active-directory Recommendation Mfa From Known Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-mfa-from-known-devices.md
Previously updated : 03/02/2023 Last updated : 03/07/2023
[Azure AD recommendations](overview-recommendations.md) is a feature that provides you with personalized insights and actionable guidance to align your tenant with recommended best practices. - This article covers the recommendation to minimize multi-factor authentication (MFA) prompts from known devices. This recommendation is called `tenantMFA` in the recommendations API in Microsoft Graph. ## Description As an admin, you want to maintain security for your company's resources, but you also want your employees to easily access resources as needed.
-MFA enables you to enhance the security posture of your tenant. While enabling MFA is a good practice, you should try to keep the number of MFA prompts your users have to go through at a minimum. One option you have to accomplish this goal is to **allow users to remember multi-factor authentication on devices they trust**.
-
-The remember multi-factor authentication feature sets a persistent cookie on the browser when a user selects the Don't ask again for X days option at sign-in. The user isn't prompted again for MFA from that browser until the cookie expires. If the user opens a different browser on the same device or clears the cookies, they're prompted again to verify.
+MFA enables you to enhance the security posture of your tenant. While enabling MFA is a good practice, you should try to keep the number of MFA prompts your users have to go through at a minimum. One option you have to accomplish this goal is to **allow users to remember multi-factor authentication on trusted devices**.
-![Remember MFA on trusted devices](./media/recommendation-mfa-from-known-devices\remember-mfa-on-trusted-devices.png)
+The "remember multi-factor authentication on trusted device" feature sets a persistent cookie on the browser when a user selects the "Don't ask again for X days" option at sign-in. The user isn't prompted again for MFA from that browser until the cookie expires. If the user opens a different browser on the same device or clears the cookies, they're prompted again to verify.
For more information, see [Configure Azure AD Multi-Factor Authentication settings](../authentication/howto-mfa-mfasettings.md). -
-## Logic
-
-This recommendation shows up, if you have set the remember multi-factor authentication feature to less than 30 days.
-
+This recommendation shows up if you have set the **remember multi-factor authentication** feature to less than 30 days.
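The reprompt behavior the cookie implies can be sketched as follows. This is purely illustrative: Azure AD evaluates the cookie server-side, and the function and parameter names here are ours.

```python
# Sketch of the "remember multi-factor authentication" reprompt logic:
# a user is reprompted once the configured number of days has elapsed since
# the cookie was set, or whenever no cookie exists (new browser, cleared cookies).
from datetime import datetime, timedelta

def needs_mfa_prompt(cookie_set_at, remember_days, now):
    if cookie_set_at is None:  # new browser or cleared cookies: always prompt
        return True
    return now >= cookie_set_at + timedelta(days=remember_days)

now = datetime(2023, 3, 9)
print(needs_mfa_prompt(datetime(2023, 1, 1), 14, now))  # True: 14-day window expired
print(needs_mfa_prompt(datetime(2023, 3, 1), 90, now))  # False: within 90-day window
```

This also shows why the recommendation favors 90 days over a short window: fewer expirations mean fewer prompts for the same browsers.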
## Value
This recommendation improves your user's productivity and minimizes the sign-in
## Action plan
-1. Review [configure Azure AD Multi-Factor Authentication settings](../authentication/howto-mfa-mfasettings.md).
+1. Review the [How to configure Azure AD Multi-Factor Authentication settings](../authentication/howto-mfa-mfasettings.md) article.
+1. Go to **Azure AD** > **Multifactor authentication** > select the **Additional cloud-based multifactor authentication settings** link.
+
+ ![Screenshot of the configuration settings link in Azure AD multifactor authentication section.](media/recommendation-mfa-from-known-devices/mfa-configuration-settings.png)
+
+1. Adjust the number of days in the **remember multi-factor authentication on trusted device** section to 90 days.
+
+ ![Screenshot of the remember multi-factor authentication on trusted device setting.](./media/recommendation-mfa-from-known-devices/remember-mfa-on-trusted-devices.png)
-2. Set the remember multi-factor authentication feature to 90 days.
-
## Next steps
active-directory Recommendation Migrate Apps From Adfs To Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-migrate-apps-from-adfs-to-azure-ad.md
Previously updated : 03/02/2023 Last updated : 03/07/2023
This article covers the recommendation to migrate apps from Active Directory Federation Services (AD FS)
## Description
-As an admin responsible for managing applications, I want my applications to use Azure AD's security features and maximize their value.
-
-## Logic
-
-If a tenant has apps on AD FS, and any of these apps are deemed 100% migratable, this recommendation shows up.
+As an admin responsible for managing applications, you want your applications to use Azure AD's security features and maximize their value. This recommendation shows up if your tenant has apps on AD FS that can be fully migrated to Azure AD.
## Value
active-directory Recommendation Migrate To Authenticator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-migrate-to-authenticator.md
Previously updated : 03/02/2023 Last updated : 03/07/2023
This article covers the recommendation to migrate users to the Microsoft Authenticator app, which is currently a preview recommendation. This recommendation is called `useAuthenticatorApp` in the recommendations API in Microsoft Graph. - ## Description
-Multi-factor authentication (MFA) is a key component to improve the security posture of your Azure AD tenant. However, while keeping your tenant safe is important, you should also keep an eye on keeping the security related overhead as little as possible on your users.
+Multi-factor authentication (MFA) is a key component to improve the security posture of your Azure AD tenant. While SMS text and voice calls were once commonly used for multi-factor authentication, they are becoming less secure. You also don't want to overwhelm your users with too many MFA methods and messages.
-One possibility to accomplish this goal is to migrate users using SMS or voice call for MFA to use the Microsoft authenticator app.
+One way to ease the burden on your users while also increasing the security of their authentication methods is to migrate anyone using SMS or voice call for MFA to use the Microsoft Authenticator app.
This recommendation appears if Azure AD detects that your tenant has users authenticating using SMS or voice instead of the Microsoft Authenticator app in the past week.
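The detection described above amounts to classifying users by the MFA methods they used recently. A hypothetical sketch follows; the method names and data shape are made up for illustration and don't reflect any Azure AD API.

```python
# Hypothetical sketch: from each user's set of MFA methods used in the past
# week, pick users relying on SMS or voice rather than the Authenticator app.
SMS_OR_VOICE = {"sms", "voiceCall"}  # illustrative method names

def migration_candidates(usage_by_user):
    """Users who used SMS/voice and never the Authenticator app."""
    return sorted(
        user for user, methods in usage_by_user.items()
        if methods & SMS_OR_VOICE and "microsoftAuthenticator" not in methods
    )

users = {
    "alice": {"sms"},
    "bob": {"microsoftAuthenticator"},
    "carol": {"voiceCall", "sms"},
}
print(migration_candidates(users))  # ['alice', 'carol']
```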
-![Screenshot of the Migrate from SMS to Microsoft Authenticator app recommendation.](media/recommendation-migrate-to-authenticator/recommendation-migrate-sms-to-authenticator.png)
- ## Value Push notifications through the Microsoft Authenticator app provide the least intrusive MFA experience for users. This method is the most reliable and secure option because it relies on a data connection rather than telephony.
active-directory Recommendation Remove Unused Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-remove-unused-apps.md
Applications that the recommendation identified appear in the list of **Impacted
1. Take note of the application name and ID that the recommendation identified. 1. Go to **Azure AD** > **App registration** and locate the application that was surfaced as part of this recommendation.+
+ ![Screenshot of the Azure AD app registration page.](media/recommendation-remove-unused-apps/app-registrations-list.png)
+ 1. Determine if the identified application is needed. - If the application is no longer needed, remove it from your tenant. - If the application is needed, we suggest you take appropriate steps to ensure the application is used in intervals of less than 30 days.
active-directory Recommendation Remove Unused Credential From Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-remove-unused-credential-from-apps.md
Applications that the recommendation identified appear in the list of **Impacted
1. Take note of the application name and ID that the recommendation identified.
-1. Go to **Azure AD** > **App registration** and locate the application that was surfaced as part of this recommendation.
+1. Go to **Azure AD** > **App registration** and select the application that was surfaced as part of this recommendation.
+
+ ![Screenshot of the Azure AD app registration page.](media/recommendation-remove-unused-credential-from-apps/app-registrations-list.png)
+ 1. Navigate to the **Certificates & Secrets** section of the app registration.+
+ ![Screenshot of the Certificates and secrets section of Azure AD.](media/recommendation-remove-unused-credential-from-apps/app-certificates-secrets.png)
+ 1. Locate the unused credential and remove it. ## Next steps
active-directory Recommendation Renew Expiring Application Credential https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-renew-expiring-application-credential.md
Applications that the recommendation identified appear in the list of **Impacted
1. Take note of the application name and ID that the recommendation identified. 1. Navigate to **Azure AD** > **App registration** and locate the application for which the credential needs to be rotated.+
+ ![Screenshot of the Azure AD app registration page.](media/recommendation-renew-expiring-application-credential/app-registrations-list.png)
+ 1. Navigate to the **Certificates & Secrets** section of the app registration. 1. Pick the credential type that you want to rotate and navigate to either **Certificates** or **Client Secret** tab and follow the prompts.+
+ ![Screenshot of the Certificates and secrets section of Azure AD.](media/recommendation-renew-expiring-application-credential/app-certificates-secrets.png)
+ 1. Once the certificate or secret is successfully added, update the service code to ensure it works with the new credential and doesn't negatively affect customers. 1. Use the Azure AD sign-in logs to validate that the Key ID of the credential matches the one that was recently added. 1. After validating the new credential, navigate back to **Azure AD** > **App registrations** > **Certificates and Secrets** for the app and remove the old credential.
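The sign-in log validation in the steps above can be sketched as a simple check before the old credential is removed. The log-entry shape here is illustrative only, not the actual Azure AD sign-in log schema.

```python
# Sketch: before removing the old credential, confirm recent sign-ins
# already present the newly added credential's key ID.
def safe_to_remove_old_credential(signin_entries, new_key_id):
    """True only if there are recent entries and all of them used the new key ID."""
    return bool(signin_entries) and all(
        entry.get("keyId") == new_key_id for entry in signin_entries
    )

logs = [{"keyId": "new-kid"}, {"keyId": "new-kid"}]
print(safe_to_remove_old_credential(logs, "new-kid"))                        # True
print(safe_to_remove_old_credential(logs + [{"keyId": "old-kid"}], "new-kid"))  # False
```

An empty log window also returns False, which is the conservative choice: no evidence the new credential works yet.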
active-directory Ardoq Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ardoq-provisioning-tutorial.md
Before we proceed we need to obtain a *Tenant Url* and a *Secret Token*, to conf
1. To create your *tenant URL*, use this template: `https://<YOUR-SUBDOMAIN>.ardoq.com/api/scim/v2` by replacing the placeholder text `<YOUR-SUBDOMAIN>`. This value will be entered in the **Tenant Url** field in the Provisioning tab of your Ardoq application in the Azure portal. >[!NOTE]
- >`<YOUR-SUBDOMAIN>` is the subdomain your organization has chosen to access Ardoq. This is the same URL segment you use when you access the Ardoq app. For example, if your organization accesses Ardoq at `https://acme.ardoq.com` you'd fill in `acme. If you're in the US and access Ardoq at `https://piedpiper.us.ardoq.com` then you'd fill in `piedpiper.us`.
+ >`<YOUR-SUBDOMAIN>` is the subdomain your organization has chosen to access Ardoq. This is the same URL segment you use when you access the Ardoq app. For example, if your organization accesses Ardoq at `https://acme.ardoq.com` you'd fill in `acme`. If you're in the US and access Ardoq at `https://piedpiper.us.ardoq.com` then you'd fill in `piedpiper.us`.
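The tenant URL template above can be captured in a small helper. The URL format comes from the article; the function name is ours.

```python
# Substitute the organization's Ardoq subdomain into the SCIM tenant URL
# template given in the article.
def ardoq_tenant_url(subdomain):
    return f"https://{subdomain}.ardoq.com/api/scim/v2"

print(ardoq_tenant_url("acme"))          # https://acme.ardoq.com/api/scim/v2
print(ardoq_tenant_url("piedpiper.us"))  # https://piedpiper.us.ardoq.com/api/scim/v2
```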
## Step 3. Add Ardoq from the Azure AD application gallery
aks Auto Upgrade Node Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/auto-upgrade-node-image.md
Last updated 02/03/2023
# Automatically upgrade Azure Kubernetes Service cluster node operating system images (preview)
-AKS now supports an exclusive channel dedicated to controlling node-level OS security updates. This channel, referred to as the node OS auto-upgrade channel, works in tandem with the existing [Autoupgrade][auto-upgrade] channel which is used for Kubernetes version upgrades.
+AKS now supports an exclusive channel dedicated to controlling node-level OS security updates. This channel, referred to as the node OS auto-upgrade channel, works in tandem with the existing [auto-upgrade][Autoupgrade] channel which is used for Kubernetes version upgrades.
[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
The following upgrade channels are available:
|||
| `None`| Your nodes won't have security updates applied automatically. This means you're solely responsible for your security updates|N/A|
| `Unmanaged`|OS updates will be applied automatically through the OS built-in patching infrastructure. Newly allocated machines will be unpatched initially and will be patched at some point by the OS's infrastructure|Ubuntu applies security patches through unattended upgrade roughly once a day around 06:00 UTC. Windows and Mariner don't apply security patches automatically, so this option behaves equivalently to `None`|
-| `SecurityPatch`|AKS will update the node's virtual hard disk (VHD) with patches from the image maintainer labeled "security only" on a regular basis. Where possible, patches will also be applied without disruption to existing nodes. Some patches, such as kernel patches, can't be applied to existing nodes without disruption. For such patches, the VHD will be updated and existing machines will be upgraded to that VHD following maintenance windows and surge settings. This option incurs the extra cost of hosting the VHDs in your node resource group.|N/A|
+| `SecurityPatch`|AKS will update the node's virtual hard disk (VHD) with patches from the image maintainer labeled "security only" on a regular basis. Where possible, patches will also be applied without disruption to existing nodes. Some patches, such as kernel patches, can't be applied to existing nodes without disruption. For such patches, the VHD will be updated and existing machines will be upgraded to that VHD following maintenance windows and surge settings. This option incurs the extra cost of hosting the VHDs in your node resource group. If you use this channel, Linux [unattended upgrades][unattended-upgrades] will be disabled by default.|N/A|
| `NodeImage`|AKS will update the nodes with a newly patched VHD containing security fixes and bug fixes on a weekly cadence. The update to the new VHD is disruptive, following maintenance windows and surge settings. No extra VHD cost is incurred when choosing this option. If you use this channel, Linux [unattended upgrades][unattended-upgrades] will be disabled by default.| To set the node OS auto-upgrade channel when creating a cluster, use the *node-os-upgrade-channel* parameter, similar to the following example.
aks Azure Cni Overlay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md
Previously updated : 03/03/2023 Last updated : 03/06/2023 # Configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS)
The traditional [Azure Container Networking Interface (CNI)](./configure-azure-c
With Azure CNI Overlay, the cluster nodes are deployed into an Azure Virtual Network (VNet) subnet, whereas pods are assigned IP addresses from a private CIDR logically different from the VNet hosting the nodes. Pod and node traffic within the cluster use an overlay network, and Network Address Translation (using the node's IP address) is used to reach resources outside the cluster. This solution saves a significant amount of VNet IP addresses and enables you to seamlessly scale your cluster to very large sizes. An added advantage is that the private CIDR can be reused in different AKS clusters, truly extending the IP space available for containerized applications in AKS.
-> [!NOTE]
-> Azure CNI Overlay is currently **_unavailable_** in the **West US** region. All other public regions are supported.
-- ## Overview of overlay networking In overlay networking, only the Kubernetes cluster nodes are assigned IPs from a subnet. Pods receive IPs from a private CIDR that is provided at the time of cluster creation. Each node is assigned a `/24` address space carved out from the same CIDR. Additional nodes that are created when you scale out a cluster automatically receive `/24` address spaces from the same CIDR. Azure CNI assigns IPs to pods from this `/24` space.
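The `/24`-per-node carving described above bounds how many nodes a pod CIDR can serve, and that bound can be computed directly. A sketch using Python's standard `ipaddress` module (the helper name is ours):

```python
# Each node receives a /24 slice of the pod CIDR, so the CIDR's prefix length
# determines how many nodes can be given a slice.
import ipaddress

def max_nodes_for_pod_cidr(pod_cidr):
    network = ipaddress.ip_network(pod_cidr)
    return 2 ** (24 - network.prefixlen)  # number of /24 blocks in the CIDR

print(max_nodes_for_pod_cidr("10.244.0.0/16"))  # 256: a /16 holds 256 /24 blocks
```

With up to 256 pods per `/24`, this is also a quick way to sanity-check pod IP planning before cluster creation.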
The following are additional factors to consider when planning pods IP address s
## Network security groups
-Pod to pod traffic with Azure CNI Overlay is not encapsulated and subnet [network security group][nsgs] rules are applied. If the subnet NSG contains deny rules that would impact the pod CIDR traffic, make sure the following rules are in place to ensure proper cluster functionality (in addition to all [AKS egress requirements][aks-egress]):
+Pod to pod traffic with Azure CNI Overlay is not encapsulated and subnet [network security group][nsg] rules are applied. If the subnet NSG contains deny rules that would impact the pod CIDR traffic, make sure the following rules are in place to ensure proper cluster functionality (in addition to all [AKS egress requirements][aks-egress]):
* Traffic from the node CIDR to the node CIDR on all ports and protocols * Traffic from the node CIDR to the pod CIDR on all ports and protocols (required for service traffic routing)
To learn how to utilize AKS with your own Container Network Interface (CNI) plug
[az-feature-show]: /cli/azure/feature#az-feature-show [aks-egress]: limit-egress-traffic.md [aks-network-policies]: use-network-policies.md
-[nsg]: /azure/virtual-network/network-security-groups-overview
+[nsg]: ../virtual-network/network-security-groups-overview.md
aks Azure Disk Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-csi.md
The output of the command resembles the following example:
- To learn how to use CSI driver for Azure Files, see [Use Azure Files with CSI driver][azure-files-csi]. - To learn how to use CSI driver for Azure Blob storage, see [Use Azure Blob storage with CSI driver][azure-blob-csi]. - For more information about storage best practices, see [Best practices for storage and backups in Azure Kubernetes Service][operator-best-practices-storage].
+- For more information about disk-based storage solutions, see [Disk-based solutions in AKS][disk-based-solutions].
<!-- LINKS - external --> [access-modes]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
The output of the command resembles the following example:
[enable-on-demand-bursting]: ../virtual-machines/disks-enable-bursting.md?tabs=azure-cli [az-premium-ssd]: ../virtual-machines/disks-types.md#premium-ssds [general-purpose-machine-sizes]: ../virtual-machines/sizes-general.md
+[disk-based-solutions]: /azure/cloud-adoption-framework/scenarios/app-platform/aks/storage#disk-based-solutions
aks Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/faq.md
Moving or renaming your AKS cluster and its associated resources isn't supported
Most clusters are deleted upon user request; in some cases, especially where customers are bringing their own Resource Group or doing cross-RG tasks, deletion can take more time or fail. If you have an issue with deletes, double-check that you do not have locks on the RG, that any resources outside of the RG are disassociated from the RG, and so on.
+## Can I restore my cluster after deleting it?
+
+No, you're unable to restore your cluster after deleting it. When you delete your cluster, the associated resource group and all its resources will also be deleted. If you want to keep any of your resources, move them to another resource group before deleting your cluster. If you have the **Owner** or **User Access Administrator** built-in role, you can lock Azure resources to protect them from accidental deletions and modifications. For more information, see [Lock your resources to protect your infrastructure][lock-azure-resources].
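For example, a delete lock on the resource group that holds resources you want to keep can be created with the Azure CLI; the lock and group names here are placeholders:

```azurecli
# Prevent accidental deletion of everything in the resource group.
az lock create \
  --name protect-aks-resources \
  --resource-group myResourceGroup \
  --lock-type CanNotDelete
```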
+
## If I have pod / deployments in state 'NodeLost' or 'Unknown' can I still upgrade my cluster?

You can, but we don't recommend it. Upgrades should be performed when the state of the cluster is known and healthy.
Any patch, including security patches, is automatically applied to the AKS clust
[private-clusters-github-issue]: https://github.com/Azure/AKS/issues/948
[csi-driver]: https://github.com/Azure/secrets-store-csi-driver-provider-azure
[vm-sla]: https://azure.microsoft.com/support/legal/sla/virtual-machines/
+[lock-azure-resources]: ../azure-resource-manager/management/lock-resources.md
aks Monitor Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/monitor-aks.md
Previously updated : 07/29/2021 Last updated : 03/08/2023
-# Monitoring Azure Kubernetes Service (AKS) with Azure Monitor
-This scenario describes how to use Azure Monitor to monitor the health and performance of Azure Kubernetes Service (AKS). It includes collection of telemetry critical for monitoring, analysis and visualization of collected data to identify trends, and how to configure alerting to be proactively notified of critical issues.
+# Monitor Azure Kubernetes Service (AKS) with Azure Monitor
-The [Cloud Monitoring Guide](/azure/cloud-adoption-framework/manage/monitor/) defines the [primary monitoring objectives](/azure/cloud-adoption-framework/strategy/monitoring-strategy#formulate-monitoring-requirements) you should focus on for your Azure resources. This scenario focuses on Health and Status monitoring using Azure Monitor.
+This article describes how to use Azure Monitor to monitor the health and performance of Azure Kubernetes Service (AKS). It includes collection of telemetry critical for monitoring, analysis and visualization of collected data to identify trends, and how to configure alerting to be proactively notified of critical issues.
+
+The [Cloud Monitoring Guide](/azure/cloud-adoption-framework/manage/monitor/) defines the [primary monitoring objectives](/azure/cloud-adoption-framework/strategy/monitoring-strategy#formulate-monitoring-requirements) you should focus on for your Azure resources. This scenario focuses on health and status monitoring using Azure Monitor.
## Scope of the scenario
-This scenario is intended for customers using Azure Monitor to monitor AKS. It does not include the following, although this content may be added in subsequent updates to the scenario.
-- Monitoring of Kubernetes clusters outside of Azure except for referring to existing content for Azure Arc-enabled Kubernetes.
-- Monitoring of AKS with tools other than Azure Monitor except to fill gaps in Azure Monitor and Container Insights.
+This article does *not* include information on the following scenarios:
+
+- Monitoring of Kubernetes clusters outside of Azure except for referring to existing content for Azure Arc-enabled Kubernetes
+- Monitoring of AKS with tools other than Azure Monitor except to fill gaps in Azure Monitor and Container Insights
> [!NOTE]
> Azure Monitor was designed to monitor the availability and performance of cloud resources. While the operational data stored in Azure Monitor may be useful for investigating security incidents, other services in Azure were designed to monitor security. Security monitoring for AKS is done with [Microsoft Sentinel](../sentinel/overview.md) and [Microsoft Defender for Cloud](../defender-for-cloud/defender-for-cloud-introduction.md). See [Monitor virtual machines with Azure Monitor - Security monitoring](../azure-monitor/vm/monitor-virtual-machine-security.md) for a description of the security monitoring tools in Azure and their relationship to Azure Monitor.
>
-> For information on using the security services to monitor AKS, see [Microsoft Defender for Kubernetes - the benefits and features](../defender-for-cloud/defender-for-kubernetes-introduction.md) and [Connect Azure Kubernetes Service (AKS) diagnostics logs to Microsoft Sentinel](../sentinel/data-connectors-reference.md#azure-kubernetes-service-aks).
-## Container insights
-AKS generates [platform metrics and resource logs](monitor-aks-reference.md), like any other Azure resource, that you can use to monitor its basic health and performance. Enable [Container insights](../azure-monitor/containers/container-insights-overview.md) to expand on this monitoring. Container insights is a feature in Azure Monitor that monitors the health and performance of managed Kubernetes clusters hosted on AKS in addition to other cluster configurations. Container insights provides interactive views and workbooks that analyze collected data for a variety of monitoring scenarios.
+> For information on using the security services to monitor AKS, see [Microsoft Defender for Kubernetes - the benefits and features](../defender-for-cloud/defender-for-kubernetes-introduction.md) and [Connect Azure Kubernetes Service (AKS) diagnostics logs to Microsoft Sentinel](../sentinel/data-connectors/azure-kubernetes-service-aks.md).
+
+## Container Insights
-[Prometheus](https://aka.ms/azureprometheus-promio) and [Grafana](https://aka.ms/azureprometheus-promio-grafana) are CNCF backed widely popular open source tools for kubernetes monitoring. AKS exposes many metrics in Prometheus format which makes Prometheus a popular choice for monitoring. [Container insights](../azure-monitor/containers/container-insights-overview.md) has native integration with AKS, collecting critical metrics and logs, alerting on identified issues, and providing visualization with workbooks. It also collects certain Prometheus metrics, and many native Azure Monitor Insights are built-up on top of Prometheus metrics. Container insights complements and completes E2E monitoring of AKS including log collection which Prometheus as stand-alone tool doesn't provide. Many customers use Prometheus integration and Azure Monitor together for E2E monitoring.
+AKS generates [platform metrics and resource logs](monitor-aks-reference.md) that you can use to monitor basic health and performance. Enable [Container Insights](../azure-monitor/containers/container-insights-overview.md) to expand on this monitoring. Container Insights is a feature in Azure Monitor that monitors the health and performance of managed Kubernetes clusters hosted on AKS and provides interactive views and workbooks that analyze collected data for a variety of monitoring scenarios.
-Learn more about using Container insights at [Container insights overview](../azure-monitor/containers/container-insights-overview.md). [Monitor layers of AKS with Container insights](#monitor-layers-of-aks-with-container-insights) below introduces various features of Container insights and the monitoring scenarios that they support.
+[Prometheus](https://aka.ms/azureprometheus-promio) and [Grafana](https://aka.ms/azureprometheus-promio-grafana) are popular CNCF-backed open-source tools for Kubernetes monitoring. AKS exposes many metrics in Prometheus format, which makes Prometheus a popular choice for monitoring. [Container Insights](../azure-monitor/containers/container-insights-overview.md) has native integration with AKS, like collecting critical metrics and logs, alerting on identified issues, and providing visualization with workbooks. It also collects certain Prometheus metrics. Many native Azure Monitor insights are built on top of Prometheus metrics. Container Insights complements and completes E2E monitoring of AKS, including log collection, which Prometheus as a stand-alone tool doesn't provide. You can use Prometheus integration and Azure Monitor together for E2E monitoring.
+To learn more about using Container Insights, see the [Container Insights overview](../azure-monitor/containers/container-insights-overview.md). To learn more about features and monitoring scenarios of Container Insights, see [Monitor layers of AKS with Container Insights](#monitor-layers-of-aks-with-container-insights).
## Configure monitoring
+
The following sections describe the steps required to configure full monitoring of your AKS cluster using Azure Monitor.
+
### Create Log Analytics workspace
-You require at least one Log Analytics workspace to support Container insights and to collect and analyze other telemetry about your AKS cluster. There is no cost for the workspace, but you do incur ingestion and retention costs when you collect data. See [Azure Monitor Logs pricing details](../azure-monitor/logs/cost-logs.md) for details.
-If you're just getting started with Azure Monitor, then start with a single workspace and consider creating additional workspaces as your requirements evolve. Many environments will use a single workspace for all the Azure resources they monitor. You can even share a workspace used by [Microsoft Defender for Cloud and Microsoft Sentinel](../azure-monitor/vm/monitor-virtual-machine-security.md), although many customers choose to segregate their availability and performance telemetry from security data.
+You need at least one Log Analytics workspace to support Container Insights and to collect and analyze other telemetry about your AKS cluster. There's no cost for the workspace, but you do incur ingestion and retention costs when you collect data. See [Azure Monitor Logs pricing details](../azure-monitor/logs/cost-logs.md) for details.
-See [Designing your Azure Monitor Logs deployment](../azure-monitor/logs/workspace-design.md) for details on logic that you should consider for designing a workspace configuration.
+If you're just getting started with Azure Monitor, we recommend starting with a single workspace and creating additional workspaces as your requirements evolve. Many environments will use a single workspace for all the Azure resources they monitor. You can even share a workspace used by [Microsoft Defender for Cloud and Microsoft Sentinel](../azure-monitor/vm/monitor-virtual-machine-security.md), although it's common to segregate availability and performance telemetry from security data.
-### Enable container insights
-When you enable Container insights for your AKS cluster, it deploys a containerized version of the [Log Analytics agent](../agents/../azure-monitor/agents/log-analytics-agent.md) that sends data to Azure Monitor. There are multiple methods to enable it depending whether you're working with a new or existing AKS cluster. See [Enable Container insights](../azure-monitor/containers/container-insights-onboard.md) for prerequisites and configuration options.
+For information on design considerations for a workspace configuration, see [Designing your Azure Monitor Logs deployment](../azure-monitor/logs/workspace-design.md).
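A workspace can be created with the Azure CLI; the resource group, workspace name, and region below are placeholders:

```azurecli
# Create a Log Analytics workspace for the cluster's telemetry.
az monitor log-analytics workspace create \
  --resource-group myResourceGroup \
  --workspace-name myAksWorkspace \
  --location eastus
```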
+### Enable Container Insights
+
+When you enable Container Insights for your AKS cluster, it deploys a containerized version of the [Log Analytics agent](../agents/../azure-monitor/agents/log-analytics-agent.md) that sends data to Azure Monitor. For prerequisites and configuration options, see [Enable Container Insights](../azure-monitor/containers/container-insights-onboard.md).
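For an existing cluster, one way to enable it is through the monitoring add-on with the Azure CLI. The cluster, group, and workspace names here are placeholders, and `<subscription-id>` must be replaced with your own:

```azurecli
# Enable the monitoring add-on and point it at an existing Log Analytics workspace.
az aks enable-addons \
  --addons monitoring \
  --name myAKSCluster \
  --resource-group myResourceGroup \
  --workspace-resource-id "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.OperationalInsights/workspaces/myAksWorkspace"
```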
### Configure collection from Prometheus
-Container insights allows you to send Prometheus metrics to [Azure Monitor managed service for Prometheus](../azure-monitor/essentials/prometheus-metrics-overview.md) or to your Log Analytics workspace without requiring a local Prometheus server. You can analyze this data using Azure Monitor features along with other data collected by Container insights. See [Collect Prometheus metrics with Container insights](../azure-monitor/containers/container-insights-prometheus.md) for details on this configuration.
+Container Insights allows you to send Prometheus metrics to [Azure Monitor managed service for Prometheus](../azure-monitor/essentials/prometheus-metrics-overview.md) or to your Log Analytics workspace without requiring a local Prometheus server. You can analyze this data using Azure Monitor features along with other data collected by Container Insights. For details on this configuration, see [Collect Prometheus metrics with Container Insights](../azure-monitor/containers/container-insights-prometheus.md).
### Collect resource logs
-The logs for AKS control plane components are implemented in Azure as [resource logs](../azure-monitor/essentials/resource-logs.md). Container insights doesn't currently use these logs, so you do need to create your own log queries to view and analyze them. See [How to query logs from Container insights](../azure-monitor/containers/container-insights-log-query.md#resource-logs) for details on the structure of these logs and how to write queries for them.
-You need to create a diagnostic setting to collect resource logs. Create multiple diagnostic settings to send different sets of logs to different locations. See [Create diagnostic settings to send platform logs and metrics to different destinations](../azure-monitor/essentials/diagnostic-settings.md) to create diagnostic settings for your AKS cluster.
+The logs for AKS control plane components are implemented in Azure as [resource logs](../azure-monitor/essentials/resource-logs.md). Container Insights doesn't use these logs, so you need to create your own log queries to view and analyze them. For details on log structure and queries, see [How to query logs from Container Insights](../azure-monitor/containers/container-insights-log-query.md#resource-logs).
+
+You need to create a diagnostic setting to collect resource logs. You can create multiple diagnostic settings to send different sets of logs to different locations. To create diagnostic settings for your AKS cluster, see [Create diagnostic settings to send platform logs and metrics to different destinations](../azure-monitor/essentials/diagnostic-settings.md).
-There is a cost for sending resource logs to a workspace, so you should only collect those log categories that you intend to use. Send logs to an Azure storage account to reduce costs if you need to retain the information but don't require it to be readily available for analysis. See [Resource logs](monitor-aks-reference.md#resource-logs) for a description of the categories that are available for AKS and See [Azure Monitor Logs pricing details](../azure-monitor/logs/cost-logs.md) for details on the cost of ingesting and retaining log data. Start by collecting a minimal number of categories and then modify the diagnostic setting to collect additional categories as your needs increase and as you understand your associated costs.
+There's a cost for sending resource logs to a workspace, so you should only collect those log categories that you intend to use. Start by collecting a minimal number of categories and then modify the diagnostic setting to collect additional categories as your needs increase and as you understand your associated costs. You can send logs to an Azure storage account to reduce costs if you need to retain the information. For a description of the categories that are available for AKS, see [Resource logs](monitor-aks-reference.md#resource-logs). For details on the cost of ingesting and retaining log data, see [Azure Monitor Logs pricing details](../azure-monitor/logs/cost-logs.md).
-If you're unsure about which resource logs to initially enable, use the recommendations in the following table which are based on the most common customer requirements. Enable the other categories if you later find that you require this information.
+If you're unsure which resource logs to initially enable, use the following recommendations:
| Category | Enable? | Destination |
|:|:|:|
If you're unsure about which resource logs to initially enable, use the recommen
| kube-scheduler | Disable | |
| AllMetrics | Enable | Log Analytics workspace |
---
+The recommendations are based on the most common customer requirements. You can enable other categories later if you need to.
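As a sketch, a diagnostic setting covering a minimal set of categories plus AllMetrics could be created with the Azure CLI. The setting name is a placeholder, and the log categories shown are illustrative; match them to the recommendations in the table:

```azurecli
# Send selected AKS control plane log categories and all metrics to a workspace.
az monitor diagnostic-settings create \
  --name aks-diagnostics \
  --resource "$(az aks show --name myAKSCluster --resource-group myResourceGroup --query id -o tsv)" \
  --workspace myAksWorkspace \
  --logs '[{"category": "kube-apiserver", "enabled": true}, {"category": "kube-audit-admin", "enabled": true}]' \
  --metrics '[{"category": "AllMetrics", "enabled": true}]'
```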
## Access Azure Monitor features
-Access Azure Monitor features for all AKS clusters in your subscription from the **Monitoring** menu in the Azure portal or for a single AKS cluster from the **Monitor** section of the **Kubernetes services** menu. The screenshot below shows the cluster's **Monitor** menu.
+Access Azure Monitor features for all AKS clusters in your subscription from the **Monitoring** menu in the Azure portal, or for a single AKS cluster from the **Monitor** section of the **Kubernetes services** menu. The following image shows the **Monitoring** menu for your AKS cluster:
:::image type="content" source="media/monitor-aks/monitoring-menu.png" alt-text="AKS Monitoring menu" lightbox="media/monitor-aks/monitoring-menu.png":::

| Menu option | Description |
|:|:|
-| Insights | Opens container insights for the current cluster. Select **Containers** from the **Monitor** menu to open container insights for all clusters. |
+| Insights | Opens Container Insights for the current cluster. Select **Containers** from the **Monitor** menu to open Container Insights for all clusters. |
| Alerts | Views alerts for the current cluster. |
| Metrics | Open metrics explorer with the scope set to the current cluster. |
| Diagnostic settings | Create diagnostic settings for the cluster to collect resource logs. |
Access Azure Monitor features for all AKS clusters in your subscription from the
| Logs | Open Log Analytics with the scope set to the current cluster to analyze log data and access prebuilt queries. |
| Workbooks | Open workbook gallery for Kubernetes service. |
+## Monitor layers of AKS with Container Insights
--
-## Monitor layers of AKS with Container insights
-Because of the wide variance in Kubernetes implementations, each customer will have unique requirements for AKS monitoring. The approach you take should be based on factors including scale, topology, organizational roles, and multi-cluster tenancy. This section presents a common strategy that is a bottoms-up approach starting from infrastructure up through applications. Each layer has distinct monitoring requirements. These layers are illustrated in the following diagram and discussed in more detail in the following sections.
+Your monitoring approach should be based on your unique workload requirements, and factors such as scale, topology, organizational roles, and multi-cluster tenancy. This section presents a common bottoms-up approach, starting from infrastructure up through applications. Each layer has distinct monitoring requirements.
:::image type="content" source="media/monitor-aks/layers.png" alt-text="AKS layers" border="false":::

### Level 1 - Cluster level components
-Cluster level includes the following components.
+
+The cluster level includes the following component:
| Component | Monitoring requirements |
|:|:|
| Node | Understand the readiness status and performance of CPU, memory, disk and IP usage for each node and proactively monitor their usage trends before deploying any workloads. |
+Use existing views and reports in Container Insights to monitor cluster level components.
-Use existing views and reports in Container Insights to monitor cluster level components. The **Cluster** view gives you a quick view of the performance of the nodes in your cluster including their CPU and memory utilization. Use the **Nodes** view to view the health of each node in addition to the health and performance of the pods running on each. See [Monitor your Kubernetes cluster performance with Container insights](../azure-monitor/containers/container-insights-analyze.md) for details on using this view and analyzing node health and performance. Use the **Subnet IP Usage** view under workbooks to get a quick view of the IP allocation and assignment on each node for a selected time-range.
-
+- Use the **Cluster** view to see the performance of the nodes in your cluster, including CPU and memory utilization.
+- Use the **Nodes** view to see the health of each node and the health and performance of the pods running on them. For more information on analyzing node health and performance, see [Monitor your Kubernetes cluster performance with Container Insights](../azure-monitor/containers/container-insights-analyze.md).
+- Under **Reports**, use the **Node Monitoring** workbooks to analyze disk capacity, disk IO, and GPU usage. For more information about these workbooks, see [Node Monitoring workbooks](../azure-monitor/containers/container-insights-reports.md#node-monitoring-workbooks).
-Use **Node** workbooks in Container Insights to analyze disk capacity and IO in addition to GPU usage. See [Node Monitoring workbooks](../azure-monitor/containers/container-insights-reports.md#node-monitoring-workbooks) for a description of these workbooks.
+ :::image type="content" source="media/monitor-aks/container-insights-cluster-view.png" alt-text="Container Insights cluster view" lightbox="media/monitor-aks/container-insights-cluster-view.png":::
--
-For troubleshooting scenarios, you may need to access the AKS nodes directly for maintenance or immediate log collection. For security purposes, the AKS nodes aren't exposed to the internet but you can `kubectl debug` to SSH to the AKS nodes. See [Connect with SSH to Azure Kubernetes Service (AKS) cluster nodes for maintenance or troubleshooting](ssh.md) for details on this process.
+- Under **Monitoring**, you can select **Workbooks**, then **Subnet IP Usage** to see the IP allocation and assignment on each node for a selected time-range.
+ :::image type="content" source="media/monitor-aks/monitoring-workbooks-subnet-ip-usage.png" alt-text="Container Insights workbooks" lightbox="media/monitor-aks/monitoring-workbooks-subnet-ip-usage.png":::
+For troubleshooting scenarios, you may need to access the AKS nodes directly for maintenance or immediate log collection. For security purposes, the AKS nodes aren't exposed to the internet but you can use the `kubectl debug` command to SSH to the AKS nodes. For more information on this process, see [Connect with SSH to Azure Kubernetes Service (AKS) cluster nodes for maintenance or troubleshooting](ssh.md).
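As a sketch, a node debug session might look like the following; the node name is a placeholder, and the image is an assumption (any image with the shell tools you need will do):

```bash
# Start an interactive debug pod on a specific node; the node's filesystem
# is mounted at /host inside the pod.
kubectl debug node/aks-nodepool1-12345678-vmss000000 -it \
  --image=mcr.microsoft.com/dotnet/runtime-deps:6.0

# Inside the debug pod, switch into the host's root filesystem for a node shell.
chroot /host
```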
### Level 2 - Managed AKS components
-Managed AKS level includes the following components.
+
+The managed AKS level includes the following components:
| Component | Monitoring |
|:|:|
-| API Server | Monitor the status of API server, identifying any increase in request load and bottlenecks if the service is down. |
-| Kubelet | Monitoring Kubelet helps in troubleshooting of pod management issues, pods not starting, nodes not ready or pods getting killed. |
+| API Server | Monitor the status of API server and identify any increase in request load and bottlenecks if the service is down. |
+| Kubelet | Monitor Kubelet to help troubleshoot pod management issues, pods not starting, nodes not ready, or pods getting killed. |
-Azure Monitor and container insights don't yet provide full monitoring for the API server. You can use metrics explorer to view the **Inflight Requests** counter, but you should refer to metrics in Prometheus for a complete view of API Server performance. This includes such values as request latency and workqueue processing time. A Grafana dashboard that provides views of the critical metrics for the API server is available at [Grafana Labs](https://grafana.com/grafan)
+Azure Monitor and Container Insights don't provide full monitoring for the API server.
+- Under **Monitoring**, you can select **Metrics** to view the **Inflight Requests** counter, but you should refer to metrics in Prometheus for a complete view of the API server performance. This includes such values as request latency and workqueue processing time.
+- To see critical metrics for the API server, see [Grafana Labs](https://grafana.com/grafan).
-Use the **Kubelet** workbook to view the health and performance of each kubelet. See [Resource Monitoring workbooks](../azure-monitor/containers/container-insights-reports.md#resource-monitoring-workbooks) for details on this workbook. For troubleshooting scenarios, you can access kubelet logs using the process described at [Get kubelet logs from Azure Kubernetes Service (AKS) cluster nodes](kubelet-logs.md).
+ :::image type="content" source="media/monitor-aks/grafana-api-server.png" alt-text="Grafana API server" lightbox="media/monitor-aks/grafana-api-server.png":::
+- Under **Reports**, use the **Kubelet** workbook to see the health and performance of each kubelet. For more information about these workbooks, see [Resource Monitoring workbooks](../azure-monitor/containers/container-insights-reports.md#resource-monitoring-workbooks). For troubleshooting scenarios, you can access kubelet logs using the process described at [Get kubelet logs from Azure Kubernetes Service (AKS) cluster nodes](kubelet-logs.md).
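For a quick look at kubelet logs, one approach, assuming you already have a host shell on the node (for example via a node debug session), is to read the kubelet unit from the host journal:

```bash
# From a host shell on the AKS node: show the most recent kubelet log lines.
journalctl -u kubelet -o cat --no-pager | tail -n 100
```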
### Resource logs
-Use [log queries with resource logs](../azure-monitor/containers/container-insights-log-query.md#resource-logs) to analyze control plane logs generated by AKS components.
+
+Use [log queries with resource logs](../azure-monitor/containers/container-insights-log-query.md#resource-logs) to analyze control plane logs generated by AKS components.
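As an illustrative sketch, a resource log query can also be run from the Azure CLI; the workspace GUID below is a placeholder, and the query assumes control plane logs are being sent to the workspace via a diagnostic setting:

```azurecli
# Query collected API server logs from the Log Analytics workspace.
az monitor log-analytics query \
  --workspace 00000000-0000-0000-0000-000000000000 \
  --analytics-query "AzureDiagnostics | where Category == 'kube-apiserver' | take 10"
```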
### Level 3 - Kubernetes objects and workloads
-Kubernetes objects and workloads level include the following components.
+
+The Kubernetes objects and workloads level includes the following components:
| Component | Monitoring requirements |
|:|:|
-| Deployments | Monitor actual vs desired state of the deployment and the status and resource utilization of the pods running on them. |
+| Deployments | Monitor actual vs desired state of the deployment and the status and resource utilization of the pods running on them. |
| Pods | Monitor status and resource utilization, including CPU and memory, of the pods running on your AKS cluster. |
-| Containers | Monitor the resource utilization, including CPU and memory, of the containers running on your AKS cluster. |
-
+| Containers | Monitor resource utilization, including CPU and memory, of the containers running on your AKS cluster. |
-Use existing views and reports in Container Insights to monitor containers and pods. Use the **Nodes** and **Controllers** views to view the health and performance of the pods running on them and drill down to the health and performance of their containers. View the health and performance for containers directly from the **Containers** view. See [Monitor your Kubernetes cluster performance with Container insights](../azure-monitor/containers/container-insights-analyze.md) for details on using this view and analyzing container health and performance.
+Use existing views and reports in Container Insights to monitor containers and pods.
+- Use the **Nodes** and **Controllers** views to see the health and performance of the pods running on them and drill down to the health and performance of their containers.
+- Use the **Containers** view to see the health and performance for the containers. For more information on analyzing container health and performance, see [Monitor your Kubernetes cluster performance with Container Insights](../azure-monitor/containers/container-insights-analyze.md#analyze-nodes-controllers-and-container-health).
-Use the **Deployment** workbook in Container insights to view metrics collected for deployments. See [Deployment & HPA metrics with Container insights](../azure-monitor/containers/container-insights-deployment-hpa-metrics.md) for details.
+ :::image type="content" source="media/monitor-aks/container-insights-containers-view.png" alt-text="Container Insights containers view" lightbox="media/monitor-aks/container-insights-containers-view.png":::
-> [!NOTE]
-> Deployments view in Container insights is currently in public preview.
+- Under **Reports**, use the **Deployments** workbook to see deployment metrics. For more information, see [Deployment & HPA metrics with Container Insights](../azure-monitor/containers/container-insights-deployment-hpa-metrics.md).
+ :::image type="content" source="media/monitor-aks/container-insights-deployments-workbook.png" alt-text="Container Insights deployments workbook" lightbox="media/monitor-aks/container-insights-deployments-workbook.png":::
#### Live data
-In troubleshooting scenarios, Container insights provides access to live AKS container logs (stdout/stderror), events, and pod metrics. See [How to view Kubernetes logs, events, and pod metrics in real-time](../azure-monitor/containers/container-insights-livedata-overview.md) for details on using this feature.
+
+In troubleshooting scenarios, Container Insights provides access to live AKS container logs (stdout/stderr), events, and pod metrics. For more information about this feature, see [How to view Kubernetes logs, events, and pod metrics in real-time](../azure-monitor/containers/container-insights-livedata-overview.md).
:::image type="content" source="media/monitor-aks/container-insights-live-data.png" alt-text="Container insights live data" lightbox="media/monitor-aks/container-insights-live-data.png":::
-### Level 4- Applications
-The application level includes the application workloads running in the AKS cluster.
+### Level 4 - Applications
+
+The application level includes the following component:
| Component | Monitoring requirements |
|:|:|
-| Applications | Monitor microservice application deployments to identify application failures and latency issues. Includes such information as request rates, response times, and exceptions. |
+| Applications | Monitor microservice application deployments to identify application failures and latency issues, including information like request rates, response times, and exceptions. |
-Application Insights provides complete monitoring of applications running on AKS and other environments. If you have a Java application, you can provide monitoring without instrumenting your code following [Zero instrumentation application monitoring for Kubernetes - Azure Monitor Application Insights](../azure-monitor/app/kubernetes-codeless.md). For complete monitoring though, you should configure code-based monitoring depending on your application.
+Application Insights provides complete monitoring of applications running on AKS and other environments. If you have a Java application, you can provide monitoring without instrumenting your code by following [Zero instrumentation application monitoring for Kubernetes - Azure Monitor Application Insights](../azure-monitor/app/kubernetes-codeless.md).
-- [ASP.NET Applications](../azure-monitor/app/asp-net.md)
-- [ASP.NET Core Applications](../azure-monitor/app/asp-net-core.md)
-- [.NET Console Applications](../azure-monitor/app/console.md)
+If you want complete monitoring, you should configure code-based monitoring depending on your application:
+
+- [ASP.NET applications](../azure-monitor/app/asp-net.md)
+- [ASP.NET Core applications](../azure-monitor/app/asp-net-core.md)
+- [.NET Console applications](../azure-monitor/app/console.md)
- [Java](../azure-monitor/app/opentelemetry-enable.md?tabs=java)
- [Node.js](../azure-monitor/app/nodejs.md)
- [Python](../azure-monitor/app/opencensus-python.md)
- [Other platforms](../azure-monitor/app/app-insights-overview.md#supported-languages)
-See [What is Application Insights?](../azure-monitor/app/app-insights-overview.md)
+For more information, see [What is Application Insights?](../azure-monitor/app/app-insights-overview.md).
+
+### Level 5 - External components
-### Level 5- External components
-Components external to AKS include the following.
+The components external to AKS include the following:
| Component | Monitoring requirements |
|:---|:---|
| Service Mesh, Ingress, Egress | Metrics based on component. |
| Database and work queues | Metrics based on component. |
-Monitor external components such as Service Mesh, Ingress, Egress with Prometheus and Grafana or other proprietary tools. Monitor databases and other Azure resources using other features of Azure Monitor.
+Monitor external components such as Service Mesh, Ingress, and Egress with Prometheus and Grafana or other proprietary tools. Monitor databases and other Azure resources using other features of Azure Monitor.
-## Analyze metric data with metrics explorer
-Use metrics explorer when you want to perform custom analysis of metric data collected for your containers. Metrics explorer allows you plot charts, visually correlate trends, and investigate spikes and dips in metrics' values. Create a metrics alert to proactively notify you when a metric value crosses a threshold, and pin charts to dashboards for use by different members of your organization.
+## Analyze metric data with the Metrics explorer
-See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this feature. For a list of the platform metrics collected for AKS, see [Monitoring AKS data reference metrics](monitor-aks-reference.md#metrics). When Container insights is enabled for a cluster, [addition metric values](../azure-monitor/containers/container-insights-update-metrics.md) are available.
-
+Use the **Metrics** explorer to perform custom analysis of metric data collected for your containers. It allows you to plot charts, visually correlate trends, and investigate spikes and dips in your metrics values. You can create a metric alert to proactively notify you when a metric value crosses a threshold, and pin charts to dashboards for use by different members of your organization.
+For more information, see [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md). For a list of the platform metrics collected for AKS, see [Monitoring AKS data reference metrics](monitor-aks-reference.md#metrics). When Container Insights is enabled for a cluster, [additional metric values](../azure-monitor/containers/container-insights-update-metrics.md) are available.
## Analyze log data with Log Analytics
-Use Log Analytics when you want to analyze resource logs or dig deeper into the data used to create the views in Container insights. Log Analytics allows you to perform custom analysis of your log data.
-
-See [How to query logs from Container insights](../azure-monitor/containers/container-insights-log-query.md) for details on using log queries to analyze data collected by Container insights. See [Using queries in Azure Monitor Log Analytics](../azure-monitor/logs/queries.md) for information on using these queries and [Log Analytics tutorial](../azure-monitor/logs/log-analytics-tutorial.md) for a complete tutorial on using Log Analytics to run queries and work with their results.
-
-For a list of the tables collected for AKS that you can analyze in metrics explorer, see [Monitoring AKS data reference logs](monitor-aks-reference.md#azure-monitor-logs-tables).
-
-In addition to Container insights data, you can use log queries to analyze resource logs from AKS. For a list of the log categories available, see [AKS data reference resource logs](monitor-aks-reference.md#resource-logs). You must create a diagnostic setting to collect each category as described in [Configure monitoring](#configure-monitoring) before that data will be collected.
+Select **Logs** to use the Log Analytics tool to analyze resource logs or dig deeper into data used to create the views in Container Insights. Log Analytics allows you to perform custom analysis of your log data.
+For more information on Log Analytics and to get started with it, see:
+- [How to query logs from Container Insights](../azure-monitor/containers/container-insights-log-query.md)
+- [Using queries in Azure Monitor Log Analytics](../azure-monitor/logs/queries.md)
+- [Monitoring AKS data reference logs](monitor-aks-reference.md#azure-monitor-logs-tables)
+- [Log Analytics tutorial](../azure-monitor/logs/log-analytics-tutorial.md)
+You can also use log queries to analyze resource logs from AKS. For a list of the log categories available, see [AKS data reference resource logs](monitor-aks-reference.md#resource-logs). You must create a diagnostic setting to collect each category as described in [Configure monitoring](#configure-monitoring) before the data can be collected.
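As an illustration, a log query along these lines can surface recent API server log entries once the corresponding resource log category is being collected (a sketch; the `AzureDiagnostics` table and `log_s` column are the typical destinations for AKS resource logs, but verify the schema in your workspace):

```kusto
AzureDiagnostics
| where Category == "kube-apiserver"
| project TimeGenerated, log_s
| order by TimeGenerated desc
| take 50
```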
## Alerts
-[Alerts in Azure Monitor](../azure-monitor/alerts/alerts-overview.md) proactively notify you of interesting data and patterns in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. There are no preconfigured alert rules for AKS clusters, but you can create your own based on data collected by Container insights.
+
+[Alerts in Azure Monitor](../azure-monitor/alerts/alerts-overview.md) proactively notify you of interesting data and patterns in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. There are no preconfigured alert rules for AKS clusters, but you can create your own based on data collected by Container Insights.
> [!IMPORTANT]
-> Most alert rules have a cost that's dependent on the type of rule, how many dimensions it includes, and how frequently it's run. Refer to **Alert rules** in [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) before you create any alert rules.
+> Most alert rules have a cost dependent on the type of rule, how many dimensions it includes, and how frequently it runs. Refer to **Alert rules** in [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) before creating any alert rules.
+
+### Choose an alert type
+The most common types of alert rules in Azure Monitor are [metric alerts](../azure-monitor/alerts/alerts-metric.md) and [log query alerts](../azure-monitor/alerts/alerts-log-query.md). The type of alert rule that you create for a particular scenario depends on where the data that you want to alert on is located.
-### Choosing the alert type
-The most common types of alert rules in Azure Monitor are [metric alerts](../azure-monitor/alerts/alerts-metric.md) and [log query alerts](../azure-monitor/alerts/alerts-log-query.md). The type of alert rule that you create for a particular scenario will depend on where the data is located that you're alerting on. You may have cases though where data for a particular alerting scenario is available in both Metrics and Logs, and you need to determine which rule type to use.
+You may have cases where data for a particular alerting scenario is available in both **Metrics** and **Logs**, and you need to determine which rule type to use. It's typically the best strategy to use metric alerts instead of log alerts when possible, because metric alerts are more responsive and stateful. You can create a metric alert on any values you can analyze in the Metrics explorer. If the logic for your alert rule requires data in **Logs**, or if it requires more complex logic, then you can use a log query alert rule.
-It's typically the best strategy to use metric alerts instead of log alerts when possible since they're more responsive and stateful. You can create a metric alert on any values you can analyze in metrics explorer. If the logic for your alert rule requires data in Logs, or if it requires more complex logic, then you can use a log query alert rule.
+For example, if you want an alert when an application workload is consuming excessive CPU, you can create a metric alert using the CPU metric. If you need an alert when a particular message is found in a control plane log, then you'll require a log alert.
-For example, if you want to alert when an application workload is consuming excessive CPU then you can create a metric alert using the CPU metric. If you need an alert when a particular message is found in a control plane log, then you'll require a log alert.
### Metric alert rules
-Metric alert rules use the same metric values as metrics explorer. In fact, you can create an alert rule directly from metrics explorer with the data you're currently analyzing. You can use any of the values in [AKS data reference metrics](monitor-aks-reference.md#metrics) for metric alert rules.
-Container insights includes a feature in public preview that creates a recommended set of metric alert rules for your AKS cluster. This feature creates new metric values (also in preview) used by the alert rules that you can also use in metrics explorer. See [Recommended metric alerts (preview) from Container insights](../azure-monitor/containers/container-insights-metric-alerts.md) for details on this feature and on creating metric alerts for AKS.
+Metric alert rules use the same metric values as the Metrics explorer. In fact, you can create an alert rule directly from the Metrics explorer with the data you're currently analyzing. You can use any of the values in [AKS data reference metrics](monitor-aks-reference.md#metrics) for metric alert rules.
+Container Insights includes a feature that creates a recommended set of metric alert rules for your AKS cluster. This feature creates new metric values used by the alert rules that you can also use in the Metrics explorer. For more information, see [Recommended metric alerts (preview) from Container Insights](../azure-monitor/containers/container-insights-metric-alerts.md).
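For example, a metric alert on node CPU could be created from the CLI along these lines (a hedged sketch: the resource names are placeholders, and the metric name and threshold are illustrative; check them against the AKS metrics reference before use):

```azurecli-interactive
# Illustrative only: alert when average node CPU across the cluster exceeds 80%.
aksId=$(az aks show --resource-group myResourceGroup --name myAKSCluster --query id -o tsv)
az monitor metrics alert create \
    --name "node-cpu-high" \
    --resource-group myResourceGroup \
    --scopes $aksId \
    --condition "avg node_cpu_usage_percentage > 80" \
    --window-size 5m \
    --evaluation-frequency 1m
```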
-### Log alerts rules
-Use log alert rules to generate an alert from the results of a log query. This may be data collected by Container insights or from AKS resource logs. See [How to create log alerts from Container insights](../azure-monitor/containers/container-insights-log-alerts.md) for details on log alert rules for AKS and a set of sample queries designed for alert rules. You can also refer to [How to query logs from Container insights](../azure-monitor/containers/container-insights-log-query.md) for details on log queries that could be modified for alert rules.
+### Log alert rules
+
+Use log alert rules to generate an alert from the results of a log query. This may be data collected by Container Insights or from AKS resource logs. For more information, see [How to create log alerts from Container Insights](../azure-monitor/containers/container-insights-log-alerts.md) and [How to query logs from Container Insights](../azure-monitor/containers/container-insights-log-query.md).
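As a sketch, a log alert rule could fire on pods stuck in a failed state using a query like the following (the `KubePodInventory` table is populated by Container Insights; the time window is illustrative):

```kusto
KubePodInventory
| where TimeGenerated > ago(10m)
| where PodStatus == "Failed"
| summarize FailedPodCount = dcount(Name) by ClusterName
```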
### Virtual machine alerts

AKS relies on a Virtual Machine Scale Set that must be healthy to run AKS workloads. You can alert on critical metrics such as CPU, memory, and storage for the virtual machines using the guidance at [Monitor virtual machines with Azure Monitor: Alerts](../azure-monitor/vm/monitor-virtual-machine-alerts.md).

### Prometheus alerts
-For those conditions where Azure Monitor either doesn't have the data required for an alerting condition, or where the alerting may not be responsive enough, you should configure alerts in Prometheus. One example is alerting for the API server. Azure Monitor doesn't collect critical information for the API server including whether it's available or experiencing a bottleneck. You can create a log query alert using the data from the kube-apiserver resource log category, but this can take up to several minutes before you receive an alert which may not be sufficient for your requirements.
+You can configure Prometheus alerts to cover scenarios where Azure Monitor either doesn't have the data required for an alerting condition or the alerting may not be responsive enough. For example, Azure Monitor doesn't collect critical information for the API server. You can create a log query alert using the data from the kube-apiserver resource log category, but it can take up to several minutes before you receive an alert, which may not be sufficient for your requirements. In this case, we recommend configuring Prometheus alerts.
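For instance, a Prometheus alerting rule for API server availability might look like the following sketch (the `job` label depends on your scrape configuration, and the duration is illustrative):

```yaml
groups:
  - name: kube-apiserver.rules
    rules:
      - alert: KubeApiServerDown
        # Fires when no kube-apiserver target has been reachable for 5 minutes.
        expr: up{job="kube-apiserver"} == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Kubernetes API server is unreachable"
```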
## Next steps
-- See [Monitoring AKS data reference](monitor-aks-reference.md) for a reference of the metrics, logs, and other important values created by AKS.
+- For more information about AKS metrics, logs, and other important values, see [Monitoring AKS data reference](monitor-aks-reference.md).
aks Quotas Skus Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/quotas-skus-regions.md
Title: Limits for resources, SKUs, regions
+ Title: Limits for resources, SKUs, and regions in Azure Kubernetes Service (AKS)
description: Learn about the default quotas, restricted node VM SKU sizes, and region availability of the Azure Kubernetes Service (AKS).
Previously updated : 03/25/2021
Last updated : 03/07/2023

# Quotas, virtual machine size restrictions, and region availability in Azure Kubernetes Service (AKS)
This article details the default resource limits for Azure Kubernetes Service (A
All other network, compute, and storage limitations apply to the provisioned infrastructure. For the relevant limits, see [Azure subscription and service limits](../azure-resource-manager/management/azure-subscription-service-limits.md). > [!IMPORTANT]
-> When you upgrade an AKS cluster, extra resources are temporarily consumed. These resources include available IP addresses in a virtual network subnet or virtual machine vCPU quota.
+> When you upgrade an AKS cluster, extra resources are temporarily consumed. These resources include available IP addresses in a virtual network subnet or virtual machine vCPU quota.
> > For Windows Server containers, you can perform an upgrade operation to apply the latest node updates. If you don't have the available IP address space or vCPU quota to handle these temporary resources, the cluster upgrade process will fail. For more information on the Windows Server node upgrade process, see [Upgrade a node pool in AKS][nodepool-upgrade].
The list of supported VM sizes in AKS is evolving with the release of new VM SKU
## Restricted VM sizes
-VM sizes with less than 2 CPUs may not be used with AKS.
+VM sizes with fewer than two CPUs may not be used with AKS.
-Each node in an AKS cluster contains a fixed amount of compute resources such as vCPU and memory. If an AKS node contains insufficient compute resources, pods might fail to run correctly. To ensure the required *kube-system* pods and your applications can be reliably scheduled, AKS requires nodes use VM sizes with at least 2 CPUs.
+Each node in an AKS cluster contains a fixed amount of compute resources such as vCPU and memory. If an AKS node contains insufficient compute resources, pods might fail to run correctly. To ensure that the required *kube-system* pods and your applications can be reliably scheduled, AKS requires that nodes use VM sizes with at least two CPUs.
For more information on VM types and their compute resources, see [Sizes for virtual machines in Azure][vm-skus].
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md
Each number in the version indicates general compatibility with the previous ver
Aim to run the latest patch release of the minor version you're running. For example, if your production cluster is on **`1.17.7`**, **`1.17.8`** is the latest patch version available for the *1.17* series. You should upgrade to **`1.17.8`** as soon as possible to ensure your cluster is fully patched and supported.
+## AKS Kubernetes release calendar
+
+View the upcoming version releases on the AKS Kubernetes release calendar. To see real-time updates of region release status and version release notes, visit the [AKS release status webpage][aks-release]. To learn more about the release status webpage, see [AKS release tracker][aks-tracker].
+
+> [!NOTE]
+> AKS follows 12 months of support for a generally available (GA) Kubernetes version. To read more about our support policy for Kubernetes versioning, please read our [FAQ](https://learn.microsoft.com/azure/aks/supported-kubernetes-versions?tabs=azure-cli#faq).
+
+For the past release history, see [Kubernetes history](https://en.wikipedia.org/wiki/Kubernetes#History).
+
+| K8s version | Upstream release | AKS preview | AKS GA | End of life |
+|---|---|---|---|---|
+| 1.22 | Aug-04-21 | Sept 2021 | Dec 2021 | Dec 2022 |
+| 1.23 | Dec 2021 | Jan 2022 | Apr 2022 | Apr 2023 |
+| 1.24 | Apr-22-22 | May 2022 | Jul 2022 | Jul 2023 |
+| 1.25 | Aug 2022 | Oct 2022 | Dec 2022 | Dec 2023 |
+| 1.26 | Dec 2022 | Feb 2023 | Mar 2023 | Mar 2024 |
+| 1.27 | Apr 2023 | May 2023 | Jun 2023 | Jun 2024 |
+ ## Alias minor version > [!NOTE]
To see what patch you're on, run the `az aks show --resource-group myResourceGro
## Kubernetes version support policy
-AKS defines a generally available version as a version enabled in all SLO or SLA measurements and available in all regions. AKS supports three GA minor versions of Kubernetes:
+AKS defines a GA version as a version enabled in all SLO or SLA measurements and available in all regions. AKS supports three GA minor versions of Kubernetes:
* The latest GA minor version released in AKS (which we'll refer to as N). * Two previous minor versions.
Get-AzAksVersion -Location eastus
-## AKS Kubernetes release calendar
-
-> [!NOTE]
-> AKS follows 12 months of support for a GA Kubernetes version. To read more about our support policy for Kubernetes versioning, please read our [FAQ](https://learn.microsoft.com/azure/aks/supported-kubernetes-versions?tabs=azure-cli#faq).
-
-For the past release history, see [Kubernetes](https://en.wikipedia.org/wiki/Kubernetes#History).
-
-| K8s version | Upstream release | AKS preview | AKS GA | End of life |
-|--|-|--||-|
-| 1.22 | Aug-04-21 | Sept 2021 | Dec 2021 | Dec 2022 |
-| 1.23 | Dec 2021 | Jan 2022 | Apr 2022 | Apr 2023 |
-| 1.24 | Apr-22-22 | May 2022 | Jul 2022 | Jul 2023
-| 1.25 | Aug 2022 | Oct 2022 | Dec 2022 | Dec 2023
-| 1.26 | Dec 2022 | Feb 2023 | Mar 2023 | Mar 2024
-| 1.27 | Apr 2023 | May 2023 | Jun 2023 | Jun 2024
-
-> [!NOTE]
-> To see real-time updates of region release status and version release notes, visit the [AKS release status webpage][aks-release]. To learn more about the release status webpage, see [AKS release tracker][aks-tracker].
- ## FAQ ### How does Microsoft notify me of new Kubernetes versions?
aks Use System Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-system-pools.md
The following limitations apply when you create and manage AKS clusters that sup
## System and user node pools
-For a system node pool, AKS automatically assigns the label **kubernetes.azure.com/mode: system** to its nodes. This causes AKS to prefer scheduling system pods on node pools that contain this label. This label doesn't prevent you from scheduling application pods on system node pools. However, we recommend you isolate critical system pods from your application pods to prevent misconfigured or rogue application pods from accidentally killing system pods.
+For a system node pool, AKS automatically assigns the label **kubernetes.azure.com/mode: system** to its nodes. This causes AKS to prefer scheduling system pods on node pools that contain this label. This label doesn't prevent you from scheduling application pods on system node pools. However, we recommend you isolate critical system pods from your application pods to prevent misconfigured or rogue application pods from accidentally killing system pods.
+ You can enforce this behavior by creating a dedicated system node pool. Use the `CriticalAddonsOnly=true:NoSchedule` taint to prevent application pods from being scheduled on system node pools. System node pools have the following restrictions:
You can do the following operations with node pools:
* You can delete system node pools, provided you have another system node pool to take its place in the AKS cluster. * An AKS cluster may have multiple system node pools and requires at least one system node pool. * If you want to change various immutable settings on existing node pools, you can create new node pools to replace them. One example is to add a new node pool with a new maxPods setting and delete the old node pool.
+* Use [node affinity][node-affinity] to *require* or *prefer* which nodes can be scheduled based on node labels. In your YAML, set `key` to `kubernetes.azure.com/mode`, `operator` to `In`, and `values` to either `user` or `system`, then apply the definition using `kubectl apply -f yourYAML.yaml`.
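For example, a pod spec that requires scheduling on user-mode nodes might include an affinity block like this sketch (the label key follows the `kubernetes.azure.com/mode` label that AKS applies to node pools; adjust to your cluster):

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.azure.com/mode
              operator: In
              values:
                - user   # schedule only on user node pools
```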
## Create a new AKS cluster with a system node pool
New-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -NodeCoun
### [Azure CLI](#tab/azure-cli)
-You can add one or more system node pools to existing AKS clusters. It's recommended to schedule your application pods on user node pools, and dedicate system node pools to only critical system pods. This prevents rogue application pods from accidentally killing system pods. Enforce this behavior with the `CriticalAddonsOnly=true:NoSchedule` [taint][aks-taints] for your system node pools.
+You can add one or more system node pools to existing AKS clusters. It's recommended to schedule your application pods on user node pools, and dedicate system node pools to only critical system pods. This prevents rogue application pods from accidentally killing system pods. Enforce this behavior with the `CriticalAddonsOnly=true:NoSchedule` [taint][aks-taints] for your system node pools.
The following command adds a dedicated node pool of mode type system with a default count of three nodes.
$myAKSCluster | Set-AzAksCluster
## Show details for your node pool
-You can check the details of your node pool with the following command.
+You can check the details of your node pool with the following command.
### [Azure CLI](#tab/azure-cli)
In this article, you learned how to create and manage system node pools in an AK
[maximum-pods]: configure-azure-cni.md#maximum-pods-per-node [update-node-pool-mode]: use-system-pools.md#update-existing-cluster-system-and-user-node-pools [start-stop-nodepools]: /start-stop-nodepools.md
+[node-affinity]: operator-best-practices-advanced-scheduler.md#node-affinity
app-service Tutorial Auth Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-auth-aad.md
description: Learn how to use App Service authentication and authorization to se
keywords: app service, azure app service, authN, authZ, secure, security, multi-tiered, azure active directory, azure ad
ms.devlang: csharp
-Previously updated : 09/23/2021
+Last updated : 3/08/2023
zone_pivot_groups: app-service-platform-windows-linux
+# Requires non-internal subscription - internal subscriptions don't provide permission to correctly configure AAD apps
# Tutorial: Authenticate and authorize users end-to-end in Azure App Service ::: zone pivot="platform-windows"
-[Azure App Service](overview.md) provides a highly scalable, self-patching web hosting service. In addition, App Service has built-in support for [user authentication and authorization](overview-authentication-authorization.md). This tutorial shows how to secure your apps with App Service authentication and authorization. It uses a ASP.NET Core app with an Angular.js front end as an example. App Service authentication and authorization support all language runtimes, and you can learn how to apply it to your preferred language by following the tutorial.
+[Azure App Service](overview.md) provides a highly scalable, self-patching web hosting service. In addition, App Service has built-in support for [user authentication and authorization](overview-authentication-authorization.md). This tutorial shows how to secure your apps with App Service authentication and authorization. It uses an Express.js frontend with views as an example. App Service authentication and authorization support all language runtimes, and you can learn how to apply it to your preferred language by following the tutorial.
::: zone-end ::: zone pivot="platform-linux"
-[Azure App Service](overview.md) provides a highly scalable, self-patching web hosting service using the Linux operating system. In addition, App Service has built-in support for [user authentication and authorization](overview-authentication-authorization.md). This tutorial shows how to secure your apps with App Service authentication and authorization. It uses an ASP.NET Core app with an Angular.js front end as an example. App Service authentication and authorization support all language runtimes, and you can learn how to apply it to your preferred language by following the tutorial.
+[Azure App Service](overview.md) provides a highly scalable, self-patching web hosting service using the Linux operating system. In addition, App Service has built-in support for [user authentication and authorization](overview-authentication-authorization.md). This tutorial shows how to secure your apps with App Service authentication and authorization. It uses an Express.js frontend with views as an example. App Service authentication and authorization support all language runtimes, and you can learn how to apply it to your preferred language by following the tutorial.
::: zone-end
-![Simple authentication and authorization](./media/tutorial-auth-aad/simple-auth.png)
-
-It also shows you how to secure a multi-tiered app, by accessing a secured back-end API on behalf of the authenticated user, both [from server code](#call-api-securely-from-server-code) and [from browser code](#call-api-securely-from-browser-code).
-
-![Advanced authentication and authorization](./media/tutorial-auth-aad/advanced-auth.png)
-
-These are only some of the possible authentication and authorization scenarios in App Service.
-
-Here's a more comprehensive list of things you learn in the tutorial:
+In this tutorial, you learn how to:
> [!div class="checklist"] > * Enable built-in authentication and authorization
Here's a more comprehensive list of things you learn in the tutorial:
> * Use access tokens from server code > * Use access tokens from client (browser) code
-You can follow the steps in this tutorial on macOS, Linux, Windows.
-
+> [!TIP]
+> After completing this scenario, continue to the next procedure to learn how to connect to Azure services as an authenticated user. Common scenarios include accessing Azure Storage or a database as the user who has specific abilities or access to specific tables or files.
-## Prerequisites
+The authentication in this procedure is provided at the hosting platform layer by Azure App Service. You must deploy both the frontend and backend apps and configure authentication before the web app can be used successfully.
-To complete this tutorial:
-- <a href="https://git-scm.com/" target="_blank">Install Git</a>-- <a href="https://dotnet.microsoft.com/download/dotnet-core/3.1" target="_blank">Install the latest .NET Core 3.1 SDK</a>
+## Get the user profile
-## Create local .NET Core app
+The frontend app is configured to securely use the backend API. The frontend application provides a Microsoft sign-in for the user, then allows the user to get their **_fake_** profile from the backend. This tutorial uses a fake profile to simplify the steps to complete the scenario.
-In this step, you set up the local .NET Core project. You use the same project to deploy a back-end API app and a front-end web app.
+Before your source code is executed on the frontend, the App Service injects the authenticated `accessToken` from the App Service `x-ms-token-aad-access-token` header. The frontend source code then accesses and sends the accessToken to the backend server as the `bearerToken` to securely access the backend API. The backend server validates the bearerToken before it's passed into your backend source code. Once your backend source code receives the bearerToken, it can be used.
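The token-forwarding step described above can be sketched as a small helper (a hypothetical illustration, not part of the sample repository; the `x-ms-token-aad-access-token` header name is the one App Service injects, everything else is assumed):

```javascript
// Hypothetical helper: read the token that App Service injected into the
// authenticated request and build the Authorization header for the backend call.
function buildBackendAuthHeader(req) {
  const accessToken = req.headers['x-ms-token-aad-access-token'];
  if (!accessToken) {
    // No token means App Service authentication isn't configured,
    // or the user isn't signed in.
    throw new Error('Missing x-ms-token-aad-access-token header');
  }
  return { Authorization: `Bearer ${accessToken}` };
}
```

The backend then validates this bearer token before your backend source code runs.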
-### Clone and run the sample application
+ _In [the next article](tutorial-connect-app-access-microsoft-graph-as-user-javascript.md) in this series_, the bearerToken is exchanged for a token with a scope to access the Microsoft Graph API. The Microsoft Graph API returns the user's profile information.
-1. Run the following commands to clone the sample repository and run it.
+## Prerequisites
- ```bash
- git clone https://github.com/Azure-Samples/dotnet-core-api
- cd dotnet-core-api
- dotnet run
- ```
-
-1. Navigate to `http://localhost:5000` and try adding, editing, and removing todo items.
- ![ASP.NET Core API running locally](./media/tutorial-auth-aad/local-run.png)
+- [Node.js (LTS)](https://nodejs.org/download/)
-1. To stop ASP.NET Core, press `Ctrl+C` in the terminal.
+## 1. Clone the sample application
-1. Make sure the default branch is `main`.
+1. In the [Azure Cloud Shell](https://shell.azure.com), run the following command to clone the sample repository.
- ```bash
- git branch -m main
+ ```azurecli-interactive
+ git clone https://github.com/Azure-Samples/js-e2e-web-app-easy-auth-app-to-app
```
-
- > [!TIP]
- > The branch name change isn't required by App Service. However, since many repositories are changing their default branch to `main`, this tutorial also shows you how to deploy a repository from `main`. For more information, see [Change deployment branch](deploy-local-git.md#change-deployment-branch).
-
-## Deploy apps to Azure
-In this step, you deploy the project to two App Service apps. One is the front-end app and the other is the back-end app.
+## 2. Create and deploy apps
-### Configure a deployment user
+Create the resource group, App Service plan, and web app, and deploy the code in a single step.
-### Create Azure resources
+1. Change into the frontend web app directory.
+ ```azurecli-interactive
+ cd frontend
+ ```
-In the Cloud Shell, run the following commands to create two Windows web apps. Replace _\<front-end-app-name>_ and _\<back-end-app-name>_ with two globally unique app names (valid characters are `a-z`, `0-9`, and `-`). For more information on each command, see [Host a RESTful API with CORS in Azure App Service](app-service-web-tutorial-rest-api.md).
+1. Create and deploy the frontend web app with [az webapp up](/cli/azure/webapp#az-webapp-up). Because the web app name has to be globally unique, replace `<front-end-app-name>` with a unique name.
-```azurecli-interactive
-az group create --name myAuthResourceGroup --location "West Europe"
-az appservice plan create --name myAuthAppServicePlan --resource-group myAuthResourceGroup --sku FREE
-az webapp create --resource-group myAuthResourceGroup --plan myAuthAppServicePlan --name <front-end-app-name> --deployment-local-git --query deploymentLocalGitUrl
-az webapp create --resource-group myAuthResourceGroup --plan myAuthAppServicePlan --name <back-end-app-name> --deployment-local-git --query deploymentLocalGitUrl
-```
+ ```azurecli-interactive
    az webapp up --resource-group myAuthResourceGroup --name <front-end-app-name> --plan myPlan --sku FREE --location "West Europe" --runtime "NODE:16-lts"
+ ```
+1. Change into the backend web app directory.
+ ```azurecli-interactive
+ cd ../backend
+ ```
-In the Cloud Shell, run the following commands to create two web apps. Replace _\<front-end-app-name>_ and _\<back-end-app-name>_ with two globally unique app names (valid characters are `a-z`, `0-9`, and `-`). For more information on each command, see [Create a .NET Core app in Azure App Service](quickstart-dotnetcore.md).
+1. Deploy the backend web app to the same resource group and App Service plan. Because the web app name must be globally unique, replace `<back-end-app-name>` with a unique name.
-```azurecli-interactive
-az group create --name myAuthResourceGroup --location "West Europe"
-az appservice plan create --name myAuthAppServicePlan --resource-group myAuthResourceGroup --sku FREE --is-linux
-az webapp create --resource-group myAuthResourceGroup --plan myAuthAppServicePlan --name <front-end-app-name> --runtime "DOTNETCORE|3.1" --deployment-local-git --query deploymentLocalGitUrl
-az webapp create --resource-group myAuthResourceGroup --plan myAuthAppServicePlan --name <back-end-app-name> --runtime "DOTNETCORE|3.1" --deployment-local-git --query deploymentLocalGitUrl
-```
+ ```azurecli-interactive
+ az webapp up --resource-group myAuthResourceGroup --name <back-end-app-name> --plan myPlan --runtime "NODE:16-lts"
+ ```
::: zone-end
-> [!NOTE]
-> Save the URLs of the Git remotes for your front-end app and back-end app, which are shown in the output from `az webapp create`.
->
-
-### Push to Azure from Git
-1. Since you're deploying the `main` branch, you need to set the default deployment branch for your two App Service apps to `main` (see [Change deployment branch](deploy-local-git.md#change-deployment-branch)). In the Cloud Shell, set the `DEPLOYMENT_BRANCH` app setting with the [`az webapp config appsettings set`](/cli/azure/webapp/config/appsettings#az-webapp-config-appsettings-set) command.
+1. Change into the frontend web app directory.
```azurecli-interactive
- az webapp config appsettings set --name <front-end-app-name> --resource-group myAuthResourceGroup --settings DEPLOYMENT_BRANCH=main
- az webapp config appsettings set --name <back-end-app-name> --resource-group myAuthResourceGroup --settings DEPLOYMENT_BRANCH=main
+ cd frontend
```
-1. Back in the _local terminal window_, run the following Git commands to deploy to the back-end app. Replace _\<deploymentLocalGitUrl-of-back-end-app>_ with the URL of the Git remote that you saved from [Create Azure resources](#create-azure-resources). When prompted for credentials by Git Credential Manager, make sure that you enter [your deployment credentials](deploy-configure-credentials.md), not the credentials you use to sign in to the Azure portal.
+1. Create and deploy the frontend web app with [az webapp up](/cli/azure/webapp#az-webapp-up). Because the web app name must be globally unique, replace `<front-end-app-name>` with a unique name.
- ```bash
- git remote add backend <deploymentLocalGitUrl-of-back-end-app>
- git push backend main
+ ```azurecli-interactive
+ az webapp up --resource-group myAuthResourceGroup --name <front-end-app-name> --plan myPlan --sku FREE --location "West Europe" --os-type Linux --runtime "NODE:16-lts"
```
-1. In the local terminal window, run the following Git commands to deploy the same code to the front-end app. Replace _\<deploymentLocalGitUrl-of-front-end-app>_ with the URL of the Git remote that you saved from [Create Azure resources](#create-azure-resources).
+1. Change into the backend web app directory.
- ```bash
- git remote add frontend <deploymentLocalGitUrl-of-front-end-app>
- git push frontend main
+ ```azurecli-interactive
+ cd ../backend
```
-### Browse to the apps
-
-Navigate to the following URLs in a browser and see the two apps working.
-
-```
-http://<back-end-app-name>.azurewebsites.net
-http://<front-end-app-name>.azurewebsites.net
-```
+1. Deploy the backend web app to the same resource group and App Service plan. Because the web app name must be globally unique, replace `<back-end-app-name>` with a unique name.
-
-> [!NOTE]
-> If your app restarts, you may have noticed that new data has been erased. This behavior by design because the sample ASP.NET Core app uses an in-memory database.
->
->
-
-## Call back-end API from front end
-
-In this step, you point the front-end app's server code to access the back-end API. Later, you enable authenticated access from the front end to the back end.
-
-### Modify front-end code
+ ```azurecli-interactive
+ az webapp up --resource-group myAuthResourceGroup --name <back-end-app-name> --plan myPlan --runtime "NODE:16-lts"
+ ```
-1. In the local repository, open _Controllers/TodoController.cs_. At the beginning of the `TodoController` class, add the following lines and replace _\<back-end-app-name>_ with the name of your back-end app:
- ```cs
- private static readonly HttpClient _client = new HttpClient();
- private static readonly string _remoteUrl = "https://<back-end-app-name>.azurewebsites.net";
- ```
+## 3. Configure app setting
-1. Find the method that's decorated with `[HttpGet]` and replace the code inside the curly braces with:
+The frontend application needs to know the URL of the backend application for API requests. Use the following Azure CLI command to configure the app setting. The URL should be in the format `https://<back-end-app-name>.azurewebsites.net`.
- ```cs
- var data = await _client.GetStringAsync($"{_remoteUrl}/api/Todo");
- return JsonConvert.DeserializeObject<List<TodoItem>>(data);
- ```
+```azurecli-interactive
+az webapp config appsettings set --resource-group myAuthResourceGroup --name <front-end-app-name> --settings BACKEND_URL="https://<back-end-app-name>.azurewebsites.net"
+```
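In the frontend's server code, that App Service setting surfaces as an environment variable. A minimal sketch of reading it (the helper name and default route are hypothetical, not from the sample):

```javascript
// Hypothetical helper: build a backend request URL from the BACKEND_URL
// app setting, which App Service exposes to the app as an environment variable.
function backendUrl(route = '/api/profile') {
  const base = process.env.BACKEND_URL; // e.g. https://<back-end-app-name>.azurewebsites.net
  if (!base) throw new Error('BACKEND_URL app setting is not configured');
  return new URL(route, base).href;
}
```

Reading the setting through `process.env` keeps the backend hostname out of the deployed code, so the same build works against any backend app.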
- The first line makes a `GET /api/Todo` call to the back-end API app.
+## 4. Frontend calls the backend
-1. Next, find the method that's decorated with `[HttpGet("{id}")]` and replace the code inside the curly braces with:
+Browse to the frontend app and retrieve the _fake_ profile from the backend. This action validates that the frontend successfully requests the profile from the backend, and that the backend returns it.
- ```cs
- var data = await _client.GetStringAsync($"{_remoteUrl}/api/Todo/{id}");
- return Content(data, "application/json");
- ```
+1. Open the frontend web app in a browser, `https://<front-end-app-name>.azurewebsites.net`.
- The first line makes a `GET /api/Todo/{id}` call to the back-end API app.
+ :::image type="content" source="./media/tutorial-auth-aad/app-home-page.png" alt-text="Screenshot of web browser showing frontend application after successfully completing authentication.":::
-1. Next, find the method that's decorated with `[HttpPost]` and replace the code inside the curly braces with:
+1. Select the `Get user's profile` link.
+1. View the _fake_ profile returned from the backend web app.
- ```cs
- var response = await _client.PostAsJsonAsync($"{_remoteUrl}/api/Todo", todoItem);
- var data = await response.Content.ReadAsStringAsync();
- return Content(data, "application/json");
- ```
+ :::image type="content" source="./media/tutorial-auth-aad/app-profile-without-authentication.png" alt-text="Screenshot of browser with fake profile returned from server.":::
- The first line makes a `POST /api/Todo` call to the back-end API app.
+ The `withAuthentication` value of **false** indicates the authentication _isn't_ set up yet.
-1. Next, find the method that's decorated with `[HttpPut("{id}")]` and replace the code inside the curly braces with:
+## 5. Configure authentication
- ```cs
- var res = await _client.PutAsJsonAsync($"{_remoteUrl}/api/Todo/{id}", todoItem);
- return new NoContentResult();
- ```
+In this step, you enable authentication and authorization for the two web apps. This tutorial uses Azure Active Directory as the identity provider.
- The first line makes a `PUT /api/Todo/{id}` call to the back-end API app.
+You also configure the frontend app to:
-1. Next, find the method that's decorated with `[HttpDelete("{id}")]` and replace the code inside the curly braces with:
+- Grant the frontend app access to the backend app
+- Configure App Service to return a usable token
+- Use the token in your code.
- ```cs
- var res = await _client.DeleteAsync($"{_remoteUrl}/api/Todo/{id}");
- return new NoContentResult();
- ```
+For more information, see [Configure Azure Active Directory authentication for your App Services application](configure-authentication-provider-aad.md).
- The first line makes a `DELETE /api/Todo/{id}` call to the back-end API app.
+### Enable authentication and authorization for backend app
-1. Save all your changes. In the local terminal window, deploy your changes to the front-end app with the following Git commands:
+1. In the [Azure portal](https://portal.azure.com) menu, select **Resource groups** or search for and select *Resource groups* from any page.
- ```bash
- git add .
- git commit -m "call back-end API"
- git push frontend main
- ```
+1. In **Resource groups**, find and select your resource group. In **Overview**, select your backend app.
-### Check your changes
+1. In your backend app's left menu, select **Authentication**, and then select **Add identity provider**.
-1. Navigate to `http://<front-end-app-name>.azurewebsites.net` and add a few items, such as `from front end 1` and `from front end 2`.
+1. In the **Add an identity provider** page, select **Microsoft** as the **Identity provider** to sign in Microsoft and Azure AD identities.
-1. Navigate to `http://<back-end-app-name>.azurewebsites.net` to see the items added from the front-end app. Also, add a few items, such as `from back end 1` and `from back end 2`, then refresh the front-end app to see if it reflects the changes.
+1. Accept the default settings and select **Add**.
- :::image type="content" source="./media/tutorial-auth-aad/remote-api-call-run.png" alt-text="Screenshot of an Azure App Service REST API Sample in a browser window, which shows a To do list app with items added from the front-end app.":::
+ :::image type="content" source="./media/tutorial-auth-aad/configure-auth-back-end.png" alt-text="Screenshot of the backend app's left menu showing Authentication/Authorization selected and settings selected in the right menu.":::
-## Configure auth
+1. The **Authentication** page opens. Copy the **Client ID** of the Azure AD application to a notepad. You need this value later.
-In this step, you enable authentication and authorization for the two apps. You also configure the front-end app to generate an access token that you can use to make authenticated calls to the back-end app.
+ :::image type="content" source="./media/tutorial-auth-aad/get-application-id-back-end.png" alt-text="Screenshot of the Azure Active Directory Settings window showing the Azure AD App, and the Azure AD Applications window showing the Client ID to copy.":::
-You use Azure Active Directory as the identity provider. For more information, see [Configure Azure Active Directory authentication for your App Services application](configure-authentication-provider-aad.md).
+If you stop here, you have a self-contained app that's already secured by the App Service authentication and authorization. The remaining sections show you how to secure a multi-app solution by "flowing" the authenticated user from the frontend to the backend.
-### Enable authentication and authorization for back-end app
+### Enable authentication and authorization for frontend app
1. In the [Azure portal](https://portal.azure.com) menu, select **Resource groups** or search for and select *Resource groups* from any page.
-1. In **Resource groups**, find and select your resource group. In **Overview**, select your back-end app's management page.
+In **Resource groups**, find and select your resource group. In **Overview**, select your frontend app's management page.
- :::image type="content" source="./media/tutorial-auth-aad/portal-navigate-back-end.png" alt-text="Screenshot of the Resource groups window, showing the Overview for an example resource group and a back-end app's management page selected.":::
+ :::image type="content" source="./media/tutorial-auth-aad/portal-navigate-back-end.png" alt-text="Screenshot of the Resource groups window, showing the Overview for an example resource group and a backend app's management page selected.":::
-1. In your back-end app's left menu, select **Authentication**, and then click **Add identity provider**.
+1. In your frontend app's left menu, select **Authentication**, and then select **Add identity provider**.
1. In the **Add an identity provider** page, select **Microsoft** as the **Identity provider** to sign in Microsoft and Azure AD identities.
-1. Accept the default settings and click **Add**.
+1. Accept the default settings and select **Add**.
- :::image type="content" source="./media/tutorial-auth-aad/configure-auth-back-end.png" alt-text="Screenshot of the back-end app's left menu showing Authentication/Authorization selected and settings selected in the right menu.":::
+ :::image type="content" source="./media/tutorial-auth-aad/configure-auth-back-end.png" alt-text="Screenshot of the backend app's left menu showing Authentication/Authorization selected and settings selected in the right menu.":::
1. The **Authentication** page opens. Copy the **Client ID** of the Azure AD application to a notepad. You need this value later.

   :::image type="content" source="./media/tutorial-auth-aad/get-application-id-back-end.png" alt-text="Screenshot of the Azure Active Directory Settings window showing the Azure AD App, and the Azure AD Applications window showing the Client ID to copy.":::
-If you stop here, you have a self-contained app that's already secured by the App Service authentication and authorization. The remaining sections show you how to secure a multi-app solution by "flowing" the authenticated user from the front end to the back end.
-
-### Enable authentication and authorization for front-end app
-Follow the same steps for the front-end app, but skip the last step. You don't need the client ID for the front-end app. However, stay on the **Authentication** page for the front-end app because you'll use it in the next step.
+### Grant frontend app access to backend
-If you like, navigate to `http://<front-end-app-name>.azurewebsites.net`. It should now direct you to a secured sign-in page. After you sign in, *you still can't access the data from the back-end app*, because the back-end app now requires Azure Active Directory sign-in from the front-end app. You need to do three things:
+Now that you've enabled authentication and authorization for both of your apps, each of them is backed by an AD application. To complete the authentication, you need to do three things:
-- Grant the front end access to the back end
+- Grant the frontend app access to the backend app
- Configure App Service to return a usable token-- Use the token in your code
+- Use the token in your code.
> [!TIP]
> If you run into errors and reconfigure your app's authentication/authorization settings, the tokens in the token store may not be regenerated from the new settings. To make sure your tokens are regenerated, you need to sign out and sign back in to your app. An easy way to do it is to use your browser in private mode, and close and reopen the browser in private mode after changing the settings in your apps.
-### Grant front-end app access to back end
-
-Now that you've enabled authentication and authorization to both of your apps, each of them is backed by an AD application. In this step, you give the front-end app permissions to access the back end on the user's behalf. (Technically, you give the front end's _AD application_ the permissions to access the back end's _AD application_ on the user's behalf.)
+In this step, you **grant the frontend app access to the backend app** on the user's behalf. (Technically, you give the frontend's _AD application_ the permissions to access the backend's _AD application_ on the user's behalf.)
-1. In the **Authentication** page for the front-end app, select your front-end app name under **Identity provider**. This app registration was automatically generated for you. Select **API permissions** in the left menu.
+1. In the **Authentication** page for the frontend app, select your frontend app name under **Identity provider**. This app registration was automatically generated for you. Select **API permissions** in the left menu.
-1. Select **Add a permission**, then select **My APIs** > **\<back-end-app-name>**.
+1. Select **Add a permission**, then select **My APIs** > **\<back-end-app-name>**.
-1. In the **Request API permissions** page for the back-end app, select **Delegated permissions** and **user_impersonation**, then select **Add permissions**.
+1. In the **Request API permissions** page for the backend app, select **Delegated permissions** and **user_impersonation**, then select **Add permissions**.
   :::image type="content" source="./media/tutorial-auth-aad/select-permission-front-end.png" alt-text="Screenshot of the Request API permissions page showing Delegated permissions, user_impersonation, and the Add permission button selected.":::

### Configure App Service to return a usable access token
-The front-end app now has the required permissions to access the back-end app as the signed-in user. In this step, you configure App Service authentication and authorization to give you a usable access token for accessing the back end. For this step, you need the back end's client ID, which you copied from [Enable authentication and authorization for back-end app](#enable-authentication-and-authorization-for-back-end-app).
+The frontend app now has the required permissions to access the backend app as the signed-in user. In this step, you configure App Service authentication and authorization to give you a usable access token for accessing the backend. For this step, you need the backend's client ID, which you copied from [Enable authentication and authorization for backend app](#enable-authentication-and-authorization-for-backend-app).
-In the Cloud Shell, run the following commands on the front-end app to add the `scope` parameter to the authentication setting `identityProviders.azureActiveDirectory.login.loginParameters`. Replace *\<front-end-app-name>* and *\<back-end-client-id>*.
+In the Cloud Shell, run the following commands on the frontend app to add the `scope` parameter to the authentication setting `identityProviders.azureActiveDirectory.login.loginParameters`. Replace *\<front-end-app-name>* and *\<back-end-client-id>*.
```azurecli-interactive
authSettings=$(az webapp auth show -g myAuthResourceGroup -n <front-end-app-name>)
authSettings=$(echo "$authSettings" | jq '.properties' | jq '.identityProviders.azureActiveDirectory.login += {"loginParameters":["scope=openid profile email offline_access api://<back-end-client-id>/user_impersonation"]}')
az webapp auth set --resource-group myAuthResourceGroup --name <front-end-app-name> --body "$authSettings"
```
-The commands effectively adds a `loginParameters` property with additional custom scopes. Here's an explanation of the requested scopes:
+The commands effectively add a `loginParameters` property with additional custom scopes. Here's an explanation of the requested scopes:
- `openid`, `profile`, and `email` are requested by App Service by default already. For information, see [OpenID Connect Scopes](../active-directory/develop/v2-permissions-and-consent.md#openid-connect-scopes).
-- `api://<back-end-client-id>/user_impersonation` is an exposed API in your back-end app registration. It's the scope that gives you a JWT token that includes the back-end app as a [token audience](https://wikipedia.org/wiki/JSON_Web_Token).
-- [offline_access](../active-directory/develop/v2-permissions-and-consent.md#offline_access) is included here for convenience (in case you want to [refresh tokens](#when-access-tokens-expire)).
+- `api://<back-end-client-id>/user_impersonation` is an exposed API in your backend app registration. It's the scope that gives you a JWT token that includes the backend app as a [token audience](https://wikipedia.org/wiki/JSON_Web_Token).
+- [offline_access](../active-directory/develop/v2-permissions-and-consent.md#offline_access) is included here for convenience (in case you want to [refresh tokens](#what-happens-when-the-frontend-token-expires)).
> [!TIP]
-> - To view the `api://<back-end-client-id>/user_impersonation` scope in the Azure portal, go to the **Authentication** page for the back-end app, click the link under **Identity provider**, then click **Expose an API** in the left menu.
+> - To view the `api://<back-end-client-id>/user_impersonation` scope in the Azure portal, go to the **Authentication** page for the backend app, click the link under **Identity provider**, then click **Expose an API** in the left menu.
> - To configure the required scopes using a web interface instead, see the Microsoft steps at [Refresh auth tokens](configure-authentication-oauth-tokens.md#refresh-auth-tokens).
-> - Some scopes require admin or user consent. This requirement causes the consent request page to be displayed when a user signs into the front-end app in the browser. To avoid this consent page, add the front end's app registration as an authorized client application in the **Expose an API** page by clicking **Add a client application** and supplying the client ID of the front end's app registration.
+> - Some scopes require admin or user consent. This requirement causes the consent request page to be displayed when a user signs into the frontend app in the browser. To avoid this consent page, add the frontend's app registration as an authorized client application in the **Expose an API** page by clicking **Add a client application** and supplying the client ID of the frontend's app registration.
::: zone pivot="platform-linux"
-> [!NOTE]
-> For Linux apps, There's a temporary requirement to configure a versioning setting for the back-end app registration. In the Cloud Shell, configure it with the following commands. Be sure to replace *\<back-end-client-id>* with your back end's client ID.
->
-> ```azurecli-interactive
-> id=$(az ad app show --id <back-end-client-id> --query id --output tsv)
-> az rest --method PATCH --url https://graph.microsoft.com/v1.0/applications/$id --body "{'api':{'requestedAccessTokenVersion':2}}"
-> ```
- ::: zone-end
-Your apps are now configured. The front end is now ready to access the back end with a proper access token.
+Your apps are now configured. The frontend is now ready to access the backend with a proper access token.
For information on how to configure the access token for other providers, see [Refresh identity provider tokens](configure-authentication-oauth-tokens.md#refresh-auth-tokens).
-## Call API securely from server code
+## 6. Frontend calls the authenticated backend
-In this step, you enable your previously modified server code to make authenticated calls to the back-end API.
+The frontend app needs to pass the user's access token, with the correct `user_impersonation` scope, to the backend. The following steps review the code provided in the sample for this functionality.
-Your front-end app now has the required permission and also adds the back end's client ID to the login parameters. Therefore, it can obtain an access token for authentication with the back-end app. App Service supplies this token to your server code by injecting a `X-MS-TOKEN-AAD-ACCESS-TOKEN` header to each authenticated request (see [Retrieve tokens in app code](configure-authentication-oauth-tokens.md#retrieve-tokens-in-app-code)).
+View the frontend app's source code:
-> [!NOTE]
-> These headers are injected for all supported languages. You access them using the standard pattern for each respective language.
+1. Use the `x-ms-token-aad-access-token` header that App Service injects into the frontend request to programmatically get the user's accessToken.
-1. In the local repository, open _Controllers/TodoController.cs_ again. Under the `TodoController(TodoContext context)` constructor, add the following code:
+ ```javascript
+ // ./src/server.js
+ const accessToken = req.headers['x-ms-token-aad-access-token'];
+ ```
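    A minimal sketch of that pattern (the helper name is hypothetical; App Service injects header names in lowercase):

    ```javascript
    // Hypothetical helper: read the access token App Service injects on
    // authenticated requests; returns null when the user isn't signed in.
    function getInjectedAccessToken(req) {
      return req.headers['x-ms-token-aad-access-token'] || null;
    }
    ```

    Treating a missing header as "not signed in" lets the same route serve both authenticated and anonymous requests.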
- ```cs
- public override void OnActionExecuting(ActionExecutingContext context)
- {
- base.OnActionExecuting(context);
-
- _client.DefaultRequestHeaders.Accept.Clear();
- _client.DefaultRequestHeaders.Authorization =
- new AuthenticationHeaderValue("Bearer", Request.Headers["X-MS-TOKEN-AAD-ACCESS-TOKEN"]);
+1. Use the accessToken in the `Authorization` header as the bearer token value.
+
+ ```javascript
+ // ./src/remoteProfile.js
+ // Get profile from backend
+ const response = await fetch(remoteUrl, {
+ cache: "no-store", // no caching -- for demo purposes only
+ method: 'GET',
+ headers: {
+ 'Authorization': `Bearer ${accessToken}`
+ }
+ });
+ if (response.ok) {
+ const { profile } = await response.json();
+ console.log(`profile: ${profile}`);
+ } else {
+ // error handling
  }
  ```
- This code adds the standard HTTP header `Authorization: Bearer <access-token>` to all remote API calls. In the ASP.NET Core MVC request execution pipeline, `OnActionExecuting` executes just before the respective action does, so each of your outgoing API call now presents the access token.
+ This tutorial returns a _fake_ profile to simplify the scenario. The [next tutorial](tutorial-connect-app-access-microsoft-graph-as-user-javascript.md) in this series demonstrates how to exchange the backend bearerToken for a new token with the scope of a downstream Azure service, such as Microsoft Graph.
-1. Save all your changes. In the local terminal window, deploy your changes to the front-end app with the following Git commands:
+## <a name="call-api-securely-from-server-code"></a>7. Backend returns profile to frontend
- ```bash
- git add .
- git commit -m "add authorization header for server code"
- git push frontend main
- ```
+If the request from the frontend isn't authorized, the backend App Service rejects the request with a 401 HTTP error code _before_ the request reaches your application code. When the backend code is reached (because the request includes an authorized token), extract the bearerToken to get the accessToken.
-1. Sign in to `https://<front-end-app-name>.azurewebsites.net` again. At the user data usage agreement page, click **Accept**.
+View the backend app's source code:
- You should now be able to create, read, update, and delete data from the back-end app as before. The only difference now is that both apps are now secured by App Service authentication and authorization, including the service-to-service calls.
+```javascript
+// ./src/server.js
+const bearerToken = req.headers['Authorization'] || req.headers['authorization'];
-Congratulations! Your server code is now accessing the back-end data on behalf of the authenticated user.
+if (bearerToken) {
+ const accessToken = bearerToken.split(' ')[1];
+ console.log(`backend server.js accessToken: ${!!accessToken ? 'found' : 'not found'}`);
-## Call API securely from browser code
+ // TODO: get profile from Graph API
+ // provided in next article in this series
+ // return await getProfileFromMicrosoftGraph(accessToken)
-In this step, you point the front-end Angular.js app to the back-end API. This way, you learn how to retrieve the access token and make API calls to the back-end app with it.
+ // return fake profile for this tutorial
+ return {
+ "displayName": "John Doe",
+ "withAuthentication": !!accessToken ? true : false
+ }
+}
+```
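The token parsing in the backend snippet above can be isolated as a small helper (the function name is hypothetical, not from the sample):

```javascript
// Hypothetical helper mirroring the backend logic above: pull the access
// token out of an "Authorization: Bearer <token>" request header.
function extractAccessToken(headers) {
  const bearerToken = headers['Authorization'] || headers['authorization'];
  if (!bearerToken) return null;
  const parts = bearerToken.split(' ');
  return parts.length === 2 && parts[0] === 'Bearer' ? parts[1] : null;
}
```

Checking both header casings matters because Node.js lowercases incoming header names, while some clients send `Authorization` verbatim.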
-While the server code has access to request headers, client code can access `GET /.auth/me` to get the same access tokens (see [Retrieve tokens in app code](configure-authentication-oauth-tokens.md#retrieve-tokens-in-app-code)).
+## 8. Browse to the apps
-> [!TIP]
-> This section uses the standard HTTP methods to demonstrate the secure HTTP calls. However, you can use [Microsoft Authentication Library for JavaScript](https://github.com/AzureAD/microsoft-authentication-library-for-js) to help simplify the Angular.js application pattern.
->
+1. Browse to the frontend web site. The URL is in the format `https://<front-end-app-name>.azurewebsites.net/`.
+1. The web app prompts you to sign in. Complete the authentication.
+1. After authentication completes, the frontend application returns the home page of the app.
-### Configure CORS
+ :::image type="content" source="./media/tutorial-auth-aad/app-home-page.png" alt-text="Screenshot of web browser showing frontend application after successfully completing authentication.":::
-In the Cloud Shell, enable CORS to your client's URL by using the [`az webapp cors add`](/cli/azure/webapp/cors#az-webapp-cors-add) command. Replace the _\<back-end-app-name>_ and _\<front-end-app-name>_ placeholders.
+1. Select `Get user's profile`. This passes your access token as a bearer token to the backend.
+1. The backend responds with the _fake_ hard-coded profile name: `John Doe`.
-```azurecli-interactive
-az webapp cors add --resource-group myAuthResourceGroup --name <back-end-app-name> --allowed-origins 'https://<front-end-app-name>.azurewebsites.net'
-```
+ :::image type="content" source="./media/tutorial-auth-aad/app-profile.png" alt-text="Screenshot of web browser showing frontend application after successfully getting fake profile from backend app.":::
+
+    The `withAuthentication` value of **true** indicates the authentication _is_ set up.
-This step is not related to authentication and authorization. However, you need it so that your browser allows the cross-domain API calls from your Angular.js app. For more information, see [Add CORS functionality](app-service-web-tutorial-rest-api.md#add-cors-functionality).
+## 9. Clean up resources
-### Point Angular.js app to back-end API
+In the preceding steps, you created Azure resources in a resource group.
-1. In the local repository, open _wwwroot/https://docsupdatetracker.net/index.html_.
+1. Delete the resource group by running the following command in the Cloud Shell. This command may take a minute to run.
-1. In Line 51, set the `apiEndpoint` variable to the HTTPS URL of your back-end app (`https://<back-end-app-name>.azurewebsites.net`). Replace _\<back-end-app-name>_ with your app name in App Service.
-1. In the local repository, open _wwwroot/app/scripts/todoListSvc.js_ and see that `apiEndpoint` is prepended to all the API calls. Your Angular.js app is now calling the back-end APIs.
+ ```azurecli-interactive
+ az group delete --name myAuthResourceGroup
+ ```
-### Add access token to API calls
-1. In _wwwroot/app/scripts/todoListSvc.js_, above the list of API calls (above the line `getItems : function(){`), add the following function to the list:
+1. Find the **Client ID** values you previously noted in the `Enable authentication and authorization` sections for the backend and frontend apps.
+1. Delete app registrations for both frontend and backend apps.
- ```javascript
- setAuth: function (token) {
- $http.defaults.headers.common['Authorization'] = 'Bearer ' + token;
- },
+ ```azurecli-interactive
+ # delete app - do this for both frontend and backend client ids
+ az ad app delete --id <client-id>
```
- This function is called to set the default `Authorization` header with the access token. You call it in the next step.
+## Frequently asked questions
-1. In the local repository, open _wwwroot/app/scripts/app.js_ and find the following code:
+### How do I test this authentication on my local development machine?
- ```javascript
- $routeProvider.when("/Home", {
- controller: "todoListCtrl",
- templateUrl: "/App/Views/TodoList.html",
- }).otherwise({ redirectTo: "/Home" });
- ```
+The authentication in this procedure is provided at the hosting platform layer by Azure App Service. There's no equivalent emulator. You must deploy the frontend and backend apps and configure authentication for each in order to use the authentication.
-1. Replace the entire code block with the following code:
+### The app isn't displaying the _fake_ profile. How do I debug it?
- ```javascript
- $routeProvider.when("/Home", {
- controller: "todoListCtrl",
- templateUrl: "/App/Views/TodoList.html",
- resolve: {
- token: ['$http', 'todoListSvc', function ($http, todoListSvc) {
- return $http.get('/.auth/me').then(function (response) {
- todoListSvc.setAuth(response.data[0].access_token);
- return response.data[0].access_token;
- });
- }]
- },
- }).otherwise({ redirectTo: "/Home" });
- ```
+The frontend and backend apps both have `/debug` routes to help debug the authentication when the application doesn't return the _fake_ profile. The frontend debug route provides the critical pieces to validate:
- The new change adds the `resolve` mapping that calls `/.auth/me` and sets the access token. It makes sure you have the access token before instantiating the `todoListCtrl` controller. That way all API calls by the controller includes the token.
+* Environment variables:
+ * The `BACKEND_URL` is configured correctly as `https://<back-end-app-name>.azurewebsites.net`. Don't include a trailing forward slash or the route.
+* HTTP headers:
+ * The `x-ms-token-*` headers are injected.
+* The Microsoft Graph profile name for the signed-in user is displayed.
+* Frontend app's **scope** for the token has `user_impersonation`. If your scope doesn't include it, the cause could be timing. Verify your frontend app's `login` parameters in [Azure resources](https://resources.azure.com), then wait a few minutes for the authentication settings to replicate.
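+As an illustration only, the environment and header checks above can be sketched as a small helper. This is a hypothetical example, not the tutorial's actual `/debug` route, and the field names are made up:

```javascript
// Hypothetical sketch of what a /debug payload might assemble; the
// tutorial's actual route and field names may differ.
function buildDebugPayload(env, headers) {
  // BACKEND_URL must be the bare host: no trailing slash, no route.
  const backendUrl = env.BACKEND_URL || '';
  const backendUrlOk = /^https:\/\/[^/]+$/.test(backendUrl);

  // App Service injects x-ms-token-* headers after sign-in.
  const tokenHeaders = Object.keys(headers).filter(
    (name) => name.toLowerCase().startsWith('x-ms-token-')
  );

  return {
    backendUrl,
    backendUrlOk,
    tokenHeaders,
    hasAccessToken: tokenHeaders.includes('x-ms-token-aad-access-token'),
  };
}

// Example: a correctly configured frontend request.
const payload = buildDebugPayload(
  { BACKEND_URL: 'https://backend.azurewebsites.net' },
  { 'x-ms-token-aad-access-token': 'eyJ...', host: 'frontend.azurewebsites.net' }
);
```

+A payload like this makes both failure modes above visible at a glance: a misconfigured `BACKEND_URL` and missing token headers.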
-### Deploy updates and test
+### Did the application source code deploy correctly to each web app?
-1. Save all your changes. In the local terminal window, deploy your changes to the front-end app with the following Git commands:
+1. In the Azure portal for the web app, select **Development Tools -> Advanced Tools**, then select **Go ->**. This opens a new browser tab or window.
+1. In the new browser tab, select **Browse Directory -> Site wwwroot**.
+1. Verify the following files are in the directory:
- ```bash
- git add .
- git commit -m "add authorization header for Angular"
- git push frontend main
- ```
+ * package.json
+ * node_modules.tar.gz
+ * /src/index.js
-1. Navigate to `https://<front-end-app-name>.azurewebsites.net` again. You should now be able to create, read, update, and delete data from the back-end app, directly in the Angular.js app.
+1. Verify that the package.json `name` property is the same as the web app name, either `frontend` or `backend`.
+1. If you changed the source code, and need to redeploy, use [az webapp up](/cli/azure/webapp#az-webapp-up) from the directory that has the package.json file for that app.
-Congratulations! Your client code is now accessing the back-end data on behalf of the authenticated user.
+### Did the application start correctly?
-## When access tokens expire
+Both web apps should return something when the home page is requested. If you can't reach `/debug` on a web app, the app didn't start correctly. Review the error logs for that web app.
-Your access token expires after some time. For information on how to refresh your access tokens without requiring users to reauthenticate with your app, see [Refresh identity provider tokens](configure-authentication-oauth-tokens.md#refresh-auth-tokens).
+1. In the Azure portal for the web app, select **Development Tools -> Advanced Tools**, then select **Go ->**. This opens a new browser tab or window.
+1. In the new browser tab, select **Browse Directory -> Deployment Logs**.
+1. Review each log to find any reported issues.
-## Clean up resources
+### Is the frontend app able to talk to the backend app?
-In the preceding steps, you created Azure resources in a resource group. If you don't expect to need these resources in the future, delete the resource group by running the following command in the Cloud Shell:
+Because the frontend app calls the backend app from server source code, the call isn't visible in the browser network traffic. Use the following list to determine whether the backend profile request succeeded:
-```azurecli-interactive
-az group delete --name myAuthResourceGroup
-```
+* If the backend web app was reached, it returns any errors to the frontend app. If it wasn't reached, the frontend app reports the status code and message.
+ * 401: The user didn't pass authentication correctly. This can indicate that the scope isn't set correctly.
+ * 404: The URL doesn't match any route the backend server defines.
+* Use the backend app's streaming logs to watch as you make the frontend request for the user's profile. The source code includes debug information with `console.log`, which helps determine where the failure happened.
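+As a sketch, the status-code diagnostics above could be captured in a helper like this (hypothetical, not part of the sample app):

```javascript
// Hypothetical helper mirroring the troubleshooting list above: map the
// backend's status code to a likely cause.
function describeBackendFailure(status) {
  switch (status) {
    case 401:
      return 'Authentication failed: check that the frontend login scope includes user_impersonation.';
    case 404:
      return 'Route not found: check that the URL matches a route the backend defines.';
    default:
      return `Backend returned status ${status}.`;
  }
}

const diagnosis401 = describeBackendFailure(401);
const diagnosis404 = describeBackendFailure(404);
```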
+
+### What happens when the frontend token expires?
+
+Your access token expires after some time. For information on how to refresh your access tokens without requiring users to reauthenticate with your app, see [Refresh identity provider tokens](configure-authentication-oauth-tokens.md#refresh-auth-tokens).
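+As a minimal sketch of how server code might detect a stale token before asking App Service to refresh it via `/.auth/refresh` (assumptions: the token is a standard JWT with an `exp` claim, and no signature validation is done here):

```javascript
// Sketch only: decode a JWT's exp claim (no signature validation) to
// decide whether a refresh is needed.
function tokenExpired(jwt, nowSeconds = Math.floor(Date.now() / 1000)) {
  const payloadB64 = jwt.split('.')[1];
  const payload = JSON.parse(Buffer.from(payloadB64, 'base64url').toString('utf8'));
  // Treat a token within 60 seconds of expiry as expired.
  return payload.exp <= nowSeconds + 60;
}

// Build a fake unsigned token expiring at t=1000 for illustration.
const body = Buffer.from(JSON.stringify({ exp: 1000 })).toString('base64url');
const fakeJwt = `header.${body}.signature`;

const expiredNow = tokenExpired(fakeJwt, 2000); // well past exp
const freshNow = tokenExpired(fakeJwt, 100);    // well before exp
```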
-This command may take a minute to run.
<a name="next"></a>

## Next steps
What you learned:
> * Use access tokens from server code
> * Use access tokens from client (browser) code
-Advance to the next tutorial to learn how to map a custom DNS name to your app.
+Advance to the next tutorial to learn how to use this user's identity to access an Azure service.
> [!div class="nextstepaction"]
-> [Secure with custom domain and certificate](tutorial-secure-domain-certificate.md)
+> [Create a secure n-tier app in Azure App Service](tutorial-secure-ntier-app.md)
applied-ai-services Managed Identities Secured Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/managed-identities-secured-access.md
To get started, you need:
* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/)ΓÇöif you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
-* A [**Form Recognizer**](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) or [**Cognitive Services**](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource in the Azure portal. For detailed steps, _see_ [Create a Cognitive Services resource using the Azure portal](../../cognitive-services/cognitive-services-apis-create-account.md?tabs=multiservice%2cwindows).
+* A [**Form Recognizer**](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [**Cognitive Services**](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource in the Azure portal. For detailed steps, _see_ [Create a Cognitive Services resource using the Azure portal](../../cognitive-services/cognitive-services-apis-create-account.md?tabs=multiservice%2cwindows).
* An [**Azure blob storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) in the same region as your Form Recognizer resource. Create containers to store and organize your blob data within your storage account.
azure-cache-for-redis Cache Remove Tls 10 11 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-remove-tls-10-11.md
description: Learn how to remove TLS 1.0 and 1.1 from your application when comm
Previously updated : 05/25/2021 Last updated : 03/07/2023 ms.devlang: csharp, golang, java, javascript, php, python
Most applications use Redis client libraries to handle communication with their
Redis .NET clients use the earliest TLS version by default on .NET Framework 4.5.2 or earlier, and use the latest TLS version on .NET Framework 4.6 or later. If you're using an older version of .NET Framework, enable TLS 1.2 manually:
-* **StackExchange.Redis:** Set `ssl=true` and `sslprotocols=tls12` in the connection string.
+* **StackExchange.Redis:** Set `ssl=true` and `sslProtocols=tls12` in the connection string.
* **ServiceStack.Redis:** Follow the [ServiceStack.Redis](https://github.com/ServiceStack/ServiceStack.Redis#servicestackredis-ssl-support) instructions; ServiceStack.Redis v5.6 or later is required.

### .NET Core
azure-cache-for-redis Create Manage Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/scripts/create-manage-cache.md
In this scenario, you learn how to create an Azure Cache for Redis. You then lea
### Run the script
+<!--
+This sample is broken. When it is fixed, we can fix this include.
+-->
+
+```azurecli
+
+# Variable block
+let "randomIdentifier=$RANDOM*$RANDOM"
+location="East US"
+resourceGroup="msdocs-redis-cache-rg-$randomIdentifier"
+tag="create-manage-cache"
+cache="msdocs-redis-cache-$randomIdentifier"
+sku="basic"
+size="C0"
+
+# Create a resource group
+echo "Creating $resourceGroup in $location..."
+az group create --resource-group $resourceGroup --location "$location" --tags $tag
+
+# Create a Basic C0 (256 MB) Redis Cache
+echo "Creating $cache"
+az redis create --name $cache --resource-group $resourceGroup --location "$location" --sku $sku --vm-size $size
+
+# Get details of an Azure Cache for Redis
+echo "Showing details of $cache"
+az redis show --name "$cache" --resource-group $resourceGroup
+
+# Retrieve the hostname and ports for an Azure Redis Cache instance
+redis=($(az redis show --name "$cache" --resource-group $resourceGroup --query [hostName,enableNonSslPort,port,sslPort] --output tsv))
+
+# Retrieve the keys for an Azure Redis Cache instance
+keys=($(az redis list-keys --name "$cache" --resource-group $resourceGroup --query [primaryKey,secondaryKey] --output tsv))
+
+# Display the retrieved hostname, keys, and ports
+echo "Hostname:" ${redis[0]}
+echo "Non SSL Port:" ${redis[2]}
+echo "Non SSL Port Enabled:" ${redis[1]}
+echo "SSL Port:" ${redis[3]}
+echo "Primary Key:" ${keys[0]}
+echo "Secondary Key:" ${keys[1]}
+
+# Delete a redis cache
+echo "Deleting $cache"
+az redis delete --name "$cache" --resource-group $resourceGroup -y
+
+echo "Deleting all resources"
+az group delete --resource-group $resourceGroup -y
+
+```
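For reference, the bash arrays above rely on `az ... --output tsv` returning the queried fields separated by whitespace. A sketch of the same parsing, with made-up sample values:

```javascript
// Mirrors the bash array step above: `--output tsv` returns the queried
// fields separated by tabs or newlines. Sample values are made up.
const tsv = 'contoso.redis.cache.windows.net\tfalse\t6379\t6380';
const [hostName, enableNonSslPort, port, sslPort] = tsv.trim().split(/\s+/);

console.log('Hostname:', hostName);
console.log('SSL Port:', sslPort);
```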
## Clean up resources [!INCLUDE [cli-clean-up-resources.md](../../../includes/cli-clean-up-resources.md)] ```azurecli
-az group delete --name $resourceGroup
+az group delete --name $resourceGroup
``` ## Sample reference
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
Using Azure Monitor agent, you get immediate benefits as shown below:
- **Cost savings** by [using data collection rules](data-collection-rule-azure-monitor-agent.md): - Enables targeted and granular data collection for a machine or subset(s) of machines, as compared to the "all or nothing" approach of legacy agents.
- - Allows filtering rules and data transformations to reduce the overall data volume being uploaded, thus lowering ingestion and storage costs significantly
+ - Allows filtering rules and data transformations to reduce the overall data volume being uploaded, thus lowering ingestion and storage costs significantly.
- **Simpler management** including efficient troubleshooting:
- - Supports data uploads multiple destinations (multiple Log Analytics workspaces, i.e. *multihoming* on Windows and Linux) including cross-region and cross-tenant data collection (using Azure LightHouse)
- - Centralized, agent configuration "in the cloud" for enterprise scale throughout the data collection lifecycle, from onboarding to deployment to updates and changes over time.
- - Any change(s) in configuration is rolled out to all agents automatically, without requiring a client side deployment
+ - Supports data uploads to multiple destinations (multiple Log Analytics workspaces, i.e. *multihoming* on Windows and Linux) including cross-region and cross-tenant data collection (using Azure Lighthouse).
+ - Centralized agent configuration "in the cloud" for enterprise scale throughout the data collection lifecycle, from onboarding to deployment to updates and changes over time.
+ - Any change in configuration is rolled out to all agents automatically, without requiring a client side deployment.
- Greater transparency and control of more capabilities and services, such as Microsoft Sentinel, Defender for Cloud, and VM Insights. - **Security and Performance**
- - Enhanced security through Managed Identity and Azure Active Directory (Azure AD) tokens (for clients)
+ - Enhanced security through Managed Identity and Azure Active Directory (Azure AD) tokens (for clients).
  - Higher event throughput that is 25% better than the legacy Log Analytics (MMA/OMS) agents.
-- **A single agent** that servers all data collection needs across servers and client devices running Windows 10 or 11. A single agent is the goal, although Azure Monitor Agent currently converges with the Log Analytics agents.
+- **A single agent** that serves all data collection needs across [supported](#supported-operating-systems) servers and client devices. A single agent is the goal, although Azure Monitor Agent is currently converging with the Log Analytics agents.
## Consolidating legacy agents
-Deploy Azure Monitor Agent on all new virtual machines, scale sets and on-premises servers to collect data for [supported services and features](#supported-services-and-features).
+Deploy Azure Monitor Agent on all new virtual machines, scale sets, and on-premises servers to collect data for [supported services and features](#supported-services-and-features).
If you have machines already deployed with legacy Log Analytics agents, we recommend you [migrate to Azure Monitor Agent](./azure-monitor-agent-migration.md) as soon as possible. The legacy Log Analytics agent will not be supported after August 2024.
Azure Monitor Agent uses [data collection rules](../essentials/data-collection-r
> [!NOTE] > On rsyslog-based systems, Azure Monitor Linux Agent adds forwarding rules to the default ruleset defined in the rsyslog configuration. If multiple rulesets are used, inputs bound to non-default ruleset(s) are **not** forwarded to Azure Monitor Agent. For more information about multiple rulesets in rsyslog, see the [official documentation](https://www.rsyslog.com/doc/master/concepts/multi_ruleset.html).
+ > [!NOTE]
+ > Azure Monitor Agent also supports the Azure service [SQL Best Practices Assessment](/sql/sql-server/azure-arc/assess/), which is currently generally available. For more information, see [Configure best practices assessment using Azure Monitor Agent](/sql/sql-server/azure-arc/assess#enable-best-practices-assessment).
+
## Supported services and features
In addition to the generally available data collection listed above, Azure Monitor Agent also supports these Azure Monitor features in preview:
In addition to the generally available data collection listed above, Azure Monit
| Azure service | Current support | Other extensions installed | More information | | : | : | : | : | | [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) | Public preview | <ul><li>Azure Security Agent extension</li><li>SQL Advanced Threat Protection extension</li><li>SQL Vulnerability Assessment extension</li></ul> | [Auto-deployment of Azure Monitor Agent (Preview)](../../defender-for-cloud/auto-deploy-azure-monitoring-agent.md) |
-| [Microsoft Sentinel](../../sentinel/overview.md) | <ul><li>Windows Security Events: [Generally available](../../sentinel/connect-windows-security-events.md?tabs=AMA)</li><li>Windows Forwarding Event (WEF): [Public preview](../../sentinel/data-connectors-reference.md#windows-forwarded-events-preview)</li><li>Windows DNS logs: [Public preview](../../sentinel/connect-dns-ama.md)</li><li>Linux Syslog CEF: [Public preview](../../sentinel/connect-cef-ama.md#set-up-the-common-event-format-cef-via-ama-connector)</li></ul> | Sentinel DNS extension, if youΓÇÖre collecting DNS logs. For all other data types, you just need the Azure Monitor Agent extension. | - |
+| [Microsoft Sentinel](../../sentinel/overview.md) | <ul><li>Windows Security Events: [Generally available](../../sentinel/connect-windows-security-events.md?tabs=AMA)</li><li>Windows Forwarding Event (WEF): [Public preview](../../sentinel/data-connectors/windows-forwarded-events.md)</li><li>Windows DNS logs: [Public preview](../../sentinel/connect-dns-ama.md)</li><li>Linux Syslog CEF: [Public preview](../../sentinel/connect-cef-ama.md#set-up-the-common-event-format-cef-via-ama-connector)</li></ul> | Sentinel DNS extension, if youΓÇÖre collecting DNS logs. For all other data types, you just need the Azure Monitor Agent extension. | - |
| [Change Tracking](../../automation/change-tracking/overview.md) | Public preview | Change Tracking extension | [Change Tracking and Inventory using Azure Monitor Agent](../../automation/change-tracking/overview-monitoring-agent.md) | | [Update Management](../../automation/update-management/overview.md) (available without Azure Monitor Agent) | Use Update Management v2 - Public preview | None | [Update management center (Public preview) documentation](../../update-center/index.yml) | | [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | Connection Monitor: Public preview | Azure NetworkWatcher extension | [Monitor network connectivity by using Azure Monitor Agent](../../network-watcher/azure-monitor-agent-with-connection-monitor.md) |
-| [SQL Best Practices Assessment](/sql/sql-server/azure-arc/assess/) | Generally available | | [Configure best practices assessment using Azure Monitor Agent](/sql/sql-server/azure-arc/assess#enable-best-practices-assessment) |
> [!NOTE] > Features and services listed above in preview **may not be available in Azure Government and China clouds**. They will be available typically within a month *after* the features/services become generally available.
azure-monitor Alerts Metric Near Real Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-metric-near-real-time.md
Previously updated : 9/14/2022 Last updated : 3/8/2023 ms.reviewer: harelbr
Here's the full list of Azure Monitor metric sources supported by the newer aler
|Microsoft.ClassicStorage/storageAccounts/fileServices | Yes | No | [Azure Files storage accounts (classic)](../essentials/metrics-supported.md#microsoftclassicstoragestorageaccountsfileservices) | |Microsoft.ClassicStorage/storageAccounts/queueServices | Yes | No | [Azure Queue Storage accounts (classic)](../essentials/metrics-supported.md#microsoftclassicstoragestorageaccountsqueueservices) | |Microsoft.ClassicStorage/storageAccounts/tableServices | Yes | No | [Azure Table Storage accounts (classic)](../essentials/metrics-supported.md#microsoftclassicstoragestorageaccountstableservices) |
+|Microsoft.CloudTest/hostedpools | Yes | No | [1ES Hosted Pools](../essentials/metrics-supported.md#microsoftcloudtesthostedpools) |
+|Microsoft.CloudTest/pools | Yes | No | [CloudTest Pools](../essentials/metrics-supported.md#microsoftcloudtestpools) |
|Microsoft.CognitiveServices/accounts | Yes | No | [Azure Cognitive Services](../essentials/metrics-supported.md#microsoftcognitiveservicesaccounts) | |Microsoft.Compute/cloudServices | Yes | No | [Azure Cloud Services](../essentials/metrics-supported.md#microsoftcomputecloudservices) | |Microsoft.Compute/cloudServices/roles | Yes | No | [Azure Cloud Services roles](../essentials/metrics-supported.md#microsoftcomputecloudservicesroles) | |Microsoft.Compute/virtualMachines | Yes | Yes<sup>1</sup> | [Azure Virtual Machines](../essentials/metrics-supported.md#microsoftcomputevirtualmachines) | |Microsoft.Compute/virtualMachineScaleSets | Yes | No |[Azure Virtual Machine Scale Sets](../essentials/metrics-supported.md#microsoftcomputevirtualmachinescalesets) |
+|Microsoft.Communication/CommunicationServices | Yes | No |[Communication Services](../essentials/metrics-supported.md#microsoftcommunicationcommunicationservices) |
|Microsoft.ConnectedVehicle/platformAccounts | Yes | No |[Connected Vehicle Platform Accounts](../essentials/metrics-supported.md) | |Microsoft.ContainerInstance/containerGroups | Yes| No | [Container groups](../essentials/metrics-supported.md#microsoftcontainerinstancecontainergroups) | |Microsoft.ContainerRegistry/registries | No | No | [Azure Container Registry](../essentials/metrics-supported.md#microsoftcontainerregistryregistries) |
Here's the full list of Azure Monitor metric sources supported by the newer aler
|Microsoft.MachineLearningServices/workspaces | Yes | No | [Azure Machine Learning](../essentials/metrics-supported.md#microsoftmachinelearningservicesworkspaces) | |Microsoft.MachineLearningServices/workspaces/onlineEndpoints | Yes | No | Azure Machine Learning endpoints | |Microsoft.MachineLearningServices/workspaces/onlineEndpoints/deployments | Yes | No | Azure Machine Learning endpoint deployments |
+|Microsoft.ManagedNetworkFabric/networkDevices | Yes | No |[Managed Network Fabric Devices](../essentials/metrics-supported.md#microsoftmanagednetworkfabricnetworkdevices) |
|Microsoft.Maps/accounts | Yes | No | [Azure Maps accounts](../essentials/metrics-supported.md#microsoftmapsaccounts) | |Microsoft.Medi#microsoftmediamediaservices) |
+|Microsoft.Medi#microsoftmediamediaservicesliveevents) |
|Microsoft.Medi#microsoftmediamediaservicesstreamingendpoints) |
+|Microsoft.Monitor/accounts | Yes | No | [Azure Monitor workspaces](../essentials/metrics-supported.md#microsoftmonitoraccounts) |
|Microsoft.NetApp/netAppAccounts/capacityPools | Yes | Yes | [Azure NetApp Files capacity pools](../essentials/metrics-supported.md#microsoftnetappnetappaccountscapacitypools) | |Microsoft.NetApp/netAppAccounts/capacityPools/volumes | Yes | Yes | [Azure NetApp Files volumes](../essentials/metrics-supported.md#microsoftnetappnetappaccountscapacitypoolsvolumes) | |Microsoft.Network/applicationGateways | Yes | No | [Azure Application Gateway](../essentials/metrics-supported.md#microsoftnetworkapplicationgateways) |
Here's the full list of Azure Monitor metric sources supported by the newer aler
|Microsoft.Purview/accounts | Yes | No | [Azure Purview accounts](../essentials/metrics-supported.md#microsoftpurviewaccounts) | |Microsoft.RecoveryServices/vaults | Yes | Yes | [Recovery Services vaults](../essentials/metrics-supported.md) | |Microsoft.Relay/namespaces | Yes | No | [Relays](../essentials/metrics-supported.md#microsoftrelaynamespaces) |
-|Microsoft.Search/searchServices | No | No | [Search services](../essentials/metrics-supported.md#microsoftsearchsearchservices) |
+|Microsoft.Search/searchServices | Yes | No | [Search services](../essentials/metrics-supported.md#microsoftsearchsearchservices) |
|Microsoft.ServiceBus/namespaces | Yes | No | [Azure Service Bus](../essentials/metrics-supported.md#microsoftservicebusnamespaces) | |Microsoft.SignalRService/WebPubSub | Yes | No | [Azure Web PubSub service](../essentials/metrics-supported.md#microsoftsignalrservicewebpubsub) | |Microsoft.Sql/managedInstances | No | No | [Azure SQL Managed Instance](../essentials/metrics-supported.md#microsoftsqlmanagedinstances) |
azure-monitor Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-ad-authentication.md
Application Insights now supports [Azure Active Directory (Azure AD) authentication](../../active-directory/authentication/overview-authentication.md#what-is-azure-active-directory-authentication). By using Azure AD, you can ensure that only authenticated telemetry is ingested in your Application Insights resources.
-Using various authentication systems can be cumbersome and risky because it's difficult to manage credentials at scale. You can now choose to [opt out of local authentication](#disable-local-authentication) to ensure only telemetry exclusively authenticated by using [managed identities](../../active-directory/managed-identities-azure-resources/overview.md) and [Azure AD](../../active-directory/fundamentals/active-directory-whatis.md) is ingested in your resource. This feature is a step to enhance the security and reliability of the telemetry used to make critical operational ([alerting](../alerts/alerts-overview.md#what-are-azure-monitor-alerts)and [autoscale](../autoscale/autoscale-overview.md#overview-of-autoscale-in-microsoft-azure)) and business decisions.
+Using various authentication systems can be cumbersome and risky because it's difficult to manage credentials at scale. You can now choose to [opt out of local authentication](#disable-local-authentication) to ensure only telemetry exclusively authenticated by using [managed identities](../../active-directory/managed-identities-azure-resources/overview.md) and [Azure AD](../../active-directory/fundamentals/active-directory-whatis.md) is ingested in your resource. This feature is a step to enhance the security and reliability of the telemetry used to make critical operational ([alerting](../alerts/alerts-overview.md#what-are-azure-monitor-alerts) and [autoscale](../autoscale/autoscale-overview.md#overview-of-autoscale-in-azure)) and business decisions.
## Prerequisites
The following SDKs and features are unsupported for use with Azure AD authentica
1. If you don't already have an identity, create one by using either a managed identity or a service principal.
- 1. We recommend using a managed identity:
+ - We recommend using a managed identity:
[Set up a managed identity for your Azure service](../../active-directory/managed-identities-azure-resources/services-support-managed-identities.md) (Virtual Machines or App Service).
- 1. We don't recommend using a service principal:
+ - We don't recommend using a service principal:
For more information on how to create an Azure AD application and service principal that can access resources, see [Create a service principal](../../active-directory/develop/howto-create-service-principal-portal.md).
You can inspect network traffic by using a tool like Fiddler. To enable the traf
} ```
-Or add the following JVM args while running your application:`-Djava.net.useSystemProxies=true -Dhttps.proxyHost=localhost -Dhttps.proxyPort=8888`
+Or add the following JVM args while running your application: `-Djava.net.useSystemProxies=true -Dhttps.proxyHost=localhost -Dhttps.proxyPort=8888`
If Azure AD is enabled in the agent, outbound traffic will include the HTTP header `Authorization`.
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
Legacy table: availability
|operation_ParentId|string|OperationParentId|string| |operation_SyntheticSource|string|OperationSyntheticSource|string| |performanceBucket|string|PerformanceBucket|string|
-|sdkVersion|string|SdkVersion|string|
+|sdkVersion|string|SDKVersion|string|
|session_Id|string|SessionId|string| |size|real|Size|real| |success|string|Success|Bool|
Legacy table: browserTimings
|performanceBucket|string|PerformanceBucket|string| |processingDuration|real|ProcessingDurationMs|real| |receiveDuration|real|ReceiveDurationMs|real|
-|sdkVersion|string|SdkVersion|string|
+|sdkVersion|string|SDKVersion|string|
|sendDuration|real|SendDurationMs|real| |session_Id|string|SessionId|string| |timestamp|datetime|TimeGenerated|datetime|
Legacy table: dependencies
|operation_SyntheticSource|string|OperationSyntheticSource|string| |performanceBucket|string|PerformanceBucket|string| |resultCode|string|ResultCode|string|
-|sdkVersion|string|SdkVersion|string|
+|sdkVersion|string|SDKVersion|string|
|session_Id|string|SessionId|string| |success|string|Success|Bool| |target|string|Target|string|
Legacy table: customEvents
|operation_Name|string|OperationName|string| |operation_ParentId|string|OperationParentId|string| |operation_SyntheticSource|string|OperationSyntheticSource|string|
-|sdkVersion|string|SdkVersion|string|
+|sdkVersion|string|SDKVersion|string|
|session_Id|string|SessionId|string| |timestamp|datetime|TimeGenerated|datetime| |user_AccountId|string|UserAccountId|string|
Legacy table: customMetrics
|operation_Name|string|OperationName|string| |operation_ParentId|string|OperationParentId|string| |operation_SyntheticSource|string|OperationSyntheticSource|string|
-|sdkVersion|string|SdkVersion|string|
+|sdkVersion|string|SDKVersion|string|
|session_Id|string|SessionId|string| |timestamp|datetime|TimeGenerated|datetime| |user_AccountId|string|UserAccountId|string|
Legacy table: pageViews
|operation_ParentId|string|OperationParentId|string| |operation_SyntheticSource|string|OperationSyntheticSource|string| |performanceBucket|string|PerformanceBucket|string|
-|sdkVersion|string|SdkVersion|string|
+|sdkVersion|string|SDKVersion|string|
|session_Id|string|SessionId|string| |timestamp|datetime|TimeGenerated|datetime| |url|string|Url|string|
Legacy table: performanceCounters
|operation_Name|string|OperationName|string| |operation_ParentId|string|OperationParentId|string| |operation_SyntheticSource|string|OperationSyntheticSource|string|
-|sdkVersion|string|SdkVersion|string|
+|sdkVersion|string|SDKVersion|string|
|session_Id|string|SessionId|string| |timestamp|datetime|TimeGenerated|datetime| |user_AccountId|string|UserAccountId|string|
Legacy table: requests
|operation_SyntheticSource|string|OperationSyntheticSource|string| |performanceBucket|string|PerformanceBucket|String| |resultCode|string|ResultCode|String|
-|sdkVersion|string|SdkVersion|string|
+|sdkVersion|string|SDKVersion|string|
|session_Id|string|SessionId|string| |source|string|Source|String| |success|string|Success|Bool|
Legacy table: exceptions
|outerMethod|string|OuterMethod|string| |outerType|string|OuterType|string| |problemId|string|ProblemId|string|
-|sdkVersion|string|SdkVersion|string|
+|sdkVersion|string|SDKVersion|string|
|session_Id|string|SessionId|string| |severityLevel|int|SeverityLevel|int| |timestamp|datetime|TimeGenerated|datetime|
Legacy table: traces
|operation_Name|string|OperationName|string| |operation_ParentId|string|OperationParentId|string| |operation_SyntheticSource|string|OperationSyntheticSource|string|
-|sdkVersion|string|SdkVersion|string|
+|sdkVersion|string|SDKVersion|string|
|session_Id|string|SessionId|string| |severityLevel|int|SeverityLevel|int| |timestamp|datetime|TimeGenerated|datetime|
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
Dependencies
Autocollected metrics
-* Micrometer, including Spring Boot Actuator metrics
+* Micrometer Metrics, including Spring Boot Actuator metrics
* JMX Metrics #### [Node.js](#tab/nodejs)
The following table represents the currently supported custom telemetry types:
| **Java** | | | | | | | | | &nbsp;&nbsp;&nbsp;OpenTelemetry API | | Yes | Yes | Yes | | Yes | | | &nbsp;&nbsp;&nbsp;Logback, Log4j, JUL | | | | Yes | | | Yes |
-| &nbsp;&nbsp;&nbsp;Micrometer | | Yes | | | | | |
+| &nbsp;&nbsp;&nbsp;Micrometer Metrics | | Yes | | | | | |
| &nbsp;&nbsp;&nbsp;AI Classic API | Yes | Yes | Yes | Yes | Yes | Yes | Yes | | | | | | | | | | | **Node.js** | | | | | | | |
azure-monitor Autoscale Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-best-practices.md
Title: Best practices for autoscale
-description: Autoscale patterns in Azure for Web Apps, virtual machine scale sets, and Cloud Services
+description: Autoscale patterns in the Web Apps feature of Azure App Service, Azure Virtual Machine Scale Sets, and Azure Cloud Services.
Last updated 09/13/2022
-# Best practices for Autoscale
-Azure Monitor autoscale applies only to [Virtual Machine Scale Sets](https://azure.microsoft.com/services/virtual-machine-scale-sets/), [Cloud Services](https://azure.microsoft.com/services/cloud-services/), [App Service - Web Apps](https://azure.microsoft.com/services/app-service/web/), and [API Management services](../../api-management/api-management-key-concepts.md).
+# Best practices for autoscale
+Azure Monitor autoscale applies only to [Azure Virtual Machine Scale Sets](https://azure.microsoft.com/services/virtual-machine-scale-sets/), [Azure Cloud Services](https://azure.microsoft.com/services/cloud-services/), the [Web Apps feature of Azure App Service](https://azure.microsoft.com/services/app-service/web/), and [Azure API Management](../../api-management/api-management-key-concepts.md).
## Autoscale concepts
-* A resource can have only *one* autoscale setting
-* An autoscale setting can have one or more profiles and each profile can have one or more autoscale rules.
+* A resource can have only *one* autoscale setting.
+* An autoscale setting can have one or more profiles, and each profile can have one or more autoscale rules.
* An autoscale setting scales instances horizontally, which is *out* by increasing the instances and *in* by decreasing the number of instances.
- An autoscale setting has a maximum, minimum, and default value of instances.
+* An autoscale setting has a maximum, minimum, and default value of instances.
* An autoscale job always reads the associated metric to scale by, checking if it has crossed the configured threshold for scale-out or scale-in. You can view a list of metrics that autoscale can scale by at [Azure Monitor autoscaling common metrics](autoscale-common-metrics.md).
-* All thresholds are calculated at an instance level. For example, "scale out by one instance when average CPU > 80% when instance count is 2", means scale-out when the average CPU across all instances is greater than 80%.
-* All autoscale failures are logged to the Activity Log. You can then configure an [activity log alert](../alerts/activity-log-alerts.md) so that you can be notified via email, SMS, or webhooks whenever there's an autoscale failure.
-* Similarly, all successful scale actions are posted to the Activity Log. You can then configure an activity log alert so that you can be notified via email, SMS, or webhooks whenever there's a successful autoscale action. You can also configure email or webhook notifications to get notified for successful scale actions via the notifications tab on the autoscale setting.
+* All thresholds are calculated at an instance level. An example is "scale out by one instance when average CPU > 80% when instance count is 2." It means scale-out when the average CPU across all instances is greater than 80%.
+* All autoscale failures are logged to the activity log. You can then configure an [activity log alert](../alerts/activity-log-alerts.md) so that you can be notified via email, SMS, or webhooks whenever there's an autoscale failure.
+* Similarly, all successful scale actions are posted to the activity log. You can then configure an activity log alert so that you can be notified via email, SMS, or webhooks whenever there's a successful autoscale action. You can also configure email or webhook notifications to get notified for successful scale actions via the notifications tab on the autoscale setting.
## Autoscale best practices

Use the following best practices as you use autoscale.

### Ensure the maximum and minimum values are different and have an adequate margin between them
-If you have a setting that has minimum=2, maximum=2 and the current instance count is 2, no scale action can occur. Keep an adequate margin between the maximum and minimum instance counts, which are inclusive. Autoscale always scales between these limits.
+If you have a setting that has minimum=2, maximum=2, and the current instance count is 2, no scale action can occur. Keep an adequate margin between the maximum and minimum instance counts, which are inclusive. Autoscale always scales between these limits.
-### Manual scaling is reset by autoscale min and max
-If you manually update the instance count to a value above or below the maximum, the autoscale engine automatically scales back to the minimum (if below) or the maximum (if above). For example, you set the range between 3 and 6. If you have one running instance, the autoscale engine scales to three instances on its next run. Likewise, if you manually set the scale to eight instances, on the next run autoscale will scale it back to six instances on its next run. Manual scaling is temporary unless you reset the autoscale rules as well.
+### Manual scaling is reset by autoscale minimum and maximum
+If you manually update the instance count to a value above or below the maximum, the autoscale engine automatically scales back to the minimum (if below) or the maximum (if above). For example, you set the range between 3 and 6. If you have one running instance, the autoscale engine scales to three instances on its next run. Likewise, if you manually set the scale to eight instances, autoscale will scale it back to six instances on its next run. Manual scaling is temporary unless you also reset the autoscale rules.
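The reset behavior described above amounts to clamping the manual count into the configured range on the engine's next run. Here's a minimal sketch in Python (illustrative only; the function name is hypothetical and not part of any Azure SDK):

```python
def reconcile_instance_count(manual_count: int, minimum: int, maximum: int) -> int:
    """Return the count the autoscale engine settles on after a manual change.

    Manual values outside the [minimum, maximum] range are pulled back
    inside the range on the engine's next run.
    """
    return max(minimum, min(manual_count, maximum))

# With a range of 3-6: one running instance is scaled up to 3,
# and a manual setting of 8 is scaled back to 6.
print(reconcile_instance_count(1, 3, 6))  # 3
print(reconcile_instance_count(8, 3, 6))  # 6
```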
### Always use a scale-out and scale-in rule combination that performs an increase and decrease
-If you use only one part of the combination, autoscale will only take action in a single direction (scale out, or in) until it reaches the maximum, or minimum instance counts, as defined in the profile. This isn't optimal, ideally you want your resource to scale up at times of high usage to ensure availability. Similarly, at times of low usage you want your resource to scale down, so you can realize cost savings.
+If you use only one part of the combination, autoscale only takes action in a single direction (scale out or in) until it reaches the maximum or minimum instance count, as defined in the profile. This situation isn't optimal. Ideally, you want your resource to scale out at times of high usage to ensure availability. Similarly, at times of low usage, you want your resource to scale in so that you can realize cost savings.
-When you use a scale-in and scale-out rule, ideally use the same metric to control both. Otherwise, itΓÇÖs possible that the scale-in and scale-out conditions could be met at the same time resulting in some level of flapping. For example, the following rule combination isn't* recommended because there's no scale-in rule for memory usage:
+When you use a scale-in and scale-out rule, ideally use the same metric to control both. Otherwise, it's possible that the scale-in and scale-out conditions could be met at the same time and result in some level of flapping. For example, we don't recommend the following rule combination because there's no scale-in rule for memory usage:
-* If CPU > 90%, scale-out by 1
-* If Memory > 90%, scale-out by 1
-* If CPU < 45%, scale-in by 1
+* If CPU > 90%, scale out by 1
+* If Memory > 90%, scale out by 1
+* If CPU < 45%, scale in by 1
-In this example, you can have a situation in which the memory usage is over 90% but the CPU usage is under 45%. This can lead to flapping for as long as both conditions are met.
+In this example, you can have a situation in which the memory usage is over 90% but the CPU usage is under 45%. This scenario can lead to flapping for as long as both conditions are met.
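To see why this rule set can flap, here's a small sketch of the evaluation (an assumed simplification of the engine's "any out-rule / all in-rules" behavior, not actual autoscale code):

```python
def evaluate(cpu: float, memory: float) -> str:
    """Evaluate the (not recommended) rule set from the example above.

    Scale-out fires if ANY out-rule is met; scale-in fires only if
    ALL in-rules are met -- and here the only in-rule is for CPU.
    """
    scale_out = cpu > 90 or memory > 90
    scale_in = cpu < 45  # no scale-in rule exists for memory
    if scale_out and scale_in:
        return "flapping"  # both directions triggered at the same time
    if scale_out:
        return "scale out"
    if scale_in:
        return "scale in"
    return "no action"

# Memory over 90% and CPU under 45%: both conditions are met at once.
print(evaluate(cpu=40, memory=95))  # flapping
```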
### Choose the appropriate statistic for your diagnostics metric
-For diagnostics metrics, you can choose among *Average*, *Minimum*, *Maximum* and *Total* as a metric to scale by. The most common statistic is *Average*.
+For diagnostics metrics, you can choose among **Average**, **Minimum**, **Maximum**, and **Total** as a metric to scale by. The most common statistic is **Average**.
### Considerations for scaling threshold values for special metrics
-For special metrics such as Storage or Service Bus Queue length metric, the threshold is the average number of messages available per current number of instances. Carefully choose the threshold value for this metric.
+For special metrics such as an Azure Storage or Azure Service Bus queue length metric, the threshold is the average number of messages available per current number of instances. Carefully choose the threshold value for this metric.
-Let's illustrate it with an example to ensure you understand the behavior better.
+Let's illustrate it with an example to ensure you understand the behavior better:
-* Increase instances by 1 count when Storage Queue message count >= 50
-* Decrease instances by 1 count when Storage Queue message count <= 10
+* Increase instances by 1 count when Storage queue message count >= 50
+* Decrease instances by 1 count when Storage queue message count <= 10
Consider the following sequence:
-1. There are two storage queue instances.
-2. Messages keep coming and when you review the storage queue, the total count reads 50. You might assume that autoscale should start a scale-out action. However, note that it's still 50/2 = 25 messages per instance. So, scale-out doesn't occur. For the first scale-out to happen, the total message count in the storage queue should be 100.
-3. Next, assume that the total message count reaches 100.
-4. A third storage queue instance is added due to a scale-out action. The next scale-out action won't happen until the total message count in the queue reaches 150 because 150/3 = 50.
-5. Now the number of messages in the queue gets smaller. With three instances, the first scale-in action happens when the total messages in all queues add up to 30 because 30/3 = 10 messages per instance, which is the scale-in threshold.
+1. There are two Storage queue instances.
+1. Messages keep coming and when you review the Storage queue, the total count reads 50. You might assume that autoscale should start a scale-out action. However, notice that it's still 50/2 = 25 messages per instance. So, scale-out doesn't occur. For the first scale-out action to happen, the total message count in the Storage queue should be 100.
+1. Next, assume that the total message count reaches 100.
+1. A third Storage queue instance is added because of a scale-out action. The next scale-out action won't happen until the total message count in the queue reaches 150 because 150/3 = 50.
+1. Now the number of messages in the queue gets smaller. With three instances, the first scale-in action happens when the total messages in all queues add up to 30 because 30/3 = 10 messages per instance, which is the scale-in threshold.
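The per-instance arithmetic in the sequence above can be sketched as follows (illustrative only; `decide_queue_scaling` is a hypothetical helper, not an Azure API):

```python
def decide_queue_scaling(total_messages: int, instance_count: int) -> str:
    """Apply the example rules: the threshold is messages PER INSTANCE,
    so scale out at >= 50 per instance and scale in at <= 10 per instance."""
    per_instance = total_messages / instance_count
    if per_instance >= 50:
        return "scale out"
    if per_instance <= 10:
        return "scale in"
    return "no action"

# Two instances and 50 total messages is only 25 per instance: no action.
print(decide_queue_scaling(50, 2))   # no action
# The first scale-out needs 100 total messages, because 100 / 2 = 50.
print(decide_queue_scaling(100, 2))  # scale out
# With three instances, scale-in happens at 30 total, because 30 / 3 = 10.
print(decide_queue_scaling(30, 3))   # scale in
```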
### Considerations for scaling when multiple rules are configured in a profile
-There are cases where you may have to set multiple rules in a profile. The following autoscale rules are used by the autoscale engine when multiple rules are set.
+There are cases where you might have to set multiple rules in a profile. The following autoscale rules are used by the autoscale engine when multiple rules are set:
-On *scale-out*, autoscale runs if any rule is met.
-On *scale-in*, autoscale require all rules to be met.
+- On *scale-out*, autoscale runs if any rule is met.
+- On *scale-in*, autoscale requires all rules to be met.
-To illustrate, assume that you have the following four autoscale rules:
+To illustrate, assume that you have four autoscale rules:
-* If CPU < 30%, scale-in by 1
-* If Memory < 50%, scale-in by 1
-* If CPU > 75%, scale-out by 1
-* If Memory > 75%, scale-out by 1
+* If CPU < 30%, scale in by 1
+* If Memory < 50%, scale in by 1
+* If CPU > 75%, scale out by 1
+* If Memory > 75%, scale out by 1
-Then the follow occurs:
+Then the following action occurs:
* If CPU is 76% and Memory is 50%, we scale out.
* If CPU is 50% and Memory is 76%, we scale out.
-On the other hand, if CPU is 25% and memory is 51% autoscale does **not** scale-in. In order to scale-in, CPU must be 29% and Memory 49%.
+On the other hand, if CPU is 25% and Memory is 51%, autoscale *doesn't* scale in. To scale in, CPU must be 29% and Memory 49%.
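The any-rule/all-rules behavior above can be sketched like this (a simplified model of the engine, not its actual implementation):

```python
def autoscale_decision(cpu: float, memory: float) -> str:
    """Illustrative evaluation of the four example rules.

    On scale-out, autoscale runs if ANY out-rule is met.
    On scale-in, autoscale requires ALL in-rules to be met.
    """
    out_rules = [cpu > 75, memory > 75]
    in_rules = [cpu < 30, memory < 50]
    if any(out_rules):
        return "scale out"
    if all(in_rules):
        return "scale in"
    return "no action"

print(autoscale_decision(cpu=76, memory=50))  # scale out
print(autoscale_decision(cpu=25, memory=51))  # no action: Memory rule not met
print(autoscale_decision(cpu=29, memory=49))  # scale in: all in-rules met
```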
### Always select a safe default instance count
-The default instance count is important because autoscale scales your service to that count when metrics aren't available. Therefore, select a default instance count that's safe for your workloads.
+The default instance count is important because autoscale scales your service to that count when metrics aren't available. As a result, select a default instance count that's safe for your workloads.
### Configure autoscale notifications
-Autoscale will post to the Activity Log if any of the following conditions occur:
+Autoscale posts to the activity log if any of the following conditions occur:
* Autoscale issues a scale operation.
* Autoscale service successfully completes a scale action.
* Autoscale service fails to take a scale action.
* Metrics aren't available for autoscale service to make a scale decision.
* Metrics are available (recovery) again to make a scale decision.
-* Autoscale detects flapping and aborts the scale attempt. You'll see a log type of `Flapping` in this situation. If you see this, consider whether your thresholds are too narrow.
-* Autoscale detects flapping but is still able to successfully scale. You'll see a log type of `FlappingOccurred` in this situation. If you see this, the autoscale engine has attempted to scale (for example, from 4 instances to 2), but has determined that this would cause flapping. Instead, the autoscale engine has scaled to a different number of instances (for example, using 3 instances instead of 2), which no longer causes flapping, so it has scaled to this number of instances.
+* Autoscale detects flapping and aborts the scale attempt. You see a log type of `Flapping` in this situation. If you see this log type, consider whether your thresholds are too narrow.
+* Autoscale detects flapping but is still able to successfully scale. You see a log type of `FlappingOccurred` in this situation. If you see this log type, the autoscale engine has attempted to scale (for example, from four instances to two) but has determined that this change would cause flapping. Instead, the autoscale engine has scaled to a different number of instances (for example, using three instances instead of two), which no longer causes flapping, so it has scaled to this number of instances.
-You can also use an Activity Log alert to monitor the health of the autoscale engine. Here are examples to [create an Activity Log Alert to monitor all autoscale engine operations on your subscription](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/monitor-autoscale-alert) or to [create an Activity Log Alert to monitor all failed autoscale scale in/scale out operations on your subscription](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/monitor-autoscale-failed-alert).
+You can also use an activity log alert to monitor the health of the autoscale engine. One example shows how to [create an activity log alert to monitor all autoscale engine operations on your subscription](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/monitor-autoscale-alert). Another example shows how to [create an activity log alert to monitor all failed autoscale scale-in/scale-out operations on your subscription](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/monitor-autoscale-failed-alert).
In addition to using activity log alerts, you can also configure email or webhook notifications to get notified for scale actions via the notifications tab on the autoscale setting.
-## Send data securely using TLS 1.2
+## Send data securely by using TLS 1.2
-To ensure the security of data in transit to Azure Monitor, we strongly encourage you to configure the agent to use at least Transport Layer Security (TLS) 1.2. Older versions of TLS/Secure Sockets Layer (SSL) have been found to be vulnerable and while they still currently work to allow backwards compatibility, they are **not recommended**, and the industry is quickly moving to abandon support for these older protocols.
+To ensure the security of data in transit to Azure Monitor, we strongly encourage you to configure the agent to use at least Transport Layer Security (TLS) 1.2. Older versions of TLS/Secure Sockets Layer (SSL) have been found to be vulnerable. Although they still currently work to allow backwards compatibility, we *don't* recommend them. The industry is quickly moving to abandon support for these older protocols.
-The [PCI Security Standards Council](https://www.pcisecuritystandards.org/) has set a deadline of [June 30th, 2018](https://www.pcisecuritystandards.org/pdfs/PCI_SSC_Migrating_from_SSL_and_Early_TLS_Resource_Guide.pdf) to disable older versions of TLS/SSL and upgrade to more secure protocols. Once Azure drops legacy support, if your agents can't communicate over at least TLS 1.2 you wouldn't be able to send data to Azure Monitor Logs.
+The [PCI Security Standards Council](https://www.pcisecuritystandards.org/) has set a deadline of [June 30, 2018](https://www.pcisecuritystandards.org/pdfs/PCI_SSC_Migrating_from_SSL_and_Early_TLS_Resource_Guide.pdf), to disable older versions of TLS/SSL and upgrade to more secure protocols. After Azure drops legacy support, if your agents can't communicate over at least TLS 1.2, you won't be able to send data to Azure Monitor Logs.
-We recommend you do NOT explicit set your agent to only use TLS 1.2 unless absolutely necessary. Allowing the agent to automatically detect, negotiate, and take advantage of future security standards is preferable. Otherwise you may miss the added security of the newer standards and possibly experience problems if TLS 1.2 is ever deprecated in favor of those newer standards.
+We recommend that you *don't* explicitly set your agent to only use TLS 1.2 unless necessary. Allowing the agent to automatically detect, negotiate, and take advantage of future security standards is preferable. Otherwise, you might miss the added security of the newer standards and possibly experience problems if TLS 1.2 is ever deprecated in favor of those newer standards.
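As an illustration of the recommended approach in application code (this is separate from the agent's own configuration mechanism), Python's standard `ssl` module lets you set a floor of TLS 1.2 while leaving the ceiling open for newer protocol versions:

```python
import ssl

# Create a client context that refuses anything older than TLS 1.2
# but still allows negotiation up to newer versions such as TLS 1.3.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2
# Deliberately leave maximum_version at its default so the connection
# can take advantage of future security standards automatically.

print(context.minimum_version >= ssl.TLSVersion.TLSv1_2)  # True
```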
-
-## Next Steps
+## Next steps
- [Autoscale flapping](./autoscale-flapping.md)
-- [Create an Activity Log Alert to monitor all autoscale engine operations on your subscription.](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/monitor-autoscale-alert)
-- [Create an Activity Log Alert to monitor all failed autoscale scale in/scale out operations on your subscription](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/monitor-autoscale-failed-alert)
+- [Create an activity log alert to monitor all autoscale engine operations on your subscription](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/monitor-autoscale-alert)
+- [Create an activity log alert to monitor all failed autoscale scale-in/scale-out operations on your subscription](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/monitor-autoscale-failed-alert)
azure-monitor Autoscale Common Scale Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-common-scale-patterns.md
Title: Overview of common autoscale patterns
-description: Learn some of the common patterns to auto scale your resource in Azure.
+description: Learn some of the common patterns to use with autoscale for your resource in Azure.
Last updated 11/17/2022
# Overview of common autoscale patterns
-Autoscale settings help ensure that you have the right amount of resources running to handle the fluctuating load of your application. You can configure autoscale settings to be triggered based on metrics that indicate load or performance, or triggered at a scheduled date and time.
+Autoscale settings help ensure that you have the right amount of resources running to handle the fluctuating load of your application. You can configure autoscale settings to be triggered based on metrics that indicate load or performance, or triggered at a scheduled date and time.
-Azure autoscale supports many resource types. For more information about supported resources, see [autoscale supported resources](./autoscale-overview.md#supported-services-for-autoscale).
+Azure autoscale supports many resource types. For more information about supported resources, see [Autoscale supported resources](./autoscale-overview.md#supported-services-for-autoscale).
This article describes some of the common patterns you can use to scale your resources in Azure.

## Prerequisites
-This article assumes that you're familiar with auto scale. [Get started here to scale your resource](./autoscale-get-started.md).
+This article assumes that you're familiar with autoscale. For more information, see [Get started here to scale your resource](./autoscale-get-started.md).
## Scale based on metrics
-Scale your resource based on metrics produce by the resource itself or any other resource.
+Scale your resource based on metrics produced by the resource itself or any other resource.
For example:
-* Scale your Virtual Machine Scale Set based on the CPU usage of the virtual machine.
+
+* Scale your virtual machine scale set based on the CPU usage of the virtual machine.
* Ensure a minimum number of instances.
-* Set a maximum limit on the number of instances.
+* Set a maximum limit on the number of instances.
-The image below shows a default scale condition for a Virtual Machine Scale Set
- * The **Scale rule** tab shows that the metric source is the scale set itself and the metric used is Percentage CPU.
- * The minimum number of instances running is set to 2.
- * The maximum number of instances is set to 10.
- * When the scale set starts, the default number of instances is 3.
+The following image shows a default scale condition for a virtual machine scale set:
+ * The **Scale rule** tab shows that the metric source is the scale set itself and the metric used is **Percentage CPU**.
+ * The minimum number of instances running is set to **2**.
+ * The maximum number of instances is set to **10**.
+ * When the scale set starts, the default number of instances is **3**.
+ ## Scale based on another resource's metric
-Scale a resource based on the metrics from a different resource.
-The image below shows a scale rule that is scaling a Virtual Machine Scale Set based on the number of allocated ports on a load balancer.
+Scale a resource based on the metrics from a different resource. The following image shows a scale rule that's scaling a virtual machine scale set based on the number of allocated ports on a load balancer.
## Scale differently on weekends
-You can scale your resources differently on different days of the week.
-For example, you have a Virtual Machine Scale Set and want to:
-- Set a minimum of 3 instances on weekdays, scaling based on inbound flows.
-- Scale-in to a fixed 1 instance on weekends when there's less traffic.
+You can scale your resources differently on different days of the week. For example, you might have a virtual machine scale set and want to:
+
+- Set a minimum of **3** instances on weekdays, scaling based on inbound flows.
+- Scale in to a fixed **1** instance on weekends when there's less traffic.
In this example:
-+ The weekend profile starts at 00:01, Saturday morning and ends at 04:00 on Monday morning.
-+ The end times are left blank. The weekday profile will end when the weekend profile starts and vice-versa.
-+ The default profile is irrelevant as there's no time that isn't covered by the other profiles.
+
+- The weekend profile starts at 00:01 Saturday morning and ends at 04:00 on Monday morning.
+- The end times are left blank. The weekday profile ends when the weekend profile starts and vice-versa.
+- The default profile is irrelevant because there's no time that isn't covered by the other profiles.
>[!Note]
-> Creating a recurring profile with no end time is only supported via the portal and ARM templates. For more information on creating recurring profiles with ARM templates, see [Add a recurring profile using ARM templates](./autoscale-multiprofile.md?tabs=templates#add-a-recurring-profile-using-arm-templates).
-> If the end-time is not included in the CLI command, a default end-time of 23:59 will be implemented by creating a copy of the default profile with the naming convention `"name": {\"name\": \"Auto created default scale condition\", \"for\": \"<non-default profile name>\"}`
-
+> Creating a recurring profile with no end time is only supported via the Azure portal and Azure Resource Manager templates (ARM templates). For more information on how to create recurring profiles with ARM templates, see [Add a recurring profile by using ARM templates](./autoscale-multiprofile.md?tabs=templates#add-a-recurring-profile-using-arm-templates).
+>
+> If the end time isn't included in the CLI command, a default end time of 23:59 will be implemented by creating a copy of the default profile with the naming convention `"name": {\"name\": \"Auto created default scale condition\", \"for\": \"<non-default profile name>\"}`.
+ ## Scale differently during specific events
-You can set your scale rules and instance limits differently for specific events.
-For example:
-- Set a minimum of 3 instances by default
-- For the week of Back Friday, set the minimum number of instances to 10 to handle the anticipated traffic.
+You can set your scale rules and instance limits differently for specific events. For example:
+
+- Set a minimum of **3** instances by default.
+- For the week of Black Friday, set the minimum number of instances to **10** to handle the anticipated traffic.
+ :::image type="content" source="./media/autoscale-common-scale-patterns/scale-for-event.png" alt-text="Screenshot that shows two autoscale profiles, one default and one for a specific date range." lightbox="./media/autoscale-common-scale-patterns/scale-for-event.png":::
## Scale based on custom metrics
-Scale by custom metrics generated by your application.
-For example, you have a web front end and an API tier that communicates with the backend, and you want to scale the API tier based on custom events in the front end.
+Scale by custom metrics generated by your application. For example, you might have a web front end and an API tier that communicates with the back end and you want to scale the API tier based on custom events in the front end.
-Next steps
+## Next steps
-Learn more about autoscale by referring to the following articles:
+Learn more about autoscale in the following articles:
* [Azure Monitor autoscale common metrics](./autoscale-common-metrics.md)
* [Azure Monitor autoscale custom metrics](./autoscale-custom-metric.md)
* [Autoscale with multiple profiles](./autoscale-multiprofile.md)
-* [Flapping in Autoscale](./autoscale-custom-metric.md)
+* [Flapping in autoscale](./autoscale-flapping.md)
* [Use autoscale actions to send email and webhook alert notifications](./autoscale-webhook-email.md)
* [Autoscale REST API](/rest/api/monitor/autoscalesettings)
azure-monitor Autoscale Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-diagnostics.md
Title: Autoscale diagnostics
-description: Configure diagnostics in autoscale.
+description: This article shows you how to configure diagnostics in autoscale.
Last updated 06/22/2022
-# Customer intent: As a devops admin, I want to collect and analyze autoscale metrics and logs.
+# Customer intent: As a DevOps admin, I want to collect and analyze autoscale metrics and logs.
-# Diagnostic settings in Autoscale
+# Diagnostic settings in autoscale
-Autoscale has two log categories and a set of metrics that can be enabled via the **Diagnostics settings** tab on the autoscale setting page.
+Autoscale has two log categories and a set of metrics that can be enabled via the **Diagnostics settings** tab on the **Autoscale setting** page.
+The two categories are:
-The two categories are
-* [Autoscale Evaluations](https://learn.microsoft.com/azure/azure-monitor/reference/tables/autoscaleevaluationslog) containing log data relating to rule evaluation.
-* [Autoscale Scale Actions](https://learn.microsoft.com/azure/azure-monitor/reference/tables/autoscalescaleactionslog) log data relating to each scale event.
+* [Autoscale Evaluations](/azure/azure-monitor/reference/tables/autoscaleevaluationslog) contain log data relating to rule evaluation.
+* [Autoscale Scale Actions](/azure/azure-monitor/reference/tables/autoscalescaleactionslog) contain log data relating to each scale event.
-Information about Autoscale Metrics can be found in the [Supported metrics](../essentials/metrics-supported.md#microsoftinsightsautoscalesettings) reference.
+For more information about autoscale metrics, see the [Supported metrics](../essentials/metrics-supported.md#microsoftinsightsautoscalesettings) document.
+
+You can send logs and metrics to various destinations:
-Both the logs and metrics can be sent to various destinations including:
* Log Analytics workspaces
* Storage accounts
* Event hubs
* Partner solutions
-For more information on diagnostics, see [Diagnostic settings in Azure Monitor](../essentials/diagnostic-settings.md?tabs=portal)
+For more information on diagnostics, see [Diagnostic settings in Azure Monitor](../essentials/diagnostic-settings.md?tabs=portal).
## Run history
-View the history of your autoscale activity in the run history tab. The run history tab includes a chart of resource instance counts over time and the resource activity log entries for autoscale.
+View the history of your autoscale activity on the **Run history** tab. The **Run history** tab includes a chart of resource instance counts over time and the resource activity log entries for autoscale.
-## Resource log schemas
+## Resource log schemas
-The following are the general formats for autoscale resource logs with example data included. Not all examples below are properly formed JSON as they may include a list of valid for a given field.
+The following examples are the general formats for autoscale resource logs with example data included. Not all the examples are properly formed JSON because they might include a list of valid values for a given field.
Use these logs to troubleshoot issues in autoscale. For more information, see [Troubleshooting autoscale problems](autoscale-troubleshoot.md).
-## Autoscale Evaluations Log
+## Autoscale evaluations log
The following schemas appear in the autoscale evaluations log.

### Profile evaluation
-Logged when autoscale first looks at an autoscale profile
+Logged when autoscale first looks at an autoscale profile:
```JSON
{
}
```
-### Profile cooldown evaluation
+### Profile cool-down evaluation
-Logged when autoscale evaluates if it shouldn't scale because of a cool down period.
+Logged when autoscale evaluates if it shouldn't scale because of a cool-down period:
```JSON
{
}
```
### Rule evaluation
-Logged when autoscale first starts evaluating a particular scale rule.
+Logged when autoscale first starts evaluating a particular scale rule:
```JSON
{
}
```
### Metric evaluation
-Logged when autoscale evaluated the metric being used to trigger a scale action.
+Logged when autoscale evaluates the metric being used to trigger a scale action:
```JSON
{
}
```
### Instance count evaluation
-Logged when autoscale evaluates the number of instances already running in preparation for deciding if it should start more, shut down some, or do nothing.
+Logged when autoscale evaluates the number of instances already running in preparation for deciding if it should start more, shut down some, or do nothing:
```JSON
{
}
```
### Scale action evaluation
-Logged when autoscale starts evaluation if a scale action should take place.
+Logged when autoscale starts evaluation if a scale action should take place:
```JSON
{
}
```
### Instance update evaluation
-Logged when autoscale updates the number of compute instances running, either up or down.
+Logged when autoscale updates the number of compute instances running, either up or down:
```JSON
{
}
```
-## Autoscale Scale Actions Log
+## Autoscale scale actions log
The following schemas appear in the autoscale scale actions log.

### Scale action
-Logged when autoscale initiates a scale action, either up or down.
+Logged when autoscale initiates a scale action, either up or down:
+ ```JSON { "time": "2018-09-10 18:12:00.6132593",
Logged when autoscale initiates a scale action, either up or down.
### Scale action tracking
-Logged at different intervals of an instance scale action.
+Logged at different intervals of an instance scale action:
```JSON {
Logged at different intervals of an instance scale action.
} ```
-## Activity Logs
-The following events are logged to the Activity log with a `CategoryValue` of `Autoscale`.
-
-* Autoscale scale up initiated
-* Autoscale scale up completed
-* Autoscale scale down initiated
-* Autoscale scale down completed
-* Predictive Autoscale scale up initiated
-* Predictive Autoscale scale up completed
-* Metric Failure
-* Metric Recovery
-* Predictive Metric Failure
+## Activity logs
+The following events are logged to the activity log with a `CategoryValue` of `Autoscale`:
+
+* Autoscale scale-up initiated
+* Autoscale scale-up completed
+* Autoscale scale-down initiated
+* Autoscale scale-down completed
+* Predictive autoscale scale-up initiated
+* Predictive autoscale scale-up completed
+* Metric failure
+* Metric recovery
+* Predictive metric failure
* Flapping
-An extract of each log event name, showing the relevant parts of the `Properties` element are shown below:
+An extract of each log event name, showing the relevant parts of the `Properties` element, is shown next.
-### Autoscale action
+### Autoscale action
-Logged when autoscale attempts to scale in or out.
+Logged when autoscale attempts to scale in or out:
```JSON {
Logged when autoscale attempts to scale in or out.
```
-### Get Operation Status Result
+### Get operation status result
-Logged following a scale event.
+Logged following a scale event:
```JSON
Logged following a scale event.
### Metric failure
-Logged when autoscale can't determine the value of the metric used in the scale rule.
+Logged when autoscale can't determine the value of the metric used in the scale rule:
```JSON "Properties":{
Logged when autoscale can't determine the value of the metric used in the scale
"activityStatusValue": "Failed" } ```+ ### Metric recovery
-Logged when autoscale can once again determine the value of the metric used in the scale rule after a `MetricFailure` event
+Logged when autoscale can once again determine the value of the metric used in the scale rule after a `MetricFailure` event:
```JSON "Properties":{
Logged when autoscale can once again determine the value of the metric used in t
"activityStatusValue": "Succeeded" } ```
-### Predictive Metric Failure
-Logged when autoscale can't calculate predicted scale events due to the metric being unavailable.
+### Predictive metric failure
+
+Logged when autoscale can't calculate predicted scale events because the metric is unavailable:
+ ```JSON "Properties": { "eventCategory": "Autoscale",
Logged when autoscale can't calculate predicted scale events due to the metric b
"activityStatusValue": "Failed" } ```
-### Flapping Occurred
-Logged when autoscale detects flapping could occur, and scales differently to avoid it.
+### Flapping occurred
+
+Logged when autoscale detects flapping could occur and scales differently to avoid it:
```JSON "Properties":{
Logged when autoscale detects flapping could occur, and scales differently to av
### Flapping
-Logged when autoscale detects flapping could occur, and defers scaling in to avoid it.
+Logged when autoscale detects flapping could occur and defers scaling in to avoid it:
```JSON "Properties": {
Logged when autoscale detects flapping could occur, and defers scaling in to avo
## Next steps
-* [Troubleshooting Autoscale](./autoscale-troubleshoot.md)
-* [Autoscale Flapping](./autoscale-flapping.md)
+* [Troubleshooting autoscale](./autoscale-troubleshoot.md)
+* [Autoscale flapping](./autoscale-flapping.md)
* [Autoscale settings](./autoscale-understanding-settings.md)
azure-monitor Autoscale Multiprofile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-multiprofile.md
The example below shows an autoscale setting with a default profile and recurrin
:::image type="content" source="./media/autoscale-multiple-profiles/autoscale-default-recurring-profiles.png" alt-text="A screenshot showing an autoscale setting with default and recurring profile or scale condition":::
-In the above example, on Monday after 6 AM, the recurring profile will be used. If the instance count is less than 3, autoscale scales to the new minimum of three. Autoscale continues to use this profile and scales based on CPU% until Monday at 6 PM. At all other times scaling will be done according to the default profile, based on the number of requests. After 6 PM on Monday, autoscale switches to the default profile. If for example, the number of instances at the time is 12, autoscale scales in to 10, which the maximum allowed for the default profile.
+In the example above, on Monday after 6 AM, the recurring profile is used. If the instance count is less than three, autoscale scales to the new minimum of three. Autoscale continues to use this profile and scales based on CPU% until Monday at 6 PM. At all other times, scaling is done according to the default profile, based on the number of requests. After 6 PM on Monday, autoscale switches to the default profile. If, for example, the number of instances at the time is 12, autoscale scales in to 10, which is the maximum allowed for the default profile.
## Multiple contiguous profiles Autoscale transitions between profiles based on their start times. The end time for a given profile is determined by the start time of the following profile.
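The contiguous-profile rule described above (a profile has no explicit end time; it ends when the next profile starts) can be sketched as a small helper. This is an illustrative sketch only: the profile names, hours, and the `active_profile` function are invented, not part of any Azure schema.

```python
# Sketch: pick the active autoscale profile when profiles are contiguous.
# A profile ends when the next one begins; names and hours are illustrative.

def active_profile(profiles, hour):
    """profiles: list of (start_hour, name) tuples, hours in 0-23."""
    ordered = sorted(profiles)
    # Before the first start of the day, the last profile still applies
    # (it wraps around midnight).
    current = ordered[-1][1]
    for start, name in ordered:
        if hour >= start:
            current = name
    return current

profiles = [(6, "business-hours"), (18, "evening"), (22, "overnight")]
print(active_profile(profiles, 9))   # business-hours
print(active_profile(profiles, 19))  # evening
print(active_profile(profiles, 2))   # overnight (started 22:00 the previous day)
```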
azure-monitor Autoscale Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-overview.md
Title: Autoscale in Microsoft Azure
-description: "Autoscale in Microsoft Azure"
+ Title: Autoscale in Azure Monitor
+description: This article describes the autoscale feature in Azure Monitor and its benefits.
Previously updated : 08/01/2022-- Last updated : 03/08/2023+
-# Overview of autoscale in Microsoft Azure
+# Overview of autoscale in Azure
-This article describes Microsoft Azure autoscale and its benefits.
+This article describes the autoscale feature in Azure Monitor and its benefits.
-Azure autoscale supports many resource types. For more information about supported resources, see [autoscale supported resources](#supported-services-for-autoscale).
+Autoscale supports many resource types. For more information about supported resources, see [Autoscale supported resources](#supported-services-for-autoscale).
> [!NOTE]
-> [Availability sets](/archive/blogs/kaevans/autoscaling-azurevirtual-machines) are an older scaling feature for virtual machines with limited support. We recommend migrating to [virtual machine scale sets](../../virtual-machine-scale-sets/overview.md) for faster and more reliable autoscale support.
+> [Availability sets](/archive/blogs/kaevans/autoscaling-azurevirtual-machines) are an older scaling feature for virtual machines with limited support. We recommend migrating to [Azure Virtual Machine Scale Sets](../../virtual-machine-scale-sets/overview.md) for faster and more reliable autoscale support.
-## What is autoscale
+## What is autoscale?
-Autoscale is a service that allows you to automatically add and remove resources according to the load on your application.
+Autoscale is a service that you can use to automatically add and remove resources according to the load on your application.
-When your application experiences higher load, autoscale adds resources to handle the increased load. When load is low, autoscale reduces the number of resources, lowering your costs. You can scale your application based on metrics like CPU usage, queue length, and available memory, or based on a schedule. Metrics and schedules are set up in rules. The rules include a minimum level of resources that you need to run your application, and a maximum level of resources that won't be exceeded.
+When your application experiences higher load, autoscale adds resources to handle the increased load. When load is low, autoscale reduces the number of resources, which lowers your costs. You can scale your application based on metrics like CPU usage, queue length, and available memory. You can also scale based on a schedule. Metrics and schedules are set up in rules. The rules include a minimum level of resources that you need to run your application and a maximum level of resources that won't be exceeded.
-For example, scale out your application by adding VMs when the average CPU usage per VM is above 70%. Scale it back in removing VMs when CPU usage drops to 40%.
+For example, scale out your application by adding VMs when the average CPU usage per VM is above 70%. Scale it back in by removing VMs when CPU usage drops to 40%.
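The 70%/40% example can be sketched as a toy decision function. The thresholds come from the text; everything else (the function name, the return values) is invented for illustration and is not the real autoscale engine.

```python
# Sketch of the example's decision: scale out above 70% average CPU,
# scale in below 40%, otherwise do nothing.
SCALE_OUT_THRESHOLD = 70.0
SCALE_IN_THRESHOLD = 40.0

def scale_decision(avg_cpu_percent: float) -> str:
    if avg_cpu_percent > SCALE_OUT_THRESHOLD:
        return "scale-out"   # add a VM instance
    if avg_cpu_percent < SCALE_IN_THRESHOLD:
        return "scale-in"    # remove a VM instance
    return "no-action"       # load is within the acceptable band

print(scale_decision(85.0))  # scale-out
print(scale_decision(25.0))  # scale-in
print(scale_decision(55.0))  # no-action
```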
:::image type="content" source="./media/autoscale-overview/AutoscaleConcept.png" alt-text="A diagram that shows scaling out by adding virtual machine instances.":::
-When the conditions in the rules are met, one or more autoscale actions are triggered, adding or removing VMs. In addition, you can perform other actions like sending email notifications, or webhooks to trigger processes in other systems.
+When the conditions in the rules are met, one or more autoscale actions are triggered, adding or removing VMs. You can also perform other actions like sending email notifications or calling webhooks to trigger processes in other systems.
-## Scaling out and scaling up
+## Scale out and scale up
-Autoscale scales in and out, which is an increase, or decrease of the number of resource instances. Scaling in and out is also called horizontal scaling. For example, for a Virtual Machine Scale Set, scaling out means adding more virtual machines. Scaling in means removing virtual machines. Horizontal scaling is flexible in a cloud situation as it allows you to run a large number of VMs to handle load.
+Autoscale scales in and out, which is an increase or decrease of the number of resource instances. Scaling in and out is also called horizontal scaling. For example, for a virtual machine scale set, scaling out means adding more virtual machines. Scaling in means removing virtual machines. Horizontal scaling is flexible in a cloud situation because you can use it to run a large number of VMs to handle load.
-In contrast, scaling up and down, or vertical scaling, keeps the number of resources constant, but gives those resources more capacity in terms of memory, CPU speed, disk space and network. Vertical scaling is limited by the availability of larger hardware, which eventually reaches an upper limit. Hardware size availability varies in Azure by region. Vertical scaling may also require a restart of the virtual machine during the scaling process.
+In contrast, scaling up and down, or vertical scaling, keeps the number of resources constant but gives those resources more capacity in terms of memory, CPU speed, disk space, and network. Vertical scaling is limited by the availability of larger hardware, which eventually reaches an upper limit. Hardware size availability varies in Azure by region. Vertical scaling might also require a restart of the virtual machine during the scaling process.
:::image type="content" source="./media/autoscale-overview/vertical-scaling.png" alt-text="A diagram that shows scaling up by adding CPU and memory to a virtual machine.":::
### Predictive autoscale
-[Predictive autoscale](./autoscale-predictive.md) uses machine learning to help manage and scale Azure Virtual Machine Scale Sets with cyclical workload patterns. It forecasts the overall CPU load on your Virtual Machine Scale Set, based on historical CPU usage patterns. The scale set can then be scaled out in time to meet the predicted demand.
+[Predictive autoscale](./autoscale-predictive.md) uses machine learning to help manage and scale virtual machine scale sets with cyclical workload patterns. It forecasts the overall CPU load on your virtual machine scale set, based on historical CPU usage patterns. The scale set can then be scaled out in time to meet the predicted demand.
## Autoscale setup
You can set up autoscale via:
* [Azure portal](autoscale-get-started.md) * [PowerShell](../powershell-samples.md#create-and-manage-autoscale-settings)
-* [Cross-platform Command Line Interface (CLI)](../cli-samples.md#autoscale)
+* [Cross-platform command-line interface (CLI)](../cli-samples.md#autoscale)
* [Azure Monitor REST API](/rest/api/monitor/autoscalesettings) ## Architecture
-The following diagram shows the autoscale architecture.
+The following diagram shows the autoscale architecture.
- ![Autoscale Flow Diagram](./media/autoscale-overview/Autoscale_Overview_v4.png)
+ ![Diagram that shows autoscale flow.](./media/autoscale-overview/Autoscale_Overview_v4.png)
### Resource metrics
-Resources generate metrics that are used in autoscale rules to trigger scale events. Virtual Machine Scale Sets use telemetry data from Azure diagnostics agents to generate metrics. Telemetry for Web apps and Cloud services comes directly from the Azure Infrastructure. Some commonly used metrics include CPU usage, memory usage, thread counts, queue length, and disk usage. See [Autoscale Common Metrics](autoscale-common-metrics.md) for a list of available metrics.
+Resources generate metrics that are used in autoscale rules to trigger scale events. Virtual machine scale sets use telemetry data from Azure diagnostics agents to generate metrics. Telemetry for the Web Apps feature of Azure App Service and Azure Cloud Services comes directly from the Azure infrastructure. Some commonly used metrics include CPU usage, memory usage, thread counts, queue length, and disk usage. For a list of available metrics, see [Autoscale Common Metrics](autoscale-common-metrics.md).
### Custom metrics
-Use your own custom metrics that your application generates. Configure your application to send metrics to [Application Insights](../app/app-insights-overview.md) so you can use those metrics decide when to scale.
+Use your own custom metrics that your application generates. Configure your application to send metrics to [Application Insights](../app/app-insights-overview.md) so that you can use those metrics to decide when to scale.
### Time
-Set up schedule-based rules to trigger scale events. Use schedule-based rules when you see time patterns in your load, and want to scale before an anticipated change in load occurs.
+Set up schedule-based rules to trigger scale events. Use schedule-based rules when you see time patterns in your load and want to scale before an anticipated change in load occurs.
### Rules
-Rules define the conditions needed to trigger a scale event, the direction of the scaling, and the amount to scale by. Combine multiple rules using different metrics, for example CPU usage and queue length. Define up to 10 rules per profile.
+Rules define the conditions needed to trigger a scale event, the direction of the scaling, and the amount to scale by. Combine multiple rules by using different metrics like CPU usage and queue length. Define up to 10 rules per profile.
Rules can be:
-* Metric-based
-Trigger based on a metric value, for example when CPU usage is above 50%.
-* Time-based
-Trigger based on a schedule, for example, every Saturday at 8am.
-
+* **Metric-based**: Trigger based on a metric value, for example, when CPU usage is above 50%.
+* **Time-based**: Trigger based on a schedule, for example, every Saturday at 8 AM.
-Autoscale scales out if *any* of the rules are met, whereas autoscale scales in only if *all* the rules are met.
-In terms of logic operators, the OR operator is used when scaling out with multiple rules. The AND operator is used when scaling in with multiple rules.
+Autoscale scales out if *any* of the rules are met. Autoscale scales in only if *all* the rules are met.
+In terms of logic operators, the OR operator is used for scaling out with multiple rules. The AND operator is used for scaling in with multiple rules.
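The OR/AND semantics above (scale out if *any* rule is met, scale in only if *all* rules are met) map directly onto `any()` and `all()`. The rule representation below is a hypothetical sketch, not the autoscale schema:

```python
# Sketch: scale-out fires if ANY rule is met; scale-in only if ALL rules are met.
# Rules are modeled as simple predicates over current metric values.

def should_scale_out(out_rules, metrics):
    return any(rule(metrics) for rule in out_rules)

def should_scale_in(in_rules, metrics):
    return all(rule(metrics) for rule in in_rules)

out_rules = [lambda m: m["cpu"] > 70, lambda m: m["queue"] > 100]
in_rules = [lambda m: m["cpu"] < 30, lambda m: m["queue"] < 10]

metrics = {"cpu": 80, "queue": 5}
print(should_scale_out(out_rules, metrics))  # True (the CPU rule alone is enough)
print(should_scale_in(in_rules, metrics))    # False (the CPU rule is not met)
```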
### Actions and automation Rules can trigger one or more actions. Actions include:
-* Scale - Scale resources in or out.
-* Email - Send an email to the subscription admins, co-admins, and/or any other email address.
-* Webhooks - Call webhooks to trigger multiple complex actions inside or outside Azure. In Azure, you can:
+* **Scale**: Scale resources in or out.
+* **Email**: Send an email to the subscription admins, co-admins, or any other email address.
+* **Webhooks**: Call webhooks to trigger multiple complex actions inside or outside Azure. In Azure, you can:
* Start an [Azure Automation runbook](../../automation/overview.md).
- * Call an [Azure Function](../../azure-functions/functions-overview.md).
- * Trigger an [Azure Logic App](../../logic-apps/logic-apps-overview.md).
+ * Call an [Azure function](../../azure-functions/functions-overview.md).
+ * Trigger an [Azure logic app](../../logic-apps/logic-apps-overview.md).
## Autoscale settings
-Autoscale settings contain the autoscale configuration. The setting including scale conditions that define rules, limits, and schedules and notifications. Define one or more scale conditions in the settings, and one notification setup.
+Autoscale settings contain the autoscale configuration. A setting includes scale conditions, which define rules, limits, and schedules, and notifications. Define one or more scale conditions in the setting and one notification setup.
-Autoscale uses the following terminology and structure. The UI and JSON
+Autoscale uses the following terminology and structure.
| UI | JSON/CLI | Description | ||--|-|
-| Scale conditions | profiles | A collection of rules, instance limits and schedules, based on a metric or time. You can define one or more scale conditions or profiles. |
-| Rules | rules | A set of time or metric-based conditions that trigger a scale action. You can define one or more rules for both scale-in and scale-out actions. |
-| Instance limits | capacity | Each scale condition or profile defines th default, max, and min number of instances that can run under that profile. |
-| Schedule | recurrence | Indicates when autoscale should put this scale condition or profile into effect. You can have multiple scale conditions, which allow you to handle different and overlapping requirements. For example, you can have different scale conditions for different times of day, or days of the week. |
-| Notify | notification | Defines the notifications to send when an autoscale event occurs. Autoscale can notify one or more email addresses or make a call one or more webhooks. You can configure multiple webhooks in the JSON but only one in the UI. |
+| Scale conditions | profiles | A collection of rules, instance limits, and schedules based on a metric or time. You can define one or more scale conditions or profiles. Define up to 20 profiles per autoscale setting. |
+| Rules | rules | A set of conditions based on time or metrics that triggers a scale action. You can define one or more rules for both scale-in and scale-out actions. Define up to a total of 10 rules per profile. |
+| Instance limits | capacity | Each scale condition or profile defines the default, maximum, and minimum number of instances that can run under that profile. |
+| Schedule | recurrence | Indicates when autoscale should put this scale condition or profile into effect. You can have multiple scale conditions, which allow you to handle different and overlapping requirements. For example, you can have different scale conditions for different times of day or days of the week. |
+| Notify | notification | Defines the notifications to send when an autoscale event occurs. Autoscale can notify one or more email addresses or make a call by using one or more webhooks. You can configure multiple webhooks in the JSON but only one in the UI. |
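The setting/profile/rule structure in the table can be pictured as nested data. The fragment below mirrors the JSON shape as a Python dict; the property names are indicative only and should be verified against the Autoscale REST API reference before use:

```python
# Sketch: the structure from the table, as a dict mirroring the JSON shape.
# Property names are indicative; verify against the autoscaleSettings REST reference.
setting = {
    "profiles": [                      # UI: scale conditions (one or more)
        {
            "name": "weekday-profile",
            "capacity": {              # UI: instance limits
                "minimum": "2",
                "default": "3",
                "maximum": "10",
            },
            "rules": [],               # up to 10 scale-in/scale-out rules
            "recurrence": {},          # UI: schedule (when this profile applies)
        }
    ],
    "notifications": [],               # UI: notify (email addresses, webhooks)
}

cap = setting["profiles"][0]["capacity"]
# The default instance count must sit between the minimum and maximum.
assert int(cap["minimum"]) <= int(cap["default"]) <= int(cap["maximum"])
print("capacity limits are consistent")
```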
-![Azure autoscale setting, profile, and rule structure](./media/autoscale-overview/azure-resource-manager-rule-structure-3.png)
+![Diagram that shows Azure autoscale setting, profile, and rule structure.](./media/autoscale-overview/azure-resource-manager-rule-structure-3.png)
The full list of configurable fields and descriptions is available in the [Autoscale REST API](/rest/api/monitor/autoscalesettings).
-For code examples, see
+For code examples, see:
+
+* [Tutorial: Automatically scale a virtual machine scale set with an Azure template](../../virtual-machine-scale-sets/tutorial-autoscale-template.md)
+* [Tutorial: Automatically scale a virtual machine scale set with the Azure CLI](../../virtual-machine-scale-sets/tutorial-autoscale-cli.md)
+* [Tutorial: Automatically scale a virtual machine scale set with Azure PowerShell](../../virtual-machine-scale-sets/tutorial-autoscale-powershell.md)
-* [Tutorial: Automatically scale a Virtual Machine Scale Set with an Azure template](../../virtual-machine-scale-sets/tutorial-autoscale-template.md)
-* [Tutorial: Automatically scale a Virtual Machine Scale Set with the Azure CLI](../../virtual-machine-scale-sets/tutorial-autoscale-cli.md)
-* [Tutorial: Automatically scale a Virtual Machine Scale Set with an Azure template](../../virtual-machine-scale-sets/tutorial-autoscale-powershell.md)
-## Horizontal vs vertical scaling
+## Horizontal vs. vertical scaling
-Autoscale scales horizontally, which is an increase, or decrease of the number of resource instances. For example, in a Virtual Machine Scale Set, scaling out means adding more virtual machines Scaling in means removing virtual machines. Horizontal scaling is flexible in a cloud situation as it allows you to run a large number of VMs to handle load.
+Autoscale scales horizontally, which is an increase or decrease of the number of resource instances. For example, in a virtual machine scale set, scaling out means adding more virtual machines. Scaling in means removing VMs. Horizontal scaling is flexible in a cloud situation because it allows you to run a large number of VMs to handle load.
-In contrast, vertical scaling, keeps the same number of resources constant, but gives them more capacity in terms of memory, CPU speed, disk space and network. Adding or removing capacity in vertical scaling is known as scaling or down. Vertical scaling is limited by the availability of larger hardware, which eventually reaches an upper limit. Hardware size availability varies in Azure by region. Vertical scaling may also require a restart of the virtual machine during the scaling process.
+In contrast, vertical scaling keeps the same number of resources constant but gives them more capacity in terms of memory, CPU speed, disk space, and network. Adding or removing capacity in vertical scaling is known as scaling up or down. Vertical scaling is limited by the availability of larger hardware, which eventually reaches an upper limit. Hardware size availability varies in Azure by region. Vertical scaling might also require a restart of the VM during the scaling process.
## Supported services for autoscale
-The following services are supported by autoscale:
+Autoscale supports the following services.
-| Service | Schema & Documentation |
+| Service | Schema and documentation |
||--|
-| Azure Virtual machines scale sets | [Overview of autoscale with Azure Virtual Machine Scale Sets](../../virtual-machine-scale-sets/virtual-machine-scale-sets-autoscale-overview.md) |
-| Web apps | [Scaling Web Apps](autoscale-get-started.md) |
+| Azure Virtual Machine Scale Sets | [Overview of autoscale with Azure Virtual Machine Scale Sets](../../virtual-machine-scale-sets/virtual-machine-scale-sets-autoscale-overview.md) |
+| Web Apps feature of Azure App Service | [Scaling Web Apps](autoscale-get-started.md) |
| Azure API Management service | [Automatically scale an Azure API Management instance](../../api-management/api-management-howto-autoscale.md) |
-| Azure Data Explorer Clusters | [Manage Azure Data Explorer clusters scaling to accommodate changing demand](/azure/data-explorer/manage-cluster-horizontal-scaling) |
-| Azure Stream Analytics | [Autoscale streaming units (Preview)](../../stream-analytics/stream-analytics-autoscale.md) |
-| Azure SignalR Service (Premium tier) | [Automatically scale units of an Azure SignalR service](https://learn.microsoft.com/azure/azure-signalr/signalr-howto-scale-autoscale) |
-| Azure Machine Learning Workspace | [Autoscale an online endpoint](../../machine-learning/how-to-autoscale-endpoints.md) |
+| Azure Data Explorer clusters | [Manage Azure Data Explorer clusters scaling to accommodate changing demand](/azure/data-explorer/manage-cluster-horizontal-scaling) |
+| Azure Stream Analytics | [Autoscale streaming units (preview)](../../stream-analytics/stream-analytics-autoscale.md) |
+| Azure SignalR Service (Premium tier) | [Automatically scale units of an Azure SignalR service](/azure/azure-signalr/signalr-howto-scale-autoscale) |
+| Azure Machine Learning workspace | [Autoscale an online endpoint](../../machine-learning/how-to-autoscale-endpoints.md) |
| Azure Spring Apps | [Set up autoscale for applications](../../spring-apps/how-to-setup-autoscale.md) |
-| Media Services | [Autoscaling in Media Services](/azure/media-services/latest/release-notes#autoscaling) |
-| Service Bus | [Automatically update messaging units of an Azure Service Bus namespace](../../service-bus-messaging/automate-update-messaging-units.md) |
-| Logic Apps - Integration Service Environment(ISE) | [Add ISE capacity](../../logic-apps/ise-manage-integration-service-environment.md#add-ise-capacity) |
+| Azure Media Services | [Autoscaling in Media Services](/azure/media-services/latest/release-notes#autoscaling) |
+| Azure Service Bus | [Automatically update messaging units of an Azure Service Bus namespace](../../service-bus-messaging/automate-update-messaging-units.md) |
+| Azure Logic Apps - Integration service environment (ISE) | [Add ISE capacity](../../logic-apps/ise-manage-integration-service-environment.md#add-ise-capacity) |
## Next steps
To learn more about autoscale, see the following resources:
* [Azure Monitor autoscale common metrics](autoscale-common-metrics.md) * [Use autoscale actions to send email and webhook alert notifications](autoscale-webhook-email.md)
-* [Tutorial: Automatically scale a Virtual Machine Scale Set with an Azure template](../../virtual-machine-scale-sets/tutorial-autoscale-template.md)
-* [Tutorial: Automatically scale a Virtual Machine Scale Set with the Azure CLI](../../virtual-machine-scale-sets/tutorial-autoscale-cli.md)
-* [Tutorial: Automatically scale a Virtual Machine Scale Set with Azure PowerShell](../../virtual-machine-scale-sets/tutorial-autoscale-powershell.md)
+* [Tutorial: Automatically scale a virtual machine scale set with an Azure template](../../virtual-machine-scale-sets/tutorial-autoscale-template.md)
+* [Tutorial: Automatically scale a virtual machine scale set with the Azure CLI](../../virtual-machine-scale-sets/tutorial-autoscale-cli.md)
+* [Tutorial: Automatically scale a virtual machine scale set with Azure PowerShell](../../virtual-machine-scale-sets/tutorial-autoscale-powershell.md)
* [Autoscale CLI reference](/cli/azure/monitor/autoscale) * [ARM template resource definition](/azure/templates/microsoft.insights/autoscalesettings)
-* [PowerShell Az.Monitor Reference](/powershell/module/az.monitor/#monitor)
-* [REST API reference. Autoscale Settings](/rest/api/monitor/autoscale-settings).
+* [PowerShell Az.Monitor reference](/powershell/module/az.monitor/#monitor)
+* [REST API reference: Autoscale settings](/rest/api/monitor/autoscale-settings)
azure-monitor Autoscale Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-troubleshoot.md
Title: Troubleshooting Azure Monitor autoscale
-description: Tracking down problems with Azure Monitor autoscaling used in Service Fabric, Virtual Machines, Web Apps, and cloud services.
+ Title: Troubleshoot Azure Monitor autoscale
+description: Tracking down problems with Azure Monitor autoscaling used in Azure Service Fabric, Azure Virtual Machines, the Web Apps feature of Azure App Service, and Azure Cloud Services.
+# Troubleshoot Azure Monitor autoscale
-# Troubleshooting Azure Monitor autoscale
-
-Azure Monitor autoscale helps you to have the right amount of resources running to handle the load on your application. It enables you to add resources to handle increases in load and also save money by removing resources that are sitting idle. You can scale based on a schedule, fixed date-time, or resource metric you choose. For more information, see [Autoscale Overview](autoscale-overview.md).
+Azure Monitor autoscale helps you to have the right amount of resources running to handle the load on your application. It enables you to add resources to handle increases in load and also save money by removing resources that are sitting idle. You can scale based on a schedule, a fixed date-time, or a resource metric you choose. For more information, see [Autoscale overview](autoscale-overview.md).
-The autoscale service provides you metrics and logs to understand what scale actions have occurred and the evaluation of the conditions that led to those actions. You can find answers to questions such as:
+The autoscale service provides metrics and logs to help you understand what scale actions occurred and the evaluation of the conditions that led to those actions. You can find answers to questions like:
-- Why did my service scale-out or in?
+- Why did my service scale out or in?
- Why did my service not scale?
- Why did an autoscale action fail?
- Why is an autoscale action taking time to scale?

## Autoscale metrics
-Autoscale provides you with [four metrics](../essentials/metrics-supported.md#microsoftinsightsautoscalesettings) to understand its operation.
+Autoscale provides you with [four metrics](../essentials/metrics-supported.md#microsoftinsightsautoscalesettings) to understand its operation:
-- **Observed Metric Value** - The value of the metric you chose to take the scale action on, as seen or computed by the autoscale engine. Because a single autoscale setting can have multiple rules and therefore multiple metric sources, you can filter using "metric source" as a dimension.-- **Metric Threshold** - The threshold you set to take the scale action. Because a single autoscale setting can have multiple rules and therefore multiple metric sources, you can filter using "metric rule" as a dimension.-- **Observed Capacity** - The active number of instances of the target resource as seen by Autoscale engine.-- **Scale Actions Initiated** - The number of scale-out and scale-in actions initiated by the autoscale engine. You can filter by scale-out vs. scale in actions.
+- **Observed Metric Value**: The value of the metric you chose to take the scale action on, as seen or computed by the autoscale engine. Because a single autoscale setting can have multiple rules and therefore multiple metric sources, you can filter by using "metric source" as a dimension.
+- **Metric Threshold**: The threshold you set to take the scale action. Because a single autoscale setting can have multiple rules and therefore multiple metric sources, you can filter by using "metric rule" as a dimension.
+- **Observed Capacity**: The active number of instances of the target resource as seen by the autoscale engine.
+- **Scale Actions Initiated**: The number of scale-out and scale-in actions initiated by the autoscale engine. You can filter by scale-out versus scale-in actions.
-You can use the [Metrics Explorer](../essentials/metrics-getting-started.md) to chart the above metrics all in one place. The chart should show:
+You can use the [metrics explorer](../essentials/metrics-getting-started.md) to chart the preceding metrics all in one place. The chart should show the:
- - the actual metric
- - the metric as seen/computed by autoscale engine
- - the threshold for a scale action
- - the change in capacity
+ - Actual metric.
+ - Metric as seen/computed by autoscale engine.
+ - Threshold for a scale action.
+ - Change in capacity.
-## Example 1 - Analyzing a simple autoscale rule
+## Example 1: Analyze an autoscale rule
-We have a simple autoscale setting for a virtual machine scale set that:
+An autoscale setting for a virtual machine scale set:
-- scales out when the average CPU percentage of a set is greater than 70% for 10 minutes
-- scales in when the CPU percentage of the set is less than 5% for more than 10 minutes.
+- Scales out when the average CPU percentage of a set is greater than 70% for 10 minutes.
+- Scales in when the CPU percentage of the set is less than 5% for more than 10 minutes.
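This rule pair can be sketched as a small decision function (an illustrative sketch only; the function name is made up, and a change of one instance per action is assumed, not stated by the setting):

```python
def decide_scale(avg_cpu_10min: float) -> int:
    """Return the instance-count change implied by the two rules:
    scale out when average CPU > 70% over 10 minutes,
    scale in when it is < 5% over the same window."""
    if avg_cpu_10min > 70:
        return 1   # scale-out rule fires
    if avg_cpu_10min < 5:
        return -1  # scale-in rule fires
    return 0       # neither rule fires; no action

print(decide_scale(85), decide_scale(3), decide_scale(40))  # 1 -1 0
```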
-Let's review the metrics from the autoscale service.
-
-![Screenshot shows a virtual machine scale set percentage CPU example.](media/autoscale-troubleshoot/autoscale-vmss-CPU-ex-full-1.png)
+Let's review the metrics from the autoscale service.
-![Virtual machine scale set percentage CPU example](media/autoscale-troubleshoot/autoscale-vmss-CPU-ex-full-2.png)
+The following chart shows a **Percentage CPU** metric for a virtual machine scale set.
-***Figure 1a - Percentage CPU metric for virtual machine scale set and the Observed Metric Value metric for autoscale setting***
+![Screenshot that shows a virtual machine scale set percentage CPU example.](media/autoscale-troubleshoot/autoscale-vmss-CPU-ex-full-1.png)
-![Metric Threshold and Observed Capacity](media/autoscale-troubleshoot/autoscale-metric-threshold-capacity-ex-full.png)
+The next chart shows the **Observed Metric Value** metric for an autoscale setting.
-***Figure 1b - Metric Threshold and Observed Capacity***
+![Screenshot that shows another virtual machine scale set percentage CPU example.](media/autoscale-troubleshoot/autoscale-vmss-CPU-ex-full-2.png)
-In figure 1b, the **Metric Threshold** (light blue line) for the scale-out rule is 70. The **Observed Capacity** (dark blue line) shows the number of active instances, which is currently 3.
+The final chart shows the **Metric Threshold** and **Observed Capacity** metrics. The **Metric Threshold** metric at the top for the scale-out rule is 70. The **Observed Capacity** metric at the bottom shows the number of active instances, which is currently 3.
-> [!NOTE]
-> You will need to filter the **Metric Threshold** by the metric trigger rule dimension scale out (increase) rule to see the scale-out threshold and by the scale in rule (decrease).
+![Screenshot that shows Metric Threshold and Observed Capacity.](media/autoscale-troubleshoot/autoscale-metric-threshold-capacity-ex-full.png)
-## Example 2 - Advanced autoscaling for a virtual machine scale set
+> [!NOTE]
+> You can filter **Metric Threshold** by the metric trigger rule dimension to see the threshold for the scale-out (increase) rule or the scale-in (decrease) rule.
-We have an autoscale setting that allows a virtual machine scale set resource to scale out based on its own metric **Outbound Flows**. Notice that the **divide metric by instance count** option for the metric threshold is checked.
+## Example 2: Advanced autoscaling for a virtual machine scale set
-The scale action rule is:
+An autoscale setting allows a virtual machine scale set resource to scale out based on its own **Outbound Flows** metric. The **Divide metric by instance count** option for the metric threshold is selected.
-If the value of **Outbound Flow per instance** is greater than 10, then autoscale service should scale out by 1 instance.
+The scale action rule is: if the value of **Outbound Flow per instance** is greater than 10, the autoscale service scales out by 1 instance.
-In this case, the autoscale engine's observed metric value is calculated as the actual metric value divided by the number of instances. If the observed metric value is less than the threshold, no scale-out action is initiated.
-
-![Screenshot shows the Average Outbound Flows page with an example of a virtual machine scale set autoscale metrics charts.](media/autoscale-troubleshoot/autoscale-vmss-metric-chart-ex-1.png)
+In this case, the autoscale engine's observed metric value is calculated as the actual metric value divided by the number of instances. If the observed metric value is less than the threshold, no scale-out action is initiated.
-![Virtual machine scale set autoscale metrics charts example](media/autoscale-troubleshoot/autoscale-vmss-metric-chart-ex-2.png)
+The following screenshots show two metric charts.
-***Figure 2 - Virtual machine scale set autoscale metrics charts example***
+The **Avg Outbound Flows** chart shows the value of the **Outbound Flows** metric. The actual value is 6.
-In figure 2, you can see two metric charts.
+![Screenshot that shows the Average Outbound Flows page with an example of a virtual machine scale set autoscale metrics chart.](media/autoscale-troubleshoot/autoscale-vmss-metric-chart-ex-1.png)
-The chart on top shows the actual value of the **Outbound Flows** metric. The actual value is 6.
+The following chart shows a few values:
-The chart on the bottom shows a few values.
+ - The **Observed Metric Value** metric in the middle is 3 because there are 2 active instances, and 6 divided by 2 is 3.
+ - The **Observed Capacity** metric at the bottom shows the instance count seen by an autoscale engine.
+ - The **Metric Threshold** metric at the top is set to 10.
-If there are multiple scale action rules, you can use splitting or the **add filter** option in the Metrics explorer chart to look at metric by a specific source or rule. For more information on splitting a metric chart, see [Advanced features of metric charts - splitting](../essentials/metrics-charts.md#apply-splitting)
+ ![Screenshot that shows a virtual machine scale set autoscale metrics charts example.](media/autoscale-troubleshoot/autoscale-vmss-metric-chart-ex-2.png)
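The arithmetic behind these chart values can be sketched in a few lines (illustrative only; the function name is hypothetical, not part of any autoscale API):

```python
def observed_metric_value(actual_value: float, instance_count: int,
                          divide_by_instance_count: bool = True) -> float:
    """Derive the value the autoscale engine compares against the threshold."""
    if divide_by_instance_count:
        return actual_value / instance_count
    return actual_value

# Example 2 numbers: 6 outbound flows, 2 instances, threshold 10.
observed = observed_metric_value(6, 2)
print(observed, observed > 10)  # 3.0 False -> no scale-out is initiated
```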
-## Example 3 - Understanding autoscale events
+If there are multiple scale action rules, you can use splitting or the **add filter** option in the metrics explorer chart to look at a metric by a specific source or rule. For more information on splitting a metric chart, see [Advanced features of metric charts - splitting](../essentials/metrics-charts.md#apply-splitting).
-In the autoscale setting screen, go to the **Run history** tab to see the most recent scale actions. The tab also shows the change in **Observed Capacity** over time. To find more details about all autoscale actions including operations such as update/delete autoscale settings, view the activity log and filter by autoscale operations.
+## Example 3: Understand autoscale events
-![Autoscale settings run history](media/autoscale-troubleshoot/autoscale-setting-run-history-smaller.png)
+In the autoscale setting screen, go to the **Run history** tab to see the most recent scale actions. The tab also shows the change in **Observed Capacity** over time. To find more information about all autoscale actions, including operations such as update/delete autoscale settings, view the activity log and filter by autoscale operations.
-## Autoscale Resource Logs
+![Screenshot that shows autoscale settings run history.](media/autoscale-troubleshoot/autoscale-setting-run-history-smaller.png)
-Same as any other Azure resource, the autoscale service provides [resource logs](../essentials/platform-logs-overview.md). There are two categories of logs.
+## Autoscale resource logs
-- **Autoscale Evaluations** - The autoscale engine records log entries for every single condition evaluation every time it does a check. The entry includes details on the observed values of the metrics, the rules evaluated, and if the evaluation resulted in a scale action or not.
+The autoscale service provides [resource logs](../essentials/platform-logs-overview.md). There are two categories of logs:
-- **Autoscale Scale Actions** - The engine records scale action events initiated by autoscale service and the results of those scales actions (success, failure, and how much scaling occurred as seen by the autoscale service).
+- **Autoscale Evaluations**: The autoscale engine records log entries for every single condition evaluation every time it does a check. The entry includes details on the observed values of the metrics, the rules evaluated, and if the evaluation resulted in a scale action or not.
+- **Autoscale Scale Actions**: The engine records scale action events initiated by the autoscale service and the results of those scale actions (success, failure, and how much scaling occurred as seen by the autoscale service).
-As with any Azure Monitor supported service, you can use [Diagnostic Settings](../essentials/diagnostic-settings.md) to route these logs:
+As with any Azure Monitor supported service, you can use [diagnostic settings](../essentials/diagnostic-settings.md) to route these logs to:
-- to your Log Analytics workspace for detailed analytics
-- to Event Hubs and then to non-Azure tools
-- to your Azure storage account for archival
+- Your Log Analytics workspace for detailed analytics.
+- Azure Event Hubs and then to non-Azure tools.
+- Your Azure Storage account for archive.
-![Autoscale Diagnostic Settings](media/autoscale-troubleshoot/diagnostic-settings.png)
+![Screenshot that shows autoscale diagnostic settings.](media/autoscale-troubleshoot/diagnostic-settings.png)
-The previous picture shows the Azure portal autoscale diagnostic settings. There you can select the Diagnostic/Resource Logs tab and enable log collection and routing. You can also perform the same action using REST API, CLI, PowerShell, Resource Manager templates for Diagnostic Settings by choosing the resource type as *Microsoft.Insights/AutoscaleSettings*.
+The preceding screenshot shows the Azure portal autoscale **Diagnostics settings** pane. There you can select the **Diagnostic/Resource Logs** tab and enable log collection and routing. You can also perform the same action by using the REST API, the Azure CLI, PowerShell, and Azure Resource Manager templates for diagnostic settings by choosing the resource type as **Microsoft.Insights/AutoscaleSettings**.
-## Troubleshooting using autoscale logs
+## Troubleshoot by using autoscale logs
-For best troubleshooting experience, we recommend routing your logs to Azure Monitor Logs (Log Analytics) through a workspace when you create the autoscale setting. This process is shown in the picture in the previous section. You can validate the evaluations and scale actions better using Log Analytics.
+For the best troubleshooting experience, we recommend routing your logs to Azure Monitor Logs (Log Analytics) through a workspace when you create the autoscale setting. This process is shown in the screenshot in the previous section. You can validate the evaluations and scale actions better by using Log Analytics.
-Once you have configured your autoscale logs to be sent to the Log Analytics workspace, you can execute the following queries to check the logs.
+After you've configured your autoscale logs to be sent to the Log Analytics workspace, you can execute the following queries to check the logs.
To get started, try this query to view the most recent autoscale evaluation logs:
```Kusto
AutoscaleScaleActionsLog
| limit 50
```
-Use the following sections to these questions.
+Use the following sections to answer these questions.
-## A scale action occurred that I didn't expect
+## A scale action occurred that you didn't expect
-First execute the query for scale action to find the scale action you are interested in. If it is the latest scale action, use the following query:
+First, execute the query for a scale action to find the scale action you're interested in. If it's the latest scale action, use the following query:
```Kusto
AutoscaleScaleActionsLog
| take 1
```
-Select the CorrelationId field from the scale actions log. Use the CorrelationId to find the right Evaluation log. Executing the below query will display all the rules and conditions evaluated leading to that scale action.
+Select the `CorrelationId` field from the scale actions log. Use `CorrelationId` to find the right evaluation log. Executing the following query displays all the rules and conditions that were evaluated and led to that scale action.
```Kusto
AutoscaleEvaluationsLog
```
## What profile caused a scale action?
-A scaled action occurred, but you have overlapping rules and profiles and need to track down which caused the action.
+A scaled action occurred, but you have overlapping rules and profiles and need to track down which one caused the action.
-Find the correlationId of the scale action (as explained in example 1) and then execute the query on evaluation logs to learn more about the profile.
+Find the `CorrelationId` of the scale action, as explained in example 1. Then execute the query on evaluation logs to learn more about the profile.
```Kusto
AutoscaleEvaluationsLog
| project ProfileEvaluationTime, Profile, ProfileSelected, EvaluationResult
```
-The whole profile evaluation can also be understood better using the following query
+The whole profile evaluation can also be understood better by using the following query:
```Kusto
AutoscaleEvaluationsLog
| project OperationName, Profile, ProfileEvaluationTime, ProfileSelected, EvaluationResult
```
-## A scale action did not occur
+## A scale action didn't occur
-I expected a scale action and it did not occur. There may be no scale action events or logs.
+You expected a scale action and it didn't occur. There might be no scale action events or logs.
-Review the autoscale metrics if you are using a metric-based scale rule. It's possible that the **Observed metric value** or **Observed Capacity** are not what you expected them to be and therefore the scale rule did not fire. You would still see evaluations, but not a scale-out rule. It's also possible that the cool-down time kept a scale action from occurring.
-
- Review the autoscale evaluation logs during the time period you expected the scale action to occur. Review all the evaluations it did and why it decided to not trigger a scale action.
+Review the autoscale metrics if you're using a metric-based scale rule. It's possible that the **Observed Metric Value** or **Observed Capacity** metrics aren't what you expected them to be, so the scale rule didn't fire. You would still see evaluations, but no scale action. It's also possible that the cool-down time kept a scale action from occurring.
+ Review the autoscale evaluation logs during the time period when you expected the scale action to occur. Review all the evaluations it did and why it decided to not trigger a scale action.
```Kusto
AutoscaleEvaluationsLog
```
## Scale action failed
-There may be a case where autoscale service took the scale action but the system decided not to scale or failed to complete the scale action. Use this query to find the failed scale actions.
+There might be a case where the autoscale service took the scale action but the system decided not to scale or failed to complete the scale action. Use this query to find the failed scale actions:
```Kusto
AutoscaleScaleActionsLog
```
Create alert rules to get notified of autoscale actions or failures. You can als
## Schema of autoscale resource logs
-For more information, see [autoscale resource logs](autoscale-resource-log-schema.md)
+For more information, see [Autoscale resource logs](autoscale-resource-log-schema.md).
## Next steps
+Read information on [autoscale best practices](autoscale-best-practices.md).
azure-monitor Autoscale Understanding Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-understanding-settings.md
Title: Understanding autoscale settings in Azure Monitor
-description: "A detailed breakdown of autoscale settings and how they work. Applies to Virtual Machines, Cloud Services, Web Apps"
+ Title: Understand autoscale settings in Azure Monitor
+description: This article explains autoscale settings, how they work, and how they apply to Azure Virtual Machines, Azure Cloud Services, and the Web Apps feature of Azure App Service.
# Understand autoscale settings
-Autoscale settings help ensure that you have the right amount of resources running to handle the fluctuating load of your application. You can configure autoscale settings to be triggered based on metrics that indicate load or performance, or triggered at a scheduled date and time.
+Autoscale settings help ensure that you have the right amount of resources running to handle the fluctuating load of your application. You can configure autoscale settings to be triggered based on metrics that indicate load or performance, or triggered at a scheduled date and time.
-This article gives a detailed explanation of the autoscale settings.
+This article explains the autoscale settings.
## Autoscale setting schema
-The following example shows an autoscale setting. This autoscale setting has the following attributes:
-- A single default profile.
+The following example shows an autoscale setting with these attributes:
+
+- A single default profile.
- Two metric rules in this profile: one for scale-out, and one for scale-in.
- - The scale-out rule is triggered when the Virtual Machine Scale Set's average percentage CPU metric is greater than 85 percent for the past 10 minutes.
- - The scale-in rule is triggered when the Virtual Machine Scale Set's average is less than 60 percent for the past minute.
+ - The scale-out rule is triggered when the virtual machine scale set's average percentage CPU metric is greater than 85% for the past 10 minutes.
+ - The scale-in rule is triggered when the virtual machine scale set's average is less than 60% for the past minute.
> [!NOTE]
-> A setting can have multiple profiles. To learn more, see the [profiles](#autoscale-profiles) section. A profile can also have multiple scale-out rules and scale-in rules defined. To see how they are evaluated, see the [evaluation](#autoscale-evaluation) section.
+> A setting can have multiple profiles. To learn more, see the [profiles](#autoscale-profiles) section. A profile can also have multiple scale-out rules and scale-in rules defined. To see how they're evaluated, see the [evaluation](#autoscale-evaluation) section.
```JSON {
The following example shows an autoscale setting. This autoscale setting has the
} ```
-The table below describes the elements in the above autoscale setting's JSON.
+The following table describes the elements in the preceding autoscale setting's JSON.
| Section | Element name |Portal name| Description |
| | | | |
The table below describes the elements in the above autoscale setting's JSON.
| Setting | name | |The autoscale setting name. |
| Setting | location | |The location of the autoscale setting. This location can be different from the location of the resource being scaled. |
| properties | targetResourceUri | |The resource ID of the resource being scaled. You can only have one autoscale setting per resource. |
-| properties | profiles | Scale condition |An autoscale setting is composed of one or more profiles. Each time the autoscale engine runs, it executes one profile. |
+| properties | profiles | Scale condition |An autoscale setting is composed of one or more profiles. Each time the autoscale engine runs, it executes one profile. Configure up to 20 profiles per autoscale setting. |
| profiles | name | |The name of the profile. You can choose any name that helps you identify the profile. |
-| profiles | capacity.maximum | Instance limits - Maximum |The maximum capacity allowed. It ensures that autoscale doesn't scale your resource above this number when executing the profile. |
-| profiles | capacity.minimum | Instance limits - Minimum |The minimum capacity allowed. It ensures that autoscale doesn't scale your resource below this number when executing the profile |
-| profiles | capacity.default | Instance limits - Default |If there's a problem reading the resource metric, and the current capacity is below the default, autoscale scales out to the default. This ensures the availability of the resource. If the current capacity is already higher than the default capacity, autoscale doesn't scale in. |
-| profiles | rules | Rules |Autoscale automatically scales between the maximum and minimum capacities, by using the rules in the profile. Define up to 10 individual rules in a profile. Typically rules are defined in pairs, one to determine when to scale out, and the other to determine when to scale in. |
+| profiles | capacity.maximum | Instance limits - Maximum |The maximum capacity allowed. It ensures that autoscale doesn't scale your resource above this number when it executes the profile. |
+| profiles | capacity.minimum | Instance limits - Minimum |The minimum capacity allowed. It ensures that autoscale doesn't scale your resource below this number when it executes the profile. |
+| profiles | capacity.default | Instance limits - Default |If there's a problem reading the resource metric, and the current capacity is below the default, autoscale scales out to the default. This action ensures the availability of the resource. If the current capacity is already higher than the default capacity, autoscale doesn't scale in. |
+| profiles | rules | Rules |Autoscale automatically scales between the maximum and minimum capacities by using the rules in the profile. Define up to 10 individual rules in a profile. Typically rules are defined in pairs, one to determine when to scale out, and the other to determine when to scale in. |
| rule | metricTrigger | Scale rule |Defines the metric condition of the rule. |
| metricTrigger | metricName | Metric name |The name of the metric. |
-| metricTrigger | metricResourceUri | |The resource ID of the resource that emits the metric. In most cases, it is the same as the resource being scaled. In some cases, it can be different. For example, you can scale a Virtual Machine Scale Set based on the number of messages in a storage queue. |
-| metricTrigger | timeGrain | Time grain (minutes) |The metric sampling duration. For example, **TimeGrain = "PT1M"** means that the metrics should be aggregated every 1 minute, by using the aggregation method specified in the statistic element. |
-| metricTrigger | statistic | Time grain statistic |The aggregation method within the timeGrain period. For example, **statistic = "Average"** and **timeGrain = "PT1M"** means that the metrics should be aggregated every 1 minute, by taking the average. This property dictates how the metric is sampled. |
-| metricTrigger | timeWindow | Duration |The amount of time to look back for metrics. For example, **timeWindow = "PT10M"** means that every time autoscale runs, it queries metrics for the past 10 minutes. The time window allows your metrics to be normalized, and avoids reacting to transient spikes. |
-| metricTrigger | timeAggregation |Time aggregation |The aggregation method used to aggregate the sampled metrics. For example, **TimeAggregation = "Average"** should aggregate the sampled metrics by taking the average. In the preceding case, take the ten 1-minute samples, and average them. |
+| metricTrigger | metricResourceUri | |The resource ID of the resource that emits the metric. In most cases, it's the same as the resource being scaled. In some cases, it can be different. For example, you can scale a virtual machine scale set based on the number of messages in a storage queue. |
+| metricTrigger | timeGrain | Time grain (minutes) |The metric sampling duration. For example, **timeGrain = "PT1M"** means that the metrics should be aggregated every 1 minute, by using the aggregation method specified in the statistic element. |
+| metricTrigger | statistic | Time grain statistic |The aggregation method within the timeGrain period. For example, **statistic = "Average"** and **timeGrain = "PT1M"** means that the metrics should be aggregated every 1 minute, by taking the average. This property dictates how the metric is sampled. |
+| metricTrigger | timeWindow | Duration |The amount of time to look back for metrics. For example, **timeWindow = "PT10M"** means that every time autoscale runs, it queries metrics for the past 10 minutes. The time window allows your metrics to be normalized and avoids reacting to transient spikes. |
+| metricTrigger | timeAggregation |Time aggregation |The aggregation method used to aggregate the sampled metrics. For example, **timeAggregation = "Average"** should aggregate the sampled metrics by taking the average. In the preceding case, take the ten 1-minute samples, and average them. |
| rule | scaleAction | Action |The action to take when the metricTrigger of the rule is triggered. |
| scaleAction | direction | Operation |"Increase" to scale out, or "Decrease" to scale in.|
| scaleAction | value |Instance count |How much to increase or decrease the capacity of the resource. |
-| scaleAction | cooldown | Cool down (minutes)|The amount of time to wait after a scale operation before scaling again. For example, if **cooldown = "PT10M"**, autoscale doesn't attempt to scale again for another 10 minutes. The cooldown is to allow the metrics to stabilize after the addition or removal of instances. |
-
+| scaleAction | cooldown | Cool down (minutes)|The amount of time to wait after a scale operation before scaling again. For example, if **cooldown = "PT10M"**, autoscale doesn't attempt to scale again for another 10 minutes. The cooldown is to allow the metrics to stabilize after the addition or removal of instances. |
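As a rough sketch of how **timeGrain**, **statistic**, **timeWindow**, and **timeAggregation** interact (illustrative Python, not the engine's actual implementation; assumes one raw sample per minute and a made-up function name):

```python
from statistics import mean

def evaluate_metric(samples, time_grain_minutes=1, window_minutes=10,
                    statistic=mean, time_aggregation=mean):
    """Aggregate raw per-minute samples the way a metric trigger would:
    first within each timeGrain bucket (statistic), then across the
    look-back timeWindow (timeAggregation)."""
    buckets = [samples[i:i + time_grain_minutes]
               for i in range(0, len(samples), time_grain_minutes)]
    per_grain = [statistic(bucket) for bucket in buckets]
    grains_in_window = window_minutes // time_grain_minutes
    return time_aggregation(per_grain[-grains_in_window:])

# Ten 1-minute CPU samples, averaged over a 10-minute window.
cpu = [80, 85, 90, 88, 86, 84, 82, 87, 89, 91]
value = evaluate_metric(cpu)
print(value)  # 86.2 -> compared against the rule's threshold
```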
## Autoscale profiles
+Define up to 20 different profiles per autoscale setting.
There are three types of autoscale profiles:
-- **Default profile:** Use the default profile if you don't need to scale your resource based on a particular date and time, or day of the week. The default profile runs when there are no other applicable profiles for the current date and time. You can only have one default profile.
-- **Fixed date profile:** The fixed date profile is relevant for a single date and time. Use the fixed date profile to set scaling rules for a specific event. The profile runs only once, on the event's date and time. For all other times, autoscale uses the default profile.
+- **Default profile**: Use the default profile if you don't need to scale your resource based on a particular date and time or day of the week. The default profile runs when there are no other applicable profiles for the current date and time. You can only have one default profile.
+- **Fixed-date profile**: The fixed-date profile is relevant for a single date and time. Use the fixed-date profile to set scaling rules for a specific event. The profile runs only once, on the event's date and time. For all other times, autoscale uses the default profile.
-```json
- ...
- "profiles": [
- {
- "name": " regularProfile",
- "capacity": {
- ...
- },
- "rules": [
- ...
- ]
- },
- {
- "name": "eventProfile",
- "capacity": {
- ...
+ ```json
+ ...
+ "profiles": [
+ {
+ "name": " regularProfile",
+ "capacity": {
+ ...
+ },
+ "rules": [
+ ...
+ ]
},
- "rules": [
+ {
+ "name": "eventProfile",
+ "capacity": {
...
- ],
- "fixedDate": {
- "timeZone": "Pacific Standard Time",
- "start": "2017-12-26T00:00:00",
- "end": "2017-12-26T23:59:00"
+ },
+ "rules": [
+ ...
+ ],
+ "fixedDate": {
+ "timeZone": "Pacific Standard Time",
+ "start": "2017-12-26T00:00:00",
+ "end": "2017-12-26T23:59:00"
+ }
}
- }
- ]
-```
+ ]
+ ```
-- **Recurrence profile:** A recurrence profile is used for a day or set of days of the week. The schema for a recurring profile doesn't include an end date. The end of date and time for a recurring profile is set by the start time of the following profile. When using the portal to configure recurring profiles, the default profile is automatically updated to start at the end time that you specify for the recurring profile. For more information on configuring multiple profiles, see [Autoscale with multiple profiles](./autoscale-multiprofile.md)
+- **Recurrence profile**: A recurrence profile is used for a day or set of days of the week. The schema for a recurring profile doesn't include an end date. The end of date and time for a recurring profile is set by the start time of the following profile. When the portal is used to configure recurring profiles, the default profile is automatically updated to start at the end time that you specify for the recurring profile. For more information on configuring multiple profiles, see [Autoscale with multiple profiles](./autoscale-multiprofile.md).
- The partial schema example below shows a recurring profile, starting at 06:00 and ending at 19:00 on Saturdays and Sundays. The default profile has been modified to start at 19:00 on Saturdays and Sundays.
-
-``` JSON
- {
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "resources": [
- {
- "type": "Microsoft.Insights/ autoscaleSettings",
- "apiVersion": "2015-04-01",
- "name": "VMSS1-Autoscale-607",
- "location": "eastus",
- "properties": {
+ The partial schema example here shows a recurring profile. It starts at 06:00 and ends at 19:00 on Saturdays and Sundays. The default profile has been modified to start at 19:00 on Saturdays and Sundays.
+ ``` JSON
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "resources": [
+ {
+ "type": "Microsoft.Insights/autoscaleSettings",
+ "apiVersion": "2015-04-01",
"name": "VMSS1-Autoscale-607",
- "enabled": true,
+ "targetResourceUri": "/subscriptions/abc123456-987-f6e5-d43c-9a8d8e7f6541/resourceGroups/rg-vmss1/providers/Microsoft.Compute/virtualMachineScaleSets/VMSS1",
- "profiles": [
- {
- "name": "Weekend profile",
- "capacity": {
- ...
- },
- "rules": [
- ...
- ],
- "recurrence": {
- "frequency": "Week",
- "schedule": {
- "timeZone": "E. Europe Standard Time",
- "days": [
- "Saturday",
- "Sunday"
- ],
- "hours": [
- 6
- ],
- "minutes": [
- 0
- ]
- }
- }
- },
- {
- "name": "{\"name\":\"Auto created default scale condition\",\"for\":\"Weekend profile\"}",
- "capacity": {
- ...
- },
- "recurrence": {
- "frequency": "Week",
- "schedule": {
- "timeZone": "E. Europe Standard Time",
- "days": [
- "Saturday",
- "Sunday"
- ],
- "hours": [
- 19
- ],
- "minutes": [
- 0
- ]
+ "location": "eastus",
+ "properties": {
+
+ "name": "VMSS1-Autoscale-607",
+ "enabled": true,
+ "targetResourceUri": "/subscriptions/abc123456-987-f6e5-d43c-9a8d8e7f6541/resourceGroups/rg-vmss1/providers/Microsoft.Compute/virtualMachineScaleSets/VMSS1",
+ "profiles": [
+ {
+ "name": "Weekend profile",
+ "capacity": {
+ ...
+ },
+ "rules": [
+ ...
+ ],
+ "recurrence": {
+ "frequency": "Week",
+ "schedule": {
+ "timeZone": "E. Europe Standard Time",
+ "days": [
+ "Saturday",
+ "Sunday"
+ ],
+ "hours": [
+ 6
+ ],
+ "minutes": [
+ 0
+ ]
+ }
} },
- "rules": [
- ...
- ]
- }
- ],
- "notifications": [],
- "targetResourceLocation": "eastus"
+ {
+ "name": "{\"name\":\"Auto created default scale condition\",\"for\":\"Weekend profile\"}",
+ "capacity": {
+ ...
+ },
+ "recurrence": {
+ "frequency": "Week",
+ "schedule": {
+ "timeZone": "E. Europe Standard Time",
+ "days": [
+ "Saturday",
+ "Sunday"
+ ],
+ "hours": [
+ 19
+ ],
+ "minutes": [
+ 0
+ ]
+ }
+ },
+ "rules": [
+ ...
+ ]
+ }
+ ],
+ "notifications": [],
+ "targetResourceLocation": "eastus"
+ }
+
}
-
- }
- ]
- }
-
-```
+ ]
+ }
+
+ ```
## Autoscale evaluation
-
-Autoscale settings can have multiple profiles. Each profile can have multiple rules. Each time the autoscale job runs, it begins by choosing the applicable profile for that time. Autoscale then evaluates the minimum and maximum values, any metric rules in the profile, and decides if a scale action is necessary. The autoscale job runs every 30 to 60 seconds, depending on the resource type.
+
+Autoscale settings can have multiple profiles. Each profile can have multiple rules. Each time the autoscale job runs, it begins by choosing the applicable profile for that time. Autoscale then evaluates the minimum and maximum values, any metric rules in the profile, and decides if a scale action is necessary. The autoscale job runs every 30 to 60 seconds, depending on the resource type.
### Which profile will autoscale use?
Each time the autoscale service runs, the profiles are evaluated in the following order:
-1. Fixed date profiles
+1. Fixed-date profiles
1. Recurring profiles
1. Default profile
-The first suitable profile found will be used.
+The first suitable profile that's found is used.
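The selection order above can be sketched as a small helper (a hypothetical illustration, not the service's implementation; the profile shapes are assumptions):

```python
from datetime import datetime

def pick_profile(now, fixed_date_profiles, recurring_profiles, default_profile):
    # Fixed-date profiles are checked first, then recurring profiles;
    # the default profile is the fallback. The first match wins.
    for profile in fixed_date_profiles + recurring_profiles:
        if profile["matches"](now):
            return profile["name"]
    return default_profile["name"]

# Hypothetical recurring profile: active on Saturday and Sunday.
weekend = {"name": "Weekend profile", "matches": lambda t: t.weekday() >= 5}
default = {"name": "Auto created default scale condition"}

print(pick_profile(datetime(2023, 1, 7), [], [weekend], default))  # a Saturday -> Weekend profile
```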
### How does autoscale evaluate multiple rules?
-After autoscale determines which profile to run, it evaluates the scale-out rules in the profile, that is, where **direction = "Increase"**.
-If one or more scale-out rules are triggered, autoscale calculates the new capacity determined by the **scaleAction** specified for each of the rules. If more than one scale-out rule is triggered, autoscale scales to the highest specified capacity to ensure service availability.
+After autoscale determines which profile to run, it evaluates the scale-out rules in the profile, that is, where **direction = "Increase"**. If one or more scale-out rules are triggered, autoscale calculates the new capacity determined by the **scaleAction** specified for each of the rules. If more than one scale-out rule is triggered, autoscale scales to the highest specified capacity to ensure service availability.
-For example, assume that there are two rules: Rule 1 specifies a scale out by 3 instances, and rule 2 specifies a scale out by 5. If both rules are triggered, autoscale will scale out by 5 instances. Similarly, if one rule specifies scale out by 3 instances and another rule, scale out by 15%, the higher of the two instance counts will be used.
+For example, assume that there are two rules: Rule 1 specifies a scale-out by three instances, and rule 2 specifies a scale-out by five. If both rules are triggered, autoscale scales out by five instances. Similarly, if one rule specifies scale-out by three instances and another rule specifies scale-out by 15%, the higher of the two instance counts is used.
-If no scale-out rules are triggered, autoscale evaluates the scale-in rules, that is, rules with **direction = "Decrease"**. Autoscale only scales in if all of the scale-in rules are triggered.
+If no scale-out rules are triggered, autoscale evaluates the scale-in rules, that is, rules with **direction = "Decrease"**. Autoscale only scales in if all the scale-in rules are triggered.
-Autoscale calculates the new capacity determined by the **scaleAction** of each of those rules. To ensure service availability, autoscale scales in by as little as possible to achieve the maximum capacity specified. For example, assume two scale-in rules, one that decreases capacity by 50 percent, and one that decreases capacity by 3 instances. If first rule results in 5 instances and the second rule results in 7, autoscale scales-in to 7 instances.
+Autoscale calculates the new capacity determined by the **scaleAction** of each of those rules. To ensure service availability, autoscale scales in by as little as possible to achieve the maximum capacity specified. For example, assume two scale-in rules, one that decreases capacity by 50% and one that decreases capacity by three instances. If the first rule results in five instances and the second rule results in seven, autoscale scales in to seven instances.
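The capacity arithmetic in the two preceding paragraphs can be sketched as a single rule (a hypothetical helper under the stated assumptions, not the service's code):

```python
def resolve_capacity(candidates):
    # Each triggered rule's scaleAction yields a candidate instance count.
    # For scale-out, the highest candidate ensures availability; for
    # scale-in, the highest candidate shrinks the least -- so both
    # cases resolve to the maximum candidate.
    return max(candidates)

# Scale-out example from the text: +3 and +5 instances from 10 -> 13 and 15.
print(resolve_capacity([13, 15]))  # 15
# Scale-in example from the text: -50% and -3 instances from 10 -> 5 and 7.
print(resolve_capacity([5, 7]))    # 7
```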
-Each time autoscale calculates the result of a scale-in action, it evaluates whether that action would trigger a scale-out action. The scenario where a scale action triggers the opposite scale action is known as flapping. Autoscale may defer a scale-in action to avoid flapping or may scale by a number less than what was specified in the rule. For more information on flapping, see [Flapping in Autoscale](./autoscale-custom-metric.md)
+Each time autoscale calculates the result of a scale-in action, it evaluates whether that action would trigger a scale-out action. The scenario where a scale action triggers the opposite scale action is known as flapping. Autoscale might defer a scale-in action to avoid flapping or might scale by a number less than what was specified in the rule. For more information on flapping, see [Flapping in autoscale](./autoscale-custom-metric.md).
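One way to picture the flapping estimate is to project the per-instance metric at the smaller instance count (a sketch of the idea only; autoscale's actual evaluation is internal to the service):

```python
def scale_in_would_flap(metric_total, target_count, scale_out_threshold):
    # If the projected per-instance metric at the reduced count would
    # breach the scale-out threshold, the scale-in would immediately be
    # undone, so autoscale defers it.
    return metric_total / target_count > scale_out_threshold

# 4 instances at 60% CPU (240 total): scaling in to 3 projects 80% per
# instance, which breaches a 75% scale-out threshold, so autoscale defers.
print(scale_in_would_flap(240, 3, 75))  # True
```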
## Next steps
-Learn more about autoscale by referring to the following articles :
+Learn more about autoscale:
* [Overview of autoscale](./autoscale-overview.md)
* [Azure Monitor autoscale common metrics](./autoscale-common-metrics.md)
* [Autoscale with multiple profiles](./autoscale-multiprofile.md)
-* [Flapping in Autoscale](./autoscale-custom-metric.md)
+* [Flapping in autoscale](./autoscale-custom-metric.md)
* [Use autoscale actions to send email and webhook alert notifications](./autoscale-webhook-email.md)
* [Autoscale REST API](/rest/api/monitor/autoscalesettings)
azure-monitor Autoscale Webhook Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-webhook-email.md
# Use autoscale actions to send email and webhook alert notifications in Azure Monitor
-This article shows you how set up triggers so that you can call specific web URLs or send emails based on autoscale actions in Azure.
+This article shows you how to set up triggers so that you can call specific web URLs or send emails based on autoscale actions in Azure.
## Webhooks
-Webhooks allow you to route the Azure alert notifications to other systems for post-processing or custom notifications. For example, routing the alert to services that can handle an incoming web request to send SMS, log bugs, notify a team using chat or messaging services, etc. The webhook URI must be a valid HTTP or HTTPS endpoint.
+Webhooks allow you to route Azure alert notifications to other systems for post-processing or custom notifications. Examples include routing the alert to services that can handle an incoming web request to send an SMS, log bugs, or notify a team by using chat or messaging services. The webhook URI must be a valid HTTP or HTTPS endpoint.
## Email
-Email can be sent to any valid email address. Administrators and co-administrators of the subscription where the rule is running will also be notified.
+You can send email to any valid email address. Administrators and co-administrators of the subscription where the rule is running are also notified.
-## Cloud Services and App Services
-You can opt-in from the Azure portal for Cloud Services and Server Farms (App Services).
+## Cloud Services and App Service
+You can opt in from the Azure portal for Azure Cloud Services and server farms (Azure App Service).
* Choose the **scale by** metric.
-![scale by](./media/autoscale-webhook-email/insights-autoscale-notify.png)
+ ![Screenshot that shows the Autoscale setting pane.](./media/autoscale-webhook-email/insights-autoscale-notify.png)
-## Virtual Machine scale sets
-For newer Virtual Machines created with Resource Manager (Virtual Machine scale sets), you can configure this using REST API, Resource Manager templates, PowerShell, and CLI. A portal interface is not yet available.
-When using the REST API or Resource Manager template, include the notifications element in your [autoscalesettings](/azure/templates/microsoft.insights/2015-04-01/autoscalesettings) with the following options.
+## Virtual machine scale sets
+For newer virtual machines created with Azure Resource Manager (virtual machine scale sets), you can use the REST API, Resource Manager templates, PowerShell, and the Azure CLI for configuration. An Azure portal interface isn't yet available.
+
+When you use the REST API or Resource Manager templates, include the notifications element in your [autoscale settings](/azure/templates/microsoft.insights/2015-04-01/autoscalesettings) with the following options:
```
"notifications": [
  ...
]
```
-| Field | Mandatory? | Description |
+| Field | Mandatory | Description |
| --- | --- | --- |
-| operation |yes |value must be "Scale" |
-| sendToSubscriptionAdministrator |yes |value must be "true" or "false" |
-| sendToSubscriptionCoAdministrators |yes |value must be "true" or "false" |
-| customEmails |yes |value can be null [] or string array of emails |
-| webhooks |yes |value can be null or valid Uri |
-| serviceUri |yes |a valid https Uri |
-| properties |yes |value must be empty {} or can contain key-value pairs |
+| operation |Yes |Value must be "Scale". |
+| sendToSubscriptionAdministrator |Yes |Value must be "true" or "false." |
+| sendToSubscriptionCoAdministrators |Yes |Value must be "true" or "false." |
+| customEmails |Yes |Value can be null [] or a string array of emails. |
+| webhooks |Yes |Value can be null or valid URI. |
+| serviceUri |Yes |Valid HTTPS URI. |
+| properties |Yes |Value must be empty {} or can contain key-value pairs. |
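Assembled from the fields in the table, a notifications element might look like the following sketch. The email address and webhook URI are placeholders, and note that in the autoscalesettings schema the administrator and email fields sit under an `email` object:

```json
"notifications": [
  {
    "operation": "Scale",
    "email": {
      "sendToSubscriptionAdministrator": true,
      "sendToSubscriptionCoAdministrators": false,
      "customEmails": ["ops@contoso.com"]
    },
    "webhooks": [
      {
        "serviceUri": "https://mysamplealert/webcallback?tokenid=sometokenid",
        "properties": {}
      }
    ]
  }
]
```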
## Authentication in webhooks
-The webhook can authenticate using token-based authentication, where you save the webhook URI with a token ID as a query parameter. For example, https:\//mysamplealert/webcallback?tokenid=sometokenid&someparameter=somevalue
+The webhook can authenticate by using token-based authentication, where you save the webhook URI with a token ID as a query parameter. An example is https:\//mysamplealert/webcallback?tokenid=sometokenid&someparameter=somevalue.
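Constructing such a URI can be sketched as follows (`build_webhook_uri` is a hypothetical helper, not part of any Azure SDK):

```python
from urllib.parse import urlencode

def build_webhook_uri(base_uri, token_id, **extra_params):
    # Token-based authentication keeps the token ID in the saved
    # URI's query string, alongside any other custom parameters.
    return f"{base_uri}?{urlencode({'tokenid': token_id, **extra_params})}"

print(build_webhook_uri("https://mysamplealert/webcallback",
                        "sometokenid", someparameter="somevalue"))
# -> https://mysamplealert/webcallback?tokenid=sometokenid&someparameter=somevalue
```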
## Autoscale notification webhook payload schema
When the autoscale notification is generated, the following metadata is included in the webhook payload:
}
```
-| Field | Mandatory? | Description |
+| Field | Mandatory | Description |
| --- | --- | --- |
-| status |yes |The status that indicates that an autoscale action was generated |
-| operation |yes |For an increase of instances, it will be "Scale Out" and for a decrease in instances, it will be "Scale In" |
-| context |yes |The autoscale action context |
-| timestamp |yes |Time stamp when the autoscale action was triggered |
-| id |Yes |Resource Manager ID of the autoscale setting |
-| name |Yes |The name of the autoscale setting |
-| details |Yes |Explanation of the action that the autoscale service took and the change in the instance count |
-| subscriptionId |Yes |Subscription ID of the target resource that is being scaled |
-| resourceGroupName |Yes |Resource Group name of the target resource that is being scaled |
-| resourceName |Yes |Name of the target resource that is being scaled |
-| resourceType |Yes |The three supported values: "microsoft.classiccompute/domainnames/slots/roles" - Cloud Service roles, "microsoft.compute/virtualmachinescalesets" - Virtual Machine Scale Sets, and "Microsoft.Web/serverfarms" - Web App |
-| resourceId |Yes |Resource Manager ID of the target resource that is being scaled |
-| portalLink |Yes |Azure portal link to the summary page of the target resource |
-| oldCapacity |Yes |The current (old) instance count when Autoscale took a scale action |
-| newCapacity |Yes |The new instance count that Autoscale scaled the resource to |
-| properties |No |Optional. Set of <Key, Value> pairs (for example, Dictionary <String, String>). The properties field is optional. In a custom user interface or Logic app based workflow, you can enter keys and values that can be passed using the payload. An alternate way to pass custom properties back to the outgoing webhook call is to use the webhook URI itself (as query parameters) |
+| status |Yes |Status that indicates that an autoscale action was generated. |
+| operation |Yes |For an increase of instances, it's "Scale Out." For a decrease in instances, it's "Scale In." |
+| context |Yes |Autoscale action context. |
+| timestamp |Yes |Time stamp when the autoscale action was triggered. |
+| id |Yes |Resource Manager ID of the autoscale setting. |
+| name |Yes |Name of the autoscale setting. |
+| details |Yes |Explanation of the action that the autoscale service took and the change in the instance count. |
+| subscriptionId |Yes |Subscription ID of the target resource that's being scaled. |
+| resourceGroupName |Yes |Resource group name of the target resource that's being scaled. |
+| resourceName |Yes |Name of the target resource that's being scaled. |
+| resourceType |Yes |Three supported values: "microsoft.classiccompute/domainnames/slots/roles" - Azure Cloud Services roles, "microsoft.compute/virtualmachinescalesets" - Azure Virtual Machine Scale Sets, and "Microsoft.Web/serverfarms" - Web App feature of Azure App Service. |
+| resourceId |Yes |Resource Manager ID of the target resource that's being scaled. |
+| portalLink |Yes |Azure portal link to the summary page of the target resource. |
+| oldCapacity |Yes |Current (old) instance count when autoscale took a scale action. |
+| newCapacity |Yes |New instance count to which autoscale scaled the resource. |
+| properties |No |Optional. Set of <Key, Value> pairs (for example, Dictionary <String, String>). The properties field is optional. In a custom user interface or logic app-based workflow, you can enter keys and values that can be passed by using the payload. An alternate way to pass custom properties back to the outgoing webhook call is to use the webhook URI itself (as query parameters). |
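A hypothetical payload built from the fields in the table might look like the following. All values are placeholders reusing the VMSS1 names from the template earlier on this page, and most fields are nested under `context` in the delivered payload:

```json
{
  "status": "Activated",
  "operation": "Scale Out",
  "context": {
    "timestamp": "2023-01-10T09:00:00Z",
    "id": "/subscriptions/<subscription-id>/resourceGroups/rg-vmss1/providers/microsoft.insights/autoscalesettings/VMSS1-Autoscale-607",
    "name": "VMSS1-Autoscale-607",
    "details": "Autoscale successfully started scale operation",
    "subscriptionId": "<subscription-id>",
    "resourceGroupName": "rg-vmss1",
    "resourceName": "VMSS1",
    "resourceType": "microsoft.compute/virtualmachinescalesets",
    "resourceId": "/subscriptions/<subscription-id>/resourceGroups/rg-vmss1/providers/Microsoft.Compute/virtualMachineScaleSets/VMSS1",
    "portalLink": "https://portal.azure.com/...",
    "oldCapacity": "1",
    "newCapacity": "2",
    "properties": {}
  }
}
```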
azure-monitor Tutorial Autoscale Performance Schedule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/tutorial-autoscale-performance-schedule.md
Title: Autoscale Azure resources based on data or schedule
-description: Create an autoscale setting for an app service plan using metric data and a schedule
+description: Create an autoscale setting for an app service plan by using metric data and a schedule.
-# Create an Autoscale Setting for Azure resources based on performance data or a schedule
+# Create an autoscale setting for Azure resources based on performance data or a schedule
-Autoscale settings enable you to add/remove instances of service based on preset conditions. These settings can be created through the portal. This method provides a browser-based user interface for creating and configuring an autoscale setting.
+Autoscale settings enable you to add or remove instances of service based on preset conditions. These settings can be created through the portal. This method provides a browser-based user interface for creating and configuring an autoscale setting.
-In this tutorial, you will
+In this tutorial, you will:
> [!div class="checklist"]
-> * Create a Web App and App Service Plan
-> * Configure autoscale rules for scale-in and scale out based on the number of requests a Web App receives
-> * Trigger a scale-out action and watch the number of instances increase
-> * Trigger a scale-in action and watch the number of instances decrease
-> * Clean up your resources
+> * Create a web app and Azure App Service plan.
+> * Configure autoscale rules for scale-in and scale-out based on the number of requests a web app receives.
+> * Trigger a scale-out action and watch the number of instances increase.
+> * Trigger a scale-in action and watch the number of instances decrease.
+> * Clean up your resources.
If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
-## Log in to the Azure portal
+## Sign in to the Azure portal
-Log in to the [Azure portal](https://portal.azure.com/).
+Sign in to the [Azure portal](https://portal.azure.com/).
-## Create a Web App and App Service Plan
-1. Click the **Create a resource** option from the left-hand navigation pane.
-2. Search for and select the *Web App* item and click **Create**.
-3. Select an app name like *MyTestScaleWebApp*. Create a new resource group *myResourceGroup' or place it into a resource group of your choosing.
+## Create a web app and App Service plan
+1. On the menu on the left, select **Create a resource**.
+1. Search for and select the **Web App** item and select **Create**.
+1. Select an app name like **MyTestScaleWebApp**. Create a new resource group **myResourceGroup** or place it into a resource group of your choosing.
-Within a few minutes, your resources should be provisioned. Use the Web App and corresponding App Service Plan in the remainder of this tutorial.
+Within a few minutes, your resources should be provisioned. Use the web app and corresponding App Service plan in the remainder of this tutorial.
- ![Create a new app service in the portal](./media/tutorial-autoscale-performance-schedule/Web-App-Create.png)
+ ![Screenshot that shows creating a new app service in the portal.](./media/tutorial-autoscale-performance-schedule/Web-App-Create.png)
-## Navigate to Autoscale settings
-1. From the left-hand navigation pane, select the **Monitor** option. Once the page loads, select the **Autoscale** tab.
-2. A list of the resources under your subscription that support autoscale are listed here. Identify the App Service Plan that was created earlier in the tutorial, and click on it.
+## Go to autoscale settings
+1. On the menu on the left, select **Monitor**. Then select the **Autoscale** tab.
+1. A list of the resources under your subscription that support autoscale are listed here. Identify the App Service plan that was created earlier in the tutorial, and select it.
- ![Screenshot shows the Azure portal with Monitor, then Autoscale selected.](./media/tutorial-autoscale-performance-schedule/monitor-blade-autoscale.png)
+ ![Screenshot shows the Azure portal with Monitor and Autoscale selected.](./media/tutorial-autoscale-performance-schedule/monitor-blade-autoscale.png)
-3. On the autoscale setting, click the **Enable Autoscale** button.
+1. On the **Autoscale setting** screen, select **Enable autoscale**.
-The next few steps help you fill the autoscale screen to look like following picture:
+The next few steps help you fill the **Autoscale setting** screen to look like the following screenshot.
- ![Save autoscale setting](./media/tutorial-autoscale-performance-schedule/Autoscale-Setting-Save.png)
+ ![Screenshot that shows saving the autoscale setting.](./media/tutorial-autoscale-performance-schedule/Autoscale-Setting-Save.png)
## Configure default profile
-1. Provide a **Name** for the autoscale setting.
-2. In the default profile, ensure the **Scale mode** is set to 'Scale to a specific instance count'.
-3. Set the instance count to **1**. This setting ensures that when no other profile is active, or in effect, the default profile returns the instance count to 1.
-
- ![Screenshot shows the Autoscale setting page with a name entered for the setting.](./media/tutorial-autoscale-performance-schedule/autoscale-setting-profile.png)
+1. Provide a name for the autoscale setting.
+1. In the default profile, ensure **Scale mode** is set to **Scale to a specific instance count**.
+1. Set **Instance count** to **1**. This setting ensures that when no other profile is active, or in effect, the default profile returns the instance count to **1**.
+ ![Screenshot that shows the Autoscale setting screen with a name entered for the setting.](./media/tutorial-autoscale-performance-schedule/autoscale-setting-profile.png)
## Create recurrence profile
-1. Click on the **Add a scale condition** link under the default profile.
+1. Select the **Add a scale condition** link under the default profile.
-2. Edit the **Name** of this profile to be 'Monday to Friday profile'.
+1. Edit the name of this profile to be **Monday to Friday profile**.
-3. Ensure the **Scale mode** is set to 'Scale based on a metric'.
+1. Ensure **Scale mode** is set to **Scale based on a metric**.
-4. For **Instance limits** set the **Minimum** as '1', the **Maximum** as '2' and the **Default** as '1'. This setting ensures that this profile does not autoscale the service plan to have less than 1 instance, or more than 2 instances. If the profile does not have sufficient data to make a decision, it uses the default number of instances (in this case 1).
+1. For **Instance limits**, set **Minimum** as **1**, **Maximum** as **2**, and **Default** as **1**. This setting ensures that this profile doesn't autoscale the service plan to have less than one instance or more than two instances. If the profile doesn't have sufficient data to make a decision, it uses the default number of instances (in this case, one).
-5. For **Schedule**, select 'Repeat specific days'.
+1. For **Schedule**, select **Repeat specific days**.
-6. Set the profile to repeat Monday through Friday, from 09:00 PST to 18:00 PST. This setting ensures that this profile is only active and applicable 9AM to 6PM, Monday through Friday. During all other times, the 'Default' profile is the profile the autoscale setting uses.
+1. Set the profile to repeat Monday through Friday, from 09:00 PST to 18:00 PST. This setting ensures that this profile is only active and applicable 9 AM to 6 PM, Monday through Friday. During all other times, the **Default** profile is the profile the autoscale setting uses.
## Create a scale-out rule
-1. In the 'Monday to Friday profile'.
-
-2. Click the **Add a rule** link.
+1. In the **Monday to Friday profile** section, select the **Add a rule** link.
-3. Set the **Metric source** to be 'other resource'. Set the **Resource type** as 'App Services' and the **Resource** as the Web App created earlier in this tutorial.
+1. Set **Metric source** to be **Other resource**. Set **Resource type** as **App Services** and set **Resource** as the web app you created earlier in this tutorial.
-4. Set the **Time aggregation** as 'Total', the **Metric name** as 'Requests', and the **Time grain statistic** as 'Sum'.
+1. Set **Time aggregation** as **Total**, set **Metric name** as **Requests**, and set **Time grain statistic** as **Sum**.
-5. Set the **Operator** as 'Greater than', the **Threshold** as '10' and the **Duration** as '5' minutes.
+1. Set **Operator** as **Greater than**, set **Threshold** as **10**, and set **Duration** as **5** minutes.
-6. Select the **Operation** as 'Increase count by', the **Instance count** as '1', and the **Cool down** as '5' minutes.
+1. Set **Operation** as **Increase count by**, set **Instance count** as **1**, and set **Cool down** as **5** minutes.
-7. Click the **Add** button.
+1. Select **Add**.
-This rule ensures that if your Web App receives more than 10 requests within 5 minutes or less, one additional instance is added to your App Service Plan to manage load.
+This rule ensures that if your web app receives more than 10 requests within 5 minutes or less, one other instance is added to your App Service plan to manage load.
- ![Create a scale-out rule](./media/tutorial-autoscale-performance-schedule/Scale-Out-Rule.png)
+ ![Screenshot that shows creating a scale-out rule.](./media/tutorial-autoscale-performance-schedule/Scale-Out-Rule.png)
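The rule just configured can be sketched as follows (an illustration of the evaluation, not the autoscale service's implementation):

```python
def should_scale_out(requests_per_minute, threshold=10, duration_min=5):
    # Sum the Requests metric (Time aggregation: Total) over the last
    # 5-minute window, then apply the Greater-than-10 operator.
    window = requests_per_minute[-duration_min:]
    return sum(window) > threshold

print(should_scale_out([0, 1, 4, 4, 3]))  # 12 requests in the window -> True
```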
## Create a scale-in rule
-We recommended you always to have a scale-in rule to accompany a scale-out rule. Having both ensures that your resources are not over provisioned. Over provisioning means you have more instances running than needed to handle the current load.
-
-1. In the 'Monday to Friday profile'.
+We recommend that you always have a scale-in rule to accompany a scale-out rule. Having both ensures that your resources aren't overprovisioned. Overprovisioning means you have more instances running than needed to handle the current load.
-2. Click the **Add a rule** link.
+1. In the **Monday to Friday profile**, select the **Add a rule** link.
-3. Set the **Metric source** to be 'other resource'. Set the **Resource type** as 'App Services' and the **Resource** as the Web App created earlier in this tutorial.
+1. Set **Metric source** to **Other resource**. Set **Resource type** as **App Services**, and set **Resource** as the web app you created earlier in this tutorial.
-4. Set the **Time aggregation** as 'Total', the **Metric name** as 'Requests', and the **Time grain statistic** as 'Average'.
+1. Set **Time aggregation** as **Total**, set **Metric name** as **Requests**, and set **Time grain statistic** as **Average**.
-5. Set the **Operator** as 'Less than', the **Threshold** as '5' and the **Duration** as '5' minutes.
+1. Set **Operator** as **Less than**, set **Threshold** as **5**, and set **Duration** as **5** minutes.
-6. Select the **Operation** as 'Decrease count by', the **Instance count** as '1', and the **Cool down** as '5' minutes.
+1. Set **Operation** as **Decrease count by**, set **Instance count** as **1**, and set **Cool down** as **5** minutes.
-7. Click the **Add** button.
+1. Select **Add**.
- ![Create a scale-in rule](./media/tutorial-autoscale-performance-schedule/Scale-In-Rule.png)
+ ![Screenshot that shows creating a scale-in rule.](./media/tutorial-autoscale-performance-schedule/Scale-In-Rule.png)
-8. **Save** the autoscale setting.
+1. Save the autoscale setting.
- ![Save autoscale setting](./media/tutorial-autoscale-performance-schedule/Autoscale-Setting-Save.png)
+ ![Screenshot that shows saving autoscale setting.](./media/tutorial-autoscale-performance-schedule/Autoscale-Setting-Save.png)
## Trigger scale-out action
-To trigger the scale-out condition in the autoscale setting just created, the Web App must have more than 10 requests in less than 5 minutes.
+To trigger the scale-out condition in the autoscale setting you created, the web app must have more than 10 requests in less than 5 minutes.
-1. Open a browser window and navigate to the Web App created earlier in this tutorial. You can find the URL for your Web App in the Azure Portal by navigating to your Web App resource and clicking on the **Browse** button in the 'Overview' tab.
+1. Open a browser window and go to the web app you created earlier in this tutorial. You can find the URL for your web app in the Azure portal by going to your web app resource and selecting **Browse** on the **Overview** tab.
-2. In quick succession, reload the page more than 10 times.
+1. In quick succession, reload the page more than 10 times.
-3. From the left-hand navigation pane, select the **Monitor** option. Once the page loads select the **Autoscale** tab.
+1. On the menu on the left, select **Monitor**. Then select the **Autoscale** tab.
-4. From the list, select the App Service Plan used throughout this tutorial.
+1. From the list, select the App Service plan used throughout this tutorial.
-5. On the autoscale setting, click the **Run history** tab.
+1. On the **Autoscale setting** screen, select the **Run history** tab.
-6. You see a chart reflecting the instance count of the App Service Plan over time.
+1. You see a chart that reflects the instance count of the App Service plan over time. In a few minutes, the instance count should rise from **1** to **2**.
-7. In a few minutes, the instance count should rise from 1, to 2.
-
-8. Under the chart, you see the activity log entries for each scale action taken by this autoscale setting.
+1. Under the chart, you see the activity log entries for each scale action taken by this autoscale setting.
## Trigger scale-in action
-The scale-in condition in the autoscale setting triggers if there are fewer than 5 requests to the Web App over a period of 10 minutes.
-
-1. Ensure no requests are being sent to your Web App.
+The scale-in condition in the autoscale setting triggers if there are fewer than five requests to the web app over a period of 10 minutes.
-2. Load the Azure Portal.
+1. Ensure no requests are being sent to your web app.
-3. From the left-hand navigation pane, select the **Monitor** option. Once the page loads select the **Autoscale** tab.
+1. Load the Azure portal.
-4. From the list, select the App Service Plan used throughout this tutorial.
+1. On the menu on the left, select **Monitor**. Then select the **Autoscale** tab.
-5. On the autoscale setting, click the **Run history** tab.
+1. From the list, select the App Service plan used throughout this tutorial.
-6. You see a chart reflecting the instance count of the App Service Plan over time.
+1. On the **Autoscale setting** screen, select the **Run history** tab.
-7. In a few minutes, the instance count should drop from 2, to 1. The process takes at least 100 minutes.
+1. You see a chart that reflects the instance count of the App Service plan over time. In a few minutes, the instance count should drop from **2** to **1**. The process takes at least 100 minutes.
-8. Under the chart, are the corresponding set of activity log entries for each scale action taken by this autoscale setting.
+1. Under the chart, you see the corresponding set of activity log entries for each scale action taken by this autoscale setting.
- ![View scale-in actions](./media/tutorial-autoscale-performance-schedule/Scale-In-Chart.png)
+ ![Screenshot that shows viewing scale-in actions.](./media/tutorial-autoscale-performance-schedule/Scale-In-Chart.png)
## Clean up resources
-1. From the left-hand menu in the Azure portal, click **All resources** and then select the Web App created in this tutorial.
+1. On the menu on the left in the Azure portal, select **All resources**. Then select the web app created in this tutorial.
-2. On your resource page, click **Delete**, confirm delete by typing **yes** in the text box, and then click **Delete**.
+1. On your resource page, select **Delete**. Confirm delete by entering **yes** in the text box, and then select **Delete**.
-3. Then select the App Service Plan resource and click **Delete**.
+1. Select the App Service plan resource and select **Delete**.
-4. Confirm delete by typing **yes** in the text box, and then click **Delete**.
+1. Confirm delete by entering **yes** in the text box, and then select **Delete**.
## Next steps
-In this tutorial, you
-> [!div class="checklist"]
-> * Created a Web App and App Service Plan
-> * Configured autoscale rules for scale-in and scale out based on the number of requests the Web App received
-> * Triggered a scale-out action and watched the number of instances increase
-> * Triggered a scale-in action and watched the number of instances decrease
-> * Cleaned up your resources
--
-To learn more about autoscale settings, continue on to the [autoscale overview](../autoscale/autoscale-overview.md).
+To learn more about autoscale settings, see [Autoscale overview](../autoscale/autoscale-overview.md).
> [!div class="nextstepaction"]
> [Archive your monitoring data](../essentials/platform-logs-overview.md)
azure-monitor Collect Custom Metrics Linux Telegraf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-linux-telegraf.md
This article explains how to deploy and configure the [InfluxData](https://www.influxdata.com/) Telegraf agent on a Linux virtual machine to send metrics to Azure Monitor.
+> [!NOTE]
+> InfluxData Telegraf is an open-source agent and isn't officially supported by Azure Monitor. For issues with the Telegraf connector, refer to the Telegraf GitHub page: [InfluxData](https://github.com/influxdata/telegraf).
+
## InfluxData Telegraf agent
[Telegraf](https://docs.influxdata.com/telegraf/) is a plug-in-driven agent that enables the collection of metrics from over 150 different sources. Depending on what workloads run on your VM, you can configure the agent to use specialized input plug-ins to collect metrics. Examples are MySQL, NGINX, and Apache. By using output plug-ins, the agent can then write to destinations that you choose. The Telegraf agent integrates directly with the Azure Monitor custom metrics REST API. It supports an Azure Monitor output plug-in. By using this plug-in, the agent can collect workload-specific metrics on your Linux VM and submit them as custom metrics to Azure Monitor.
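For the Azure Monitor output plug-in, a minimal telegraf.conf fragment might look like the following sketch. Anything beyond the plug-in name is an assumption, so check the Telegraf documentation for your version:

```toml
# Minimal, hypothetical output section for Azure Monitor custom metrics.
[[outputs.azure_monitor]]
  # On an Azure VM, the plug-in can discover the region and resource ID
  # from the instance metadata service, so no options are strictly required.
```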
azure-monitor Data Collection Rule Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-best-practices.md
When creating a DCR, there are some aspects that need to be considered, such as:
- The target virtual machines with which the DCR will be associated
- The destination of collected data
-Considering all these factors, is critical for a good DCR organization. All the above points impact on DCR management effort as well on resource consumption for configuration transfer and processing.
+Considering all these factors is critical for a good DCR organization. All the above points impact DCR management effort, as well as resource consumption for configuration transfer and processing.
-Given the native granularity, which allows a given DCR to be associated with more than one target virtual machine and vice versa, it's important to keep the DCR as simple as possible using fewer data sources each. It's also important to keep the list of collected items in each data source, lean and oriented to the observability scope.
+Given the native granularity, which allows a given DCR to be associated with more than one target virtual machine and vice versa, it's important to keep the DCRs as simple as possible using fewer data sources each. It's also important to keep the list of collected items in each data source lean and oriented to the observability scope.
:::image type="content" source="media/data-collection-rule-best-practices/data-collection-rules-to-vm-relationship.png" lightbox="media/data-collection-rule-best-practices/data-collection-rules-to-vm-relationship.png" alt-text="Screenshot of data collection rules to virtual machines relation.":::
-To clarify what an *observability scope* could be, think about it as your preferred logical boundary for collecting data. For instance, a possible scope could be a set of virtual machine running software (that is "Sql Servers") needed for a specific application, or basic operating system counters or events set used by the IT Admins. It's also possible to create similar scopes dedicated to different environments ('Development', 'Test', 'Production') to specialize even more.
+To clarify what an *observability scope* could be, think about it as your preferred logical boundary for collecting data. For instance, a possible scope could be a set of virtual machines running software (for example, "SQL Servers") needed for a specific application, or basic operating system counters or events set used by your IT Admins. It's also possible to create similar scopes dedicated to different environments ("Development", "Test", "Production") to specialize even more.
-In fact, it's not ideal, even not recommended, to create a single DCR containing all the data sources, collection items and destinations to implement the observability. In the following table, there are several recommendations that could help in better planning DCR creation and maintenance:
+In fact, it's not ideal and even not recommended to create a single DCR containing all the data sources, collection items and destinations to implement the observability. In the following table, there are several recommendations that could help in better planning DCR creation and maintenance:
| Category | Best practice | Explanation | Impact area |
|:--|:--|:--|:--|
-| Data Collection | Define the observability scope | Defining the observability scope is key to an easier and successful DCR management and organization observability scope. It will help clarifying what the collection need is, and from which target virtual machine it should be performed. As previously explained, an observability scope could be a set of virtual machine running software that is common to a specific application, a set of common information for the IT department, etc. As an example, collecting the basic operating system performance counter, such as CPU utilization, available memory and free disk space, could be seen as scope for the Central IT Management. | Not having a clearly defined scope doesn't bring clarity and doesn't allow for a proper management. |
+| Data Collection | Define the observability scope | Defining the observability scope is key to easier and successful DCR management and organization. It helps clarify what needs to be collected and from which target virtual machines. As previously explained, an observability scope could be a set of virtual machines running software that is common to a specific application, a set of common information for the IT department, and so on. As an example, collecting the basic operating system performance counters, such as CPU utilization, available memory, and free disk space, could be seen as a scope for your Central IT Management. | Not having a clearly defined scope doesn't bring clarity and doesn't allow for proper management. |
| | Create DCRs specific to the observability scope | Creating separate DCRs based on the observability scope is key for easy maintenance. It allows you to easily associate the DCRs with the relevant target virtual machines. | Why create a single DCR that collects operating system performance counters plus web server counters and database counters all together? This approach will not only force every associated virtual machine to transfer, process, and execute configuration that is outside of its scope, but will also require more effort when the DCR configuration needs to be updated. Think about managing a template that includes unnecessary entries; this situation is less than ideal and leaves room for errors. | | | Create DCRs specific to the data source type inside the defined observability scope(s) | Creating separate DCRs for performance and events helps in both managing the configuration and the association, with granularity based on the target machines. For instance, creating a DCR to collect both events and performance counters could result in a suboptimal approach. There could be situations in which a given machine (or set of machines) doesn't have the event logs or performance counters configured in the DCR. In this situation, the virtual machines are forced to process and execute a configuration that isn't necessary according to the software installed on them. | Not using different DCRs forces every associated virtual machine to transfer, process, and execute configuration that might not be applicable according to the installed software. Excessive compute resource consumption and errors in processing the configuration might occur, causing the [Azure Monitor Agent (AMA)](../overview.md) to become unresponsive. Moreover, collecting unnecessary data increases data ingestion costs. |
| Data destination | Create different DCRs based on the destination | DCRs can send data to multiple different destinations, like Azure Monitor Metrics and Azure Monitor Logs, simultaneously. Having DCRs specific to a destination is helpful in meeting data sovereignty or legal requirements. Because compliance might require sending data only to allowed repositories created in allowed regions, having different DCRs allows for more granular destination targeting. | Not separating DCRs based on the data destination might result in noncompliance with data handling, privacy, and access requirements, and could lead to unnecessary data collection, resulting in unexpected costs. |
The aforementioned principles provide a foundation for creating your own DCR ma
## Next steps -- [Read more about data collection rules and options for creating them.](data-collection-rule-overview.md)
+- [Read more about data collection rules and options for creating them.](data-collection-rule-overview.md)
azure-monitor Solution Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/solution-office-365.md
> [!IMPORTANT] > ## Solution update
-> This solution has been replaced by the [Office 365](../../sentinel/data-connectors-reference.md#microsoft-office-365) General Availability solution in [Microsoft Sentinel](../../sentinel/overview.md) and the [Azure AD reporting and monitoring solution](../../active-directory/reports-monitoring/plan-monitoring-and-reporting.md). Together they provide an updated version of the previous Azure Monitor Office 365 solution with an improved configuration experience. You can continue to use the existing solution until October 31, 2020.
+> This solution has been replaced by the [Office 365](../../sentinel/data-connectors/office-365.md) General Availability solution in [Microsoft Sentinel](../../sentinel/overview.md) and the [Azure AD reporting and monitoring solution](../../active-directory/reports-monitoring/plan-monitoring-and-reporting.md). Together they provide an updated version of the previous Azure Monitor Office 365 solution with an improved configuration experience. You can continue to use the existing solution until October 31, 2020.
> > Microsoft Sentinel is a cloud-native security information and event management solution that ingests logs and provides additional SIEM functionality, including detections, investigations, hunting, and machine learning-driven insights. Using Microsoft Sentinel now provides you with ingestion of Office 365 SharePoint activity and Exchange management logs. >
azure-monitor Access Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/access-api.md
Title: API Access and Authentication
-description: How to Authenticate and access the Azure Monitor Log Analytics API.
+ Title: API access and authentication
+description: Learn how to authenticate and access the Azure Monitor Log Analytics API.
Last updated 11/28/2022
# Access the Azure Monitor Log Analytics API
-You can submit a query request to a workspace using the Azure Monitor Log Analytics endpoint `https://api.loganalytics.azure.com`. To access the endpoint, you must authenticate through Azure Active Directory (Azure AD).
+You can submit a query request to a workspace by using the Azure Monitor Log Analytics endpoint `https://api.loganalytics.azure.com`. To access the endpoint, you must authenticate through Azure Active Directory (Azure AD).
+ >[!Note]
-> The `api.loganalytics.io` endpoint is being replaced by `api.loganalytics.azure.com`. `api.loganalytics.io` will continue to be supported for the forseeable future.
-## Authenticating with a demo API key
+> The `api.loganalytics.io` endpoint is being replaced by `api.loganalytics.azure.com`. The `api.loganalytics.io` endpoint will continue to be supported for the foreseeable future.
+
+## Authenticate with a demo API key
-To quickly explore the API without Azure Active Directory authentication, use the demonstration workspace with sample data, which supports API key authentication.
+To quickly explore the API without Azure AD authentication, use the demonstration workspace with sample data, which supports API key authentication.
To authenticate and run queries against the sample workspace, use `DEMO_WORKSPACE` as the {workspace-id} and pass in the API key `DEMO_KEY`.
-If either the Application ID or the API key is incorrect, the API service will return a [403](https://en.wikipedia.org/wiki/List_of_HTTP_status_codes#4xx_Client_Error) (Forbidden) error.
+If either the Application ID or the API key is incorrect, the API service returns a [403](https://en.wikipedia.org/wiki/List_of_HTTP_status_codes#4xx_Client_Error) (Forbidden) error.
-The API key `DEMO_KEY` can be passed in three different ways, depending on whether you prefer to use the URL, a header, or basic authentication.
+The API key `DEMO_KEY` can be passed in three different ways, depending on whether you want to use a header, the URL, or basic authentication:
-1. **Custom header**: provide the API key in the custom header `X-Api-Key`
-2. **Query parameter**: provide the API key in the URL parameter `api_key`
-3. **Basic authentication**: provide the API key as either username or password. If you provide both, the API key must be in the username.
+- **Custom header**: Provide the API key in the custom header `X-Api-Key`.
+- **Query parameter**: Provide the API key in the URL parameter `api_key`.
+- **Basic authentication**: Provide the API key as either username or password. If you provide both, the API key must be in the username.
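For illustration, the three options can be sketched with Python's standard library. The `demo_request` helper and its structure are our own, not part of the API:

```python
import base64
import json
import urllib.request

URL = "https://api.loganalytics.azure.com/v1/workspaces/DEMO_WORKSPACE/query"
BODY = json.dumps({"query": "AzureActivity | summarize count() by Category"}).encode()

def demo_request(style: str) -> urllib.request.Request:
    """Build (without sending) a POST query request that passes the
    DEMO_KEY API key in the chosen style."""
    headers = {"Content-Type": "application/json"}
    url = URL
    if style == "header":
        # Option 1: custom header.
        headers["X-Api-Key"] = "DEMO_KEY"
    elif style == "param":
        # Option 2: URL query parameter.
        url = URL + "?api_key=DEMO_KEY"
    elif style == "basic":
        # Option 3: basic authentication; the key goes in the username slot.
        headers["Authorization"] = "Basic " + base64.b64encode(b"DEMO_KEY:").decode()
    else:
        raise ValueError(style)
    return urllib.request.Request(url, data=BODY, headers=headers, method="POST")
```

Passing the result of `demo_request("header")` to `urllib.request.urlopen` would run the query against the demonstration workspace.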
-This example uses the Workspace ID and API key in the header:
+This example uses the workspace ID and API key in the header:
``` POST https://api.loganalytics.azure.com/v1/workspaces/DEMO_WORKSPACE/query
This example uses the Workspace ID and API key in the header:
"query": "AzureActivity | summarize count() by Category" } ```+ ## Public API endpoint The public API endpoint is:
The public API endpoint is:
https://api.loganalytics.azure.com/{api-version}/workspaces/{workspaceId} ``` where:
+ - **api-version**: The API version. The current version is "v1."
+ - **workspaceId**: Your workspace ID.
The query is passed in the request body.
-For example,
+For example:
``` https://api.loganalytics.azure.com/v1/workspaces/1234abcd-def89-765a-9abc-def1234abcde
For example,
"query": "Usage" } ```
-## Set up Authentication
-To access the API, you need to register a client app with Azure Active Directory and request a token.
-1. [Register an app in Azure Active Directory](./register-app-for-token.md).
+## Set up authentication
+
+To access the API, you register a client app with Azure AD and request a token.
+
+1. [Register an app in Azure AD](./register-app-for-token.md).
+
+1. On the app's overview page, select **API permissions**.
+1. Select **Add a permission**.
+1. On the **APIs my organization uses** tab, search for **Log Analytics** and select **Log Analytics API** from the list.
-1. On the app's overview page, select **API permissions**
-1. Select **Add a permission**
-1. In the **APIs my organization uses** tab search for *log analytics* and select **Log Analytics API** from the list.
+ :::image type="content" source="../media/api-register-app/request-api-permissions.png" alt-text="A screenshot that shows the Request API permissions page.":::
-1. Select **Delegated permissions**
-1. Check the checkbox for **Data.Read**
-1. Select **Add permissions**
+1. Select **Delegated permissions**.
+1. Select the **Data.Read** checkbox.
+1. Select **Add permissions**.
-Now that your app is registered and has permissions to use the API, grant your app access to your Log Analytics Workspace.
+ :::image type="content" source="../media/api-register-app/add-requested-permissions.png" alt-text="A screenshot that shows the continuation of the Request API permissions page.":::
-1. From your Log analytics Workspace overview page, select **Access control (IAM)**.
+Now that your app is registered and has permissions to use the API, grant your app access to your Log Analytics workspace.
+
+1. From your **Log Analytics workspace** overview page, select **Access control (IAM)**.
1. Select **Add role assignment**.
- :::image type="content" source="../media/api-register-app/workspace-access-control.png" alt-text="A screenshot showing the access control page for a log analytics workspace.":::
+ :::image type="content" source="../media/api-register-app/workspace-access-control.png" alt-text="A screenshot that shows the Access control page for a Log Analytics workspace.":::
-1. Select the **Reader** role then select **Members**
-
- :::image type="content" source="../media/api-register-app/add-role-assignment.png" alt-text="A screenshot showing the add role assignment page for a log analytics workspace.":::
+1. Select the **Reader** role and then select **Members**.
+
+ :::image type="content" source="../media/api-register-app/add-role-assignment.png" alt-text="A screenshot that shows the Add role assignment page for a Log Analytics workspace.":::
-1. In the Members tab, select **Select members**
-1. Enter the name of your app in the **Select** field.
-1. Choose your app and select **Select**
-1. Select **Review and assign**
-
- :::image type="content" source="../media/api-register-app/select-members.png" alt-text="A screenshot showing the select members blade on the role assignment page for a log analytics workspace.":::
+1. On the **Members** tab, choose **Select members**.
+1. Enter the name of your app in the **Select** box.
+1. Select your app and choose **Select**.
+1. Select **Review + assign**.
-1. After completing the Active Directory setup and workspace permissions, request an authorization token.
+ :::image type="content" source="../media/api-register-app/select-members.png" alt-text="A screenshot that shows the Select members pane on the Add role assignment page for a Log Analytics workspace.":::
+
+1. After you finish the Active Directory setup and workspace permissions, request an authorization token.
>[!Note]
-> For this example we applied the **Reader** role. This role is one of many built-in roles and may include more permissions than you require. More granular roles and permissions can be created. For more information, see [Manage access to Log Analytics workspaces](../../logs/manage-access.md).
+> For this example, we applied the Reader role. This role is one of many built-in roles and might include more permissions than you require. More granular roles and permissions can be created. For more information, see [Manage access to Log Analytics workspaces](../../logs/manage-access.md).
-## Request an Authorization Token
+## Request an authorization token
-Before beginning, make sure you have all the values required to make the request successfully. All requests require:
-- Your Azure Active Directory tenant ID.
+Before you begin, make sure you have all the values required to make the request successfully. All requests require:
+- Your Azure AD tenant ID.
- Your workspace ID.-- Your Azure Active Directory client ID for the app.-- An Azure Active Directory client secret for the app.
+- Your Azure AD client ID for the app.
+- An Azure AD client secret for the app.
-The Log Analytics API supports Azure Active Directory authentication with three different [Azure AD OAuth2](/azure/active-directory/develop/active-directory-protocols-oauth-code) flows:
-- Client credentials
+The Log Analytics API supports Azure AD authentication with three different [Azure AD OAuth2](/azure/active-directory/develop/active-directory-protocols-oauth-code) flows:
+- Client credentials
- Authorization code - Implicit
+### Client credentials flow
-### Client Credentials Flow
+In the client credentials flow, the token is used with the Log Analytics endpoint. A single request is made to receive a token by using the credentials provided for your app in the previous step when you [register an app in Azure AD](./register-app-for-token.md).
-In the client credentials flow, the token is used with the log analytics endpoint. A single request is made to receive a token, using the credentials provided for your app in the [Register an app for in Azure Active Directory](./register-app-for-token.md) step above.
-Use the `https://api.loganalytics.azure.com` endpoint.
+Use the `https://api.loganalytics.azure.com` endpoint.
-##### Client Credentials Token URL (POST request)
+#### Client credentials token URL (POST request)
```http POST /<your-tenant-id>/oauth2/token
A successful request receives an access token in the response:
} ```
-Use the token in requests to the log analytics endpoint:
+Use the token in requests to the Log Analytics endpoint:
```http POST /v1/workspaces/your workspace id/query?timespan=P1D
Use the token in requests to the log analytics endpoint:
} ``` -
-Example Response:
+Example response:
```http {
Example Response:
} ```
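As a sketch of the two requests above in Python, the following helpers build the token request and the authenticated query request. The helper names are illustrative, and the `resource` value follows the newer `api.loganalytics.azure.com` endpoint noted earlier (your tenant might still use `https://api.loganalytics.io`):

```python
import json
import urllib.parse
import urllib.request

def build_token_request(tenant_id: str, client_id: str, client_secret: str) -> urllib.request.Request:
    """Build the POST that exchanges client credentials for an access token."""
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "resource": "https://api.loganalytics.azure.com",  # assumed resource value
    }).encode()
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/token"
    return urllib.request.Request(url, data=body, method="POST")

def build_query_request(workspace_id: str, access_token: str, kql: str) -> urllib.request.Request:
    """Build the query POST that presents the token as a bearer header."""
    url = f"https://api.loganalytics.azure.com/v1/workspaces/{workspace_id}/query"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {access_token}",
    }
    return urllib.request.Request(url, data=json.dumps({"query": kql}).encode(),
                                  headers=headers, method="POST")

# Sending build_token_request(...) returns JSON whose "access_token" field
# is then passed to build_query_request(...).
```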
-### Authorization Code Flow
+### Authorization code flow
-The main OAuth2 flow supported is through [authorization codes](/azure/active-directory/develop/active-directory-protocols-oauth-code). This method requires two HTTP requests to acquire a token with which to call the Azure Monitor Log Analytics API. There are two URLs, one endpoint per request. Their formats are:
+The main OAuth2 flow supported is through [authorization codes](/azure/active-directory/develop/active-directory-protocols-oauth-code). This method requires two HTTP requests to acquire a token with which to call the Azure Monitor Log Analytics API. There are two URLs, with one endpoint per request. Their formats are described in the following sections.
-#### Authorization Code URL (GET request):
+#### Authorization code URL (GET request)
```http GET https://login.microsoftonline.com/YOUR_Azure AD_TENANT/oauth2/authorize?
The main OAuth2 flow supported is through [authorization codes](/azure/active-di
&resource=https://api.loganalytics.io ```
-When making a request to the Authorize URL, the client\_id is the Application ID from your Azure AD App, copied from the App's properties menu. The redirect\_uri is the home page/login URL from the same Azure AD App. When a request is successful, this endpoint redirects you to the sign-in page you provided at sign-up with the authorization code appended to the URL. See the following example:
+When a request is made to the authorize URL, the client\_id is the application ID from your Azure AD app, copied from the app's properties menu. The redirect\_uri is the homepage/login URL from the same Azure AD app. When a request is successful, this endpoint redirects you to the sign-in page you provided at sign-up with the authorization code appended to the URL. See the following example:
```http http://<app-client-id>/?code=AUTHORIZATION_CODE&session_state=STATE_GUID ```
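For illustration, the authorize URL above can be assembled in Python; `build_authorize_url` is a hypothetical helper, not part of any SDK:

```python
import urllib.parse

def build_authorize_url(tenant_id: str, client_id: str, redirect_uri: str) -> str:
    """Build the GET authorize URL for the authorization code flow."""
    params = urllib.parse.urlencode({
        "client_id": client_id,          # application ID of the Azure AD app
        "response_type": "code",
        "redirect_uri": redirect_uri,    # home page/login URL of the app
        "resource": "https://api.loganalytics.io",
    })
    return f"https://login.microsoftonline.com/{tenant_id}/oauth2/authorize?{params}"
```

Opening the resulting URL in a browser prompts for sign-in and then redirects to `redirect_uri` with the authorization code appended.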
-At this point you'll have obtained an authorization code, which you need now to request an access token.
+At this point, you've obtained an authorization code, which you need now to request an access token.
-#### Authorization Code Token URL (POST request)
+#### Authorization code token URL (POST request)
```http POST /YOUR_Azure AD_TENANT/oauth2/token HTTP/1.1
At this point you'll have obtained an authorization code, which you need now to
&client_secret=<app-client-secret> ```
-All values are the same as before, with some additions. The authorization code is the same code you received in the previous request after a successful redirect. The code is combined with the key obtained from the Azure AD App. If you didn't save the key, you can delete it and create a new one from the keys tab of the Azure AD App menu. The response is a JSON string containing the token with the following schema. Types are indicated for the token values.
+All values are the same as before, with some additions. The authorization code is the same code you received in the previous request after a successful redirect. The code is combined with the key obtained from the Azure AD app. If you didn't save the key, you can delete it and create a new one from the keys tab of the Azure AD app menu. The response is a JSON string that contains the token with the following schema. Types are indicated for the token values.
Response example:
Response example:
} ```
-The access token portion of this response is what you present to the Log Analytics API in the `Authorization: Bearer` header. You may also use the refresh token in the future to acquire a new access\_token and refresh\_token when yours have gone stale. For this request, the format and endpoint are:
+The access token portion of this response is what you present to the Log Analytics API in the `Authorization: Bearer` header. You can also use the refresh token in the future to acquire a new access\_token and refresh\_token when yours have gone stale. For this request, the format and endpoint are:
```http POST /YOUR_AAD_TENANT/oauth2/token HTTP/1.1
Response example:
} ```
-### Implicit Code Flow
+### Implicit code flow
-The Log Analytics API supports the OAuth2 [implicit flow](/azure/active-directory/develop/active-directory-dev-understanding-oauth2-implicit-grant). For this flow, only a single request is required but no refresh token can be acquired.
+The Log Analytics API supports the OAuth2 [implicit flow](/azure/active-directory/develop/active-directory-dev-understanding-oauth2-implicit-grant). For this flow, only a single request is required, but no refresh token can be acquired.
-#### Implicit Code Authorize URL
+#### Implicit code authorize URL
```http GET https://login.microsoftonline.com/YOUR_AAD_TENANT/oauth2/authorize?
The Log Analytics API supports the OAuth2 [implicit flow](/azure/active-director
&resource=https://api.loganalytics.io ```
-A successful request will produce a redirect to your redirect URI with the token in the URL as follows.
+A successful request produces a redirect to your redirect URI with the token in the URL:
```http http://YOUR_REDIRECT_URI/#access_token=YOUR_ACCESS_TOKEN&token_type=Bearer&expires_in=3600&session_state=STATE_GUID ```
-This access\_token can be used as the `Authorization: Bearer` header value when passed to the Log Analytics API to authorize requests.
+This access\_token can be used as the `Authorization: Bearer` header value when it's passed to the Log Analytics API to authorize requests.
-## More Information
+## More information
You can find documentation about OAuth2 with Azure AD here:-
+ - [Azure AD authorization code flow](/azure/active-directory/develop/active-directory-protocols-oauth-code)
+ - [Azure AD implicit grant flow](/azure/active-directory/develop/active-directory-dev-understanding-oauth2-implicit-grant)
+ - [Azure AD S2S client credentials flow](/azure/active-directory/develop/active-directory-protocols-oauth-service-to-service)
## Next steps -- [Request format](./request-format.md) -- [Response format](./response-format.md) -- [Querying logs for Azure resources](./azure-resource-queries.md) -- [Batch queries](./batch-queries.md)
+- [Request format](./request-format.md)
+- [Response format](./response-format.md)
+- [Querying logs for Azure resources](./azure-resource-queries.md)
+- [Batch queries](./batch-queries.md)
azure-monitor Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/errors.md
Title: Errors
+ Title: Azure Monitor Log Analytics API errors
description: This section contains a non-exhaustive list of known common errors that can occur in the Azure Monitor Log Analytics API, their causes, and possible solutions. Last updated 11/29/2021
-# Azure Monitor Log Analytics API Errors
+# Azure Monitor Log Analytics API errors
-This section contains a non-exhaustive list of known common errors, their causes, and possible solutions. It also contains successful responses which often indicate an issue with the request (such as a missing header) or otherwise unexpected behavior.
+This section contains a non-exhaustive list of known common errors, their causes, and possible solutions. It also contains successful responses, which often indicate an issue with the request (such as a missing header) or otherwise unexpected behavior.
-## Query Syntax Error
+## Query syntax error
-Code: 400 Response:
+400 response:
``` {
Code: 400 Response:
} ```
-Details: The query string is malformed. Check for extra spaces, punctuation, or spelling errors.
+The query string is malformed. Check for extra spaces, punctuation, or spelling errors.
-## No Authentication Provided
+## No authentication provided
-Code: 401 Response:
+401 response:
``` {
Code: 401 Response:
} ```
-Details: Include a form of authentication with your request, such as the header "Authorization: Bearer \<token\>"
+Include a form of authentication with your request, such as the header `Authorization: Bearer <token>`.
-## Invalid Authentication Token
+## Invalid authentication token
-Code: 403 Response:
+403 response:
``` {
Code: 403 Response:
} ```
-Details: the token is malformed or otherwise invalid. This can occur if you manually copy-paste the token and add or cut characters to the payload. Verify that the token is exactly as received from Azure AD.
+The token is malformed or otherwise invalid. This error can occur if you manually copy and paste the token and add or cut characters to the payload. Verify that the token is exactly as received from Azure Active Directory (Azure AD).
-## Invalid Token Audience
+## Invalid token audience
-Code: 403 Response:
+403 response:
``` {
Code: 403 Response:
} ```
-Details: this occurs if you try to use the client credentials OAuth2 flow to obtain a token for the API and then use that token via the ARM endpoint. Use one of the indicated URLs as the resource in your token request if you want to use the ARM endpoint. Alternatively, you can use the direct API endpoint with a different OAuth2 flow for authorization.
+This error occurs if you try to use the client credentials OAuth2 flow to obtain a token for the API and then use that token via the Azure Resource Manager endpoint. Use one of the indicated URLs as the resource in your token request if you want to use the Azure Resource Manager endpoint. Alternatively, you can use the direct API endpoint with a different OAuth2 flow for authorization.
-## Client Credentials to Direct API
+## Client credentials to direct API
-Code: 403 Response:
+403 response:
``` {
Code: 403 Response:
} ```
-Details: This error can occur if you try to use client credentials via the direct API endpoint. If you are using the direct API endpoint, use a different OAuth2 flow for authorization. If you must use client credentials, use the ARM API endpoint.
+This error can occur if you try to use client credentials via the direct API endpoint. If you're using the direct API endpoint, use a different OAuth2 flow for authorization. If you must use client credentials, use the Azure Resource Manager API endpoint.
-## Insufficient Permissions
+## Insufficient permissions
-Code: 403 Response:
+403 response:
``` {
Code: 403 Response:
} ```
-Details: The token you have presented for authorization belongs to a user who does not have sufficient access to this privilege. Verify your workspace GUID and your token request are correct, and if necessary grant IAM privileges in your workspace to the Azure AD Application you created as Contributor.
+The token you've presented for authorization belongs to a user who doesn't have sufficient access to this privilege. Verify that your workspace GUID and your token request are correct. If necessary, grant IAM privileges in your workspace to the Azure AD application you created as Contributor.
> [!NOTE]
-> When using Azure AD authentication, it may take up to 60 minutes for the Azure Application Insights REST API to recognize new
-> role-based access control (RBAC) permissions. While permissions are propagating, REST API calls may fail with error code 403.
+> When you use Azure AD authentication, it might take up to 60 minutes for the Application Insights REST API to recognize new role-based access control permissions. While permissions are propagating, REST API calls might fail with error code 403.
-## Bad Authorization Code
+## Bad authorization code
-Code: 403 Response:
+403 response:
``` {
Code: 403 Response:
} ```
-Details: The authorization code submitted in the token request was either stale or previously used. Reauthorize via the Azure AD authorize endpoint to get a new code.
+The authorization code submitted in the token request was either stale or previously used. Reauthorize via the Azure AD authorize endpoint to get a new code.
-## Path Not Found
+## Path not found
-Code: 404 Response:
+404 response:
``` {
Code: 404 Response:
} ```
-Details: the requested query path does not exist. Verify the URL spelling of the endpoint you are hitting, and that you are using a supported HTTP verb.
+The requested query path doesn't exist. Verify the URL spelling of the endpoint you're hitting and that you're using a supported HTTP verb.
## Missing JSON or Content-Type
-Code: 200 Response: empty body. Details: If you send a POST request that is missing either JSON body or the "Content-Type: application/json" header, we will return an empty 200 response.
+200 response: Empty body
-## No Data in Workspace
+If you send a POST request that's missing either JSON body or the `"Content-Type: application/json"` header, we return an empty 200 response.
-Code: 204 Response: empty body. Details: If a workspace has no data in it, we return a 204 No Content.
+## No data in workspace
+
+204 response: Empty body
+
+If a workspace has no data in it, we return 204 No Content.
azure-monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/overview.md
Title: Overview
-description: This article describes the REST API, created to make the data collected by Azure Log Analytics easily available.
+description: This article describes the REST API that makes the data collected by Azure Log Analytics easily available.
Last updated 02/28/2023
-# Azure Monitor Log Analytics API Overview
+# Azure Monitor Log Analytics API overview
-The Log Analytics **Query API** is a REST API that lets you query the full set of data collected by Azure Monitor logs using the same query language used throughout the service. Use this API to retrieve data, build new visualizations of your data, and extend the capabilities of Log Analytics.
+The Log Analytics Query API is a REST API that you can use to query the full set of data collected by Azure Monitor logs. You can use the same query language that's used throughout the service. Use this API to retrieve data, build new visualizations of your data, and extend the capabilities of Log Analytics.
-## Log Analytics API Authentication
+## Log Analytics API authentication
-You must authenticate to access the Log Analytics API.
-- To query your workspaces, you must use [Azure Active Directory authentication](../../../active-directory/fundamentals/active-directory-whatis.md).
+You must authenticate to access the Log Analytics API:
+- To query your workspaces, you must use [Azure Active Directory (Azure AD) authentication](../../../active-directory/fundamentals/active-directory-whatis.md).
- To quickly explore the API without using Azure AD authentication, you can use an API key to query sample data in a non-production environment.

### Azure AD authentication for workspace data
You must authenticate to access the Log Analytics API.
The Log Analytics API supports Azure AD authentication with three different [Azure AD OAuth2](/azure/active-directory/develop/active-directory-protocols-oauth-code) flows:
- Authorization code
- Implicit
-- Client credentials
+- Client credentials
The authorization code flow and implicit flow both require at least one user interactive sign-in to your application. If you need a non-interactive flow, use the client credentials flow.
-After receiving a token, the process for calling the Log Analytics API is the same for all flows. Requests require the `Authorization: Bearer` header, populated with the token received from the OAuth2 flow.
+After you receive a token, the process for calling the Log Analytics API is the same for all flows. Requests require the `Authorization: Bearer` header, populated with the token received from the OAuth2 flow.
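The Bearer-token call described above can be sketched in Python with only the standard library. This is an illustrative sketch, not a documented client: the workspace ID, token, and query are placeholders, and only the request construction is shown.

```python
# Sketch: building a Log Analytics Query API call. The Authorization header
# is the same regardless of which OAuth2 flow produced the token.
# The workspace ID, token, and query below are placeholders.
import json
from urllib.request import Request

def build_query_request(workspace_id, token, query):
    url = f"https://api.loganalytics.azure.com/v1/workspaces/{workspace_id}/query"
    return Request(
        url,
        data=json.dumps({"query": query}).encode(),
        headers={
            "Authorization": f"Bearer {token}",  # token from the OAuth2 flow
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_query_request("00000000-0000-0000-0000-000000000000",
                          "<access-token>", "AzureActivity | count")
print(req.get_header("Authorization"))  # Bearer <access-token>
```

Sending the request (for example, with `urllib.request.urlopen`) returns the JSON payload described in the response-format article.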
### API key authentication for sample data
-To quickly explore the API without using Azure AD authentication, we provide a demonstration workspace with sample data, which allows [authenticating with an API key](./access-api.md#authenticating-with-a-demo-api-key).
+To quickly explore the API without using Azure AD authentication, we provide a demonstration workspace with sample data. You can [authenticate by using an API key](./access-api.md#authenticate-with-a-demo-api-key).
> [!NOTE]
-> When using Azure AD authentication, it may take up to 60 minutes for the Azure Application Insights REST API to recognize new
-> role-based access control (RBAC) permissions. While permissions are propagating, REST API calls may fail with [error code 403](./errors.md#insufficient-permissions).
+> When you use Azure AD authentication, it might take up to 60 minutes for the Application Insights REST API to recognize new role-based access control permissions. While permissions are propagating, REST API calls might fail with [error code 403](./errors.md#insufficient-permissions).
-## Log Analytics API Query Limits
+## Log Analytics API query limits
-See [the **Query API** section of this page](../../service-limits.md) for information about query limits.
+For information about query limits, see the [Query API section of this webpage](../../service-limits.md).
-## Trying the Log Analytics API
+## Try the Log Analytics API
To try the API without writing any code, you can use:
- Your favorite client such as [Fiddler](https://www.telerik.com/fiddler) or [Postman](https://www.getpostman.com/) to manually generate queries with a user interface.
- - [cURL](https://curl.haxx.se/) from the command line, and then pipe the output into [jsonlint](https://github.com/zaach/jsonlint) to get readable JSON.
+ - [cURL](https://curl.haxx.se/) from the command line. Then pipe the output into [jsonlint](https://github.com/zaach/jsonlint) to get readable JSON.
Instead of calling the REST API directly, you can use the idiomatic Azure Monitor Query client libraries:
azure-monitor Response Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/response-format.md
Title: Response format
+ Title: Azure Monitor Log Analytics API response format
description: The Azure Monitor Log Analytics API response is JSON that contains an array of table objects.
# Azure Monitor Log Analytics API response format
-The response is JSON string that contains an array of table objects.
+The Azure Monitor Log Analytics API response is a JSON string that contains an array of table objects.
-- The `tables` property is an array of tables representing the query result. Each table contains `name`, `columns`, and `rows` properties.
+The `tables` property is an array of tables that represent the query result. Each table contains `name`, `columns`, and `rows` properties:
- The `name` property is the name of the table.
+ - The `columns` property is an array of objects that describe the schema of each column.
- The `rows` property is an array of values. Each item in the array represents a row in the result set.
-In the following example, we can see the result contains two columns, `Category` and `count_`. The first column, `Category`, represents the value of the `Category` column in the `AzureActivity` table, and the second column, `count_` is count of the number of events in the `AzureActivity` table for the given Category.
+In the following example, we can see that the result contains two columns: `Category` and `count_`. The first column, `Category`, represents the value of the `Category` column in the `AzureActivity` table. The second column, `count_`, is the count of events in the `AzureActivity` table for the specific category.
```
HTTP/1.1 200 OK
In the following example, we can see the result contains two columns, `Category`
## Azure Monitor Log Analytics API errors
-If a fatal error occurs during query execution, an error status code is returned with a [OneAPI](https://github.com/Microsoft/api-guidelines/blob/vNext/Guidelines.md#errorresponse--object) error object describing the error.
+If a fatal error occurs during query execution, an error status code is returned with a [OneAPI](https://github.com/Microsoft/api-guidelines/blob/vNext/Guidelines.md#errorresponse--object) error object that describes the error.
-If a non-fatal error occurs during query execution, the response status code is `200 OK` and contains the query results in the `tables` property as described above. The response will also contain an `error` property, which is OneAPI error object with code `PartialError`. Details of the error are included in the `details` property.
+If a non-fatal error occurs during query execution, the response status code is `200 OK`. It contains the query results in the `tables` property as described. The response also contains an `error` property, which is a OneAPI error object with the code `PartialError`. Details of the error are included in the `details` property.
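A minimal sketch of consuming such a response, assuming the `tables`/`columns`/`rows` shape and the `PartialError` code described above; the table name and sample values are illustrative only:

```python
# Sketch: consuming a Log Analytics query response. The payload shape
# (tables -> name/columns/rows) and the PartialError code follow the
# format described above; the sample values are illustrative.
def check_partial_error(response):
    """Return detail messages when a 200 OK carries a PartialError."""
    error = response.get("error")
    if error is None:
        return []
    if error.get("code") == "PartialError":
        return [d.get("message", "") for d in error.get("details", [])]
    raise RuntimeError(f"query failed: {error.get('code')}")

def rows_as_dicts(response):
    """Flatten the first table's columns/rows into one dict per row."""
    table = response["tables"][0]
    names = [c["name"] for c in table["columns"]]
    return [dict(zip(names, row)) for row in table["rows"]]

sample = {
    "tables": [{
        "name": "PrimaryResult",
        "columns": [{"name": "Category", "type": "string"},
                    {"name": "count_", "type": "long"}],
        "rows": [["Administrative", 20839], ["Alert", 122]],
    }]
}

print(rows_as_dicts(sample)[0])  # {'Category': 'Administrative', 'count_': 20839}
```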
-## Next Steps
-Get detailed information about using the [API options](batch-queries.md).
+## Next steps
+
+Get more information about using the [API options](batch-queries.md).
azure-monitor Timeouts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/timeouts.md
Title: Timeouts
-description: Query execution times can vary widely based on the complexity of the query, amount of data being analyzed, and the load on the system and workspace at the time of the query.
+ Title: Timeouts of query executions
+description: Query execution times can vary widely based on the complexity of the query, the amount of data being analyzed, and the load on the system and workspace at the time of the query.
Last updated 11/28/2021
# Timeouts

Query execution times can vary widely based on:
-- The complexity of the query
-- The amount of data being analyzed
-- The load on the system at the time of the query
-- The load on the workspace at the time of the query
-You may want to customize the timeout for the query. The default timeout is 3 minutes, and the maximum timeout is 10 minutes.
+- The complexity of the query.
+- The amount of data being analyzed.
+- The load on the system at the time of the query.
+- The load on the workspace at the time of the query.
+
+You might want to customize the timeout for the query. The default timeout is 3 minutes. The maximum timeout is 10 minutes.
## Timeout request header
-To set the timeout, use the `Prefer` header in the HTTP request, using the standard `wait` preference, see [here](https://tools.ietf.org/html/rfc7240#section-4.3) for details. The `Prefer` header puts an upper limit, in seconds, on how long the client will wait for the service to process the query.
+To set the timeout, use the `Prefer` header in the HTTP request by using the standard `wait` preference. For more information, see [this website](https://tools.ietf.org/html/rfc7240#section-4.3). The `Prefer` header puts an upper limit, in seconds, on how long the client waits for the service to process the query.
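A minimal sketch of building the headers, assuming a token is already in hand; the cap at 600 seconds reflects the documented 10-minute maximum:

```python
# Sketch: setting the query timeout via the Prefer header's wait preference.
# The default timeout is 3 minutes (180 s); values are capped at the
# 10-minute maximum (600 s).
def query_headers(token, timeout_seconds=180):
    timeout_seconds = min(timeout_seconds, 600)  # 10-minute cap
    return {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
        "Prefer": f"wait={timeout_seconds}",
    }

print(query_headers("<token>", 30)["Prefer"])  # wait=30
```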
## Response
-If a query takes longer than the specified timeout (or default timeout, if unspecified), it will fail with a status code of 504 Gateway Timeout.
+If a query takes longer than the specified timeout (or default timeout, if unspecified), it fails with a status code of 504 Gateway Timeout.
-For example, the following request allows a maximum server timeout age of 30 seconds
+For example, the following request allows a maximum server timeout age of 30 seconds:
```
POST https://api.loganalytics.azure.com/v1/workspaces/{workspace-id}/query
azure-netapp-files Azure Government https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-government.md
na Previously updated : 02/17/2023 Last updated : 03/08/2023
All [Azure NetApp Files features](whats-new.md) available on Azure public cloud
|: |: |: |
| Azure NetApp Files backup | Public preview | No |
| Standard network features | Generally available (GA) | No |
+| Azure NetApp Files datastores for AVS | Generally available (GA) | No |
+| Azure NetApp Files customer-managed keys | Public preview | No |
+| Azure NetApp Files large volumes | Public preview | No |
## Portal access
backup Backup Azure Private Endpoints Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-private-endpoints-concept.md
Title: Private endpoints for Azure Backup - Overview
description: This article explains about the concept of private endpoints for Azure Backup that helps to perform backups while maintaining the security of your resources. Previously updated : 02/20/2023 Last updated : 03/08/2023
This article describes how the [enhanced capabilities of private endpoints](#key
- You can create private endpoints only for new Recovery Services vaults that don't have any items registered/protected to the vault.
+ >[!Note]
+ >You can't create private endpoints by using a static IP address.
+ - You can't upgrade vaults (that contain private endpoints) created using the classic experience to the new experience. You can delete all existing private endpoints, and then create new private endpoints with the v2 experience.
- One virtual network can contain private endpoints for multiple Recovery Services vaults. Also, one Recovery Services vault can have private endpoints for it in multiple virtual networks. However, you can create a maximum of 12 private endpoints for a vault.
This article describes how the [enhanced capabilities of private endpoints](#key
- [Cross-region restore](backup-create-rs-vault.md#set-cross-region-restore) for SQL and SAP HANA database backups aren't supported, if the vault has private endpoints enabled.
+- You can create private DNS zones across subscriptions.
+
## Recommended and supported scenarios

While private endpoints are enabled for the vault, they're used for backup and restore of SQL and SAP HANA workloads in an Azure VM, MARS agent backup, and DPM only. You can use the vault for backup of other workloads as well (they won't require private endpoints though). In addition to backup of SQL and SAP HANA workloads and backup using the MARS agent, private endpoints are also used to perform file recovery for Azure VM backup.
In addition to the Azure Backup cloud services, the workload extension and agent
As a prerequisite, Recovery Services vault requires permissions for creating additional private endpoints in the same Resource Group. We also recommend providing the Recovery Services vault the permissions to create DNS entries in the private DNS zones (`privatelink.blob.core.windows.net`, `privatelink.queue.core.windows.net`). Recovery Services vault searches for private DNS zones in the resource groups where VNet and private endpoint are created. If it has the permissions to add DNS entries in these zones, they'll be created by the vault; otherwise, you must create them manually.
->[!Note]
->Integration with private DNS zone present in different subscriptions is unsupported in this experience.
- The following diagram shows how the name resolution works for storage accounts using a private DNS zone. :::image type="content" source="./media/private-endpoints-overview/name-resolution-works-for-storage-accounts-using-private-dns-zone-inline.png" alt-text="Diagram showing how the name resolution works for storage accounts using a private DNS zone." lightbox="./media/private-endpoints-overview/name-resolution-works-for-storage-accounts-using-private-dns-zone-expanded.png":::
backup Backup Azure Private Endpoints Configure Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-private-endpoints-configure-manage.md
Title: How to create and manage private endpoints (with v2 experience) for Azure
description: This article explains how to configure and manage private endpoints for Azure Backup. Previously updated : 02/20/2023 Last updated : 03/08/2023
You'll see an entry for the virtual network for which you've created the private
If you're using a host file for name resolution, make corresponding entries in the host file for each IP and FQDN according to the format - `<private ip><space><FQDN>`.

>[!Note]
->Azure Backup may allocate new storage account for your vault for the backup data, and the extension or agent needs to access the respective endpoints. For more about how to add more DNS records after registration and backup, see [the guidance in Use Private Endpoints for Backup](private-endpoints.md#use-private-endpoints-for-backup).
------
+>Azure Backup may allocate a new storage account for your vault for the backup data, and the extension or agent needs to access the respective endpoints. For more information about how to add more DNS records after registration and backup, see [how to use private endpoints for backup](#use-private-endpoints-for-backup).
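The `<private ip><space><FQDN>` host-file format can be sketched as follows; the IP addresses and storage-account name are hypothetical, and the FQDNs use the private DNS zones mentioned in this article:

```python
# Sketch: generating host-file lines in the documented
# "<private ip><space><FQDN>" format. The IPs and the storage-account
# name are hypothetical examples.
entries = [
    ("10.2.0.7", "storageacct.privatelink.blob.core.windows.net"),
    ("10.2.0.8", "storageacct.privatelink.queue.core.windows.net"),
]

host_lines = [f"{ip} {fqdn}" for ip, fqdn in entries]
print("\n".join(host_lines))
```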
## Use private endpoints for backup
backup Backup Blobs Storage Account Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-blobs-storage-account-ps.md
Title: Back up Azure blobs within a storage account using Azure PowerShell description: Learn how to back up all Azure blobs within a storage account using Azure PowerShell. + Last updated 08/06/2021
blobrg-PSTestSA-3df6ac08-9496-4839-8fb5-8b78e594f166 Microsoft.DataProtection/ba
## Next steps
-[Restore Azure blobs using Azure PowerShell](restore-blobs-storage-account-ps.md)
+[Restore Azure blobs using Azure PowerShell](restore-blobs-storage-account-ps.md)
backup Backup Managed Disks Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-managed-disks-cli.md
Title: Back up Azure Managed Disks using Azure CLI description: Learn how to back up Azure Managed Disks using Azure CLI. + Last updated 09/17/2021
backup Blob Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/blob-backup-support-matrix.md
Operational backup for blobs is available in all public cloud regions, except Fr
# [Vaulted backup](#tab/vaulted-backup)
-Vaulted backup (preview) for blobs is currently available in the following regions: France Central, Canada Central, Canada East, US East, and US South.
+Vaulted backup (preview) for blobs is currently available in the following regions: France Central, Canada Central, Canada East, US East, South Central US, Germany West Central, Germany North, Australia Central, Australia Central 2, India South, India West, Korea Central, and Korea South.
backup Manage Afs Backup Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/manage-afs-backup-cli.md
Title: Manage Azure file share backups with the Azure CLI description: Learn how to use the Azure CLI to manage and monitor Azure file shares backed up by Azure Backup. + Last updated 02/09/2022
backup Multi User Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/multi-user-authorization.md
zone_pivot_groups: backup-vaults-recovery-services-vault-backup-vault Last updated 11/08/2022 +
To disable the MUA, the Backup admins must follow these steps:
## Next steps

[Learn more about Multi-user authorization using Resource Guard](multi-user-authorization-concept.md).-
backup Quick Backup Vm Bicep Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-backup-vm-bicep-template.md
ms.devlang: azurecli
Last updated 11/17/2021 -+
backup Quick Backup Vm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-backup-vm-template.md
description: Learn how to back up your virtual machines with Azure Resource Mana
ms.devlang: azurecli Last updated 11/15/2021-+
backup Restore Blobs Storage Account Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-blobs-storage-account-cli.md
Title: Restore Azure Blobs via Azure CLI description: Learn how to restore Azure Blobs to any point-in-time using Azure CLI. + Last updated 06/18/2021
backup Restore Blobs Storage Account Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-blobs-storage-account-ps.md
Title: Restore Azure blobs via Azure PowerShell description: Learn how to restore Azure blobs to any point-in-time using Azure PowerShell. + Last updated 05/05/2021
backup Restore Managed Disks Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-managed-disks-cli.md
Title: Restore Azure Managed Disks via Azure CLI description: Learn how to restore Azure Managed Disks using Azure CLI. + Last updated 06/18/2021
az dataprotection job list-from-resourcegraph --datasource-type AzureDisk --oper
## Next steps
-[Azure Disk Backup FAQ](./disk-backup-faq.yml)
+[Azure Disk Backup FAQ](./disk-backup-faq.yml)
backup Restore Managed Disks Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-managed-disks-ps.md
Title: Restore Azure Managed Disks via Azure PowerShell description: Learn how to restore Azure Managed Disks using Azure PowerShell. + Last updated 03/26/2021
backup Restore Postgresql Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-postgresql-database-cli.md
description: Learn how to restore Azure PostgreSQL databases using Azure CLI.
Last updated 01/24/2022 +
az dataprotection job list-from-resourcegraph --datasource-type AzureDatabaseFor
## Next steps

-- [Overview of Azure PostgreSQL backup](backup-azure-database-postgresql-overview.md)
+- [Overview of Azure PostgreSQL backup](backup-azure-database-postgresql-overview.md)
backup Restore Postgresql Database Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-postgresql-database-ps.md
description: Learn how to restore Azure PostgreSQL databases using Azure PowerSh
Last updated 01/24/2022 +
$job = Search-AzDataProtectionJobInAzGraph -Subscription $sub -ResourceGroupName
## Next steps

-- [Azure PostgreSQL Backup overview](backup-azure-database-postgresql-overview.md)
+- [Azure PostgreSQL Backup overview](backup-azure-database-postgresql-overview.md)
backup Sap Hana Database With Hana System Replication Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-database-with-hana-system-replication-backup.md
Title: Back up SAP HANA System Replication databases on Azure VMs (preview) description: In this article, discover how to back up SAP HANA databases with HANA System Replication enabled. Previously updated : 12/23/2022 Last updated : 03/08/2023
When a failover occurs, the users are replicated to the new primary, but *hdbuse
If the password of this custom backup key expires, it could lead to backup and restore operation failures.
-1. Create the same customer backup user (with the same password) and key (in *hdbuserstore*) on both VMs/nodes.
+1. Create the same *Custom backup user* (with the same password) and key (in *hdbuserstore*) on both VMs/nodes.
1. Run the SAP HANA backup configuration script (preregistration script) in the VMs where HANA is installed as the root user. This script sets up the HANA system for backup. For more information about the script actions, see the [What the preregistration script does](tutorial-backup-sap-hana-db.md#what-the-pre-registration-script-does) section.
backup Delete Recovery Services Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/scripts/delete-recovery-services-vault.md
description: Learn about how to use a PowerShell script to delete a Recovery Ser
Last updated 03/06/2023 +
backup Microsoft Azure Recovery Services Powershell All https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/scripts/microsoft-azure-recovery-services-powershell-all.md
Title: Script Sample - Configuring Backup for on-premises Windows server description: Learn how to use a script to configure Backup for on-premises Windows server. + Last updated 06/23/2021
backup Tutorial Restore Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-restore-files.md
Title: Tutorial - Restore files to a VM with Azure Backup
description: Learn how to perform file-level restores on an Azure VM with Backup and Recovery Services. Last updated 01/31/2019-+
backup Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/whats-new.md
You can learn more about the new releases by bookmarking this page or by [subscr
## Updates summary

-- February 2023
+- March 2023
  - [Azure Blob vaulted backups (preview)](#azure-blob-vaulted-backups-preview)
- October 2022
  - [Multi-user authorization using Resource Guard for Backup vault (in preview)](#multi-user-authorization-using-resource-guard-for-backup-vault-in-preview)
baremetal-infrastructure Connect Baremetal Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/connect-baremetal-infrastructure.md
Title: Connect BareMetal Infrastructure instances in Azure description: Learn how to identify and interact with BareMetal instances in the Azure portal or Azure CLI. + Last updated 07/13/2021
bastion Bastion Create Host Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-create-host-powershell.md
Last updated 03/14/2022 + # Customer intent: As someone with a networking background, I want to deploy Bastion and connect to a VM.- # Deploy Bastion using Azure PowerShell
bastion Shareable Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/shareable-link.md
By default, users in your org will have only read access to shared links. If a u
## Considerations

* Shareable Links isn't currently supported for peered VNets that aren't in the same subscription.
+* Shareable Links isn't currently supported for peered VNets across tenants.
* Shareable Links isn't currently supported for peered VNets that aren't in the same region. * Shareable Links isn't supported for national clouds during preview. * The Standard SKU is required for this feature.
batch Batch Certificate Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-certificate-migration-guide.md
Title: Migrate Batch account certificates to Azure Key Vault description: Learn how to migrate Batch account certificates to Azure Key Vault and plan for feature end of support.-- Previously updated : 10/12/2022 Last updated : 03/08/2023 # Migrate Batch account certificates to Azure Key Vault
On *February 29, 2024*, the Azure Batch account certificates feature will be ret
## About the feature
-Certificates are often required in various scenarios such as decrypting a secret, securing communication channels, or [accessing another service](credential-access-key-vault.md). Currently, Azure Batch offers two ways to manage certificates on Batch pools. You can add certificates to a Batch account or you can use the Azure Key Vault VM extension to manage certificates on Batch pools. Only the [certificate functionality on an Azure Batch account](/rest/api/batchservice/certificate) and the functionality it extends to Batch pools via `CertificateReference` to [Add Pool](/rest/api/batchservice/pool/add#certificatereference), [Patch Pool](/rest/api/batchservice/pool/patch#certificatereference), [Update Properties](/rest/api/batchservice/pool/update-properties#certificatereference) and the corresponding references on Get and List Pool APIs are being retired.
+Certificates are often required in various scenarios such as decrypting a secret, securing communication channels, or [accessing another service](credential-access-key-vault.md). Currently, Azure Batch offers two ways to manage certificates on Batch pools. You can add certificates to a Batch account or you can use the Azure Key Vault VM extension to manage certificates on Batch pools. Only the [certificate functionality on an Azure Batch account](/rest/api/batchservice/certificate) and the functionality it extends to Batch pools via `CertificateReference` to [Add Pool](/rest/api/batchservice/pool/add#certificatereference), [Patch Pool](/rest/api/batchservice/pool/patch#certificatereference), [Update Properties](/rest/api/batchservice/pool/update-properties#certificatereference) and the corresponding references on Get and List Pool APIs are being retired. Additionally, for Linux pools, the environment variable `$AZ_BATCH_CERTIFICATES_DIR` will no longer be defined and populated.
## Feature end of support [Azure Key Vault](../key-vault/general/overview.md) is the standard, recommended mechanism for storing and accessing secrets and certificates across Azure securely. Therefore, on February 29, 2024, we'll retire the Batch account certificates feature in Azure Batch. The alternative is to use the Azure Key Vault VM Extension and a user-assigned managed identity on the pool to securely access and install certificates on your Batch pools.
-After the certificates feature in Azure Batch is retired on February 29, 2024, a certificate in Batch won't work as expected. After that date, you'll no longer be able to add certificates to a Batch account or link these certificates to Batch pools. Pools that continue to use this feature after this date may not behave as expected such as updating certificate references or the ability to install existing certificate references.
+After the certificates feature in Azure Batch is retired on February 29, 2024, a certificate in Batch won't work as expected. After that date, you'll no longer be able to add certificates to a Batch account or link these certificates to Batch pools. Pools that continue to use this feature after this date may not behave as expected such as updating certificate references or the ability to install existing certificate references.
## Alternative: Use Azure Key Vault VM extension with pool user-assigned managed identity
For a complete guide on how to enable Azure Key Vault VM Extension with Pool Use
Yes. You may use the same Key Vault that's specified with your Batch account for use with your pools, but the Key Vault used for certificates for your Batch pools may be entirely separate.
+- Are both Linux and Windows Batch pools supported with the Key Vault VM extension?
+
+ Yes. See the documentation for [Windows](../virtual-machines/extensions/key-vault-windows.md) and [Linux](../virtual-machines/extensions/key-vault-linux.md).
+
+- How do I get references to certificates on Linux Batch Pools since `$AZ_BATCH_CERTIFICATES_DIR` will be removed?
+
+ The Key Vault VM extension for Linux allows you to specify the `certificateStoreLocation`, which is an absolute path to where the certificate will be stored.
+ - Where can I find best practices for using Azure Key Vault?
-
+ See [Azure Key Vault best practices](../key-vault/general/best-practices.md).

## Next steps
batch Manage Private Endpoint Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/manage-private-endpoint-connections.md
Title: Manage private endpoint connections with Azure Batch accounts description: Learn how to manage private endpoint connections with Azure Batch accounts, including list, approve, reject and remove. + Last updated 05/26/2022
batch Virtual File Mount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/virtual-file-mount.md
Title: Mount a virtual file system on a pool
description: Learn how to mount a virtual file system on a Batch pool. ms.devlang: csharp-+ Last updated 11/11/2021
cdn Cdn Custom Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-custom-ssl.md
You can use your own certificate to enable the HTTPS feature. This process is do
2. Azure Key Vault certificates: If you have a certificate, upload it directly to your Azure Key Vault account. If you don't have a certificate, create a new certificate directly through Azure Key Vault.

> [!NOTE]
-> The certificate must have a complete certificate chain with leaf and intermediate certificates, and root CA must be part of the [Microsoft Trusted CA List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT).
+> * Azure CDN only supports PFX certificates.
+> * The certificate must have a complete certificate chain with leaf and intermediate certificates, and root CA must be part of the [Microsoft Trusted CA List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT).
### Register Azure CDN
Register Azure CDN as an app in your Azure Active Directory via PowerShell.
2. In PowerShell, run the following command: `New-AzADServicePrincipal -ApplicationId "205478c0-bd83-4e1b-a9d6-db63a3e1e1c8" -Role Contributor`
+
> [!NOTE]
- > **205478c0-bd83-4e1b-a9d6-db63a3e1e1c8** is the service principal for **Microsoft.AzureFrontDoor-Cdn**.
+ > * **205478c0-bd83-4e1b-a9d6-db63a3e1e1c8** is the service principal for **Microsoft.AzureFrontDoor-Cdn**.
+ > * You need to have the **Global Administrator** role to run this command.
```powershell
New-AzADServicePrincipal -ApplicationId "205478c0-bd83-4e1b-a9d6-db63a3e1e1c8" -Role Contributor
```
Grant Azure CDN permission to access the certificates (secrets) in your Azure Ke
- The available certificate/secret versions. > [!NOTE]
- > In order for the certificate to be automatically rotated to the latest version when a newer version of the certificate is available in your Key Vault, please set the certificate/secret version to 'Latest'. If a specific version is selected, you have to re-select the new version manually for certificate rotation. It takes up to 72 hours for the new version of the certificate/secret to be deployed.
+ > * Azure CDN only supports PFX certificates.
+ > * In order for the certificate to be automatically rotated to the latest version when a newer version of the certificate is available in your Key Vault, please set the certificate/secret version to 'Latest'. If a specific version is selected, you have to re-select the new version manually for certificate rotation. It takes up to 72 hours for the new version of the certificate/secret to be deployed.
5. Select **On** to enable HTTPS.
cdn Create Profile Endpoint Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/create-profile-endpoint-bicep.md
na -+ Last updated 03/14/2022
cdn Create Profile Endpoint Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/create-profile-endpoint-template.md
na -+ Last updated 02/27/2023
cdn Cdn Azure Cli Create Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/scripts/cli/cdn-azure-cli-create-endpoint.md
Last updated 02/27/2023
ms.devlang: azurecli+ ms.tool: azure-cli
cloud-services-extended-support Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/deploy-powershell.md
Last updated 10/13/2020-+ # Deploy a Cloud Service (extended support) using Azure PowerShell
cloud-services-extended-support Deploy Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/deploy-sdk.md
Last updated 10/13/2020-+ # Deploy Cloud Services (extended support) by using the Azure SDK
cloud-shell Private Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/private-vnet.md
description: Deploy Cloud Shell into an Azure virtual network
ms.contributor: jahelmic- Last updated 11/14/2022 vm-linux
cognitive-services Concept Background Removal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-background-removal.md
The following example images illustrate what the Image Analysis service returns
|Original image |With background removed |Alpha matte |
-||||
+|::|::|::|
| | | |
||||
cognitive-services Concept Image Retrieval https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-image-retrieval.md
Vector embeddings are a way of representing content&mdash;text or images&mdash;a
## How does it work?
+
1. Vectorize Images and Text: the Image Retrieval APIs, **VectorizeImage** and **VectorizeText**, can be used to extract feature vectors out of an image or text respectively. The APIs return a single feature vector representing the entire input.
-- Measure similarity: Vector search systems typically use distance metrics, such as cosine distance or Euclidean distance, to compare vectors and rank them by similarity. The [Vision studio](https://portal.vision.cognitive.azure.com/) demo uses [cosine distance](./how-to/image-retrieval.md#calculate-vector-similarity) to measure similarity.
-- Retrieve Images: Use the top _N_ vectors similar to the search query and retrieve images corresponding to those vectors from your photo library to provide as the final result.
+1. Measure similarity: Vector search systems typically use distance metrics, such as cosine distance or Euclidean distance, to compare vectors and rank them by similarity. The [Vision studio](https://portal.vision.cognitive.azure.com/) demo uses [cosine distance](./how-to/image-retrieval.md#calculate-vector-similarity) to measure similarity.
+1. Retrieve Images: Use the top _N_ vectors similar to the search query and retrieve images corresponding to those vectors from your photo library to provide as the final result.
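The measure-similarity and retrieve-images steps above can be sketched in plain Python. This is a toy illustration, not the Image Retrieval service itself; the embedding vectors below are made-up stand-ins for what **VectorizeImage** and **VectorizeText** would return:

```python
import math

def cosine_distance(a, b):
    """Cosine distance = 1 - cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

def top_n(query_vector, library, n=2):
    """Rank library items (name -> vector) by cosine distance to the query."""
    ranked = sorted(library, key=lambda name: cosine_distance(query_vector, library[name]))
    return ranked[:n]

# Made-up embeddings standing in for VectorizeImage/VectorizeText output.
library = {
    "beach.jpg":  [0.9, 0.1, 0.0],
    "city.jpg":   [0.1, 0.9, 0.2],
    "forest.jpg": [0.0, 0.2, 0.9],
}
query = [0.8, 0.2, 0.1]  # e.g. the vector for the text "sunny beach"
print(top_n(query, library))  # nearest images first
```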
## Next steps
cognitive-services Background Removal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/background-removal.md
This guide assumes you've already [created a Computer Vision resource](https://p
## Submit data to the service
-When calling the **Image Analysis - Segment** API, you specify the image's URL by formatting the request body like this: `{"url":"https://docs.microsoft.com/azure/cognitive-services/computer-vision/images/windows-kitchen.jpg"}`.
+When calling the **Image Analysis - Segment** API, you specify the image's URL by formatting the request body like this: `{"url":"https://learn.microsoft.com/azure/cognitive-services/computer-vision/images/windows-kitchen.jpg"}`.
To analyze a local image, you'd put the binary image data in the HTTP request body.
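The two request shapes described above differ only in the body and Content-Type. Here is a minimal sketch; the header names and body shapes follow the patterns quoted in the docs, but treat them as illustrative and confirm the exact endpoint and headers against the Image Analysis - Segment API reference:

```python
import json

def build_segment_request(image, *, is_url, key):
    """Build (headers, body) for an Image Analysis - Segment call.

    Illustrative only: confirm header names against the API reference.
    """
    headers = {"Ocp-Apim-Subscription-Key": key}
    if is_url:
        # Remote image: JSON body containing the image's URL.
        headers["Content-Type"] = "application/json"
        body = json.dumps({"url": image}).encode()
    else:
        # Local image: raw binary image data in the request body.
        headers["Content-Type"] = "application/octet-stream"
        body = image
    return headers, body

headers, body = build_segment_request(
    "https://learn.microsoft.com/azure/cognitive-services/computer-vision/images/windows-kitchen.jpg",
    is_url=True,
    key="<your-key>",
)
```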
cognitive-services Call Analyze Image 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/call-analyze-image-40.md
The code in this guide uses remote images referenced by URL. You may want to try
In your main class, save a reference to the URL of the image you want to analyze.

```csharp
-var imageSource = VisionSource.FromUrl(new Uri("https://docs.microsoft.com/azure/cognitive-services/computer-vision/images/windows-kitchen.jpg"));
+var imageSource = VisionSource.FromUrl(new Uri("https://learn.microsoft.com/azure/cognitive-services/computer-vision/images/windows-kitchen.jpg"));
```

> [!TIP]
auto imageSource = VisionSource::FromUrl("https://learn.microsoft.com/azure/cogn
#### [REST](#tab/rest)
-When analyzing a remote image, you specify the image's URL by formatting the request body like this: `{"url":"https://docs.microsoft.com/azure/cognitive-services/computer-vision/images/windows-kitchen.jpg"}`.
+When analyzing a remote image, you specify the image's URL by formatting the request body like this: `{"url":"https://learn.microsoft.com/azure/cognitive-services/computer-vision/images/windows-kitchen.jpg"}`.
To analyze a local image, you'd put the binary image data in the HTTP request body.
cognitive-services Overview Image Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-image-analysis.md
These APIs are only available in the following geographic regions: East US, Fran
Image Analysis 4.0 (preview) offers the ability to remove the background of an image. This feature can either output an image of the detected foreground object with a transparent background, or a grayscale alpha matte image showing the opacity of the detected foreground object. [Background removal](./concept-background-removal.md)
+|Original image |With background removed |Alpha matte |
+|::|::|::|
+
+| | | |
+||||
+| :::image type="content" source="media/background-removal/person-5.png" alt-text="Photo of a group of people using a tablet."::: | :::image type="content" source="media/background-removal/person-5-result.png" alt-text="Photo of a group of people using a tablet; background is transparent."::: | :::image type="content" source="media/background-removal/person-5-matte.png" alt-text="Alpha matte of a group of people."::: |
## Image requirements
cognitive-services What Is Luis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/what-is-luis.md
ms.
Language Understanding (LUIS) is a cloud-based conversational AI service that applies custom machine-learning intelligence to a user's conversational, natural language text to predict overall meaning, and pull out relevant, detailed information. LUIS provides access through its [custom portal](https://www.luis.ai), [APIs][endpoint-apis] and [SDK client libraries](client-libraries-rest-api.md).
- For first time users, follow these steps to [sign in to LUIS portal](sign-in-luis-portal.md "sign in to LUIS portal")
To get started, you can try a LUIS [prebuilt domain app](luis-get-started-create-app.md).
cognitive-services Batch Transcription Audio Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-transcription-audio-data.md
Last updated 10/21/2022 ms.devlang: csharp-+ # Locate audio files for batch transcription
You could otherwise specify individual files in the container. You must generate
- [Batch transcription overview](batch-transcription.md) - [Create a batch transcription](batch-transcription-create.md)-- [Get batch transcription results](batch-transcription-get.md)
+- [Get batch transcription results](batch-transcription-get.md)
cognitive-services How To Configure Azure Ad Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-configure-azure-ad-auth.md
Last updated 06/18/2021
zone_pivot_groups: programming-languages-set-two ms.devlang: cpp, csharp, java, python-+ # Azure Active Directory Authentication with the Speech SDK
The ```VoiceProfileClient``` isn't available with the Speech SDK for Python.
::: zone-end > [!NOTE]
-> The ```ConversationTranslator``` doesn't support Azure AD authentication.
+> The ```ConversationTranslator``` doesn't support Azure AD authentication.
cognitive-services Spx Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/spx-overview.md
Last updated 09/16/2022 - # What is the Speech CLI?
cognitive-services Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/authentication.md
+ Last updated 09/01/2022
cognitive-services Cognitive Services Virtual Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-virtual-networks.md
+ Last updated 07/19/2022
cognitive-services Create Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/create-resource.md
+ Last updated 02/02/2023 zone_pivot_groups: openai-create-resource
cognitive-services Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/tutorials/embeddings.md
If you created an OpenAI resource solely for completing this tutorial and want t
Learn more about Azure OpenAI's models: > [!div class="nextstepaction"]
-> [Next steps button](../concepts/models.md)
+> [Azure OpenAI Service models](../concepts/models.md)
communication-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/service-limits.md
If you want to purchase more phone numbers or place a special order, follow the
| Operation | Timeframes (seconds) | Limit (number of requests) |
||--|--|
-| **Create identity** | 30 | 500|
+| **Create identity** | 30 | 1000|
| **Delete identity** | 30 | 500|
-| **Issue access token** | 30 | 500|
-| **Revoke access token** | 1 | 100|
+| **Issue access token** | 30 | 1000|
+| **Revoke access token** | 30 | 500|
| **createUserAndToken**| 30 | 1000 |
| **exchangeTokens**| 30 | 500 |
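When one of the limits above is exceeded, the service typically responds with HTTP 429 (Too Many Requests). A common client-side pattern, sketched here with a stand-in exception rather than the Communication Services SDK, is to retry with exponential backoff:

```python
import time

class ThrottledError(Exception):
    """Stand-in for a 429 Too Many Requests response."""

def with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry `call` with exponential backoff while it raises ThrottledError."""
    for attempt in range(max_retries):
        try:
            return call()
        except ThrottledError:
            if attempt == max_retries - 1:
                raise  # give up after the final attempt
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```

In practice you would wrap the identity or token call (for example, issuing an access token) in `call`, and map the SDK's throttling error to the retry path.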
communication-services Record Every Call https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/record-every-call.md
The Call Started event when a call start is formatted in the following way:
```
+> [!NOTE]
+> Using Azure Event Grid incurs additional costs. For more information, see [Azure Event Grid pricing](https://azure.microsoft.com/pricing/details/event-grid/).
+ ## Pre-requisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
communication-services View Events Request Bin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/event-grid/view-events-request-bin.md
This document shows you how to validate that your Azure Communication Services resource sends events using Azure Event Grid viewer or RequestBin.
+> [!NOTE]
+> Using Azure Event Grid incurs additional costs. For more information, see [Azure Event Grid pricing](https://azure.microsoft.com/pricing/details/event-grid/).
+ ## Pre-requisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
communication-services Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/chat/get-started.md
zone_pivot_groups: acs-azcli-js-csharp-java-python-swift-android-+ # Quickstart: Add Chat to your App
You may also want to:
- Learn about [chat concepts](../../concepts/chat/concepts.md) - Familiarize yourself with [Chat SDK](../../concepts/chat/sdk-features.md) - Using [Chat SDK in your React Native](./react-native.md) application.-
communication-services Create Communication Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/create-communication-resource.md
zone_pivot_groups: acs-plat-azp-azcli-net-ps-+ ms.devlang: azurecli # Quickstart: Create and manage Communication Services resources
communication-services Get Started Rooms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/rooms/get-started-rooms.md
Last updated 09/01/2022 -+ zone_pivot_groups: acs-azcli-js-csharp-java-python # Quickstart: Create and manage a room resource
communication-services Receive Sms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/sms/receive-sms.md
The `SMSReceived` event generated when an SMS is sent to an Azure Communication
}]
```
-To start generating the events, we must configure Azure Event Grid for our Azure Communication Services resource. Leveraging Event Grid generates an additional charge for the usage. More information on Event Grid pricing can be found on the [pricing page](https://azure.microsoft.com/pricing/details/event-grid/).
+To start generating the events, we must configure Azure Event Grid for our Azure Communication Services resource.
+
+> [!NOTE]
+> Using Azure Event Grid incurs additional costs. For more information, see [Azure Event Grid pricing](https://azure.microsoft.com/pricing/details/event-grid/).
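Once Event Grid is configured, your webhook receives `SMSReceived` events as a JSON array. A minimal handler sketch follows; the field names mirror the event payload shown above, but the sample data here is illustrative:

```python
import json

def handle_sms_events(request_body):
    """Extract (from, message) pairs from an Event Grid SMSReceived batch."""
    results = []
    for event in json.loads(request_body):
        if event.get("eventType") == "Microsoft.Communication.SMSReceived":
            data = event["data"]
            results.append((data["from"], data["message"]))
    return results

# Illustrative payload; real events carry additional fields.
sample = json.dumps([{
    "eventType": "Microsoft.Communication.SMSReceived",
    "data": {"from": "+15551234567", "to": "+15557654321", "message": "Hello"},
}])
print(handle_sms_events(sample))
```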
## Pre-requisites
confidential-computing Create Confidential Vm From Compute Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/create-confidential-vm-from-compute-gallery.md
Last updated 07/14/2022-+ ms.devlang: azurecli
confidential-ledger Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-portal.md
Last updated 11/14/2022 -+ # Quickstart: Create a confidential ledger using the Azure portal
confidential-ledger Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-powershell.md
Last updated 06/08/2022 + - # Quickstart: Create a confidential ledger using Azure PowerShell
confidential-ledger Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-python.md
Last updated 11/14/2022 -+ # Quickstart: Microsoft Azure confidential ledger client library for Python
confidential-ledger Quickstart Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-template.md
-+ Last updated 11/14/2022
container-apps Azure Arc Enable Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-arc-enable-cluster.md
description: 'Tutorial: learn how to set up Azure Container Apps in your Azure A
+ Last updated 12/16/2022 - # Tutorial: Enable Azure Container Apps on Azure Arc-enabled Kubernetes (Preview)
container-apps Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-pipelines.md
description: Learn to automatically create new revisions in Azure Container Apps
+ Last updated 11/09/2022
container-apps Background Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/background-processing.md
Last updated 11/02/2021 -+ # Tutorial: Deploy a background processing application with Azure Container Apps
Remove-AzResourceGroup -Name $ResourceGroupName -Force
``` --
container-apps Communicate Between Microservices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/communicate-between-microservices.md
description: Learn how to communicate between microservices deployed in Azure Co
+ Last updated 05/13/2022
container-apps Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/disaster-recovery.md
+ Last updated 1/18/2023
container-apps Get Started Existing Container Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/get-started-existing-container-image.md
description: Deploy an existing container image to Azure Container Apps with the
-+ Last updated 08/31/2022
container-apps Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/get-started.md
Last updated 03/21/2022 -+ ms.devlang: azurecli
container-apps Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/github-actions.md
description: Learn to automatically create new revisions in Azure Container Apps
+ Last updated 11/09/2022
container-apps Manage Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/manage-secrets.md
Last updated 09/29/2022 -+ # Manage secrets in Azure Container Apps
container-apps Managed Identity Image Pull https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/managed-identity-image-pull.md
description: Set up Azure Container Apps to authenticate Azure Container Registr
+ Last updated 09/16/2022
Remove-AzResourceGroup -Name $ResourceGroupName -Force
## Next steps > [!div class="nextstepaction"]
-> [Managed identities in Azure Container Apps](managed-identity.md)
+> [Managed identities in Azure Container Apps](managed-identity.md)
container-apps Microservices Dapr Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr-azure-resource-manager.md
Last updated 06/29/2022 -+ zone_pivot_groups: container-apps
container-apps Microservices Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr.md
Last updated 09/29/2022 -+ ms.devlang: azurecli
container-apps Quickstart Code To Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quickstart-code-to-cloud.md
description: Code to cloud deploying your application to Azure Container Apps
+ Last updated 05/11/2022
Remove-AzResourceGroup -Name $ResourceGroup -Force
This quickstart is the entrypoint for a set of progressive tutorials that showcase the various features within Azure Container Apps. Continue on to learn how to enable communication from a web front end that calls the API you deployed in this article. > [!div class="nextstepaction"]
-> [Tutorial: Communication between microservices](communicate-between-microservices.md)
+> [Tutorial: Communication between microservices](communicate-between-microservices.md)
container-apps Revisions Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/revisions-manage.md
description: Manage revisions and traffic splitting in Azure Container Apps.
+ Last updated 06/07/2022
container-apps Storage Mounts Azure Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/storage-mounts-azure-files.md
description: Learn to create an Azure Files storage mount in Azure Container App
+ Last updated 07/19/2022
container-apps Tutorial Java Quarkus Connect Managed Identity Postgresql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/tutorial-java-quarkus-connect-managed-identity-postgresql-database.md
Last updated 09/26/2022-+ # Tutorial: Connect to PostgreSQL Database from a Java Quarkus Container App without secrets using a managed identity
container-apps Vnet Custom Internal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom-internal.md
description: Learn how to integrate a VNET to an internal Azure Container Apps e
-+ Last updated 08/31/2022
container-apps Vnet Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom.md
description: Learn how to integrate a VNET with an external Azure Container Apps
-+ Last updated 08/31/2022
container-instances Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/availability-zones.md
Last updated 06/17/2022-+ # Deploy an Azure Container Instances (ACI) container group in an availability zone (preview)
Learn about building fault-tolerant applications using zonal container groups fr
[az-container-show]: /cli/azure/container#az_container_show [az-group-create]: /cli/azure/group#az_group_create [az-deployment-group-create]: /cli/azure/deployment#az_deployment_group_create
-[availability-zone-overview]: ../availability-zones/az-overview.md
+[availability-zone-overview]: ../availability-zones/az-overview.md
container-instances Container Instances Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-application-gateway.md
description: Create a container group in a virtual network and use an Azure appl
+ Last updated 06/17/2022
container-instances Container Instances Custom Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-custom-dns.md
description: Configure a public or private DNS configuration for a container gro
+ Last updated 05/25/2022
See the Azure quickstart template [Create an Azure container group with VNet](ht
[az-container-delete]: /cli/azure/container#az-container-delete [az-network-vnet-delete]: /cli/azure/network/vnet#az-network-vnet-delete [az-group-delete]: /cli/azure/group#az-group-create
-[cloud-shell-bash]: ../cloud-shell/overview.md
+[cloud-shell-bash]: ../cloud-shell/overview.md
container-instances Container Instances Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-managed-identity.md
+ Last updated 06/17/2022
container-registry Container Registry Authentication Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-authentication-managed-identity.md
Title: Authenticate with managed identity description: Provide access to images in your private container registry by using a user-assigned or system-assigned managed Azure identity. + Last updated 10/11/2022
container-registry Container Registry Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-authentication.md
Last updated 10/11/2022- # Authenticate with an Azure container registry
container-registry Container Registry Get Started Docker Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-get-started-docker-cli.md
Last updated 10/11/2022 -+ # Push your first image to your Azure container registry using the Docker CLI
container-registry Container Registry Health Error Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-health-error-reference.md
This error means that the CLI was unable to find the login server of the given r
## NOTARY_VERSION_ERROR

This error means that the CLI is not compatible with the currently installed version of Docker/Notary. To resolve this issue, try downgrading your Notary client to a version earlier than 0.6.0 by manually replacing your Docker installation's notary.exe. You can also download and install a pre-compiled binary of Notary earlier than 0.6.0 for 64-bit Linux or macOS X from the Notary repository's releases page on GitHub. For Windows, download the .exe, place it in the default path (C:\Program Files\Docker\Docker\resources\bin), and rename it to notary.exe.
-
+
+## CONNECTIVITY_TOOMANYREQUESTS_ERROR
+
+This error means that the user has sent too many requests in a short period, causing the authentication system to block further requests to prevent overload. This error occurs when a configured limit in the user's registry service tier or environment is reached. We recommend waiting a moment before sending another request; this allows the authentication system's block to lift, after which you can try the request again.
+ ## Next steps For options to check the health of a registry, see [Check the health of an Azure container registry](container-registry-check-health.md).
container-registry Container Registry Helm Repos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-helm-repos.md
Title: Store Helm charts description: Learn how to store Helm charts for your Kubernetes applications using repositories in Azure Container Registry + Last updated 10/11/2022
container-registry Container Registry Oras Artifacts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-oras-artifacts.md
Last updated 01/04/2023 -+ # Push and pull supply chain artifacts using Azure Registry (Preview)
In this article, a graph of supply chain artifacts is created, discovered, promo
[az-acr-build]: /cli/azure/acr#az_acr_build [az-acr-manifest-metadata]: /cli/azure/acr/manifest/metadata [az-acr-repository-delete]: /cli/azure/acr/repository#az_acr_repository_delete
-[azure-cli-install]: /cli/azure/install-azure-cli
+[azure-cli-install]: /cli/azure/install-azure-cli
container-registry Container Registry Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-private-link.md
Title: Set up private endpoint with private link description: Set up a private endpoint on a container registry and enable access over a private link in a local virtual network. Private link access is a feature of the Premium service tier. + Last updated 10/11/2022- # Connect privately to an Azure container registry using Azure Private Link
container-registry Container Registry Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-roles.md
Last updated 10/11/2022- # Azure Container Registry roles and permissions
container-registry Container Registry Soft Delete Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-soft-delete-policy.md
Title: Enable soft delete policy description: Learn how to enable a soft delete policy in your Azure Container Registry for recovering accidentally deleted artifacts for a set retention period. + Last updated 04/19/2022
container-registry Container Registry Task Run Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-task-run-template.md
Title: Quick task run with template description: Queue an ACR task run to build an image using an Azure Resource Manager template + Last updated 10/11/2022
container-registry Container Registry Tasks Authentication Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-authentication-key-vault.md
Title: External authentication from ACR task description: Configure an Azure Container Registry Task (ACR Task) to read Docker Hub credentials stored in an Azure key vault, by using a managed identity for Azure resources. + Last updated 10/11/2022
container-registry Container Registry Tasks Cross Registry Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-cross-registry-authentication.md
Title: Cross-registry authentication from ACR task description: Configure an Azure Container Registry Task (ACR Task) to access another private Azure container registry by using a managed identity for Azure resources + Last updated 10/11/2022
container-registry Container Registry Tasks Scheduled https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-scheduled.md
Title: Tutorial - Schedule an ACR task description: In this tutorial, learn how to run an Azure Container Registry Task on a defined schedule by setting one or more timer triggers + Last updated 10/11/2022
For examples of tasks triggered by source code commits or base image updates, se
[az-acr-task-timer-update]: /cli/azure/acr/task/timer#az_acr_task_timer_update [az-acr-task-run]: /cli/azure/acr/task#az_acr_task_run [az-acr-task]: /cli/azure/acr/task
-[azure-cli-install]: /cli/azure/install-azure-cli
+[azure-cli-install]: /cli/azure/install-azure-cli
container-registry Container Registry Transfer Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-transfer-cli.md
Last updated 10/11/2022-+ # ACR Transfer with Az CLI
View [ACR Transfer Troubleshooting](container-registry-transfer-troubleshooting.
[az-deployment-group-show]: /cli/azure/deployment/group#az-deployment-group-show [az-acr-repository-show-tags]: /cli/azure/acr/repository##az_acr_repository_show_tags [az-acr-import]: /cli/azure/acr#az-acr-import
-[az-resource-delete]: /cli/azure/resource#az-resource-delete
+[az-resource-delete]: /cli/azure/resource#az-resource-delete
container-registry Container Registry Tutorial Sign Build Push https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-sign-build-push.md
description: In this tutorial you'll learn to create a signing certificate, buil
+ Last updated 12/12/2022
container-registry Container Registry Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-vnet.md
Title: Restrict access using a service endpoint description: Restrict access to an Azure container registry using a service endpoint in an Azure virtual network. Service endpoint access is a feature of the Premium service tier. + Last updated 10/11/2022
container-registry Tasks Consume Public Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tasks-consume-public-content.md
Last updated 10/11/2022-+ # How to consume and maintain public content with Azure Container Registry Tasks
In this article. you used ACR tasks to create an automated gating workflow to in
[oci-artifacts]: ./container-registry-oci-artifacts.md [oci-consuming-public-content]: https://opencontainers.org/posts/blog/2020-10-30-consuming-public-content/ [opa]: https://www.openpolicyagent.org/
-[quay]: https://quay.io
+[quay]: https://quay.io
container-registry Tutorial Enable Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-enable-customer-managed-keys.md
Title: Enable a customer-managed key description: In this tutorial, learn how to encrypt your Premium registry with a customer-managed key stored in Azure Key Vault. + Last updated 08/5/2022
cosmos-db Configure Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/configure-synapse-link.md
Last updated 09/26/2022 -+ # Configure and use Azure Synapse Link for Azure Cosmos DB
cosmos-db Get Latest Restore Timestamp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/get-latest-restore-timestamp.md
-+ Last updated 04/08/2022
cosmos-db How To Container Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-container-copy.md
Title: Create and manage intra-account container copy jobs in Azure Cosmos DB
description: Learn how to create, monitor, and manage container copy jobs within an Azure Cosmos DB account using CLI commands. -+ Last updated 08/01/2022
cosmos-db How To Setup Cross Tenant Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-cross-tenant-customer-managed-keys.md
description: Learn how to configure encryption with customer-managed keys for Az
-+ Last updated 09/27/2022
cosmos-db How To Setup Customer Managed Keys Mhsm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-customer-managed-keys-mhsm.md
Last updated 12/25/2022 -+ ms.devlang: azurecli
cosmos-db How To Setup Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-setup-rbac.md
Title: Configure role-based access control in Azure Cosmos DB for MongoDB databa
description: Learn how to configure native role-based access control in Azure Cosmos DB for MongoDB -+ Last updated 09/26/2022
cosmos-db How To Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-dotnet-get-started.md
ms.devlang: csharp Last updated 07/06/2022-+ # Get started with Azure Cosmos DB for NoSQL using .NET
cosmos-db How To Python Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-python-get-started.md
ms.devlang: python Last updated 12/06/2022-+ # Get started with Azure Cosmos DB for NoSQL using Python
The following guides show you how to use each of these classes to build your app
Now that you've connected to an API for NoSQL account, use the next guide to create and manage databases. > [!div class="nextstepaction"]
-> [Create a database in Azure Cosmos DB for NoSQL using Python](how-to-python-create-database.md)
+> [Create a database in Azure Cosmos DB for NoSQL using Python](how-to-python-create-database.md)
cosmos-db Manage With Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/manage-with-cli.md
description: Manage Azure Cosmos DB for NoSQL resources using Azure CLI.
+ Last updated 02/18/2022 - # Manage Azure Cosmos DB for NoSQL resources using Azure CLI
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-dotnet.md
ms.devlang: csharp Last updated 11/07/2022-+ # Quickstart: Azure Cosmos DB for NoSQL client library for .NET
cosmos-db Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-python.md
ms.devlang: python Last updated 1/17/2023-+ # Quickstart: Azure Cosmos DB for NoSQL client library for Python
cosmos-db Quickstart Template Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-template-bicep.md
Last updated 04/18/2022-+ #Customer intent: As a database admin who is new to Azure, I want to use Azure Cosmos DB to store and manage my data.
cosmos-db Quickstart Template Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-template-json.md
Last updated 08/26/2021-+ #Customer intent: As a database admin who is new to Azure, I want to use Azure Cosmos DB to store and manage my data.
cosmos-db Tutorial Deploy App Bicep Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tutorial-deploy-app-bicep-aks.md
Title: 'Tutorial: Deploy an ASP.NET web application using Azure Cosmos DB for No
description: Learn how to deploy an ASP.NET MVC web application with Azure Cosmos DB for NoSQL, managed identity, and Azure Kubernetes Service by using Bicep. -+
cosmos-db Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/policy.md
Last updated 09/23/2020 -+ # Use Azure Policy to implement governance and controls for Azure Cosmos DB resources
cosmos-db Tutorial Private Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/tutorial-private-access.md
-+ Last updated 09/28/2022
az group delete --resource-group link-demo
endpoints](../../private-link/private-endpoint-overview.md) * Learn about [virtual networks](../../virtual-network/concepts-and-best-practices.md)
-* Learn about [private DNS zones](../../dns/private-dns-overview.md)
+* Learn about [private DNS zones](../../dns/private-dns-overview.md)
cosmos-db Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/role-based-access-control.md
Last updated 05/11/2022 -+ # Azure role-based access control in Azure Cosmos DB
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/cassandra/throughput.md
Last updated 10/07/2020 -+ # Throughput (RU/s) operations with PowerShell for a keyspace or table for Azure Cosmos DB - API for Cassandra
cosmos-db Troubleshoot Cmk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/troubleshoot-cmk.md
Last updated 12/25/2022 -+ ms.devlang: azurecli
As the customer has removed the current default identity's "GET/WRAP/Unwrap" p
The customer should follow the "Key Vault Revoke State Troubleshooting guide" to regrant key vault access.
cost-management-billing Get Started Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/get-started-partners.md
To view costs at the customer scope, in the partner tenant navigate to Cost anal
Only the users with **Global admin** and **Admin agent** roles can manage and view costs for billing accounts, billing profiles, and customers directly in the partner's Azure tenant. For more information about partner center roles, see [Assign users roles and permissions](/partner-center/permissions-overview).
-## Enable cost management for customer tenant subscriptions
+## Enable Cost Management for customer tenant subscriptions
Partners may enable access to Cost Management after customers are onboarded to a Microsoft Customer Agreement. Partners can then enable a policy allowing customers to view their costs for Azure consumed services computed at pay-as-you-go retail rates. Costs are shown in the customer's billing currency for their consumed usage at Azure RBAC subscription and resource group scopes.
When the cost policy is set to **Yes**, subscription users associated to the cus
When the cost visibility policy is enabled, all services that have subscription usage show costs at pay-as-you-go rates. Reservation usage appears with zero charges for actual and amortized costs. Purchases and entitlements aren't associated with a specific subscription, so purchases aren't displayed at the subscription scope. The global admin/admin agent of a direct partner or an indirect provider can also use the [Update Customer API](/rest/api/billing/2019-10-01-preview/policies/updatecustomer) to set each customer's cost visibility policy at scale.
-## View and enable all policies
-
-You can also view and change policies for Azure reservations, Azure Marketplace, view Azure charges, and tag management in a single location. The policy settings apply to all customers under the billing profile.
-
-To view or change policies:
-
-1. In the Azure portal, navigate to **Cost Management** (not Cost Management + Billing).
-1. In the left menu under **Settings**, select **Configuration**.
-1. The billing profile configuration is shown. Policies are shown as Enabled or Disabled. If you want to change a policy, select **Edit** under a policy.
- :::image type="content" source="./media/get-started-partners/configuration-policy-settings.png" alt-text="Screenshot showing the billing profile configuration page where you can view and edit policy settings." lightbox="./media/get-started-partners/configuration-policy-settings.png" :::
-1. If needed, change the policy settings, and then select **Save**.
- ### View subscription costs in the customer tenant
-To view costs for a subscription, open **Cost Management + Billing** in the customer's Azure tenant. Select **Cost analysis** and then the required subscription to start reviewing costs. You can view consumption costs for each subscription individually in the customer tenant.
+To view costs for a subscription, open **Cost Management** in the customer's Azure tenant. Select **Cost analysis** and then the required subscription to start reviewing costs. You can view consumption costs for each subscription individually in the customer tenant.
[![View cost analysis as a customer ](./media/get-started-partners/subscription-costs.png)](./media/get-started-partners/subscription-costs.png#lightbox)
cost-management-billing Quick Create Budget Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/quick-create-budget-bicep.md
Last updated 08/26/2022-+ # Quickstart: Create a budget with Bicep
cost-management-billing Quick Create Budget Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/quick-create-budget-template.md
Last updated 01/07/2022-+ # Quickstart: Create a budget with an ARM template
cost-management-billing Tutorial Acm Create Budgets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-acm-create-budgets.md
-+ # Tutorial: Create and manage Azure budgets
cost-management-billing Programmatically Create Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription.md
Last updated 03/22/2022 - # Create Azure subscriptions programmatically
cost-management-billing Choose Commitment Amount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/choose-commitment-amount.md
Note the following points:
Savings plan purchases are calculated by the recommendations engine for the selected term and scope, based on last 30 days of usage. Recommendations are provided through [Azure Advisor](https://portal.azure.com/#view/Microsoft_Azure_Expert/AdvisorMenuBlade/~/Cost), the savings plan purchase experience in [Azure portal](https://portal.azure.com/), and through the [savings plan benefit recommendations API](/rest/api/cost-management/benefit-recommendations/list).
+## Savings plan purchase recommendations for customers using management groups
+Currently, the Azure portal doesn't provide savings plan recommendations for management groups. Until management group-level recommendations are available, customers can calculate their own per-hour commitment for a management group by using the following steps.
+1. Download the usage details report from the EA portal or the Azure portal to get accurate usage and cost data.
+ - From the EA portal - Sign in to ea.azure.com, go to the Reports section, and download the usage details report for the current month and the past two months.
+ - From the Azure portal - Sign in to the Azure portal and search for Cost Management + Billing. Under Billing, select Usage + charges, and then select Download for the current and past two months.
+1. Open the downloaded file in Excel. If the file is too large for Excel, open it in Power BI.
+1. Create a calculated cost column by multiplying PayG Price * Quantity.
+1. Filter Charge Type = "Usage".
+1. Filter Meter Category = "Virtual Machines", "App Service", "Functions", "Container Instance", because the savings plan applies only to these services.
+1. Filter ProductOrderName = Blank.
+1. Filter Quantity >= 23 to consider only items that ran for about 24 hours. The savings plan is a per-hour commitment, but the data has per-day rather than per-hour granularity, so this filter excludes sparse compute usage.
+1. Filter the months to the current and previous two months.
+1. If you're working in Power BI, export the data to a .csv file and copy it into Excel.
+1. Copy into the Excel sheet the names of the subscriptions that belong to the management group where you want to apply the savings plan.
+1. Use a VLOOKUP to match those subscriptions against the filtered data.
+1. Divide the calculated cost by 24 hours to get the per-hour cost.
+1. Create a pivot table to group the data by subscription, month, and day, and copy the pivot data into a new sheet.
+1. Multiply the per-hour cost by 0.4. Usage covered by a savings plan is billed at the discounted one-year or three-year rate for each SKU, so a given hourly commitment covers more compute than the same amount of pay-as-you-go spend. For example, if you commit 100 rupees per hour, the discounted rates mean it takes more than 100 rupees' worth of undiscounted compute to consume that commitment; committing about 40% of the undiscounted per-hour cost is a safe limit.
+1. Review the range of per-hour cost by day and by month to get a view of the safe commitment you can make.
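The filtering and aggregation steps above can also be scripted instead of done by hand in Excel. A minimal pandas sketch — the column names (`PayGPrice`, `Quantity`, `ChargeType`, `MeterCategory`, `ProductOrderName`, `SubscriptionName`, `Date`) are assumptions that should be verified against the actual usage details export, which varies by account type:

```python
import pandas as pd

# Services the savings plan applies to, per the steps above.
ELIGIBLE = {"Virtual Machines", "App Service", "Functions", "Container Instance"}

def safe_hourly_commitment(usage: pd.DataFrame, subscriptions: set, factor: float = 0.4) -> pd.Series:
    """Per-day safe hourly commitment for the given subscriptions.

    Column names are assumptions; verify them against the downloaded report.
    """
    df = usage.copy()
    # Calculated cost = pay-as-you-go price * quantity.
    df["CalculatedCost"] = df["PayGPrice"] * df["Quantity"]
    mask = (
        (df["ChargeType"] == "Usage")
        & (df["MeterCategory"].isin(ELIGIBLE))
        & (df["ProductOrderName"].isna())        # blank product order name
        & (df["Quantity"] >= 23)                 # only items that ran ~24 hours
        & (df["SubscriptionName"].isin(subscriptions))
    )
    # Sum cost per day, divide by 24 hours, then apply the 40% safety factor.
    per_hour = df[mask].groupby("Date")["CalculatedCost"].sum() / 24
    return per_hour * factor
```

Inspecting the result with `.describe()` gives the range of per-hour commitments referred to in the last step.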
+ ## Need help? Contact us If you have Azure savings plan for compute questions, contact your account team, or [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). Temporarily, Microsoft will only provide Azure savings plan for compute expert support requests in English.
If you have Azure savings plan for compute questions, contact your account team,
- [Manage Azure savings plans](manage-savings-plan.md) - [View Azure savings plan cost and usage details](utilization-cost-reports.md)-- [Software costs not included in saving plan](software-costs-not-included.md)
+- [Software costs not included in saving plan](software-costs-not-included.md)
cost-management-billing Utilization Cost Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/utilization-cost-reports.md
Get the Amortized costs data and filter the data for a `PricingModel` = `Savings
2. Get the savings plan costs. Sum the `Cost` values to get the monetary value of what you paid for the savings plan. It includes the used and unused costs of the savings plan. 3. Subtract the savings plan costs from the estimated pay-as-you-go costs to get the estimated savings.
-To know the Savings made out of public list price:
-Get public or list price cost. Multiply the `PayGPrice` value with `Quantity` values to get public-list-price costs.
-Get Savings made out of savings plan against public list price. Subtract estimated public-list-price costs from `Cost`.
+- To calculate the savings relative to the public list price:
+  - Get the public (list price) cost: multiply the `PayGPrice` value by the `Quantity` value to get the public-list-price cost.
+  - Get the savings from the savings plan relative to the public list price: subtract `Cost` from the estimated public-list-price cost.
-To know the % savings made out of discounted price for customer:
-Get Savings made out of savings plan against discounts given to customer. Subtract estimated pay-as-you-go from `Cost`.
-Get % discount applied on each line item. Divide `Cost` with public-list-price and then divide by 100.
+- To calculate the % savings relative to the price discounted for the customer:
+  - Get the savings from the savings plan relative to the customer's discounts: subtract `Cost` from the estimated pay-as-you-go cost.
+  - Get the % discount applied to each line item: divide `Cost` by the public-list-price cost, subtract the result from 1, and multiply by 100.
Keep in mind that if you have an underutilized savings plan, the `UnusedBenefit` entry for `ChargeType` becomes a factor to consider. When you have a fully utilized savings plan, you receive the maximum savings possible. Any `UnusedBenefit` quantity reduces savings.
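The arithmetic in the steps above can be sketched in a few lines. This is a minimal illustration, assuming `PayGPrice`, `Quantity`, and `Cost` as field names in the amortized cost data — verify them against the actual cost details schema:

```python
def savings_summary(rows):
    """Estimate list-price cost, savings, and % discount for a set of
    amortized cost line items, following the steps above.

    Each row is a dict of one line item; the field names are assumptions.
    """
    # Public-list-price cost = PayGPrice * Quantity, summed over line items.
    list_price = sum(r["PayGPrice"] * r["Quantity"] for r in rows)
    # What was actually paid under the savings plan.
    cost = sum(r["Cost"] for r in rows)
    return {
        "list_price_cost": list_price,
        "savings_vs_list_price": list_price - cost,
        # Fraction of list price not paid, expressed as a percentage.
        "discount_pct": (1 - cost / list_price) * 100 if list_price else 0.0,
    }
```

For example, a line item priced at 2.0 per unit for 10 units but billed at a `Cost` of 15.0 yields a list-price cost of 20.0, savings of 5.0, and a 25% discount.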
data-factory Azure Ssis Integration Runtime Express Virtual Network Injection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/azure-ssis-integration-runtime-express-virtual-network-injection.md
Last updated 12/16/2022 - # Express virtual network injection method
data-factory Azure Ssis Integration Runtime Virtual Network Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/azure-ssis-integration-runtime-virtual-network-configuration.md
Last updated 02/15/2022 - # Configure a virtual network for injection of Azure-SSIS integration runtime
data-factory Connector Azure Table Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-table-storage.md
-+ Last updated 07/04/2022
data-factory Create Azure Ssis Integration Runtime Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-azure-ssis-integration-runtime-portal.md
Last updated 02/15/2022 - # Create an Azure-SSIS integration runtime via Azure portal
data-factory Create Azure Ssis Integration Runtime Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-azure-ssis-integration-runtime-resource-manager-template.md
Last updated 02/15/2022 - # Use an Azure Resource Manager template to create an integration runtime
data-factory How To Clean Up Ssisdb Logs With Elastic Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-clean-up-ssisdb-logs-with-elastic-jobs.md
Last updated 08/09/2022 - # How to clean up SSISDB logs automatically
data-factory How To Create Tumbling Window Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-create-tumbling-window-trigger.md
-+ Last updated 08/09/2022
To monitor trigger runs and pipeline runs in the Azure portal, see [Monitor pipe
* For detailed information about triggers, see [Pipeline execution and triggers](concepts-pipeline-execution-triggers.md#trigger-execution-with-json). * [Create a tumbling window trigger dependency](tumbling-window-trigger-dependency.md).
-* Learn how to reference trigger metadata in pipeline, see [Reference Trigger Metadata in Pipeline Runs](how-to-use-trigger-parameterization.md)
+* Learn how to reference trigger metadata in pipeline, see [Reference Trigger Metadata in Pipeline Runs](how-to-use-trigger-parameterization.md)
data-factory How To Invoke Ssis Package Ssis Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-invoke-ssis-package-ssis-activity.md
ms.devlang: powershell
-+ Last updated 08/09/2022
You can also create a scheduled trigger for your pipeline so that the pipeline r
## Next steps - [Run an SSIS package with the Execute SSIS Package activity in Azure Data Factory with PowerShell](how-to-invoke-ssis-package-ssis-activity-powershell.md)-- [Modernize and extend your ETL/ELT workflows with SSIS activities in Azure Data Factory pipelines](https://techcommunity.microsoft.com/t5/SQL-Server-Integration-Services/Modernize-and-Extend-Your-ETL-ELT-Workflows-with-SSIS-Activities/ba-p/388370)
+- [Modernize and extend your ETL/ELT workflows with SSIS activities in Azure Data Factory pipelines](https://techcommunity.microsoft.com/t5/SQL-Server-Integration-Services/Modernize-and-Extend-Your-ETL-ELT-Workflows-with-SSIS-Activities/ba-p/388370)
data-factory How To Schedule Azure Ssis Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-schedule-azure-ssis-integration-runtime.md
Last updated 02/15/2022 -+ # How to start and stop Azure-SSIS Integration Runtime on a schedule
See the following articles from SSIS documentation:
- [Deploy, run, and monitor an SSIS package on Azure](/sql/integration-services/lift-shift/ssis-azure-deploy-run-monitor-tutorial) - [Connect to SSIS catalog on Azure](/sql/integration-services/lift-shift/ssis-azure-connect-to-catalog-database) - [Schedule package execution on Azure](/sql/integration-services/lift-shift/ssis-azure-schedule-packages)-- [Connect to on-premises data sources with Windows authentication](/sql/integration-services/lift-shift/ssis-azure-connect-with-windows-auth)
+- [Connect to on-premises data sources with Windows authentication](/sql/integration-services/lift-shift/ssis-azure-connect-with-windows-auth)
data-factory Join Azure Ssis Integration Runtime Virtual Network Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/join-azure-ssis-integration-runtime-virtual-network-ui.md
Last updated 08/12/2022 - # Join Azure-SSIS integration runtime to a virtual network via Azure portal
data-factory Join Azure Ssis Integration Runtime Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/join-azure-ssis-integration-runtime-virtual-network.md
Last updated 08/12/2022 - # Join Azure-SSIS integration runtime to a virtual network
data-factory Manage Azure Ssis Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/manage-azure-ssis-integration-runtime.md
Title: Reconfigure the Azure-SSIS integration runtime
description: Learn how to reconfigure an Azure-SSIS integration runtime in Azure Data Factory after you have already provisioned it. + Last updated 08/12/2022
For more information about Azure-SSIS runtime, see the following topics:
- [Tutorial: deploy SSIS packages to Azure](./tutorial-deploy-ssis-packages-azure.md). This article provides step-by-step instructions to create an Azure-SSIS IR and uses Azure SQL Database to host the SSIS catalog. - [How to: Create an Azure-SSIS integration runtime](create-azure-ssis-integration-runtime.md). This article expands on the tutorial and provides instructions on using Azure SQL Managed Instance and joining the IR to a virtual network. - [Join an Azure-SSIS IR to a virtual network](join-azure-ssis-integration-runtime-virtual-network.md). This article provides conceptual information about joining an Azure-SSIS IR to an Azure virtual network. It also provides steps to use Azure portal to configure virtual network so that Azure-SSIS IR can join the virtual network. -- [Monitor an Azure-SSIS IR](monitor-integration-runtime.md#azure-ssis-integration-runtime). This article shows you how to retrieve information about an Azure-SSIS IR and descriptions of statuses in the returned information.
+- [Monitor an Azure-SSIS IR](monitor-integration-runtime.md#azure-ssis-integration-runtime). This article shows you how to retrieve information about an Azure-SSIS IR and descriptions of statuses in the returned information.
data-factory Quickstart Create Data Factory Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-bicep.md
tags: azure-resource-manager
-+ Last updated 08/19/2022
data-factory Quickstart Create Data Factory Dot Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-dot-net.md
ms.devlang: csharp
Last updated 08/18/2022 -+ # Quickstart: Create a data factory and pipeline using .NET SDK
data-factory Quickstart Create Data Factory Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-resource-manager-template.md
-+ Last updated 10/25/2022
data-factory Self Hosted Integration Runtime Proxy Ssis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/self-hosted-integration-runtime-proxy-ssis.md
-+ Last updated 02/28/2023
data-factory Transform Data Using Custom Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-custom-activity.md
-+ Last updated 09/22/2022
data-factory Tutorial Bulk Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-bulk-copy.md
-+ Last updated 09/26/2022
data-factory Tutorial Incremental Copy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-powershell.md
+ Last updated 09/26/2022
data-factory Tutorial Transform Data Spark Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-transform-data-spark-powershell.md
Title: 'Transform data using Spark in Azure Data Factory '
description: 'This tutorial provides step-by-step instructions for transforming data by using Spark Activity in Azure Data Factory.' + Last updated 09/26/2022
Advance to the next tutorial to learn how to transform data by running Hive scri
> [!div class="nextstepaction"] > [Tutorial: transform data using Hive in Azure Virtual Network](tutorial-transform-data-hive-virtual-network.md).-----
data-factory Data Factory Build Your First Pipeline Using Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-build-your-first-pipeline-using-arm.md
Last updated 10/22/2021- # Tutorial: Build your first Azure data factory using Azure Resource Manager template
This template creates a data factory named GatewayUsingArmDF with a gateway name
| [Pipelines](data-factory-create-pipelines.md) |This article helps you understand pipelines and activities in Azure Data Factory and how to use them to construct end-to-end data-driven workflows for your scenario or business. | | [Datasets](data-factory-create-datasets.md) |This article helps you understand datasets in Azure Data Factory. | | [Scheduling and execution](data-factory-scheduling-and-execution.md) |This article explains the scheduling and execution aspects of Azure Data Factory application model. |
-| [Monitor and manage pipelines using Monitoring App](data-factory-monitor-manage-app.md) |This article describes how to monitor, manage, and debug pipelines using the Monitoring & Management App. |
+| [Monitor and manage pipelines using Monitoring App](data-factory-monitor-manage-app.md) |This article describes how to monitor, manage, and debug pipelines using the Monitoring & Management App. |
data-factory Data Factory Build Your First Pipeline Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-build-your-first-pipeline-using-powershell.md
+ Last updated 04/18/2022
data-factory Data Factory Build Your First Pipeline Using Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-build-your-first-pipeline-using-rest-api.md
Last updated 10/22/2021- # Tutorial: Build your first Azure data factory using Data Factory REST API
data-factory Data Factory Build Your First Pipeline Using Vs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-build-your-first-pipeline-using-vs.md
-+ Last updated 10/22/2021
data-factory Data Factory Copy Activity Tutorial Using Azure Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-copy-activity-tutorial-using-azure-resource-manager-template.md
Last updated 10/22/2021 - # Tutorial: Use Azure Resource Manager template to create a Data Factory pipeline to copy data
In this tutorial, you used Azure blob storage as a source data store and Azure S
[!INCLUDE [data-factory-supported-data-stores](includes/data-factory-supported-data-stores.md)]
-To learn about how to copy data to/from a data store, click the link for the data store in the table.
+To learn about how to copy data to/from a data store, click the link for the data store in the table.
data-factory Data Factory Copy Activity Tutorial Using Dotnet Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-copy-activity-tutorial-using-dotnet-api.md
-
+ Title: 'Tutorial: Create a pipeline with Copy Activity using .NET API ' description: In this tutorial, you create an Azure Data Factory pipeline with a Copy Activity by using .NET API. + Last updated 10/22/2021
In this tutorial, you used Azure blob storage as a source data store and Azure S
[!INCLUDE [data-factory-supported-data-stores](includes/data-factory-supported-data-stores.md)] To learn about how to copy data to/from a data store, click the link for the data store in the table.-
data-factory Data Factory Copy Activity Tutorial Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-copy-activity-tutorial-using-powershell.md
-
+ Title: 'Tutorial: Create a pipeline to move data by using Azure PowerShell ' description: In this tutorial, you create an Azure Data Factory pipeline with Copy Activity by using Azure PowerShell. + Last updated 10/22/2021
In this tutorial, you used Azure blob storage as a source data store and Azure S
[!INCLUDE [data-factory-supported-data-stores](includes/data-factory-supported-data-stores.md)] To learn about how to copy data to/from a data store, click the link for the data store in the table. -
data-factory Data Factory Json Scripting Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-json-scripting-reference.md
Last updated 10/22/2021-+ # Azure Data Factory - JSON Scripting Reference
data-factory Data Factory Odbc Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-odbc-connector.md
Last updated 10/22/2021 - # Move data From ODBC data stores using Azure Data Factory
data-factory How To Invoke Ssis Package Stored Procedure Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/how-to-invoke-ssis-package-stored-procedure-activity.md
ms.devlang: powershell+ Last updated 10/22/2021
In this step, you create a pipeline with a stored procedure activity. The activi
``` ## Next steps
-For details about the stored procedure activity, see the [Stored Procedure activity](data-factory-stored-proc-activity.md) article.
+For details about the stored procedure activity, see the [Stored Procedure activity](data-factory-stored-proc-activity.md) article.
data-lake-analytics Data Lake Analytics Cicd Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-cicd-overview.md
description: Learn how to set up continuous integration and continuous deploymen
Last updated 01/20/2023- # How to set up a CI/CD pipeline for Azure Data Lake Analytics
data-lake-analytics Data Lake Analytics Manage Use Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-manage-use-cli.md
Title: Manage Azure Data Lake Analytics using Azure CLI description: This article describes how to use the Azure CLI to manage Data Lake Analytics jobs, data sources, & users. + Last updated 01/27/2023
data-lake-store Data Lake Store Hdinsight Hadoop Use Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-hdinsight-hadoop-use-resource-manager-template.md
Last updated 05/29/2018 -- # Create an HDInsight cluster with Azure Data Lake Storage Gen1 using Azure Resource Manager template > [!div class="op_single_selector"]
data-lake-store Data Lake Store Offline Bulk Data Upload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-offline-bulk-data-upload.md
Last updated 05/29/2018 -- # Use the Azure Import/Export service for offline copy of data to Data Lake Storage Gen1
data-share Share Your Data Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/share-your-data-arm.md
Last updated 10/27/2022-+ # Quickstart: Share data using Azure Data Share and ARM template
data-share Share Your Data Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/share-your-data-bicep.md
Last updated 10/27/2022-+ # Quickstart: Share data using Azure Data Share and Bicep
data-share Share Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/share-your-data.md
description: Tutorial - Share data with customers and partners using Azure Data
+ Last updated 10/26/2022
data-share Subscribe To Data Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/subscribe-to-data-share.md
description: Tutorial - Accept and receive data using Azure Data Share
+ Last updated 11/30/2022
databox-online Azure Stack Edge Gpu Back Up Virtual Machine Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-back-up-virtual-machine-disks.md
+ Last updated 06/25/2021
To move your backups to an external location, you can use Azure Storage Explorer
## Next steps
-[Deploy virtual machines on your Azure Stack Edge Pro GPU device using templates](azure-stack-edge-gpu-deploy-virtual-machine-templates.md).
+[Deploy virtual machines on your Azure Stack Edge Pro GPU device using templates](azure-stack-edge-gpu-deploy-virtual-machine-templates.md).
databox-online Azure Stack Edge Gpu Create Virtual Machine Marketplace Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md
+ Last updated 05/24/2022
The deletion takes a couple minutes to complete.
## Next steps
-[Deploy VMs on your Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-deploy-virtual-machine-portal.md).
+[Deploy VMs on your Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-deploy-virtual-machine-portal.md).
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Install Password Reset Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-install-password-reset-extension.md
+ Last updated 04/14/2022
Learn how to:
- [Monitor VM activity on your device](azure-stack-edge-gpu-monitor-virtual-machine-activity.md) - [Manage VM disks](azure-stack-edge-gpu-manage-virtual-machine-disks-portal.md) - [Manage VM network interfaces](azure-stack-edge-gpu-manage-virtual-machine-network-interfaces-portal.md)-- [Manage VM sizes](azure-stack-edge-gpu-manage-virtual-machine-resize-portal.md)
+- [Manage VM sizes](azure-stack-edge-gpu-manage-virtual-machine-resize-portal.md)
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-templates.md
+ Last updated 05/25/2022
Follow these steps to connect to a Linux VM.
## Next steps
-[Azure Resource Manager cmdlets](/powershell/module/azurerm.resources/?view=azurermps-6.13.0&preserve-view=true)
+[Azure Resource Manager cmdlets](/powershell/module/azurerm.resources/?view=azurermps-6.13.0&preserve-view=true)
databox-online Azure Stack Edge Gpu Manage Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-manage-storage-accounts.md
+ Last updated 04/18/2022
databox-online Azure Stack Edge Gpu Manage Virtual Machine Tags Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-manage-virtual-machine-tags-powershell.md
+ Last updated 07/12/2021
ddos-protection Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/alerts.md
# Configure Azure DDoS Protection metric alerts through portal
-Azure DDoS Protection provides detailed attack insights and visualization with DDoS Attack Analytics. Customers protecting their virtual networks against DDoS attacks have detailed visibility into attack traffic and actions taken to mitigate the attack via attack mitigation reports & mitigation flow logs. Rich telemetry is exposed via Azure Monitor including detailed metrics during the duration of a DDoS attack. Alerting can be configured for any of the Azure Monitor metrics exposed by DDoS Protection. Logging can be further integrated with [Microsoft Sentinel](../sentinel/data-connectors-reference.md#azure-ddos-protection), Splunk (Azure Event Hubs), OMS Log Analytics, and Azure Storage for advanced analysis via the Azure Monitor Diagnostics interface.
+Azure DDoS Protection provides detailed attack insights and visualization with DDoS Attack Analytics. Customers protecting their virtual networks against DDoS attacks have detailed visibility into attack traffic and actions taken to mitigate the attack via attack mitigation reports & mitigation flow logs. Rich telemetry is exposed via Azure Monitor including detailed metrics during the duration of a DDoS attack. Alerting can be configured for any of the Azure Monitor metrics exposed by DDoS Protection. Logging can be further integrated with [Microsoft Sentinel](../sentinel/data-connectors/azure-ddos-protection.md), Splunk (Azure Event Hubs), OMS Log Analytics, and Azure Storage for advanced analysis via the Azure Monitor Diagnostics interface.
In this article, you'll learn how to configure metrics alerts through Azure Monitor.
ddos-protection Ddos Diagnostic Alert Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-diagnostic-alert-templates.md
# Configure Azure DDoS Protection diagnostic logging alerts
-Azure DDoS Protection provides detailed attack insights and visualization with DDoS Attack Analytics. Customers protecting their virtual networks against DDoS attacks have detailed visibility into attack traffic and actions taken to mitigate the attack via attack mitigation reports & mitigation flow logs. Rich telemetry is exposed via Azure Monitor including detailed metrics during the duration of a DDoS attack. Alerting can be configured for any of the Azure Monitor metrics exposed by DDoS Protection. Logging can be further integrated with [Microsoft Sentinel](../sentinel/data-connectors-reference.md#azure-ddos-protection), Splunk (Azure Event Hubs), OMS Log Analytics, and Azure Storage for advanced analysis via the Azure Monitor Diagnostics interface.
+Azure DDoS Protection provides detailed attack insights and visualization with DDoS Attack Analytics. Customers protecting their virtual networks against DDoS attacks have detailed visibility into attack traffic and actions taken to mitigate the attack via attack mitigation reports & mitigation flow logs. Rich telemetry is exposed via Azure Monitor including detailed metrics during the duration of a DDoS attack. Alerting can be configured for any of the Azure Monitor metrics exposed by DDoS Protection. Logging can be further integrated with [Microsoft Sentinel](../sentinel/data-connectors/azure-ddos-protection.md), Splunk (Azure Event Hubs), OMS Log Analytics, and Azure Storage for advanced analysis via the Azure Monitor Diagnostics interface.
In this article, you'll learn how to configure diagnostic logging alerts through Azure Monitor and Logic App.
ddos-protection Ddos Protection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-overview.md
When deployed with a web application firewall (WAF), Azure DDoS Protection prote
All L3/L4 attack vectors can be mitigated, with global capacity, to protect against the largest known DDoS attacks. ### Attack analytics
-Get detailed reports in five-minute increments during an attack, and a complete summary after the attack ends. Stream mitigation flow logs to [Microsoft Sentinel](../sentinel/data-connectors-reference.md#azure-ddos-protection) or an offline security information and event management (SIEM) system for near real-time monitoring during an attack. See [View and configure DDoS diagnostic logging](diagnostic-logging.md) to learn more.
+Get detailed reports in five-minute increments during an attack, and a complete summary after the attack ends. Stream mitigation flow logs to [Microsoft Sentinel](../sentinel/data-connectors/azure-ddos-protection.md) or an offline security information and event management (SIEM) system for near real-time monitoring during an attack. See [View and configure DDoS diagnostic logging](diagnostic-logging.md) to learn more.
### Attack metrics Summarized metrics from each attack are accessible through Azure Monitor. See [View and configure DDoS protection telemetry](telemetry.md) to learn more.
ddos-protection Diagnostic Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/diagnostic-logging.md
# Tutorial: View and configure Azure DDoS Protection diagnostic logging
-Azure DDoS Protection provides detailed attack insights and visualization with DDoS Attack Analytics. Customers protecting their virtual networks against DDoS attacks have detailed visibility into attack traffic and actions taken to mitigate the attack via attack mitigation reports & mitigation flow logs. Rich telemetry is exposed via Azure Monitor including detailed metrics during the duration of a DDoS attack. Alerting can be configured for any of the Azure Monitor metrics exposed by DDoS Protection. Logging can be further integrated with [Microsoft Sentinel](../sentinel/data-connectors-reference.md#azure-ddos-protection), Splunk (Azure Event Hubs), OMS Log Analytics, and Azure Storage for advanced analysis via the Azure Monitor Diagnostics interface.
+Azure DDoS Protection provides detailed attack insights and visualization with DDoS Attack Analytics. Customers protecting their virtual networks against DDoS attacks have detailed visibility into attack traffic and actions taken to mitigate the attack via attack mitigation reports & mitigation flow logs. Rich telemetry is exposed via Azure Monitor including detailed metrics during the duration of a DDoS attack. Alerting can be configured for any of the Azure Monitor metrics exposed by DDoS Protection. Logging can be further integrated with [Microsoft Sentinel](../sentinel/data-connectors/azure-ddos-protection.md), Splunk (Azure Event Hubs), OMS Log Analytics, and Azure Storage for advanced analysis via the Azure Monitor Diagnostics interface.
The following diagnostic logs are available for Azure DDoS Protection:
This [built-in policy](https://portal.azure.com/#blade/Microsoft_Azure_Policy/Po
### Microsoft Sentinel data connector
-You can connect logs to Microsoft Sentinel, view and analyze your data in workbooks, create custom alerts, and incorporate it into investigation processes. To connect to Microsoft Sentinel, see [Connect to Microsoft Sentinel](../sentinel/data-connectors-reference.md#azure-ddos-protection).
+You can connect logs to Microsoft Sentinel, view and analyze your data in workbooks, create custom alerts, and incorporate it into investigation processes. To connect to Microsoft Sentinel, see [Connect to Microsoft Sentinel](../sentinel/data-connectors/azure-ddos-protection.md).
:::image type="content" source="./media/ddos-attack-telemetry/azure-sentinel-ddos.png" alt-text="Screenshot of Microsoft Sentinel DDoS Connector." lightbox="./media/ddos-attack-telemetry/azure-sentinel-ddos.png":::
ddos-protection Manage Ddos Protection Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection-bicep.md
-+ Last updated 10/12/2022
ddos-protection Manage Ddos Protection Powershell Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection-powershell-ip.md
Last updated 11/14/2022 -+ # Quickstart: Create and configure Azure DDoS IP Protection Preview using Azure PowerShell
ddos-protection Manage Ddos Protection Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection-template.md
-+ Last updated 10/12/2022
ddos-protection Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/telemetry.md
# Tutorial: View and configure Azure DDoS protection telemetry
-Azure DDoS Protection provides detailed attack insights and visualization with DDoS Attack Analytics. Customers protecting their virtual networks against DDoS attacks have detailed visibility into attack traffic and actions taken to mitigate the attack via attack mitigation reports & mitigation flow logs. Rich telemetry is exposed via Azure Monitor including detailed metrics during the duration of a DDoS attack. Alerting can be configured for any of the Azure Monitor metrics exposed by DDoS Protection. Logging can be further integrated with [Microsoft Sentinel](../sentinel/data-connectors-reference.md#azure-ddos-protection), Splunk (Azure Event Hubs), OMS Log Analytics, and Azure Storage for advanced analysis via the Azure Monitor Diagnostics interface.
+Azure DDoS Protection provides detailed attack insights and visualization with DDoS Attack Analytics. Customers protecting their virtual networks against DDoS attacks have detailed visibility into attack traffic and actions taken to mitigate the attack via attack mitigation reports & mitigation flow logs. Rich telemetry is exposed via Azure Monitor including detailed metrics during the duration of a DDoS attack. Alerting can be configured for any of the Azure Monitor metrics exposed by DDoS Protection. Logging can be further integrated with [Microsoft Sentinel](../sentinel/data-connectors/azure-ddos-protection.md), Splunk (Azure Event Hubs), OMS Log Analytics, and Azure Storage for advanced analysis via the Azure Monitor Diagnostics interface.
In this tutorial, you'll learn how to:
defender-for-cloud Defender For Containers Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-enable.md
description: Enable the container protections of Microsoft Defender for Containe
-+ zone_pivot_groups: k8s-host Last updated 10/30/2022
defender-for-cloud Defender For Resource Manager Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-resource-manager-usage.md
To investigate security alerts from Microsoft Defender for Resource
1. Look for suspicious activities. > [!TIP]
-> For a better, richer investigation experience, stream your Azure activity logs to Microsoft Sentinel as described in [Connect data from Azure Activity log](../sentinel/data-connectors-reference.md#azure-activity).
+> For a better, richer investigation experience, stream your Azure activity logs to Microsoft Sentinel as described in [Connect data from Azure Activity log](../sentinel/data-connectors/azure-activity.md).
## Step 3. Immediate mitigation
defender-for-cloud Export To Siem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/export-to-siem.md
Learn more in [Connect alerts from Microsoft Defender for Cloud](../sentinel/con
Another alternative for investigating Defender for Cloud alerts in Microsoft Sentinel is to stream your audit logs into Microsoft Sentinel: - [Connect Windows security events](../sentinel/connect-windows-security-events.md) - [Collect data from Linux-based sources using Syslog](../sentinel/connect-syslog.md)-- [Connect data from Azure Activity log](../sentinel/data-connectors-reference.md#azure-activity)
+- [Connect data from Azure Activity log](../sentinel/data-connectors/azure-activity.md)
> [!TIP] > Microsoft Sentinel is billed based on the volume of data that it ingests for analysis in Microsoft Sentinel and stores in the Azure Monitor Log Analytics workspace. Microsoft Sentinel offers a flexible and predictable pricing model. [Learn more at the Microsoft Sentinel pricing page](https://azure.microsoft.com/pricing/details/azure-sentinel/).
defender-for-cloud Overview Page https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/overview-page.md
Title: Microsoft Defender for Cloud's main dashboard or 'overview' page description: Learn about the features of the Defender for Cloud overview page Previously updated : 01/10/2023 Last updated : 03/08/2023
The center of the page displays the **feature tiles**, each linking to a high pr
The **Insights** pane offers customized items for your environment including: -- Your most attacked resources-- Your [security controls](secure-score-security-controls.md) that have the highest potential to increase your secure score-- The active recommendations with the most resources impacted-- Recent blog posts by Microsoft Defender for Cloud experts
+- Actionable items to enhance your security.
+- Tips to handle alerts and recommendations.
+- Recommendations on how to upgrade your service to enhance your environment's protections.
+- Recent blog posts by Microsoft Defender for Cloud experts.
## Next steps
defender-for-cloud Powershell Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/powershell-onboarding.md
Title: Onboard to Microsoft Defender for Cloud with PowerShell
description: This document walks you through the process of enabling Microsoft Defender for Cloud with PowerShell cmdlets. Last updated 01/24/2023-+ # Quickstart: Automate onboarding of Microsoft Defender for Cloud using PowerShell
defender-for-cloud Powershell Sample Vulnerability Assessment Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/powershell-sample-vulnerability-assessment-azure-sql.md
Last updated 11/29/2022-+ # Enable vulnerability assessments on Azure SQL databases with the express configuration
defender-for-cloud Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Microsoft Defender for Cloud
description: Sample Azure Resource Graph queries for Microsoft Defender for Cloud showing use of resource types and tables to access Microsoft Defender for Cloud related resources and properties. Last updated 02/14/2023 -+ # Azure Resource Graph sample queries for Microsoft Defender for Cloud
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important changes coming to Microsoft Defender for Cloud description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 02/19/2023 Last updated : 03/05/2023 # Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you'll find them in the [What's
| [Three alerts in Defender for Azure Resource Manager plan will be deprecated](#three-alerts-in-defender-for-azure-resource-manager-plan-will-be-deprecated) | March 2023 | | [Alerts automatic export to Log Analytics workspace will be deprecated](#alerts-automatic-export-to-log-analytics-workspace-will-be-deprecated) | March 2023 | | [Deprecation and improvement of selected alerts for Windows and Linux Servers](#deprecation-and-improvement-of-selected-alerts-for-windows-and-linux-servers) | April 2023 |
+| [Deprecation of App Service language monitoring policies](#deprecation-of-app-service-language-monitoring-policies) | April 2023 |
| [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | August 2023 | ### Changes in the recommendation "Machines should be configured securely" **Estimated date for change: March 2023**
-The recommendation "Machines should be configured securely" is going to be upgraded on March 20th to improve its performance and stability, and to align its experience with the generic behavior of MDC recommendations.
-As part of this update, the recommendation's ID will be changed from "181ac480-f7c4-544b-9865-11b8ffe87f47" to "c476dc48-8110-4139-91af-c8d940896b98".
-No action is required on the customer side, and there is no expected downtime nor impact on the secure score.
+The recommendation `Machines should be configured securely` is set to be updated. This update will improve the performance and stability of the recommendation and align its experience with the generic behavior of Defender for Cloud's recommendations.
+
+As part of this update, the recommendation's ID will be changed from `181ac480-f7c4-544b-9865-11b8ffe87f47` to `c476dc48-8110-4139-91af-c8d940896b98`.
+
+No action is required on the customer side, and there's no expected downtime nor impact on the secure score.
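Although no customer action is required, any saved queries or automation keyed to the old recommendation ID will need the new ID once the change lands. A minimal, hypothetical sketch of such a remapping (the helper name and data shape are illustrative, not part of any Defender for Cloud API):

```python
# Map the deprecated recommendation ID to its replacement, as announced
# for "Machines should be configured securely".
RECOMMENDATION_ID_MAP = {
    "181ac480-f7c4-544b-9865-11b8ffe87f47": "c476dc48-8110-4139-91af-c8d940896b98",
}

def migrate_recommendation_id(recommendation_id: str) -> str:
    """Return the new ID if the old one is deprecated; otherwise pass through."""
    return RECOMMENDATION_ID_MAP.get(recommendation_id, recommendation_id)
```

For example, running stored alert-rule definitions through `migrate_recommendation_id` before the cutover keeps them pointing at the live recommendation.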
### Three alerts in Defender for Azure Resource Manager plan will be deprecated
As we continue to improve the quality of our alerts, the following three alerts
You can learn more details about each of these alerts from the [alerts reference list](alerts-reference.md#alerts-resourcemanager).
-In the scenario where an activity from a suspicious IP address is detected, one of the following Defender for Azure Resource Manager plan alerts `Azure Resource Manager operation from suspicious IP address` or `Azure Resource Manager operation from suspicious proxy IP address` will be present.
+In the scenario where an activity from a suspicious IP address is detected, one of the following Defender for Azure Resource Manager plan alerts will be present: `Azure Resource Manager operation from suspicious IP address` or `Azure Resource Manager operation from suspicious proxy IP address`.
### Alerts automatic export to Log Analytics workspace will be deprecated
You can also view the [full list of alerts](alerts-reference.md#defender-for-ser
Read the [Microsoft Defender for Cloud blog](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/defender-for-servers-security-alerts-improvements/ba-p/3714175).
+### Deprecation of App Service language monitoring policies
+
+The following App Service language monitoring policies are set to be deprecated because they generate false negatives and don't necessarily provide better security. Instead, you should always ensure you're using a language version without any known vulnerabilities.
+
+| Policy name | Policy ID |
+|--|--|
+| [App Service apps that use Java should use the latest 'Java version'](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F496223c3-ad65-4ecd-878a-bae78737e9ed) | 496223c3-ad65-4ecd-878a-bae78737e9ed |
+| [App Service apps that use Python should use the latest 'Python version'](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7008174a-fd10-4ef0-817e-fc820a951d73) | 7008174a-fd10-4ef0-817e-fc820a951d73 |
+| [Function apps that use Java should use the latest 'Java version'](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9d0b6ea4-93e2-4578-bf2f-6bb17d22b4bc) | 9d0b6ea4-93e2-4578-bf2f-6bb17d22b4bc |
+| [Function apps that use Python should use the latest 'Python version'](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7238174a-fd10-4ef0-817e-fc820a951d73) | 7238174a-fd10-4ef0-817e-fc820a951d73 |
+| [App Service apps that use PHP should use the latest 'PHP version'](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7261b898-8a84-4db8-9e04-18527132abb3)| 7261b898-8a84-4db8-9e04-18527132abb3 |
+
+Customers can use alternative built-in policies to monitor any specified language version for their App Services.
+
+Defender for Cloud won't include these recommendations as built-in recommendations. You can add them as custom recommendations to have Defender for Cloud monitor them.
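The policy IDs in the table above can be collected for a scripted audit of existing assignments before the deprecation. A sketch only; the assignment data shape and function name are hypothetical, and the IDs are taken from the table:

```python
# Policy definition IDs slated for deprecation, from the table above.
DEPRECATED_POLICY_IDS = {
    "496223c3-ad65-4ecd-878a-bae78737e9ed",  # App Service apps / latest Java version
    "7008174a-fd10-4ef0-817e-fc820a951d73",  # App Service apps / latest Python version
    "9d0b6ea4-93e2-4578-bf2f-6bb17d22b4bc",  # Function apps / latest Java version
    "7238174a-fd10-4ef0-817e-fc820a951d73",  # Function apps / latest Python version
    "7261b898-8a84-4db8-9e04-18527132abb3",  # App Service apps / latest PHP version
}

def flag_deprecated_assignments(assignments):
    """Return names of assignments that reference a policy slated for deprecation."""
    return [a["name"] for a in assignments if a["policy_id"] in DEPRECATED_POLICY_IDS]
```

Flagged assignments are candidates for replacement with the alternative built-in policies or custom recommendations mentioned above.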
+ ### Multiple changes to identity recommendations **Estimated date for change: August 2023**
The following security recommendations will be released as GA and replace the V1
The following security recommendations will be deprecated as part of this change:
-The following security recommendations will be deprecated as part of this change:
-
- | Recommendation | Assessment Key | |--|--| | MFA should be enabled on accounts with owner permissions on subscriptions | 94290b00-4d0c-d7b4-7cea-064a9554e681 |
defender-for-iot Dell Edge 5200 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/dell-edge-5200.md
This article describes the Dell Edge 5200 appliance for OT sensors.
|**Physical specifications** | Mounting: Wall Mount<br>Ports: 3x RJ45 | |**Status** | Supported, Not available pre-configured|
+The following image shows the hardware elements on the Dell Edge 5200 that are used by Defender for IoT:
++ ## Specifications |Component |Technical specifications|
defender-for-iot Dell Poweredge R350 E1800 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/dell-poweredge-r350-e1800.md
The Dell PowerEdge R350 is also available for the on-premises management console
The following image shows a view of the Dell PowerEdge R350 front panel: The following image shows a view of the Dell PowerEdge R350 back panel: ## Specifications
defender-for-iot Device Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/device-inventory.md
For example:
The Defender for IoT device inventory is available in the Azure portal, OT network sensor consoles, and the on-premises management console.
-While you can view device details from any of these locations, each location also offers extra device inventory support. The following table describes the device inventory visible supported for each location and the extra actions available from that location only:
+While you can view device details from any of these locations, each location also offers extra device inventory support. The following table describes the device inventory support for each location and the extra actions available from that location only:
|Location |Description | Extra inventory support | |||| |**Azure portal** | Devices detected from all cloud-connected OT sensors and Enterprise IoT sensors. <br><br> | - If you have an [Enterprise IoT plan](eiot-defender-for-endpoint.md) on your Azure subscription, the device inventory also includes devices detected by Microsoft Defender for Endpoint agents. <br><br>- If you also use [Microsoft Sentinel](iot-solution.md), incidents in Microsoft Sentinel are linked to related devices in Defender for IoT. <br><br>- Use Defender for IoT [workbooks](workbooks.md) for visibility into all cloud-connected device inventory, including related alerts and vulnerabilities. |
-|**OT network sensor consoles** | Devices detected by that OT sensor | - View all detected devices across a network device map<br>- View related events on the **Event timeline** |
+|**OT network sensor consoles** | Devices detected by that OT sensor | - View all detected devices across a network device map<br><br>- View related events on the **Event timeline** |
|**An on-premises management console** | Devices detected across all connected OT sensors | Enhance device data by importing data manually or via script | For more information, see:
Defender for IoT's device inventory supports device types across a variety of in
|Devices |For example ... | ||| |**Manufacturing**| Industrial and operational devices, such as pneumatic devices, packaging systems, industrial packaging systems, industrial robots |
-|**Building** | Access panels, surveillance devices, HVAC systems, elevators , smart lighting systems |
+|**Building** | Access panels, surveillance devices, HVAC systems, elevators, smart lighting systems |
|**Health care** | Glucose meters, monitors | |**Transportation / Utilities** | Turnstiles, people counters, motion sensors, fire and safety systems, intercoms | |**Energy and resources** | DCS controllers, PLCs, historian devices, HMIs |
When you're first working with Defender for IoT, during the learning period just
After the learning period is over, any new devices detected are considered to be *unauthorized* and *new* devices. We recommend checking these devices carefully for risks and vulnerabilities. For example, in the Azure portal, filter the device inventory for `Authorization == **Unauthorized**`. On the device details page, drill down and check for related vulnerabilities, alerts, and recommendations.
-The *new* status is removed as soon as you edit any of the device details move the device on an OT sensor device map. In contrast, the *unauthorized* label remains until you manually edit the device details and mark it as *authorized*.
+The *new* status is removed as soon as you edit any of the device details or move the device on an OT sensor device map. In contrast, the *unauthorized* label remains until you manually edit the device details and mark it as *authorized*.
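The review step described above, filtering the inventory for `Authorization == Unauthorized`, can be sketched over an exported device list. The list-of-dicts shape and field names below are illustrative only, loosely mirroring the inventory columns, and are not a Defender for IoT export format:

```python
# Hypothetical exported device-inventory rows for illustration.
devices = [
    {"name": "plc-01", "authorization": "Authorized", "is_new": False},
    {"name": "hmi-07", "authorization": "Unauthorized", "is_new": True},
    {"name": "cam-12", "authorization": "Unauthorized", "is_new": False},
]

def unauthorized_devices(inventory):
    """Return devices still marked Unauthorized, for risk and vulnerability review."""
    return [d for d in inventory if d["authorization"] == "Unauthorized"]
```

Each returned device would then be drilled into on its device details page for related vulnerabilities, alerts, and recommendations.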
On an OT sensor, unauthorized devices are also included in the following reports:
Mark OT devices as *important* to highlight them for extra tracking. On an OT se
- [Attack vector reports](how-to-create-attack-vector-reports.md): Devices marked as *important* are included in an attack vector simulation as possible attack targets. -- [Risk assessment reports](how-to-create-risk-assessment-reports.md): Devices marked as *important* are counted in risk assessment reports when calculating security scores
+- [Risk assessment reports](how-to-create-risk-assessment-reports.md): Devices marked as *important* are counted in risk assessment reports when calculating security scores.
## Device inventory column data
The following table lists the columns available in the Defender for IoT device i
| **Class** | Editable. The device's class. <br>Default: `IoT` | |**Data source** | The source of the data, such as a micro agent, OT sensor, or Microsoft Defender for Endpoint. <br>Default: `MicroAgent`| |**Description** * | Editable. The device's description. |
-| **Device Id** | The device's Azure-assigned ID number|
+| **Device Id** | The device's Azure-assigned ID number. |
| **Firmware model** | The device's firmware model.| | **Firmware vendor** | Editable. The vendor of the device's firmware. | | **Firmware version** * |Editable. The device's firmware version. |
defender-for-iot Iot Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/iot-solution.md
Before you start, make sure you have the following requirements on your workspac
- A Defender for IoT plan on your Azure subscription with data streaming into Defender for IoT. For more information, see [Quickstart: Get started with Defender for IoT](getting-started.md). > [!IMPORTANT]
-> Currently, having both the Microsoft Defender for IoT and the [Microsoft Defender for Cloud](../../sentinel/data-connectors-reference.md#microsoft-defender-for-cloud) data connectors enabled on the same Microsoft Sentinel workspace simultaneously may result in duplicate alerts in Microsoft Sentinel. We recommend that you disconnect the Microsoft Defender for Cloud data connector before connecting to Microsoft Defender for IoT.
+> Currently, having both the Microsoft Defender for IoT and the [Microsoft Defender for Cloud](../../sentinel/data-connectors/microsoft-defender-for-cloud.md) data connectors enabled on the same Microsoft Sentinel workspace simultaneously may result in duplicate alerts in Microsoft Sentinel. We recommend that you disconnect the Microsoft Defender for Cloud data connector before connecting to Microsoft Defender for IoT.
> ## Connect your data from Defender for IoT to Microsoft Sentinel
-Start by enabling the [Defender for IoT data connector](../../sentinel/data-connectors-reference.md#microsoft-defender-for-iot) to stream all your Defender for IoT events into Microsoft Sentinel.
+Start by enabling the [Defender for IoT data connector](../../sentinel/data-connectors/microsoft-defender-for-iot.md) to stream all your Defender for IoT events into Microsoft Sentinel.
**To enable the Defender for IoT data connector**:
dev-box How To Customize Devbox Azure Image Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-customize-devbox-azure-image-builder.md
description: 'Learn how to create a custom image with Azure Image Builder, then create a Dev box with the image.' + Last updated 11/17/2022
devtest-labs Devtest Lab Create Environment From Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-create-environment-from-arm.md
Last updated 12/21/2022-+ # Create Azure DevTest Labs environments from ARM templates
devtest-labs Devtest Lab Integrate Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-integrate-ci-cd.md
Title: Integrate Azure DevTest Labs into Azure Pipelines description: Learn how to integrate Azure DevTest Labs into Azure Pipelines continuous integration and delivery (CI/CD) pipelines. + Last updated 12/28/2021
devtest-labs Devtest Lab Troubleshoot Apply Artifacts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-troubleshoot-apply-artifacts.md
Last updated 03/31/2022- # Troubleshoot issues applying artifacts on DevTest Labs virtual machines
If you need more help, try one of the following support channels:
- Contact the Azure DevTest Labs experts on the [MSDN Azure and Stack Overflow forums](https://azure.microsoft.com/support/forums/). - Get answers from Azure experts through [Azure Forums](https://azure.microsoft.com/support/forums). - Connect with [@AzureSupport](https://twitter.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts.-- Go to the [Azure support site](https://azure.microsoft.com/support/options) and select **Get Support** to file an Azure support incident.
+- Go to the [Azure support site](https://azure.microsoft.com/support/options) and select **Get Support** to file an Azure support incident.
devtest-labs Devtest Lab Vmcli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-vmcli.md
Title: Create and manage virtual machines in Azure DevTest Labs with Azure CLI description: Learn how to use Azure DevTest Labs to create and manage virtual machines with Azure CLI + Last updated 06/26/2020
devtest-labs How To Move Labs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/how-to-move-labs.md
Title: Move a DevTest lab to another region description: Shows you how to move a lab to another region. + Last updated 03/03/2022
digital-twins How To Create Data History Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-create-data-history-connection.md
description: See how to set up a data history connection for historizing Azure Digital Twins updates into Azure Data Explorer. Previously updated : 02/23/2023 Last updated : 03/02/2023
You'll see confirmation messages on the screen as models, twins, and relationships
When the simulation is ready, the **Start simulation** button will become enabled. Scroll down and select **Start simulation** to push simulated data to your Azure Digital Twins instance. To continuously update the twins in your Azure Digital Twins instance, keep this browser window in the foreground on your desktop and complete other browser actions in a separate window. This will continuously generate twin property update events that will be historized to Azure Data Explorer.
-To verify that data is flowing through the data history pipeline, navigate to the [Azure portal](https://portal.azure.com) and open the Event Hubs namespace resource you created. You should see charts showing the flow of messages into and out of the namespace, indicating the flow of incoming messages from Azure Digital Twins and outgoing messages to Azure Data Explorer. The image below shows what these charts might look like after an hour of running the simulator (but you should start to see some data after only a few minutes).
+#### Verify data flow
+
+To verify that data is flowing through the data history pipeline, you can use the [data history validation in Azure Digital Twins Explorer](how-to-use-azure-digital-twins-explorer.md#validate-and-explore-historized-properties).
+
+1. Navigate to the [Azure Digital Twins Explorer](https://explorer.digitaltwins.azure.net/) and ensure it's [connected to the right instance](how-to-use-azure-digital-twins-explorer.md#switch-contexts-within-the-app).
+
+1. Use the instructions in [Validate and explore historized properties](how-to-use-azure-digital-twins-explorer.md#validate-and-explore-historized-properties) to choose a historized twin property to visualize in the chart.
+
+If you see data being populated in the chart, this means that Azure Digital Twins update events are being successfully stored in Azure Data Explorer.
++
+If you *don't* see data in the chart, the historization data flow isn't working properly. You can investigate the issue by viewing your Event Hubs namespace in the [Azure portal](https://portal.azure.com), which displays charts showing the flow of messages into and out of the namespace. This will allow you to verify both the flow of incoming messages from Azure Digital Twins and the outgoing messages to Azure Data Explorer, to help you identify which part of the flow isn't working.
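To make that troubleshooting logic concrete, here's a minimal sketch of how the two chart readings map to the two legs of the pipeline. The function name and return messages are illustrative only; they aren't part of the portal or any Azure SDK:

```python
def diagnose_pipeline(messages_in: int, messages_out: int) -> str:
    """Given incoming/outgoing message counts observed on the Event Hubs
    namespace charts over the same window, suggest which leg of the data
    history pipeline to investigate first (illustrative helper)."""
    if messages_in == 0:
        # Nothing is arriving: the Azure Digital Twins -> Event Hubs leg is stalled,
        # so the event route or data history connection needs attention.
        return "check Azure Digital Twins event route to the event hub"
    if messages_out == 0:
        # Data arrives but nothing leaves: the Event Hubs -> Azure Data Explorer
        # leg is stalled, so the data connection needs attention.
        return "check the Azure Data Explorer data connection"
    # Both legs show traffic; the issue is likely downstream of ingestion.
    return "both legs are flowing; check table mappings or the query window"
```

This is only the decision logic implied by the charts; in practice you read the counts off the namespace's **Incoming Messages** and **Outgoing Messages** metrics.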
:::image type="content" source="media/how-to-create-data-history-connection/simulated-environment-portal.png" alt-text="Screenshot of the Azure portal showing an Event Hubs namespace for the simulated environment." lightbox="media/how-to-create-data-history-connection/simulated-environment-portal.png":::

### View the historized updates in Azure Data Explorer
-In this section, you'll view all three types of historized updates that were generated by the simulator and stored in Azure Data Explorer tables.
+Now that you've verified the data history flow is sending data to Azure Data Explorer, this section will show you how to view all three types of historized updates that were generated by the simulator and stored in Azure Data Explorer tables.
Start in the [Azure portal](https://portal.azure.com) and navigate to the Azure Data Explorer cluster you created earlier. Choose the **Databases** pane from the left menu to open the database view. Find the database you created for this article and select the checkbox next to it, then select **Query**.
digital-twins How To Ingest Iot Hub Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-ingest-iot-hub-data.md
-# Mandatory fields.
Title: Ingest telemetry from IoT Hub description: Learn how to ingest device telemetry messages from Azure IoT Hub to digital twins in an instance of Azure Digital Twins.
Last updated 11/18/2022 + # Optional fields. Don't forget to remove # if you need a field. #
-#
#
digital-twins How To Integrate Time Series Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-integrate-time-series-insights.md
-# Mandatory fields.
Title: Integrate with Azure Time Series Insights description: Learn how to set up event routes from Azure Digital Twins to Azure Time Series Insights.
Last updated 01/10/2023 + # Optional fields. Don't forget to remove # if you need a field. #
-#
#
If you allow a simulation to run for much longer, your visualization will look s
## Next steps
-After establishing a data pipeline to send time series data from Azure Digital Twins to Time Series Insights, you might want to think about how to translate asset models designed for Azure Digital Twins into asset models for Time Series Insights. For a tutorial on this next step in the integration process, see [Model synchronization between Azure Digital Twins and Time Series Insights Gen2](../time-series-insights/tutorials-model-sync.md).
+After establishing a data pipeline to send time series data from Azure Digital Twins to Time Series Insights, you might want to think about how to translate asset models designed for Azure Digital Twins into asset models for Time Series Insights. For a tutorial on this next step in the integration process, see [Model synchronization between Azure Digital Twins and Time Series Insights Gen2](../time-series-insights/tutorials-model-sync.md).
digital-twins How To Send Twin To Twin Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-send-twin-to-twin-events.md
-# Mandatory fields.
Title: Set up twin-to-twin event handling description: Learn how to create a function in Azure for propagating events through the twin graph.
Last updated 06/21/2022 -+ ms.devlang: azurecli # Optional fields. Don't forget to remove # if you need a field. #
-#
#
digital-twins How To Set Up Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-set-up-instance-cli.md
-# Mandatory fields.
Title: Set up an instance and authentication (CLI) description: See how to set up an instance of the Azure Digital Twins service using the CLI
Last updated 11/17/2022 -+ ms.devlang: azurecli # Optional fields. Don't forget to remove # if you need a field. #
-#
#
digital-twins How To Use 3D Scenes Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-3d-scenes-studio.md
description: Learn how to use all the features of 3D Scenes Studio (preview) for Azure Digital Twins. Previously updated : 02/22/2023 Last updated : 02/27/2023
You can switch to **View** mode to enable filtering on specific elements and vis
:::image type="content" source="media/how-to-use-3d-scenes-studio/scene-view.png" alt-text="Screenshot of 3D Scenes Studio, showing a scene in the viewer." lightbox="media/how-to-use-3d-scenes-studio/scene-view.png":::
+You can view **All properties** of an element from here, as well as their values over time if [data history](concepts-data-history.md) is enabled on your instance. To view property history, select the **Open data history explorer** icon.
++
+This will open the **Data history explorer** for the property. For more information about using the data history explorer, see [Validate and explore historized properties](how-to-use-azure-digital-twins-explorer.md#validate-and-explore-historized-properties).
++

### Embed scenes in custom applications

The viewer component can also be embedded into custom applications outside of 3D Scenes Studio, and can work in conjunction with third-party components.
digital-twins How To Use Azure Digital Twins Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-azure-digital-twins-explorer.md
description: Learn how to use all the features of Azure Digital Twins Explorer (preview) Previously updated : 12/2/2022 Last updated : 02/27/2023
The **Twin Graph** panel allows you to explore the twins and relationships in your instance.
:::image type="content" source="media/how-to-use-azure-digital-twins-explorer/twin-graph-panel.png" alt-text="Screenshot of Azure Digital Twins Explorer. The Twin Graph panel is highlighted." lightbox="media/how-to-use-azure-digital-twins-explorer/twin-graph-panel.png":::
-You can use this panel to [view your twins and relationships](#view-twins-and-relationships).
+You can use this panel to [explore twin data](#explore-twin-data).
The Twin Graph panel also provides several abilities to customize your graph viewing experience:

* [Change twin display property](#change-twin-display-property)
The Twin Graph panel also provides several abilities to customize your graph viewing experience:
* [Show and hide twin graph elements](#show-and-hide-twin-graph-elements) * [Filter and highlight twin graph elements](#filter-and-highlight-twin-graph-elements)
-### View twins and relationships
+### Explore twin data
Run a query using the [Query Explorer](#query-your-digital-twin-graph) to see the twins and relationships in the query result displayed in the **Twin Graph** panel.
Both of these error messages are shown in the screenshot below:
:::image type="content" source="media/how-to-use-azure-digital-twins-explorer/properties-errors.png" alt-text="Screenshot of Azure Digital Twins Explorer Twin Properties panel, showing two error messages. One error indicates that models are missing, and the other indicates that properties are missing a model." lightbox="media/how-to-use-azure-digital-twins-explorer/properties-errors-large.png":::
+#### Validate and explore historized properties
+
+If your Azure Digital Twins instance has [data history](concepts-data-history.md) enabled, you can validate and explore its historized data in Azure Digital Twins Explorer. Follow the steps below to visualize historized data in a chart, or view raw values in a table.
+
+1. From the **Twin Graph** viewer, select a twin whose historized properties you want to view to open it in the **Twin Properties** panel. In the top right corner of the panel, select the **Time series** icon to open the **Data history explorer**.
+
+ :::image type="content" source="media/how-to-use-azure-digital-twins-explorer/open-data-history.png" alt-text="Screenshot of Azure Digital Twins Explorer Twin Properties panel, highlighting the icon to open the data history explorer." lightbox="media/how-to-use-azure-digital-twins-explorer/open-data-history.png":::
+
+1. Select the twin name from the left to bring up the options for choosing which historical properties of the twin to view. The **Twin ID** field will be pre-populated with the twin selection. Next to this field, you can select the **Inspect properties** icon to view the twin data, or the **Advanced twin search** icon to find other twins by querying property values.
+
+ :::image type="content" source="media/how-to-use-azure-digital-twins-explorer/data-history-explorer.png" alt-text="Screenshot of the Data history explorer and a modal asking for twin and property details." lightbox="media/how-to-use-azure-digital-twins-explorer/data-history-explorer.png":::
+
+1. In the **Property** field, select the property whose historized data you'd like to view. If the property isn't a numeric type but contains numeric values (for example, numbers stored as strings), use the **Cast property value to number** option to attempt to cast the data to numbers so it can be visualized on the chart.
+
+ :::image type="content" source="media/how-to-use-azure-digital-twins-explorer/data-history-explorer-property.png" alt-text="Screenshot of the Data history explorer with the property details highlighted." lightbox="media/how-to-use-azure-digital-twins-explorer/data-history-explorer-property.png":::
+
+1. Choose a **Label** for the time series and select **Update**.
+
+This will load the chart view of the historized values for the chosen property. You can use the tabs above the chart to toggle between the [chart view](#view-history-in-chart) and [table view](#view-history-in-table).
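As a rough illustration of the **Cast property value to number** option mentioned in the steps above, the casting behavior might look like the following. This is an assumed sketch for intuition, not the explorer's actual implementation:

```python
def cast_property_values(values):
    """Illustrative sketch of 'Cast property value to number': try to
    interpret each historized value as a float so it can be plotted;
    values that can't be cast are dropped from the chart.
    (Assumed logic, not Azure Digital Twins Explorer source code.)"""
    numeric = []
    for v in values:
        try:
            numeric.append(float(v))
        except (TypeError, ValueError):
            # Non-numeric values (e.g. "offline" or None) can't be charted; skip them.
            continue
    return numeric
```

For example, a temperature property historized as strings (`"21.5"`, `"22"`) would still chart, while non-numeric readings would simply be omitted.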
+
+To add more properties to the visualization, use the **Add time series** button on the left.
++
+To exit the data history explorer and return to the main Azure Digital Twins Explorer, select **Close**.
+
+##### View history in chart
+
+The **Chart** view of historized properties shows property values as points on a line graph over time.
++
+You can use the icons above the chart to control the chart settings, including...
+* changing the time range for the data that's included in the chart.
+* selecting whether multiple time series are shown independently or on a shared y-axis. Selecting **Independent** for the axes means that each time series scales to the chart and maintains its own axis. Selecting **Shared** axes means that all time series data is scaled to the same axis.
+* choosing the aggregation logic for the chart. When the property has more data points than can be shown on the chart, the data is aggregated into a finite number of data points using either average, minimum, or maximum functions.
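The aggregation described in the last point can be sketched as follows. This is a simplified illustration with assumed even bucketing, not the explorer's actual code:

```python
import math

def downsample(points, buckets, agg="average"):
    """Aggregate a series of numeric points into at most `buckets` chart
    points, using average, minimum, or maximum per bucket (illustrative)."""
    size = math.ceil(len(points) / buckets)  # points per bucket
    out = []
    for i in range(0, len(points), size):
        chunk = points[i:i + size]
        if agg == "average":
            out.append(sum(chunk) / len(chunk))
        elif agg == "minimum":
            out.append(min(chunk))
        else:  # "maximum"
            out.append(max(chunk))
    return out
```

For instance, six readings downsampled to three chart points with `agg="average"` yields the mean of each consecutive pair; choosing minimum or maximum instead preserves dips or spikes that averaging would smooth away.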
+
+There's also a button to **Open query in Azure Data Explorer**, where you can view and modify the current query to further explore the time series data.
++
+##### View history in table
+
+The **Table** view of historized properties shows property values and their timestamps in a table.
++
+You can use the icons above the table to control the table settings, including...
+* changing the time range for the data that's included in the table.
+* downloading the table data for independent analysis.
+ #### View a twin's relationships You can also quickly view the code of all relationships that involve a certain twin (including incoming and outgoing relationships).
digital-twins Tutorial Command Line Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/tutorial-command-line-cli.md
-# Mandatory fields.
Title: 'Tutorial: Create a graph in Azure Digital Twins (CLI)' description: Tutorial that shows how to build an Azure Digital Twins scenario using the Azure CLI
Last updated 02/25/2022 + # Optional fields. Don't forget to remove # if you need a field. #
-#
#
In this tutorial, you got started with Azure Digital Twins by building a graph i
Continue to the next tutorial to combine Azure Digital Twins with other Azure services to complete a data-driven, end-to-end scenario: > [!div class="nextstepaction"]
-> [Connect an end-to-end solution](tutorial-end-to-end.md)
+> [Connect an end-to-end solution](tutorial-end-to-end.md)
digital-twins Tutorial End To End https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/tutorial-end-to-end.md
-# Mandatory fields.
Title: 'Tutorial: Connect an end-to-end solution' description: Follow this tutorial to learn how to build out an end-to-end Azure Digital Twins solution that's driven by device data.
Last updated 09/26/2022 -+ # Optional fields. Don't forget to remove # if you need a field. #
-#
#
dms Ads Sku Recommend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/ads-sku-recommend.md
Title: Get Azure recommendations for your SQL Server migration description: Learn how to use the Azure SQL Migration extension in Azure Data Studio to get SKU recommendation when you migrate SQL Server databases to the Azure SQL Managed Instance, SQL Server on Azure Virtual Machines, or Azure SQL Database.- -- Last updated : 02/22/2022 - Previously updated : 02/22/2022
dms Create Dms Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/create-dms-bicep.md
Title: Create instance of DMS (Bicep) description: Learn how to create Database Migration Service by using Bicep.----++ Last updated 03/21/2022 ++
+ - subject-armqs
+ - mode-arm
# Quickstart: Create instance of Azure Database Migration Service using Bicep
Remove-AzResourceGroup -Name exampleRG
For other ways to deploy Azure Database Migration Service, see [Azure portal](quickstart-create-data-migration-service-portal.md).
-To learn more, see [an overview of Azure Database Migration Service](dms-overview.md).
+To learn more, see [an overview of Azure Database Migration Service](dms-overview.md).
dms Create Dms Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/create-dms-resource-manager-template.md
Title: Create instance of DMS (Azure Resource Manager template) description: Learn how to create Database Migration Service by using Azure Resource Manager template (ARM template).----++ Last updated 06/29/2020 ++
+ - subject-armqs
+ - mode-arm
# Quickstart: Create instance of Azure Database Migration Service using ARM template
dms Dms Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/dms-overview.md
Title: What is Azure Database Migration Service?
+ Title: What is Azure Database Migration Service?
description: Overview of Azure Database Migration Service, which provides seamless migrations from many database sources to Azure Data platforms.- - Last updated : 02/08/2023 - Previously updated : 02/08/2023 # What is Azure Database Migration Service?
For up-to-date info about the regional availability of Azure Database Migration
* [Status of migration scenarios supported by Azure Database Migration Service](./resource-scenario-status.md) * [Services and tools available for data migration scenarios](./dms-tools-matrix.md) * [Migrate databases with Azure SQL Migration extension for Azure Data Studio](./migration-using-azure-data-studio.md)
-* [FAQ about using Azure Database Migration Service](./faq.yml)
+* [FAQ about using Azure Database Migration Service](./faq.yml)
dms Dms Tools Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/dms-tools-matrix.md
Title: Azure Database Migration Service tools matrix description: Learn about the services and tools available to migrate databases and to support various phases of the migration process.- - Last updated : 03/03/2020 -- Previously updated : 03/03/2020+
+ - mvc
+ - ignite-2022
# Services and tools available for data migration scenarios
dms Faq Mysql Single To Flex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/faq-mysql-single-to-flex.md
Title: FAQ about using Azure Database Migration Service for Azure Database MySQL Single Server to Flexible Server migrations description: Frequently asked questions about using Azure Database Migration Service to perform database migrations from Azure Database MySQL Single Server to Flexible Server.----++ Last updated : 09/17/2022 -- Previously updated : 09/17/2022+ # Frequently Asked Questions (FAQs)
dms How To Migrate Ssis Packages Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/how-to-migrate-ssis-packages-managed-instance.md
Title: Migrate SSIS packages to SQL Managed Instance description: Learn how to migrate SQL Server Integration Services (SSIS) packages and projects to an Azure SQL Managed Instance using the Azure Database Migration Service or the Data Migration Assistant.- - Last updated : 02/20/2020 -- Previously updated : 02/20/2020+ # Migrate SQL Server Integration Services packages to an Azure SQL Managed Instance
After an instance of the service is created, locate it within the Azure portal,
## Next steps
-* Review the migration guidance in the Microsoft [Database Migration Guide](/data-migration/).
+* Review the migration guidance in the Microsoft [Database Migration Guide](/data-migration/).
dms How To Migrate Ssis Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/how-to-migrate-ssis-packages.md
Title: Redeploy SSIS packages to SQL single database description: Learn how to migrate or redeploy SQL Server Integration Services packages and projects to Azure SQL Database single database using the Azure Database Migration Service and Data Migration Assistant.- - Last updated : 02/20/2020 -- Previously updated : 02/20/2020+ # Redeploy SSIS packages to Azure SQL Database with Azure Database Migration Service
If the deployment of your project succeeds without failure, you can select any p
## Next steps
-* Review the migration guidance in the Microsoft [Database Migration Guide](/data-migration/).
+* Review the migration guidance in the Microsoft [Database Migration Guide](/data-migration/).
dms How To Monitor Migration Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/how-to-monitor-migration-activity.md
Title: Monitor migration activity - Azure Database Migration Service description: Learn to use the Azure Database Migration Service to monitor migration activity.- - Last updated : 02/20/2020 -- Previously updated : 02/20/2020+ # Monitor migration activity using the Azure Database Migration Service
The following table describes the fields shown in table level migration progress
> CDC values of Insert, Update and Delete and Total Applied may decrease when database is cutover or migration is restarted. ## Next steps-- Review the migration guidance in the Microsoft [Database Migration Guide](/data-migration/).
+- Review the migration guidance in the Microsoft [Database Migration Guide](/data-migration/).
dms Howto Sql Server To Azure Sql Managed Instance Powershell Offline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/howto-sql-server-to-azure-sql-managed-instance-powershell-offline.md
Title: "PowerShell: Migrate SQL Server to SQL Managed Instance offline" description: Learn to offline migrate from SQL Server to Azure SQL Managed Instance by using Azure PowerShell and the Azure Database Migration Service.- - Last updated : 12/16/2020 -- Previously updated : 12/16/2020+
+ - seo-lt-2019
+ - fasttrack-edit
+ - devx-track-azurepowershell
# Migrate SQL Server to SQL Managed Instance offline with PowerShell & Azure Database Migration Service
Remove-AzDms -ResourceGroupName myResourceGroup -ServiceName MyDMS
Find out more about Azure Database Migration Service in the article [What is the Azure Database Migration Service?](./dms-overview.md).
-For information about additional migrating scenarios (source/target pairs), see the Microsoft [Database Migration Guide](/data-migration/).
+For information about additional migrating scenarios (source/target pairs), see the Microsoft [Database Migration Guide](/data-migration/).
dms Howto Sql Server To Azure Sql Managed Instance Powershell Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/howto-sql-server-to-azure-sql-managed-instance-powershell-online.md
Title: "PowerShell: Migrate SQL Server to SQL Managed Instance online" description: Learn to online migrate from SQL Server to Azure SQL Managed Instance by using Azure PowerShell and the Azure Database Migration Service.- - Last updated : 12/16/2020 -- Previously updated : 12/16/2020+ # Migrate SQL Server to SQL Managed Instance online with PowerShell & Azure Database Migration Service
dms Howto Sql Server To Azure Sql Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/howto-sql-server-to-azure-sql-powershell.md
Title: "PowerShell: Migrate SQL Server to SQL Database"
+ Title: "PowerShell: Migrate SQL Server to SQL Database"
description: Learn to migrate a database from SQL Server to Azure SQL Database by using Azure PowerShell with the Azure Database Migration Service.- - Last updated : 02/20/2020 -- Previously updated : 02/20/2020+
+ - seo-lt-2019
+ - devx-track-azurepowershell
# Migrate a SQL Server database to Azure SQL Database using Azure PowerShell
dms Known Issues Azure Mysql Fs Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-mysql-fs-online.md
Title: Known issues with migrations to Azure MySQL Database description: Learn about known migration issues associated with migrations to Azure MySQL Database- - Last updated : 10/04/2022 -- Previously updated : 10/04/2022+ # Known Issues With Migrations To Azure Database for MySQL
One or more incompatible SQL modes can cause a number of different errors. Below
## Next steps * View the tutorial [Migrate Azure Database for MySQL - Single Server to Flexible Server online using DMS via the Azure portal](tutorial-mysql-azure-single-to-flex-online-portal.md).
-* View the tutorial [Migrate Azure Database for MySQL - Single Server to Flexible Server offline using DMS via the Azure portal](tutorial-mysql-azure-mysql-offline-portal.md).
+* View the tutorial [Migrate Azure Database for MySQL - Single Server to Flexible Server offline using DMS via the Azure portal](tutorial-mysql-azure-mysql-offline-portal.md).
dms Known Issues Azure Postgresql Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-postgresql-online.md
Title: "Known issues: Online migrations from PostgreSQL to Azure Database for PostgreSQL" description: Learn about known issues and migration limitations with online migrations from PostgreSQL to Azure Database for PostgreSQL using the Azure Database Migration Service.----++ Last updated : 02/20/2020 -- Previously updated : 02/20/2020+
+ - seo-lt-2019
+ - seo-dt-2019
# Known issues/limitations with online migrations from PostgreSQL to Azure Database for PostgreSQL
dms Known Issues Azure Sql Db Managed Instance Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-sql-db-managed-instance-online.md
-
+ Title: Known issues and limitations with online migrations to Azure SQL Managed Instance description: Learn about known issues/migration limitations associated with online migrations to Azure SQL Managed Instance.- - Last updated : 02/20/2020 -- Previously updated : 02/20/2020+ # Known issues/migration limitations with online migrations to Azure SQL Managed Instance
dms Known Issues Azure Sql Migration Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-sql-migration-azure-data-studio.md
Title: "Known issues, limitations, and troubleshooting" description: Known issues, limitations and troubleshooting guide for Azure SQL Migration extension for Azure Data Studio- -- Last updated : 01/05/2023 -- Previously updated : 01/05/2023+ # Known issues, limitations, and troubleshooting
dms Known Issues Dms Hybrid Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-dms-hybrid-mode.md
Title: Known issues/migration limitations with using Hybrid mode description: Learn about known issues/migration limitations with using Azure Database Migration Service in hybrid mode.----++ Last updated : 02/20/2020 -- Previously updated : 02/20/2020+ # Known issues/migration limitations with using hybrid mode
dms Known Issues Mongo Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-mongo-cosmos-db.md
Title: "Known issues: Migrate from MongoDB to Azure Cosmos DB" description: Learn about known issues and migration limitations with migrations from MongoDB to Azure Cosmos DB using the Azure Database Migration Service.- - Last updated : 05/18/2022 -- Previously updated : 05/18/2022+
+ - seo-lt-2019
+ - kr2b-contr-experiment
+ - ignite-2022
# Known issues with migrations from MongoDB to Azure Cosmos DB
dms Known Issues Troubleshooting Dms Source Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-troubleshooting-dms-source-connectivity.md
Title: "Issues connecting source databases" description: Learn about how to troubleshoot known issues/errors associated with connecting Azure Database Migration Service to source databases.----++ Last updated : 02/20/2020 -- Previously updated : 02/20/2020+ # Troubleshoot DMS errors when connecting to source databases
dms Known Issues Troubleshooting Dms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-troubleshooting-dms.md
Title: "Common issues - Azure Database Migration Service" description: Learn about how to troubleshoot common known issues/errors associated with using Azure Database Migration Service.----++ Last updated : 02/20/2020 -- Previously updated : 02/20/2020+
+ - seo-lt-2019
+ - ignite-2022
# Troubleshoot common Azure Database Migration Service issues and errors
dms Migrate Azure Mysql Consistent Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migrate-azure-mysql-consistent-backup.md
Title: MySQL to Azure Database for MySQL Data Migration - MySQL Consistent Backup (Preview) description: Learn how to use the Azure Database for MySQL Data Migration - MySQL Consistent Backup for transaction consistency even without making the Source server read-only-----++ Last updated : 04/19/2022 - Previously updated : 04/19/2022
dms Migrate Mysql To Azure Mysql Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migrate-mysql-to-azure-mysql-powershell.md
Title: "PowerShell: Run offline migration from MySQL database to Azure Database for MySQL using DMS" description: Learn to migrate an on-premises MySQL database to Azure Database for MySQL by using Azure Database Migration Service through PowerShell script.-----+++ Last updated : 04/11/2021 -- Previously updated : 04/11/2021+
+ - seo-lt-2019
+ - devx-track-azurepowershell
# Migrate MySQL to Azure Database for MySQL offline with PowerShell & Azure Database Migration Service
Remove-AzDataMigrationService -ResourceId $($dmsService.ResourceId)
* For troubleshooting source database connectivity issues while using DMS, see the article [Issues connecting source databases](./known-issues-troubleshooting-dms-source-connectivity.md). * For information about Azure Database Migration Service, see the article [What is Azure Database Migration Service?](./dms-overview.md). * For information about Azure Database for MySQL, see the article [What is Azure Database for MySQL?](../mysql/overview.md).
-* For tutorial about using DMS via portal, see the article [Tutorial: Migrate MySQL to Azure Database for MySQL offline using DMS](./tutorial-mysql-azure-mysql-offline-portal.md)
+* For tutorial about using DMS via portal, see the article [Tutorial: Migrate MySQL to Azure Database for MySQL offline using DMS](./tutorial-mysql-azure-mysql-offline-portal.md)
dms Migration Dms Powershell Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migration-dms-powershell-cli.md
Title: Migrate databases at scale using Azure PowerShell / CLI (Preview) description: Learn how to use Azure PowerShell or CLI to migrate databases at scale with the Azure SQL migration extension in Azure Data Studio. -+ Last updated 04/26/2022 # Migrate databases at scale using automation (Preview)
If you receive the error *"The subscription isn't registered to use namespace 'M
- For Azure PowerShell reference documentation for SQL Server database migrations, see [Az.DataMigration](/powershell/module/az.datamigration). - For Azure CLI reference documentation for SQL Server database migrations, see [az datamigration](/cli/azure/datamigration).-- For Azure Samples code repository, see [data-migration-sql](https://github.com/Azure-Samples/data-migration-sql)
+- For Azure Samples code repository, see [data-migration-sql](https://github.com/Azure-Samples/data-migration-sql)
dms Migration Using Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migration-using-azure-data-studio.md
Title: Migrate databases by using the Azure SQL Migration extension for Azure Data Studio description: Learn how to use the Azure SQL Migration extension in Azure Data Studio to migrate databases with Azure Database Migration Service. -+ Last updated 09/28/2022
dms Pre Reqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/pre-reqs.md
Title: Prerequisites for Azure Database Migration Service description: Learn about an overview of the prerequisites for using the Azure Database Migration Service to perform database migrations. -+ Last updated 02/25/2020 # Overview of prerequisites for using the Azure Database Migration Service
dms Quickstart Create Data Migration Service Hybrid Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/quickstart-create-data-migration-service-hybrid-portal.md
Title: "Quickstart: Create a hybrid mode instance with Azure portal" description: Use the Azure portal to create an instance of Azure Database Migration Service in hybrid mode. -+ Last updated 03/13/2020
+ - seo-lt-2019
+ - mode-ui
+ - subject-rbac-steps
# Quickstart: Create a hybrid mode instance with Azure portal & Azure Database Migration Service
dms Quickstart Create Data Migration Service Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/quickstart-create-data-migration-service-portal.md
Title: "Quickstart: Create an instance using the Azure portal" description: Use the Azure portal to create an instance of Azure Database Migration Service. -+ Last updated 01/29/2021
+ - seo-lt-2019
+ - mode-ui
# Quickstart: Create an instance of the Azure Database Migration Service by using the Azure portal
dms Resource Custom Roles Sql Database Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-custom-roles-sql-database-ads.md
Title: "Custom roles for SQL Server to Azure SQL Database (preview) migrations in Azure Data Studio" description: Learn how to use custom roles for SQL Server to Azure SQL Database (preview) migrations in Azure Data Studio. -+ Last updated 09/28/2022 # Custom roles for SQL Server to Azure SQL Database (preview) migrations in Azure Data Studio
dms Resource Custom Roles Sql Db Managed Instance Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-custom-roles-sql-db-managed-instance-ads.md
Title: "Custom roles: Online SQL Server to SQL Managed Instance migrations using ADS" description: Learn to use the custom roles for SQL Server to Azure SQL Managed Instance migrations. -+ Last updated 05/02/2022 # Custom roles for SQL Server to Azure SQL Managed Instance migrations using ADS
dms Resource Custom Roles Sql Db Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-custom-roles-sql-db-managed-instance.md
Title: "Custom roles: Online SQL Server to SQL Managed Instance migrations" description: Learn to use the custom roles for SQL Server to Azure SQL Managed Instance online migrations. -+ Last updated 02/08/2021 # Custom roles for SQL Server to Azure SQL Managed Instance online migrations
dms Resource Custom Roles Sql Db Virtual Machine Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-custom-roles-sql-db-virtual-machine-ads.md
Title: "Custom roles: Online SQL Server to Azure Virtual Machines migrations with ADS" description: Learn to use the custom roles for SQL Server to Azure VMs migrations. -+ Last updated 05/02/2022 # Custom roles for SQL Server to Azure Virtual Machines migrations using ADS
dms Resource Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-network-topologies.md
Title: Network topologies for SQL Managed Instance migrations description: Learn the source and target configurations for Azure SQL Managed Instance migrations using the Azure Database Migration Service. -+ Last updated 01/08/2020 # Network topologies for Azure SQL Managed Instance migrations using Azure Database Migration Service
Use this network topology if your environment requires one or more of the follow
## Next steps - For an overview of Azure Database Migration Service, see the article [What is Azure Database Migration Service?](dms-overview.md).-- For current information about regional availability of Azure Database Migration Service, see the [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=database-migration) page.
+- For current information about regional availability of Azure Database Migration Service, see the [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=database-migration) page.
dms Resource Scenario Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-scenario-status.md
description: Learn which migration scenarios are currently supported for Azure Database Migration Service and their availability status. Last updated 06/13/2022 # Azure Database Migration Service supported scenarios
dms Tutorial Azure Postgresql To Azure Postgresql Online Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-azure-postgresql-to-azure-postgresql-online-portal.md
Title: "Tutorial: Migrate Azure Database for PostgreSQL to Azure Database for PostgreSQL online via the Azure portal" description: Learn to perform an online migration from one Azure Database for PostgreSQL to another Azure Database for PostgreSQL by using Azure Database Migration Service via the Azure portal. -+ Last updated 07/21/2020 # Tutorial: Migrate/Upgrade Azure Database for PostgreSQL - Single Server to Azure Database for PostgreSQL - Single Server online using DMS via the Azure portal
dms Tutorial Login Migration Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-login-migration-ads.md
Last updated 01/31/2023
dms Tutorial Mongodb Cosmos Db Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mongodb-cosmos-db-online.md
Title: "Tutorial: Migrate MongoDB online to Azure Cosmos DB for MongoDB" description: Learn to migrate from MongoDB on-premises to Azure Cosmos DB for MongoDB online by using Azure Database Migration Service. -+ Last updated 09/21/2021
+ - seo-nov-2020
+ - ignite-2022
# Tutorial: Migrate MongoDB to Azure Cosmos DB for MongoDB online using DMS
dms Tutorial Mongodb Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mongodb-cosmos-db.md
Title: "Tutorial: Migrate MongoDB offline to Azure Cosmos DB for MongoDB" description: Migrate from MongoDB on-premises to Azure Cosmos DB for MongoDB offline via Azure Database Migration Service. -+ Last updated 09/21/2021
+ - seo-lt-2019
+ - ignite-2022
# Tutorial: Migrate MongoDB to Azure Cosmos DB for MongoDB offline
dms Tutorial Mysql Azure Mysql Offline Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mysql-azure-mysql-offline-portal.md
Title: "Tutorial: Migrate MySQL to Azure Database for MySQL offline using DMS" description: "Learn to perform an offline migration from MySQL on-premises to Azure Database for MySQL by using Azure Database Migration Service." -+ Last updated 04/11/2021
+ - seo-lt-2019
+ - ignite-2022
# Tutorial: Migrate MySQL to Azure Database for MySQL offline using DMS
dms Tutorial Mysql Azure Single To Flex Offline Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mysql-azure-single-to-flex-offline-portal.md
Title: "Tutorial: Migrate Azure Database for MySQL - Single Server to Flexible Server offline using DMS via the Azure portal" description: "Learn to perform an offline migration from Azure Database for MySQL - Single Server to Flexible Server by using Azure Database Migration Service." -+ Last updated 09/17/2022
dms Tutorial Mysql Azure Single To Flex Online Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mysql-azure-single-to-flex-online-portal.md
Title: "Tutorial: Migrate Azure Database for MySQL - Single Server to Flexible Server online using DMS via the Azure portal" description: "Learn to perform an online migration from Azure Database for MySQL - Single Server to Flexible Server by using Azure Database Migration Service." -+ Last updated 09/17/2022
dms Tutorial Postgresql Azure Postgresql Online Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-postgresql-azure-postgresql-online-portal.md
Title: "Tutorial: Migrate PostgreSQL to Azure Database for PostgreSQL online via the Azure portal" description: Learn to perform an online migration from PostgreSQL on-premises to Azure Database for PostgreSQL by using Azure Database Migration Service via the Azure portal. -+ Last updated 04/11/2020
+ - seo-lt-2019
+ - ignite-2022
# Tutorial: Migrate PostgreSQL to Azure Database for PostgreSQL online using DMS via the Azure portal
dms Tutorial Postgresql Azure Postgresql Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-postgresql-azure-postgresql-online.md
Title: "Tutorial: Migrate PostgreSQL to Azure Database for PostgreSQL online via the Azure CLI" description: Learn to perform an online migration from PostgreSQL on-premises to Azure Database for PostgreSQL by using Azure Database Migration Service via the CLI. -+ Last updated 04/11/2020
+ - seo-lt-2019
+ - devx-track-azurecli
+ - ignite-2022
# Tutorial: Migrate PostgreSQL to Azure Database for PostgreSQL online using DMS via the Azure CLI
dms Tutorial Rds Postgresql Server Azure Db For Postgresql Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-rds-postgresql-server-azure-db-for-postgresql-online.md
Title: "Tutorial: Migrate RDS PostgreSQL online to Azure Database for PostgreSQL" description: Learn to perform an online migration from RDS PostgreSQL to Azure Database for PostgreSQL by using the Azure Database Migration Service. -+ Last updated 04/11/2020 # Tutorial: Migrate RDS PostgreSQL to Azure DB for PostgreSQL online using DMS
dms Tutorial Sql Server Azure Sql Database Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-azure-sql-database-offline-ads.md
Title: "Tutorial: Migrate SQL Server to Azure SQL Database (preview) offline in Azure Data Studio" description: Learn how to migrate on-premises SQL Server to Azure SQL Database (preview) offline by using Azure Data Studio and Azure Database Migration Service. -+ Last updated 01/12/2023 # Tutorial: Migrate SQL Server to Azure SQL Database (preview) offline in Azure Data Studio
dms Tutorial Sql Server Managed Instance Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-offline-ads.md
Title: "Tutorial: Migrate SQL Server to Azure SQL Managed Instance offline in Azure Data Studio" description: Learn how to migrate on-premises SQL Server to Azure SQL Managed Instance offline by using Azure Data Studio and Azure Database Migration Service. -+ Last updated 01/26/2023 # Tutorial: Migrate SQL Server to Azure SQL Managed Instance offline in Azure Data Studio
Migrating to Azure SQL Managed Instance by using the Azure SQL extension for Azu
- Complete a quickstart to [migrate a database to SQL Managed Instance by using the T-SQL RESTORE command](/azure/azure-sql/managed-instance/restore-sample-database-quickstart). - Learn more about [SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview). - Learn how to [connect apps to SQL Managed Instance](/azure/azure-sql/managed-instance/connect-application-instance).-- To troubleshoot, review [Known issues](known-issues-azure-sql-migration-azure-data-studio.md).
+- To troubleshoot, review [Known issues](known-issues-azure-sql-migration-azure-data-studio.md).
dms Tutorial Sql Server Managed Instance Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-online-ads.md
Title: "Tutorial: Migrate SQL Server to Azure SQL Managed Instance online by using Azure Data Studio" description: Learn how to migrate on-premises SQL Server to Azure SQL Managed Instance online by using Azure Data Studio and Azure Database Migration Service. -+ Last updated 01/26/2023 # Tutorial: Migrate SQL Server to Azure SQL Managed Instance online in Azure Data Studio
Migrating to Azure SQL Managed Instance by using the Azure SQL extension for Azu
* For a tutorial showing you how to migrate a database to SQL Managed Instance using the T-SQL RESTORE command, see [Restore a backup to SQL Managed Instance using the restore command](/azure/azure-sql/managed-instance/restore-sample-database-quickstart). * For information about SQL Managed Instance, see [What is SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview). * For information about connecting apps to SQL Managed Instance, see [Connect applications](/azure/azure-sql/managed-instance/connect-application-instance).
-* To troubleshoot, review [Known issues](known-issues-azure-sql-migration-azure-data-studio.md).
+* To troubleshoot, review [Known issues](known-issues-azure-sql-migration-azure-data-studio.md).
dms Tutorial Sql Server Managed Instance Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-online.md
Title: "Tutorial: Migrate SQL Server online to SQL Managed Instance"
-description: Learn to perform an online migration from SQL Server to an Azure SQL Managed Instance by using Azure Database Migration Service (classic)
-
+description: Learn to perform an online migration from SQL Server to an Azure SQL Managed Instance by using Azure Database Migration Service (classic)
-+ Last updated 02/08/2023
+ - seo-lt-2019
+ - ignite-2022
# Tutorial: Migrate SQL Server to an Azure SQL Managed Instance online using DMS (classic)
dms Tutorial Sql Server To Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-azure-sql.md
Title: "Tutorial: Migrate SQL Server offline to Azure SQL Database" description: Learn to migrate from SQL Server to Azure SQL Database offline by using Azure Database Migration Service (classic). -+ Last updated 02/08/2023
+ - seo-lt-2019
+ - ignite-2022
# Tutorial: Migrate SQL Server to Azure SQL Database using DMS (classic)
dms Tutorial Sql Server To Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-managed-instance.md
Title: "Tutorial: Migrate SQL Server to SQL Managed Instance" description: Learn to migrate from SQL Server to an Azure SQL Managed Instance by using Azure Database Migration Service (classic). -+ Last updated 02/08/2023
+ - seo-lt-2019
+ - fasttrack-edit
+ - ignite-2022
# Tutorial: Migrate SQL Server to an Azure SQL Managed Instance offline using DMS (classic)
dms Tutorial Sql Server To Virtual Machine Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-virtual-machine-offline-ads.md
Title: "Tutorial: Migrate SQL Server to SQL Server on Azure Virtual Machines offline in Azure Data Studio" description: Learn how to migrate on-premises SQL Server to SQL Server on Azure Virtual Machines offline by using Azure Data Studio and Azure Database Migration Service. -+ Last updated 01/12/2023 # Tutorial: Migrate SQL Server to SQL Server on Azure Virtual Machines offline in Azure Data Studio
dms Tutorial Sql Server To Virtual Machine Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-virtual-machine-online-ads.md
Title: "Tutorial: Migrate SQL Server to SQL Server on Azure Virtual Machine online using Azure Data Studio" description: Learn how to migrate on-premises SQL Server to SQL Server on Azure Virtual Machines online by using Azure Data Studio and Azure Database Migration Service. -+ Last updated 01/26/2023 # Tutorial: Migrate SQL Server to SQL Server on Azure Virtual Machines online in Azure Data Studio
Migrating to SQL Server on Azure VMs by using the Azure SQL extension for Azure
* To learn how to migrate a database to SQL Server on Azure Virtual Machines by using the T-SQL RESTORE command, see [Migrate a SQL Server database to SQL Server on a virtual machine](/azure/azure-sql/virtual-machines/windows/migrate-to-vm-from-sql-server). * For information about SQL Server on Azure Virtual Machines, see [Overview of SQL Server on Azure Windows Virtual Machines](/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview). * For information about connecting apps to SQL Server on Azure Virtual Machines, see [Connect applications](/azure/azure-sql/virtual-machines/windows/ways-to-connect-to-sql).
-* To troubleshoot, review [Known issues](known-issues-azure-sql-migration-azure-data-studio.md).
+* To troubleshoot, review [Known issues](known-issues-azure-sql-migration-azure-data-studio.md).
dms Tutorial Transparent Data Encryption Migration Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-transparent-data-encryption-migration-ads.md
Last updated 02/03/2023 # Tutorial: Migrate TDE-enabled databases (preview) to Azure SQL in Azure Data Studio
dns Dns Get Started Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-get-started-bicep.md
Last updated 09/27/2022 -+ #Customer intent: As an administrator or developer, I want to learn how to configure Azure DNS using Bicep so I can use Azure DNS for my name resolution.
Remove-AzResourceGroup -Name exampleRG
In this quickstart, you created a: - DNS zone-- `A` record
+- `A` record
dns Dns Get Started Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-get-started-template.md
Last updated 09/27/2022 -+ #Customer intent: As an administrator or developer, I want to learn how to configure Azure DNS using Azure ARM template so I can use Azure DNS for my name resolution.
dns Dns Private Resolver Get Started Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-bicep.md
Last updated 10/07/2022 -+ #Customer intent: As an administrator or developer, I want to learn how to create Azure DNS Private Resolver using Bicep so I can use Azure DNS Private Resolver as forwarder.
dns Dns Private Resolver Get Started Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-template.md
Last updated 10/07/2022 -+ #Customer intent: As an administrator or developer, I want to learn how to create Azure DNS Private Resolver using ARM template so I can use Azure DNS Private Resolver as forwarder.
dns Private Resolver Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-resolver-architecture.md
Consider the following hub and spoke VNet topology in Azure with a private resol
- The DNS forwarding ruleset is linked to the spoke VNet. - A ruleset rule is configured to forward queries for the private zone to the inbound endpoint.
-**DNS resolution in the hub VNet**: The virtual network link from the private zone to the Hub VNet enables resources inside the hub VNet to automatically resolve DNS records in **azure.contoso.com** using Azure-provided DNS ([168.63.129.16](../virtual-network/what-is-ip-address-168-63-129-16.md)). All other namespaces are also resolved using Azure-provided DNS. The hub VNet doesn't use ruleset rules to resolve DNS names because it is not linked to the ruleset. To use forwarding rules in the hub VNet, create and link another ruleset to the Hub VNet.
+**DNS resolution in the hub VNet**: The virtual network link from the private zone to the Hub VNet enables resources inside the hub VNet to automatically resolve DNS records in **azure.contoso.com** using Azure-provided DNS ([168.63.129.16](../virtual-network/what-is-ip-address-168-63-129-16.md)). All other namespaces are also resolved using Azure-provided DNS. The hub VNet doesn't use ruleset rules to resolve DNS names because it isn't linked to the ruleset. To use forwarding rules in the hub VNet, create and link another ruleset to the Hub VNet.
-**DNS resolution in the spoke VNet**: The virtual network link from the ruleset to the spoke VNet enables the spoke VNet to resolve **azure.contoso.com** using the configured forwarding rule. A link from the private zone to the spoke VNet is not required here. The spoke VNet sends queries for **azure.contoso.com**, and any other namespaces that have been configured in the ruleset, to the hub VNet. DNS queries that don't match a ruleset rule use Azure-provided DNS.
+**DNS resolution in the spoke VNet**: The virtual network link from the ruleset to the spoke VNet enables the spoke VNet to resolve **azure.contoso.com** using the configured forwarding rule. A link from the private zone to the spoke VNet isn't required here. The spoke VNet sends queries for **azure.contoso.com** to the hub's inbound endpoint. Other namespaces are also resolved for the spoke VNet using the linked ruleset if rules for those names are configured in a rule. DNS queries that don't match a ruleset rule use Azure-provided DNS.
> [!IMPORTANT] > In this example configuration, the hub VNet must be linked to the private zone, but must **not** be linked to a forwarding ruleset with an inbound endpoint forwarding rule. Linking a forwarding ruleset that contains a rule with the inbound endpoint as a destination to the same VNet where the inbound endpoint is provisioned can cause DNS resolution loops.
Consider the following hub and spoke VNet topology with an inbound endpoint prov
**DNS resolution in the hub VNet**: The virtual network link from the private zone to the Hub VNet enables resources inside the hub VNet to automatically resolve DNS records in **azure.contoso.com** using Azure-provided DNS ([168.63.129.16](../virtual-network/what-is-ip-address-168-63-129-16.md)). If configured, ruleset rules determine how DNS names are resolved. Namespaces that don't match a ruleset rule are resolved using Azure-provided DNS.
-**DNS resolution in the spoke VNet**: In this example, the spoke VNet sends all of its DNS traffic to the inbound endpoint in the Hub VNet. Since **azure.contoso.com** has a virtual network link to the Hub VNet, all resources in the Hub can resolve **azure.contoso.com**, including the inbound endpoint (10.10.0.4). The spoke VNet also resolves all DNS names using rules provisioned in a forwarding ruleset if one is present and linked to the hub VNet.
+**DNS resolution in the spoke VNet**: In this example, the spoke VNet sends all of its DNS traffic to the inbound endpoint in the Hub VNet. Since **azure.contoso.com** has a virtual network link to the Hub VNet, all resources in the Hub can resolve **azure.contoso.com**, including the inbound endpoint (10.10.0.4). Thus, the spoke uses the hub inbound endpoint to resolve the private zone. Other DNS names are resolved for the spoke VNet according to rules provisioned in a forwarding ruleset, if they exist.
> [!NOTE]
-> In the centralized DNS architecture scenario, both the hub and the spoke VNets can use the optional hub-linked ruleset when resolving DNS names. This is because all DNS traffic from the spoke VNet is being sent to the hub due to the VNet's custom DNS setting. The hub VNet doesn't require an outbound endpoint or ruleset here, but if one is provisioned and linked to the hub (as shown in Figure 2), both the hub and spoke VNets will use the forwarding rules. As mentioned previously, it is important that a forwarding rule for the private zone is not present in the ruleset because this configuration can cause a DNS resolution loop.
+> In the centralized DNS architecture scenario, both the hub and the spoke VNets can use the optional hub-linked ruleset when resolving DNS names. This is because all DNS traffic from the spoke VNet is being sent to the hub due to the VNet's custom DNS setting. The hub VNet doesn't require an outbound endpoint or ruleset here, but if one is provisioned and linked to the hub (as shown in Figure 2), both the hub and spoke VNets will use the forwarding rules. As mentioned previously, it is important that a forwarding rule for the private zone isn't present in the ruleset because this configuration can cause a DNS resolution loop.
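The changed passages above describe the same matching rule from two angles: a DNS query that matches a forwarding rule's domain suffix is sent to that rule's destination, and everything else falls back to Azure-provided DNS (168.63.129.16). As a toy illustration of that matching concept only — not Azure's implementation; the rule suffixes and destination IPs below are made up:

```python
# Toy sketch of DNS forwarding-ruleset matching (illustrative only):
# a query matching a rule's domain suffix is forwarded to that rule's
# destination; unmatched queries fall back to Azure-provided DNS.
AZURE_PROVIDED_DNS = "168.63.129.16"

ruleset = {
    # domain suffix -> destination DNS server (e.g. a hub inbound endpoint)
    "azure.contoso.com.": "10.10.0.4",
    "corp.fabrikam.com.": "192.168.1.53",
}

def resolve_target(query_name: str) -> str:
    """Pick the forwarding destination for a query; longest matching suffix wins."""
    name = query_name.rstrip(".") + "."
    matches = [s for s in ruleset if name == s or name.endswith("." + s)]
    if not matches:
        return AZURE_PROVIDED_DNS
    return ruleset[max(matches, key=len)]

print(resolve_target("vm1.azure.contoso.com"))  # -> 10.10.0.4
print(resolve_target("www.example.com"))        # -> 168.63.129.16
```

This also shows why linking a ruleset that forwards the private zone back to the VNet hosting the inbound endpoint is dangerous: the endpoint's own queries for that zone would match the rule and loop back to itself.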
## Next steps
dns Find Unhealthy Dns Records https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/scripts/find-unhealthy-dns-records.md
Last updated 10/04/2022 -+ # Find unhealthy DNS records in Azure DNS - PowerShell script sample
education-hub Enroll Renew Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/azure-dev-tools-teaching/enroll-renew-subscription.md
This article describes the process for enrolling in Azure Dev Tools for Teaching
## Enroll a new subscription
-1. Navigate to the [Azure Dev Tools for Teaching webpage](https://azure.microsoft.com/education/institutions/).
+1. Navigate to the [Azure Dev Tools for Teaching webpage](https://portal.azureforeducation.microsoft.com/).
1. Select the **Sign up** button. 1. Select **Enroll or Renew** on the Azure Dev Tools for Teaching banner. 1. Select the type of subscription you're enrolling:
event-grid Add Identity Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/add-identity-roles.md
Title: Add managed identity to a role on Azure Event Grid destination description: This article describes how to add managed identity to Azure roles on destinations such as Azure Service Bus and Azure Event Hubs. + Last updated 03/25/2021
az role assignment create --role "$role" --assignee "$topic_pid" --scope "$sbust
``` ## Next steps
-Now that you have assigned a system-assigned identity to your system topic, custom topic, or domain, and added the identity to appropriate roles on destinations, see [Deliver events using the managed identity](managed-service-identity.md) on delivering events to destinations using the identity.
+Now that you have assigned a system-assigned identity to your system topic, custom topic, or domain, and added the identity to appropriate roles on destinations, see [Deliver events using the managed identity](managed-service-identity.md) on delivering events to destinations using the identity.
event-grid Configure Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/configure-firewall.md
Title: Configure IP firewall for Azure Event Grid topics or domains description: This article describes how to configure firewall settings for Event Grid topics or domains. + Last updated 03/07/2022
event-grid Custom Event To Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/custom-event-to-function.md
The second example uses PowerShell to perform similar steps.
2. Set the following variables. After you copy and paste each command, update the **topic name** and **resource group name** before you run the command: ```powershell
- $resourceGroupName = <resource group name>
+ $resourceGroupName = "RESOURCEGROUPNAME"
``` ```powershell
- $topicName = <topic name>
+ $topicName = "TOPICNAME"
``` 3. Run the following commands to get the **endpoint** and the **keys** for the topic:
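Step 3's lookups return the topic's publish endpoint and its access keys. For orientation, here is a hedged Python sketch of the Azure Resource Manager URLs that such lookups correspond to — the subscription ID, resource names, and api-version below are placeholders/assumptions, and the snippet only builds the URLs without making any request:

```python
# Sketch: the ARM resource paths behind "get the endpoint and the keys"
# for an Event Grid custom topic. All identifiers are placeholders.
ARM = "https://management.azure.com"
subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
resource_group = "RESOURCEGROUPNAME"
topic_name = "TOPICNAME"
api_version = "2022-06-15"  # assumed; check the current Event Grid API version

base = (f"{ARM}/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
        f"/providers/Microsoft.EventGrid/topics/{topic_name}")

get_topic_url = f"{base}?api-version={api_version}"           # GET  -> properties.endpoint
list_keys_url = f"{base}/listKeys?api-version={api_version}"  # POST -> key1 / key2

print(get_topic_url)
print(list_keys_url)
```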
event-grid How To Filter Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/how-to-filter-events.md
Title: How to filter events for Azure Event Grid description: This article shows how to filter events (by event type, by subject, by operators and data, etc.) when creating an Event Grid subscription. + Last updated 08/11/2021
curl -X POST -H "aeg-sas-key: $key" -d "$event" $topicEndpoint
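The curl call above posts an event array to the topic endpoint with the `aeg-sas-key` header. As a rough Python equivalent — the endpoint and key values are placeholders, and the event fields follow the Event Grid event schema (id, eventType, subject, eventTime, data, dataVersion) — a minimal sketch:

```python
# Sketch: publish a custom event to an Event Grid topic endpoint.
# topic_endpoint and sas_key are placeholders for your topic's values.
import json
import uuid
from datetime import datetime, timezone
from urllib import request

topic_endpoint = "https://TOPICNAME.REGION-1.eventgrid.azure.net/api/events"  # placeholder
sas_key = "TOPIC_ACCESS_KEY"  # placeholder

def make_event(subject: str, data: dict) -> dict:
    """Build one event in the Event Grid event schema."""
    return {
        "id": str(uuid.uuid4()),
        "eventType": "recordInserted",
        "subject": subject,
        "eventTime": datetime.now(timezone.utc).isoformat(),
        "data": data,
        "dataVersion": "1.0",
    }

def publish(events: list) -> None:
    """POST the event array with the aeg-sas-key header, as curl does above."""
    req = request.Request(
        topic_endpoint,
        data=json.dumps(events).encode(),
        headers={"aeg-sas-key": sas_key, "Content-Type": "application/json"},
        method="POST",
    )
    request.urlopen(req)  # raises on non-2xx responses

# Example (uncomment with real endpoint/key):
# publish([make_event("myapp/vehicles/motorcycles", {"make": "Contoso"})])
```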
## Next steps To learn more about filters (event types, subject, and advanced), see [Understand event filtering for Event Grid subscriptions](event-filtering.md).
event-grid Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/metrics.md
- Title: Metrics supported by Azure Event Grid
-description: This article provides Azure Monitor metrics supported by the Azure Event Grid service.
- Previously updated : 03/17/2021
-# Metrics supported by Azure Event Grid
-This article provides lists of Event Grid metrics that are categorized by namespaces.
-
-## System topics
-
-|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
-|---|---|---|---|---|---|---|
-|AdvancedFilterEvaluationCount|Yes|Advanced Filter Evaluations|Count|Total|Total advanced filters evaluated across event subscriptions for this topic.|EventSubscriptionName|
-|DeadLetteredCount|Yes|Dead Lettered Events|Count|Total|Total dead lettered events matching to this event subscription|DeadLetterReason, EventSubscriptionName|
-|DeliveryAttemptFailCount|No|Delivery Failed Events|Count|Total|Total events failed to deliver to this event subscription|Error, ErrorType, EventSubscriptionName|
-|DeliverySuccessCount|Yes|Delivered Events|Count|Total|Total events delivered to this event subscription|EventSubscriptionName|
-|DestinationProcessingDurationInMs|No|Destination Processing Duration|Milliseconds|Average|Destination processing duration in milliseconds|EventSubscriptionName|
-|DroppedEventCount|Yes|Dropped Events|Count|Total|Total dropped events matching to this event subscription|DropReason, EventSubscriptionName|
-|MatchedEventCount|Yes|Matched Events|Count|Total|Total events matched to this event subscription|EventSubscriptionName|
-|PublishFailCount|Yes|Publish Failed Events|Count|Total|Total events failed to publish to this topic|ErrorType, Error|
-|PublishSuccessCount|Yes|Published Events|Count|Total|Total events published to this topic|No Dimensions|
-|PublishSuccessLatencyInMs|Yes|Publish Success Latency|Milliseconds|Total|Publish success latency in milliseconds|No Dimensions|
-|UnmatchedEventCount|Yes|Unmatched Events|Count|Total|Total events not matching any of the event subscriptions for this topic|No Dimensions|
--
-## Custom topics
-
-|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
-||||||||
-|AdvancedFilterEvaluationCount|Yes|Advanced Filter Evaluations|Count|Total|Total advanced filters evaluated across event subscriptions for this topic.|EventSubscriptionName|
-|DeadLetteredCount|Yes|Dead Lettered Events|Count|Total|Total dead lettered events matching to this event subscription|DeadLetterReason, EventSubscriptionName|
-|DeliveryAttemptFailCount|No|Delivery Failed Events|Count|Total|Total events failed to deliver to this event subscription|Error, ErrorType, EventSubscriptionName|
-|DeliverySuccessCount|Yes|Delivered Events|Count|Total|Total events delivered to this event subscription|EventSubscriptionName|
-|DestinationProcessingDurationInMs|No|Destination Processing Duration|Milliseconds|Average|Destination processing duration in milliseconds|EventSubscriptionName|
-|DroppedEventCount|Yes|Dropped Events|Count|Total|Total dropped events matching to this event subscription|DropReason, EventSubscriptionName|
-|MatchedEventCount|Yes|Matched Events|Count|Total|Total events matched to this event subscription|EventSubscriptionName|
-|PublishFailCount|Yes|Publish Failed Events|Count|Total|Total events failed to publish to this topic|ErrorType, Error|
-|PublishSuccessCount|Yes|Published Events|Count|Total|Total events published to this topic|No Dimensions|
-|PublishSuccessLatencyInMs|Yes|Publish Success Latency|Milliseconds|Total|Publish success latency in milliseconds|No Dimensions|
-|UnmatchedEventCount|Yes|Unmatched Events|Count|Total|Total events not matching any of the event subscriptions for this topic|No Dimensions|
-
-## Domains
-
-|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
-||||||||
-|AdvancedFilterEvaluationCount|Yes|Advanced Filter Evaluations|Count|Total|Total advanced filters evaluated across event subscriptions for this topic.|Topic, EventSubscriptionName, DomainEventSubscriptionName|
-|DeadLetteredCount|Yes|Dead Lettered Events|Count|Total|Total dead lettered events matching to this event subscription|Topic, EventSubscriptionName, DomainEventSubscriptionName, DeadLetterReason|
-|DeliveryAttemptFailCount|No|Delivery Failed Events|Count|Total|Total events failed to deliver to this event subscription|Topic, EventSubscriptionName, DomainEventSubscriptionName, Error, ErrorType|
-|DeliverySuccessCount|Yes|Delivered Events|Count|Total|Total events delivered to this event subscription|Topic, EventSubscriptionName, DomainEventSubscriptionName|
-|DestinationProcessingDurationInMs|No|Destination Processing Duration|Milliseconds|Average|Destination processing duration in milliseconds|Topic, EventSubscriptionName, DomainEventSubscriptionName|
-|DroppedEventCount|Yes|Dropped Events|Count|Total|Total dropped events matching to this event subscription|Topic, EventSubscriptionName, DomainEventSubscriptionName, DropReason|
-|MatchedEventCount|Yes|Matched Events|Count|Total|Total events matched to this event subscription|Topic, EventSubscriptionName, DomainEventSubscriptionName|
-|PublishFailCount|Yes|Publish Failed Events|Count|Total|Total events failed to publish to this topic|Topic, ErrorType, Error|
-|PublishSuccessCount|Yes|Published Events|Count|Total|Total events published to this topic|Topic|
-|PublishSuccessLatencyInMs|Yes|Publish Success Latency|Milliseconds|Total|Publish success latency in milliseconds|No Dimensions|
-
-## Event subscriptions
-
-|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
-||||||||
-|DeadLetteredCount|Yes|Dead Lettered Events|Count|Total|Total dead lettered events matching to this event subscription|DeadLetterReason|
-|DeliveryAttemptFailCount|No|Delivery Failed Events|Count|Total|Total events failed to deliver to this event subscription|Error, ErrorType|
-|DeliverySuccessCount|Yes|Delivered Events|Count|Total|Total events delivered to this event subscription|No Dimensions|
-|DestinationProcessingDurationInMs|No|Destination Processing Duration|Milliseconds|Average|Destination processing duration in milliseconds|No Dimensions|
-|DroppedEventCount|Yes|Dropped Events|Count|Total|Total dropped events matching to this event subscription|DropReason|
-|MatchedEventCount|Yes|Matched Events|Count|Total|Total events matched to this event subscription|No Dimensions|
--
-## Extension topics
-
-|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
-||||||||
-|PublishFailCount|Yes|Publish Failed Events|Count|Total|Total events failed to publish to this topic|ErrorType, Error|
-|PublishSuccessCount|Yes|Published Events|Count|Total|Total events published to this topic|No Dimensions|
-|PublishSuccessLatencyInMs|Yes|Publish Success Latency|Milliseconds|Total|Publish success latency in milliseconds|No Dimensions|
-|UnmatchedEventCount|Yes|Unmatched Events|Count|Total|Total events not matching any of the event subscriptions for this topic|No Dimensions|
-
-## Partner namespaces
-
-|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
-||||||||
-|DeadLetteredCount|Yes|Dead Lettered Events|Count|Total|Total dead lettered events matching to this event subscription|DeadLetterReason, EventSubscriptionName|
-|DeliveryAttemptFailCount|No|Delivery Failed Events|Count|Total|Total events failed to deliver to this event subscription|Error, ErrorType, EventSubscriptionName|
-|DeliverySuccessCount|Yes|Delivered Events|Count|Total|Total events delivered to this event subscription|EventSubscriptionName|
-|DestinationProcessingDurationInMs|No|Destination Processing Duration|Milliseconds|Average|Destination processing duration in milliseconds|EventSubscriptionName|
-|DroppedEventCount|Yes|Dropped Events|Count|Total|Total dropped events matching to this event subscription|DropReason, EventSubscriptionName|
-|MatchedEventCount|Yes|Matched Events|Count|Total|Total events matched to this event subscription|EventSubscriptionName|
-|PublishFailCount|Yes|Publish Failed Events|Count|Total|Total events failed to publish to this topic|ErrorType, Error|
-|PublishSuccessCount|Yes|Published Events|Count|Total|Total events published to this topic|No Dimensions|
-|PublishSuccessLatencyInMs|Yes|Publish Success Latency|Milliseconds|Total|Publish success latency in milliseconds|No Dimensions|
-|UnmatchedEventCount|Yes|Unmatched Events|Count|Total|Total events not matching any of the event subscriptions for this topic|No Dimensions|
--
-## Partner topics
-
-|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
-||||||||
-|AdvancedFilterEvaluationCount|Yes|Advanced Filter Evaluations|Count|Total|Total advanced filters evaluated across event subscriptions for this topic.|EventSubscriptionName|
-|DeadLetteredCount|Yes|Dead Lettered Events|Count|Total|Total dead lettered events matching to this event subscription|DeadLetterReason, EventSubscriptionName|
-|DeliveryAttemptFailCount|No|Delivery Failed Events|Count|Total|Total events failed to deliver to this event subscription|Error, ErrorType, EventSubscriptionName|
-|DeliverySuccessCount|Yes|Delivered Events|Count|Total|Total events delivered to this event subscription|EventSubscriptionName|
-|DestinationProcessingDurationInMs|No|Destination Processing Duration|Milliseconds|Average|Destination processing duration in milliseconds|EventSubscriptionName|
-|DroppedEventCount|Yes|Dropped Events|Count|Total|Total dropped events matching to this event subscription|DropReason, EventSubscriptionName|
-|MatchedEventCount|Yes|Matched Events|Count|Total|Total events matched to this event subscription|EventSubscriptionName|
-|PublishFailCount|Yes|Publish Failed Events|Count|Total|Total events failed to publish to this topic|ErrorType, Error|
-|PublishSuccessCount|Yes|Published Events|Count|Total|Total events published to this topic|No Dimensions|
-|UnmatchedEventCount|Yes|Unmatched Events|Count|Total|Total events not matching any of the event subscriptions for this topic|No Dimensions|
--
-## Next steps
-See the following article: [Diagnostic logs](diagnostic-logs.md)
event-grid Monitor Event Delivery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/monitor-event-delivery.md
Last updated 03/17/2021
This article describes how to use the portal to see metrics for Event Grid topics and subscriptions, and create alerts on them. > [!IMPORTANT]
-> For a list of metrics supported Azure Event Grid, see [Metrics](metrics.md).
+> For a list of metrics supported by Azure Event Grid, see [Metrics](../azure-monitor/essentials/metrics-supported.md#microsofteventgriddomains).
## View custom topic metrics
If you've published a custom topic, you can view the metrics for it.
:::image type="content" source="./media/monitor-event-delivery/system-topic-metrics-page.png" alt-text="System Topic - Metrics page"::: > [!IMPORTANT]
- > For a list of metrics supported Azure Event Grid, see [Metrics](metrics.md).
+ > For a list of metrics supported by Azure Event Grid, see [Metrics](../azure-monitor/essentials/metrics-supported.md#microsofteventgriddomains).
## Next steps See the following articles:
event-grid Post To Custom Topic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/post-to-custom-topic.md
Title: Post event to custom Azure Event Grid topic
description: This article describes how to post an event to a custom topic. It shows the format of the post and event data. Last updated 11/17/2022 - # Publish events to Azure Event Grid custom topics using access keys
event-grid Resize Images On Storage Blob Upload Event https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/resize-images-on-storage-blob-upload-event.md
description: 'Tutorial: Azure Event Grid can trigger on blob uploads in Azure St
Last updated 03/21/2022 ms.devlang: csharp, javascript-+ # Tutorial Step 2: Automate resizing uploaded images using Event Grid
event-grid Storage Upload Process Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/storage-upload-process-images.md
Last updated 02/09/2023 ms.devlang: csharp, javascript-+ # Step 1: Upload image data in the cloud with Azure Storage
event-hubs Event Hubs Bicep Namespace Event Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-bicep-namespace-event-hub.md
description: 'Quickstart: Create an Event Hubs namespace with an event hub and a
-+ Last updated 03/22/2022
In this article, you created an Event Hubs namespace and an event hub in the nam
[Understand the structure and syntax of Bicep files]: ../azure-resource-manager/bicep/file.md [Deploy resources with Bicep and Azure PowerShell]: ../azure-resource-manager/bicep/deploy-powershell.md
-[Deploy resource with Bicep and Azure CLI]: ../azure-resource-manager/bicep/deploy-cli.md
+[Deploy resource with Bicep and Azure CLI]: ../azure-resource-manager/bicep/deploy-cli.md
event-hubs Event Hubs Messaging Exceptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-messaging-exceptions.md
Title: Azure Event Hubs - exceptions (legacy) description: This article provides a list of Azure Event Hubs messaging exceptions and suggested actions. Previously updated : 02/10/2021 Last updated : 03/08/2023 ms.devlang: csharp
This error can occur for one of two reasons:
On the **Overview** page, in the **Show metrics** section, switch to the **Throughput** tab. Select the chart to open it in a larger window with 1-minute intervals on the x-axis. Look at the peak values and divide them by 60 to get incoming bytes/second or outgoing bytes/second. Use a similar approach to calculate the number of requests per second at peak times on the **Requests** tab.
- If you see values higher than number of TUs * limits (1 MB per second for ingress or 1000 requests for ingress/second, 2 MB per second for egress), increase the number of TUs by using the **Scale** (on the left menu) page of an Event Hubs namespace to manually scale higher or to use the [Auto-inflate](event-hubs-auto-inflate.md) feature of Event Hubs. Note that auto-Inflate can only increase up to 20 TUS. To raise it to exactly 40 TUs, submit a [support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
 If you see values higher than the number of TUs * limits (1 MB per second or 1,000 requests per second for ingress, 2 MB per second for egress), increase the number of TUs by using the **Scale** page (on the left menu) of the Event Hubs namespace to manually scale higher, or use the [Auto-inflate](event-hubs-auto-inflate.md) feature of Event Hubs. You can scale up to 40 TUs whether you scale the namespace manually or automatically.
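The per-minute-to-per-second conversion and the comparison against TU limits described above can be sketched as follows; the peak values are made-up examples, not figures from the article:

```python
# Peak values read from the 1-minute-interval chart (illustrative numbers).
peak_incoming_bytes_per_minute = 90_000_000
peak_requests_per_minute = 84_000

# Convert per-minute peaks to per-second rates, as the article describes.
ingress_bytes_per_sec = peak_incoming_bytes_per_minute / 60
requests_per_sec = peak_requests_per_minute / 60

# Compare against the per-TU ingress limits: 1 MB/s and 1,000 requests/s.
tus = 1
needs_more_tus = (ingress_bytes_per_sec > tus * 1_000_000) or (requests_per_sec > tus * 1_000)
print(ingress_bytes_per_sec, requests_per_sec, needs_more_tus)
```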
### Error code 50008
event-hubs Event Hubs Resource Manager Namespace Event Hub Enable Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-resource-manager-namespace-event-hub-enable-capture.md
Title: Create an event hub with capture enabled - Azure Event Hubs | Microsoft D
description: Create an Azure Event Hubs namespace with one event hub and enable Capture using Azure Resource Manager template Last updated 08/26/2022-+ ms.devlang: azurecli
event-hubs Event Hubs Resource Manager Namespace Event Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-resource-manager-namespace-event-hub.md
Title: 'Quickstart: Create an event hub with consumer group - Azure Event Hubs' description: 'Quickstart: Create an Event Hubs namespace with an event hub and a consumer group using Azure Resource Manager templates' -+ Last updated 06/08/2021
event-hubs Resource Governance With App Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/resource-governance-with-app-groups.md
Title: Govern resources for client applications with application groups
description: Learn how to use application groups to govern resources for client applications that connect with Event Hubs. Last updated 10/12/2022-+ # Govern resources for client applications with application groups
expressroute Expressroute Howto Add Ipv6 Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-add-ipv6-cli.md
description: Learn how to add IPv6 support to connect to Azure deployments using
+ Last updated 09/27/2021
expressroute Expressroute Howto Add Ipv6 Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-add-ipv6-powershell.md
description: Learn how to add IPv6 support to connect to Azure deployments using
+ Last updated 03/02/2021
expressroute Expressroute Howto Circuit Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-circuit-classic.md
Last updated 11/05/2019 -- # Modify an ExpressRoute circuit using PowerShell (classic)
expressroute Expressroute Howto Coexist Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-coexist-resource-manager.md
Last updated 09/16/2021 --+ # Configure ExpressRoute and Site-to-Site coexisting connections using PowerShell > [!div class="op_single_selector"]
expressroute Expressroute Howto Erdirect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-erdirect.md
description: Learn how to use Azure PowerShell to configure Azure ExpressRoute D
+ Last updated 06/09/2022 - # How to configure ExpressRoute Direct
expressroute Expressroute Howto Expressroute Direct Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-expressroute-direct-cli.md
Last updated 12/14/2020 --+ # Configure ExpressRoute Direct by using the Azure CLI
expressroute Expressroute Howto Linkvnet Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-linkvnet-classic.md
Last updated 12/06/2019 - # Connect a virtual network to an ExpressRoute circuit using PowerShell (classic)
New-AzureDedicatedCircuitLink -ServiceKey "*****************************" -VNetN
## Next steps For more information about ExpressRoute, see the [ExpressRoute FAQ](expressroute-faqs.md).-
expressroute Expressroute Howto Linkvnet Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-linkvnet-cli.md
Last updated 07/18/2022 -+ # Tutorial: Connect a virtual network to an ExpressRoute circuit using Azure CLI
To learn how to configure route filters for Microsoft peering using Azure CLI, a
> [!div class="nextstepaction"] > [Configure route filters for Microsoft peering](how-to-routefilter-cli.md)-
expressroute How To Configure Custom Bgp Communities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/how-to-configure-custom-bgp-communities.md
description: Learn how to apply or update BGP community value for a new or an ex
+ Last updated 12/27/2022
expressroute How To Custom Route Alert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/how-to-custom-route-alert.md
+ Last updated 05/29/2020 - # Configure custom alerts to monitor advertised routes
expressroute Quickstart Create Expressroute Vnet Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/quickstart-create-expressroute-vnet-bicep.md
Last updated 03/24/2022 -+ # Quickstart: Create an ExpressRoute circuit with private peering using Bicep
expressroute Quickstart Create Expressroute Vnet Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/quickstart-create-expressroute-vnet-template.md
Last updated 10/12/2020 -+ # Quickstart: Create an ExpressRoute circuit with private peering using an ARM template
firewall-manager Create Policy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/create-policy-powershell.md
Last updated 08/16/2021 -+ # Quickstart: Create and update an Azure Firewall policy using Azure PowerShell
firewall-manager Quick Firewall Policy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/quick-firewall-policy-bicep.md
Last updated 07/05/2022 -+ # Quickstart: Create an Azure Firewall and a firewall policy - Bicep
firewall-manager Quick Firewall Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/quick-firewall-policy.md
Last updated 02/17/2021 -+ # Quickstart: Create an Azure Firewall and a firewall policy - ARM template
firewall-manager Quick Secure Virtual Hub Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/quick-secure-virtual-hub-bicep.md
Last updated 06/28/2022 -+ # Quickstart: Secure your virtual hub using Azure Firewall Manager - Bicep
firewall-manager Quick Secure Virtual Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/quick-secure-virtual-hub.md
Last updated 08/28/2020 -+ # Quickstart: Secure your virtual hub using Azure Firewall Manager - ARM template
firewall Deploy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/deploy-bicep.md
-+ Last updated 06/28/2022
firewall Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/deploy-cli.md
description: In this article, you learn how to deploy and configure Azure Firewa
+ Last updated 10/31/2022
firewall Deploy Ps Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/deploy-ps-policy.md
description: In this article, you learn how to deploy and configure Azure Firewa
+ Last updated 11/03/2022
firewall Deploy Rules Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/deploy-rules-powershell.md
description: In this article, you learn how to add or modify multiple Azure Fire
+ Last updated 02/23/2022
firewall Deploy Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/deploy-template.md
-+ Last updated 05/10/2021
firewall Firewall Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-diagnostics.md
You can view and analyze activity log data by using any of the following methods
* **Azure tools**: Retrieve information from the activity log through Azure PowerShell, the Azure CLI, the Azure REST API, or the Azure portal. Step-by-step instructions for each method are detailed in the [Activity operations with Resource Manager](../azure-monitor/essentials/activity-log.md) article. * **Power BI**: If you don't already have a [Power BI](https://powerbi.microsoft.com/pricing) account, you can try it for free. By using the [Azure Activity Logs content pack for Power BI](https://powerbi.microsoft.com/en-us/documentation/powerbi-content-pack-azure-audit-logs/), you can analyze your data with preconfigured dashboards that you can use as is or customize.
-* **Microsoft Sentinel**: You can connect Azure Firewall logs to Microsoft Sentinel, enabling you to view log data in workbooks, use it to create custom alerts, and incorporate it to improve your investigation. The Azure Firewall data connector in Microsoft Sentinel is currently in public preview. For more information, see [Connect data from Azure Firewall](../sentinel/data-connectors-reference.md#azure-firewall).
+* **Microsoft Sentinel**: You can connect Azure Firewall logs to Microsoft Sentinel, enabling you to view log data in workbooks, use it to create custom alerts, and incorporate it to improve your investigation. The Azure Firewall data connector in Microsoft Sentinel is currently in public preview. For more information, see [Connect data from Azure Firewall](../sentinel/data-connectors/azure-firewall.md).
See the following video by Mohit Kumar for an overview: > [!VIDEO https://www.microsoft.com/videoplayer/embed/RWI4nn]
firewall Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/overview.md
Last updated 11/07/2022
# What is Azure Firewall?
-Azure Firewall is a cloud-native and intelligent network firewall security service that provides the best of breed threat protection for your cloud workloads running in Azure. It's a fully stateful, firewall as a service with built-in high availability and unrestricted cloud scalability. It provides both east-west and north-south traffic inspection.
+Azure Firewall is a cloud-native and intelligent network firewall security service that provides best-of-breed threat protection for your cloud workloads running in Azure. It's a fully stateful firewall as a service with built-in high availability and unrestricted cloud scalability. It provides both east-west and north-south traffic inspection. To learn about east-west and north-south traffic, see [East-west and north-south traffic](/azure/architecture/framework/security/design-network-flow#east-west-and-north-south-traffic).
Azure Firewall is offered in three SKUs: Standard, Premium, and Basic.
firewall Protect Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/protect-azure-kubernetes-service.md
Title: Use Azure Firewall to protect Azure Kubernetes Service (AKS) clusters
description: Learn how to use Azure Firewall to protect Azure Kubernetes Service (AKS) clusters + Last updated 10/27/2022
firewall Quick Create Ipgroup Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/quick-create-ipgroup-bicep.md
-+ Last updated 08/25/2022
firewall Quick Create Ipgroup Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/quick-create-ipgroup-template.md
-+ Last updated 05/10/2021
firewall Quick Create Multiple Ip Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/quick-create-multiple-ip-bicep.md
-+ Last updated 08/11/2022
firewall Quick Create Multiple Ip Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/quick-create-multiple-ip-template.md
-+ Last updated 08/28/2020
firewall Sample Create Firewall Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/scripts/sample-create-firewall-test.md
ms.devlang: powershell
Last updated 11/19/2019 - # Create an Azure Firewall test environment
frontdoor Create Front Door Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-bicep.md
Last updated 07/08/2022
-+ #Customer intent: As an IT admin, I want to direct user traffic to ensure high availability of web applications.
frontdoor Create Front Door Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-template.md
Last updated 07/12/2022
-+ #Customer intent: As an IT admin, I want to direct user traffic to ensure high availability of web applications.
frontdoor Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/domain.md
Previously updated : 02/07/2023 Last updated : 03/06/2023
Sometimes, you might need to provide your own TLS certificates. Common scenarios
#### Certificate requirements
-When you create your TLS/SSL certificate, you must create a complete certificate chain with an allowed certificate authority (CA) that is part of the [Microsoft Trusted CA List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT). If you use a non-allowed CA, your request will be rejected. The root CA must be part of the [Microsoft Trusted CA List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT). If a certificate without complete chain is presented, the requests that involve that certificate aren't guaranteed to work as expected.
+To use your certificate with Azure Front Door, it must meet the following requirements:
-The common name (CN) of the certificate must match the domain configured in Azure Front Door.
-
-Azure Front Door doesn't support certificates with elliptic curve (EC) cryptography algorithms.
+- **Complete certificate chain:** When you create your TLS/SSL certificate, you must create a complete certificate chain with an allowed certificate authority (CA) that is part of the [Microsoft Trusted CA List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT). If you use a non-allowed CA, your request will be rejected. The root CA must be part of the [Microsoft Trusted CA List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT). If a certificate without a complete chain is presented, the requests that involve that certificate aren't guaranteed to work as expected.
+- **Common name:** The common name (CN) of the certificate must match the domain configured in Azure Front Door.
+- **Algorithm:** Azure Front Door doesn't support certificates with elliptic curve (EC) cryptography algorithms.
+- **File (content) type:** Your certificate must be uploaded to your key vault from a PFX file, which uses the `application/x-pkcs12` content type.
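A quick way to check a certificate against the common-name and content-type requirements listed above can be sketched with `openssl`. The domain and file names are placeholders; the self-signed certificate is generated only for illustration — in practice you would inspect the certificate issued by your CA:

```shell
# Generate a throwaway self-signed certificate for illustration
# (www.contoso.com is a placeholder domain).
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem \
  -days 1 -nodes -subj "/CN=www.contoso.com" 2>/dev/null

# The CN must match the domain configured in Azure Front Door.
openssl x509 -in cert.pem -noout -subject

# Key Vault expects the certificate uploaded from a PFX
# (application/x-pkcs12) file.
openssl pkcs12 -export -out cert.pfx -inkey key.pem -in cert.pem -passout pass:
```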
#### Import a certificate to Azure Key Vault
frontdoor Quickstart Create Front Door Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/quickstart-create-front-door-bicep.md
Last updated 03/30/2022
-+ #Customer intent: As an IT admin, I want to direct user traffic to ensure high availability of web applications.
frontdoor Quickstart Create Front Door Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/quickstart-create-front-door-template.md
Last updated 09/14/2020
-+ #Customer intent: As an IT admin, I want to direct user traffic to ensure high availability of web applications.
governance Create Blueprint Azurecli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/create-blueprint-azurecli.md
Title: 'Quickstart: Create a blueprint with the Azure CLI'
description: In this quickstart, you use Azure Blueprints to create, define, and deploy artifacts by using the Azure CLI. Last updated 08/17/2021 + # Quickstart: Define and assign an Azure blueprint with the Azure CLI
governance Create Blueprint Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/create-blueprint-powershell.md
Title: 'Quickstart: Create a blueprint with PowerShell'
description: In this quickstart, you use Azure Blueprints to create, define, and deploy artifacts by using PowerShell. Last updated 08/17/2021 -+ # Quickstart: Define and assign an Azure blueprint with PowerShell
governance Manage Assignments Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/how-to/manage-assignments-ps.md
Title: How to manage assignments with PowerShell
description: Learn how to manage blueprint assignments with the official Azure Blueprints PowerShell module, Az.Blueprint. Last updated 08/17/2021 + # How to manage assignments with PowerShell
governance Machine Configuration Azure Automation Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/machine-configuration-azure-automation-migration.md
description: This article provides process and technical guidance for customers
Last updated 03/06/2023 +
governance Machine Configuration Create Publish https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/machine-configuration-create-publish.md
description: Learn how to publish a machine configuration package file to Azure
Last updated 07/25/2022 +
governance Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/manage.md
Title: Manage your Azure subscriptions at scale with management groups - Azure G
description: Learn how to view, maintain, update, and delete your management group hierarchy. Last updated 12/01/2022 +
governance Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/resource-graph-samples.md
Title: Azure Resource Graph sample queries for management groups
description: Sample Azure Resource Graph queries for management groups showing use of resource types and tables to access management group details. Last updated 07/07/2022 -+ # Azure Resource Graph sample queries for management groups
governance Assign Policy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-powershell.md
Title: "Quickstart: New policy assignment with PowerShell"
description: In this quickstart, you use Azure PowerShell to create an Azure Policy assignment to identify non-compliant resources. Last updated 08/17/2021 + # Quickstart: Create a policy assignment to identify non-compliant resources using Azure PowerShell
governance Effects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/effects.md
These effects are currently supported in a policy definition:
- [Audit](#audit) - [AuditIfNotExists](#auditifnotexists) - [Deny](#deny)-- [DenyAction](#denyaction)
+- [DenyAction (preview)](#denyaction-preview)
- [DeployIfNotExists](#deployifnotexists) - [Disabled](#disabled) - [Manual (preview)](#manual-preview)
location of the Constraint template to use in Kubernetes to limit the allowed co
} } ```
-## DenyAction
+## DenyAction (preview)
`DenyAction` is used to block requests on intended action to resources. The only supported action today is `DELETE`. This effect will help prevent any accidental deletion of critical resources.
assignment.
`Microsoft.Authorization/policyAssignments`, `Microsoft.Authorization/denyAssignments`, `Microsoft.Blueprint/blueprintAssignments`, `Microsoft.Resources/deploymentStacks`, and `Microsoft.Authorization/locks` are all exempt from DenyAction enforcement to prevent lockout scenarios.
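As an illustration of the effect described above, a minimal policy rule using `denyAction` to block deletes might look like the following sketch (the storage account resource type is a hypothetical example, not taken from this changelog):

```json
{
  "if": {
    "field": "type",
    "equals": "Microsoft.Storage/storageAccounts"
  },
  "then": {
    "effect": "denyAction",
    "details": {
      "actionNames": [ "delete" ]
    }
  }
}
```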
+> [!NOTE]
+> Under preview, assignments with `denyAction` effect will show a `Not Started` compliance state.
+ #### Subscription deletion Policy won't block removal of resources that happens during a subscription deletion.
governance Evaluate Impact https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/evaluate-impact.md
reviews the request. When the policy definition effect is [Modify](./effects.md#
[Append](./effects.md#deny), or [DeployIfNotExists](./effects.md#deployifnotexists), Policy alters the request or adds to it. When the policy definition effect is [Audit](./effects.md#audit) or [AuditIfNotExists](./effects.md#auditifnotexists), Policy causes an Activity log entry to be created
-for new and updated resources. And when the policy definition effect is [Deny](./effects.md#deny) or [DenyAction](./effects.md#denyaction), Policy stops the creation or alteration of the request.
+for new and updated resources. And when the policy definition effect is [Deny](./effects.md#deny) or [DenyAction](./effects.md#denyaction-preview), Policy stops the creation or alteration of the request.
These outcomes are exactly as desired when you know the policy is defined correctly. However, it's important to validate a new policy works as intended before allowing it to change or block work. The
governance Get Compliance Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/get-compliance-data.md
description: Azure Policy evaluations and effects determine compliance. Learn ho
Last updated 11/03/2022 + # Get compliance data of Azure resources
governance Programmatically Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/programmatically-create.md
Title: Programmatically create policies
description: This article walks you through programmatically creating and managing policies for Azure Policy with Azure CLI, Azure PowerShell, and REST API. Last updated 08/17/2021 + # Programmatically create policies
governance Remediate Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/remediate-resources.md
Title: Remediate non-compliant resources
description: This guide walks you through the remediation of resources that are non-compliant to policies in Azure Policy. Last updated 07/29/2022 +
governance Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Policy
description: Sample Azure Resource Graph queries for Azure Policy showing use of resource types and tables to access Azure Policy related resources and properties. Last updated 07/07/2022 -+ # Azure Resource Graph sample queries for Azure Policy
governance Create And Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/tutorials/create-and-manage.md
Title: "Tutorial: Build policies to enforce compliance"
description: In this tutorial, you use policies to enforce standards, control costs, maintain security, and impose enterprise-wide design principles. Last updated 08/17/2021 + # Tutorial: Create and manage policies to enforce compliance
governance Explore Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/concepts/explore-resources.md
Title: Explore your Azure resources
description: Learn to use the Resource Graph query language to explore your resources and discover how they're connected. Last updated 08/17/2021 + # Explore your Azure resources with Resource Graph
governance Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/samples/advanced.md
Title: Advanced query samples
description: Use Azure Resource Graph to run some advanced queries, including working with columns, listing tags used, and matching resources with regular expressions. Last updated 06/15/2022 +
governance Starter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/samples/starter.md
Last updated 07/19/2022 + # Starter Resource Graph query samples
governance Shared Query Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/shared-query-azure-cli.md
Title: "Quickstart: Create a shared query with Azure CLI"
description: In this quickstart, you follow the steps to enable the Resource Graph extension for Azure CLI and create a shared query. Last updated 08/17/2021 + # Quickstart: Create a Resource Graph shared query using Azure CLI
hdinsight Apache Hadoop Use Hive Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-use-hive-powershell.md
Title: Use Apache Hive with PowerShell in HDInsight - Azure
description: Use PowerShell to run Apache Hive queries in Apache Hadoop in Azure HDInsight -+ Last updated 08/30/2022
hdinsight Apache Hadoop Use Mapreduce Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-use-mapreduce-powershell.md
Title: Use MapReduce and PowerShell with Apache Hadoop - Azure HDInsight
description: Learn how to use PowerShell to remotely run MapReduce jobs with Apache Hadoop on HDInsight. -+ Last updated 01/08/2020
hdinsight Apache Hbase Build Java Maven Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-build-java-maven-linux.md
Title: Use Apache Maven to build a Java HBase client for Azure HDInsight
description: Learn how to use Apache Maven to build a Java-based Apache HBase application, then deploy it to HBase on Azure HDInsight. -+ Last updated 09/23/2022
hdinsight Hdinsight 40 Component Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-40-component-versioning.md
Title: Open-source components and versions - Azure HDInsight 4.0
description: Learn about the open-source components and versions in Azure HDInsight 4.0. Previously updated : 02/16/2023 Last updated : 03/08/2023 # HDInsight 4.0 component versions
Apache Spark versions supported in Azure HDInsight
|--|--|--|--|--|--| |2.4|July 8, 2019|End of Life Announced (EOLA)|Feb 10, 2023|Aug 10, 2023|Feb 10, 2024| |3.1|March 11, 2022|GA|-|-|-|
-|3.3|March 22,2023|Public Preview|-|-|-|
+|3.3|To be announced for Public Preview|-|-|-|-|
## Apache Spark 2.4 to Spark 3.x Migration Guides
hdinsight Hdinsight 50 Component Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-50-component-versioning.md
Apache Spark versions supported in Azure HDInsight
|--|--|--|--|--|--| |2.4|July 8, 2019|End of Life Announced (EOLA)|Feb 10, 2023|Aug 10, 2023|Feb 10, 2024| |3.1|March 11, 2022|GA|-|-|-|
-|3.3|March 22,2023|Public Preview|-|-|-|
+|3.3|To be announced for Public Preview|-|-|-|-|
## Apache Spark 2.4 to Spark 3.x Migration Guides
hdinsight Hdinsight 51 Component Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-51-component-versioning.md
The Open-source component versions associated with HDInsight 5.1 listed in the f
| Component | HDInsight 5.1 | HDInsight 5.0 | ||||
-| Apache Spark | 3.3 * | 3.1.2 |
+| Apache Spark | 3.3 * | 3.1.3 |
| Apache Hive | 3.1.2 * | 3.1.2 | | Apache Kafka | 3.2.0 ** | 2.4.1 | | Apache Hadoop with YARN | 3.3.4 * | 3.1.1 |
Apache Spark versions supported in Azure HDInsight
|--|--|--|--|--|--| |2.4|July 8, 2019|End of Life Announced (EOLA)|Feb 10, 2023|Aug 10, 2023|Feb 10, 2024| |3.1|March 11, 2022|GA|-|-|-|
-|3.3|March 22,2023|Public Preview|-|-|-|
+|3.3|To be announced for Public Preview|-|-|-|-|
## Apache Spark 2.4 to Spark 3.x Migration Guides
hdinsight Hdinsight Hadoop Customize Cluster Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-customize-cluster-linux.md
Title: Customize Azure HDInsight clusters by using script actions
description: Add custom components to HDInsight clusters by using script actions. Script actions are Bash scripts that can be used to customize the cluster configuration. Or add additional services and utilities like Hue, Solr, or R. -+ Last updated 06/08/2022
hdinsight Hdinsight Hadoop Linux Use Ssh Unix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-linux-use-ssh-unix.md
Title: Use SSH with Hadoop - Azure HDInsight
description: "You can access HDInsight using Secure Shell (SSH). This document provides information on connecting to HDInsight using the ssh commands from Windows, Linux, Unix, or macOS clients." -+ Last updated 03/31/2022
hdinsight Hdinsight Hadoop Manage Ambari Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-manage-ambari-rest-api.md
Title: Monitor and manage Hadoop with Ambari REST API - Azure HDInsight
description: Learn how to use Ambari to monitor and manage Hadoop clusters in Azure HDInsight. In this document, you'll learn how to use the Ambari REST API included with HDInsight clusters. -+ Last updated 06/09/2022
hdinsight Hdinsight Hadoop Provision Linux Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-provision-linux-clusters.md
Title: Set up clusters in HDInsight with Apache Hadoop, Apache Spark, Apache Kaf
description: Set up Hadoop, Kafka, Spark, or HBase clusters for HDInsight from a browser, the Azure classic CLI, Azure PowerShell, REST, or SDK. -+ Last updated 08/17/2022
hdinsight Hdinsight Restrict Public Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-restrict-public-connectivity.md
Title: Restrict public connectivity in Azure HDInsight description: Learn how to remove access to all outbound public IP addresses. + Last updated 12/31/2022
hdinsight Hdinsight Upload Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-upload-data.md
Title: Upload data for Apache Hadoop jobs in HDInsight
description: Learn how to upload and access data for Apache Hadoop jobs in HDInsight. Use Azure classic CLI, Azure Storage Explorer, Azure PowerShell, the Hadoop command line, or Sqoop. -+ Last updated 04/27/2020
hdinsight Apache Hive Query Odbc Driver Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/apache-hive-query-odbc-driver-powershell.md
keywords: hive,hive odbc,powershell
Last updated 04/29/2022-- #Customer intent: As an HDInsight user, I want to query data from my Apache Hive datasets so that I can view and interpret the data.
hdinsight Apache Kafka Connect Vpn Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-connect-vpn-gateway.md
Title: Connect to Kafka using virtual networks - Azure HDInsight
description: Learn how to directly connect to Kafka on HDInsight through an Azure Virtual Network. Learn how to connect to Kafka from development clients using a VPN gateway, or from clients in your on-premises network by using a VPN gateway device. -+ Last updated 05/30/2022
hdinsight Service Endpoint Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdi