Updates from: 02/08/2023 02:13:26
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Identity Provider Facebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-facebook.md
If you don't already have a Facebook account, sign up at [https://www.facebook.c
1. Select **Save Changes**.
1. From the menu, select the **plus** sign or **Add Product** link next to **PRODUCTS**. Under **Add Products to Your App**, select **Set up** under **Facebook Login**.
1. From the menu, select **Facebook Login**, then select **Settings**.
-1. In **Valid OAuth redirect URIs**, enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/oauth2/authresp`. Replace `your-tenant-name` with the name of your tenant, and `your-domain-name` with your custom domain.
+1. In **Valid OAuth redirect URIs**, enter `https://your-tenant-name.b2clogin.com/your-tenant-id/oauth2/authresp`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-id/oauth2/authresp`. Replace `your-tenant-id` with the id of your tenant, and `your-domain-name` with your custom domain.
1. Select **Save Changes** at the bottom of the page.
1. To make your Facebook application available to Azure AD B2C, select the Status selector at the top right of the page and turn it **On** to make the Application public, and then select **Switch Mode**. At this point, the Status should change from **Development** to **Live**. For more information, see [Facebook App Development](https://developers.facebook.com/docs/development/release).
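The redirect URI described in these steps follows a fixed pattern; a small helper (hypothetical, for illustration only) makes the construction explicit:

```python
def b2c_redirect_uri(tenant_id: str, tenant_name: str = None, custom_domain: str = None) -> str:
    """Build the Valid OAuth redirect URI for Azure AD B2C federation.

    Uses the custom domain when provided, otherwise the default
    <tenant-name>.b2clogin.com host. Illustrative helper, not an official API.
    """
    host = custom_domain if custom_domain else f"{tenant_name}.b2clogin.com"
    return f"https://{host}/{tenant_id}/oauth2/authresp"
```

For example, a tenant named `contoso` with no custom domain yields `https://contoso.b2clogin.com/<tenant-id>/oauth2/authresp`.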
If the sign-in process is successful, your browser is redirected to `https://jwt
- Learn how to [pass Facebook token to your application](idp-pass-through-user-flow.md). - Check out the Facebook federation [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/Identity-providers#facebook), and how to pass Facebook access token [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/Identity-providers#facebook-with-access-token)
active-directory-b2c Page Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/page-layout.md
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
## Self-asserted page (selfasserted)
+**2.1.20**
+- Fixed an XSS issue on input from textbox
+
+**2.1.19**
+- Fixed accessibility bugs
+- Handle Undefined Error message for existing user sign up
+- Move Password Mismatch Error to Inline instead of Page Level
+- Accessibility changes related to High Contrast button display and anchor focus improvements
+
+**2.1.18**
+- Add asterisk for required fields
+- TOTP Store Icons position fixes for Classic Template
+- Activate input items only when verification code is verified
+- Add Alt Text for Background Image
+- Added customization for server errors by TOTP verification
+
+**2.1.17**
+- Add descriptive error message and fixed forgotPassword link
+- Make checkbox as group
+- Enforce Validation Error Update on control change and enable continue on email verified
+- Added additional field to error code to validation failure response
+
+**2.1.16**
+- Fixed "Claims for verification control have not been verified" bug while verifying code.
+- Hide error message on validation succeeds and send code to verify
+
+**2.1.15**
+- Fixed QR code generation bug due to QR text length
+
+**2.1.14**
+- Fixed WCAG 2.1 accessibility bug for the TOTP multifactor authentication screens.
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
> [!TIP]
> If you localize your page to support multiple locales, or languages, in a user flow, the [localization IDs](localization-string-ids.md) article provides the list of localization IDs that you can use for the page version you select.
+**2.1.9**
+- Fix accessibility bugs
+- Accessibility changes related to High Contrast button display and anchor focus improvements
+
+**2.1.8**
+- Add descriptive error message and fixed forgotPassword link
**2.1.7**
active-directory Active Directory Optional Claims https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-optional-claims.md
Within the SAML tokens, these claims will be emitted with the following URI form
## Configuring groups optional claims
-This section covers the configuration options under optional claims for changing the group attributes used in group claims from the default group objectID to attributes synced from on-premises Windows Active Directory. You can configure groups optional claims for your application through the UI or application manifest.
+This section covers the configuration options under optional claims for changing the group attributes used in group claims from the default group objectID to attributes synced from on-premises Windows Active Directory. You can configure groups optional claims for your application through the UI or application manifest. Group optional claims are only emitted in the JWT for **user principals**. **Service principals** _will not_ have group optional claims emitted in the JWT.
> [!IMPORTANT] > Azure AD limits the number of groups emitted in a token to 150 for SAML assertions and 200 for JWT, including nested groups. For more information on group limits and important caveats for group claims from on-premises attributes, see [Configure group claims for applications with Azure AD](../hybrid/how-to-connect-fed-group-claims.md).
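The token-size limits in the note above can be expressed as a simple pre-check; the constants mirror the note, and the function name is illustrative:

```python
# Group-claim emission limits per the note above (counts include nested groups).
GROUP_CLAIM_LIMITS = {"saml": 150, "jwt": 200}

def groups_fit_in_token(group_count: int, token_format: str) -> bool:
    """Return True if a user's group memberships fit under the emission limit
    for the given token format ("saml" or "jwt")."""
    return group_count <= GROUP_CLAIM_LIMITS[token_format]
```

When the limit is exceeded, Azure AD emits a group overage claim instead of the full list, so applications should plan for that fallback path.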
active-directory Scenario Mobile Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-mobile-overview.md
Previously updated : 05/07/2019 Last updated : 02/07/2023
Considerations for mobile apps:
- **User experience is key**: Allow users to see the value of your app before you ask for sign-in. Request only the required permissions.
- **Support all user configurations**: Many mobile business users must adhere to conditional-access policies and device-compliance policies. Be sure to support these key scenarios.
-- **Implement single sign-on (SSO)**: By using MSAL and Microsoft identity platform, you can enable single sign-on through the device's browser or Microsoft Authenticator (and Intune Company Portal on Android).
+- **Implement single sign-on (SSO)**: By using MSAL and Microsoft identity platform, you can enable SSO through the device's browser or Microsoft Authenticator (and Intune Company Portal on Android).
- **Implement shared device mode**: Enable your application to be used in shared-device scenarios, for example hospitals, manufacturing, retail, and finance. [Read more about supporting shared device mode](msal-shared-devices.md).

## Specifics
active-directory V2 Supported Account Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-supported-account-types.md
Previously updated : 07/14/2020 Last updated : 02/06/2023
# Supported account types
-This article explains what account types (sometimes called *audiences*) are supported in the Microsoft identity platform applications.
+This article explains what account types (sometimes called _audiences_) are supported in the Microsoft identity platform applications.
<!-- This section can be in an include for many of the scenarios (SPA, web app signing-in users, protecting a web API, Desktop (depending on the flows), Mobile -->
This article explains what account types (sometimes called *audiences*) are supp
In the Microsoft Azure public cloud, most types of apps can sign in users with any audience:

-- If you're writing a line-of-business (LOB) application, you can sign in users in your own organization. Such an application is sometimes called *single-tenant*.
-- If you're an ISV, you can write an application that signs in users:
+- If you're writing a line-of-business (LOB) application, you can sign in users in your own organization. Such an application is sometimes called _single-tenant_.
+- If you're an independent software vendor (ISV), you can write an application that signs in users:
- - In any organization. Such an application is called a *multitenant* web application. You'll sometimes read that it signs in users with their work or school accounts.
+ - In any organization. Such an application is called a _multitenant_ web application. You'll sometimes read that it signs in users with their work or school accounts.
  - With their work or school or personal Microsoft accounts.
  - With only personal Microsoft accounts.
-
+- If you're writing a business-to-consumer application, you can also sign in users with their social identities, by using Azure Active Directory B2C (Azure AD B2C).

## Account type support in authentication flows
-Some account types can't be used with certain authentication flows. For instance, in desktop, UWP, or daemon applications:
+Some account types can't be used with certain authentication flows. For instance, in desktop, Universal Windows Platform (UWP), or daemon applications:
- Daemon applications can be used only with Azure AD organizations. It doesn't make sense to try to use daemon applications to manipulate Microsoft personal accounts. The admin consent will never be granted.
- You can use the integrated Windows authentication flow only with work or school accounts (in your organization or any organization). Integrated Windows authentication works with domain accounts, and it requires the machines to be domain-joined or Azure AD-joined. This flow doesn't make sense for personal Microsoft accounts.
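The constraints above can be sketched as a lookup table (the flow and audience labels here are informal names for illustration, not API values):

```python
# Illustrative mapping of the constraints above: which account audiences
# each authentication flow can serve. Labels are informal, not API values.
SUPPORTED_AUDIENCES = {
    "daemon": {"work_or_school"},                   # Azure AD organizations only
    "integrated_windows_auth": {"work_or_school"},  # domain/Azure AD-joined machines
    "interactive": {"work_or_school", "personal_microsoft_account"},
}

def flow_supports(flow: str, audience: str) -> bool:
    """Return True if the given flow can sign in the given audience."""
    return audience in SUPPORTED_AUDIENCES.get(flow, set())
```

A pre-check like this can surface configuration mistakes (for example, a daemon app targeting personal accounts) before a token request ever fails.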
active-directory Troubleshoot Mac Sso Extension Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/troubleshoot-mac-sso-extension-plugin.md
Work with your MDM administrator (or Device Management team) to ensure that the
The following table provides specific MDM installation guidance depending on which OS you're deploying the extension to:

-- [**iOS/iPadOS**: Deploy the Microsoft Enterprise SSO plug-in](/mem/intune/configuration/use-enterprise-sso-plug-in-macos-with-intune)
-- [**macOS**: Deploy the Microsoft Enterprise SSO plug-in](/mem/intune/configuration/use-enterprise-sso-plug-in-ios-ipados-with-intune)
+- [**iOS/iPadOS**: Deploy the Microsoft Enterprise SSO plug-in](/mem/intune/configuration/use-enterprise-sso-plug-in-ios-ipados-with-intune)
+- [**macOS**: Deploy the Microsoft Enterprise SSO plug-in](/mem/intune/configuration/use-enterprise-sso-plug-in-macos-with-intune)
> [!IMPORTANT]
> Although any MDM is supported for deploying the SSO Extension, many organizations implement [**device-based Conditional Access policies**](../conditional-access/concept-conditional-access-grant.md#require-device-to-be-marked-as-compliant) by way of evaluating MDM compliance policies. If a third-party MDM is being used, ensure that the MDM vendor supports [**Intune Partner Compliance**](/mem/intune/protect/device-compliance-partners) if you would like to use device-based Conditional Access policies. When the SSO Extension is deployed via Intune or an MDM provider that supports Intune Partner Compliance, the extension can pass the device certificate to Azure AD so that device authentication can be completed.
active-directory How To Connect Pta Quick Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-pta-quick-start.md
If you have already installed Azure AD Connect by using the [express installatio
Follow these instructions to verify that you have enabled Pass-through Authentication correctly:
-1. Sign in to the [Azure Active Directory admin center](https://aad.portal.azure.com) with the Hybrid Identity Administratoristrator credentials for your tenant.
+1. Sign in to the [Azure Active Directory admin center](https://aad.portal.azure.com) with the Hybrid Identity Administrator credentials for your tenant.
2. Select **Azure Active Directory** in the left pane.
3. Select **Azure AD Connect**.
4. Verify that the **Pass-through authentication** feature appears as **Enabled**.
active-directory V2 Howto App Gallery Listing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/v2-howto-app-gallery-listing.md
To publish your application in the gallery, you must first read and agree to spe
- Implement support for *single sign-on* (SSO). To learn more about supported options, see [Plan a single sign-on deployment](plan-sso-deployment.md).
- For password SSO, make sure that your application supports form authentication so that password vaulting can be used.
- For federated applications (OpenID and SAML/WS-Fed), the application must support the [software-as-a-service (SaaS) model](https://azure.microsoft.com/overview/what-is-saas/). Enterprise gallery applications must support multiple user configurations and not any specific user.
+- For federated applications (OpenID and SAML/WS-Fed), the application can be single **or** multitenanted.
- For OpenID Connect, the application must be multitenanted and the [Azure AD consent framework](../develop/consent-framework.md) must be correctly implemented.
- Provisioning is optional yet highly recommended. To learn more about Azure AD SCIM, see [build a SCIM endpoint and configure user provisioning with Azure AD](../app-provisioning/use-scim-to-provision-users-and-groups.md).
aks Azure Cni Overlay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md
Previously updated : 02/03/2023 Last updated : 02/07/2023 # Configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS)
The traditional [Azure Container Networking Interface (CNI)](./configure-azure-c
With Azure CNI Overlay, the cluster nodes are deployed into an Azure Virtual Network (VNet) subnet, whereas pods are assigned IP addresses from a private CIDR logically different from the VNet hosting the nodes. Pod and node traffic within the cluster use an overlay network, and Network Address Translation (using the node's IP address) is used to reach resources outside the cluster. This solution saves a significant amount of VNet IP addresses and enables you to seamlessly scale your cluster to very large sizes. An added advantage is that the private CIDR can be reused in different AKS clusters, truly extending the IP space available for containerized applications in AKS.

> [!NOTE]
-> Azure CNI Overlay is currently available only in the following regions:
-> - North Central US
-> - West Central US
-> - East US
-> - UK South
-> - Australia East
+> Azure CNI Overlay is currently **_unavailable_** in the following regions:
+> - East US 2
+> - South Central US
+> - West US
+> - West US 2
+ ## Overview of overlay networking
Ingress connectivity to the cluster can be achieved using an ingress controller
Like Azure CNI Overlay, Kubenet assigns IP addresses to pods from an address space logically different from the VNet but has scaling and other limitations. The below table provides a detailed comparison between Kubenet and Azure CNI Overlay. If you do not want to assign VNet IP addresses to pods due to IP shortage, then Azure CNI Overlay is the recommended solution.
-| Area | Azure CNI Overlay | Kubenet |
-| -- | -- | -- |
-| Cluster scale | 1000 nodes and 250 pods/node | 400 nodes and 250 pods/node |
-| Network configuration | Simple - no additional configuration required for pod networking | Complex - requires route tables and UDRs on cluster subnet for pod networking |
-| Pod connectivity performance | Performance on par with VMs in a VNet | Additional hop adds minor latency |
-| Kubernetes Network Policies | Azure Network Policies, Calico | Calico |
-| OS platforms supported | Linux and Windows | Linux only |
+| Area | Azure CNI Overlay | Kubenet |
+|||-|
+| Cluster scale | 1000 nodes and 250 pods/node | 400 nodes and 250 pods/node |
+| Network configuration | Simple - no additional configuration required for pod networking | Complex - requires route tables and UDRs on cluster subnet for pod networking |
+| Pod connectivity performance | Performance on par with VMs in a VNet | Additional hop adds minor latency |
+| Kubernetes Network Policies | Azure Network Policies, Calico | Calico |
+| OS platforms supported | Linux and Windows | Linux only |
## IP address planning
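As a rough sketch for this planning, assuming each node receives one /24 slice of the private pod CIDR (an assumption about the overlay's per-node allocation; verify against the current AKS documentation), the node capacity of a given pod CIDR is:

```python
def max_overlay_nodes(pod_cidr_prefix: int, node_slice_prefix: int = 24) -> int:
    """Estimate how many nodes a private pod CIDR can serve when each node
    is allocated one /node_slice_prefix slice (assumed /24 here)."""
    if pod_cidr_prefix > node_slice_prefix:
        return 0  # CIDR is smaller than a single node's slice
    return 2 ** (node_slice_prefix - pod_cidr_prefix)
```

For example, a /16 pod CIDR would cover 256 nodes under this assumption, and since the CIDR is private it can be reused across clusters.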
aks Configure Azure Cni https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-azure-cni.md
This article shows you how to use Azure CNI networking to create and use a virtu
* The cluster identity used by the AKS cluster must have at least [Network Contributor](../role-based-access-control/built-in-roles.md#network-contributor) permissions on the subnet within your virtual network. If you wish to define a [custom role](../role-based-access-control/custom-roles.md) instead of using the built-in Network Contributor role, the following permissions are required: * `Microsoft.Network/virtualNetworks/subnets/join/action` * `Microsoft.Network/virtualNetworks/subnets/read`
+ * `Microsoft.Authorization/roleAssignments/write`
* The subnet assigned to the AKS node pool cannot be a [delegated subnet](../virtual-network/subnet-delegation-overview.md). * AKS doesn't apply Network Security Groups (NSGs) to its subnet and will not modify any of the NSGs associated with that subnet. If you provide your own subnet and add NSGs associated with that subnet, you must ensure the security rules in the NSGs allow traffic within the node CIDR range. For more details, see [Network security groups][aks-network-nsg].
aks Image Cleaner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/image-cleaner.md
Previously updated : 02/03/2023 Last updated : 02/07/2023 # Use Image Cleaner to clean up stale images on your Azure Kubernetes Service cluster (preview)
az aks update -g MyResourceGroup -n MyManagedCluster
## Logging
-Deletion image logs are stored in `eraser-aks-nodepool-xxx` pods for manually deleted images, and in `eraser-collector-xxx` pods for automatically deleted images.
+Deletion image logs are stored in `eraser-aks-nodepool-xxx` pods for manually deleted images, and in `collector-aks-nodes-xxx` pods for automatically deleted images.
You can view these logs by running `kubectl logs <pod name> -n kube-system`. However, this command may return only the most recent logs, since older logs are routinely deleted. To view all logs, follow these steps to enable the [Azure Monitor add-on](./monitor-aks.md) and use the Container Insights pod log table.
You can view these logs by running `kubectl logs <pod name> -n kubesystem`. Howe
1. In the Azure portal, search for the workspace resource ID, then select **Logs**.
-1. Copy this query into the table, replacing `name` with either `eraser-aks-nodepool-xxx` (for manual mode) or `eraser-collector-xxx` (for automatic mode).
+1. Copy this query into the table, replacing `name` with either `eraser-aks-nodepool-xxx` (for manual mode) or `collector-aks-nodes-xxx` (for automatic mode).
```kusto let startTimestamp = ago(1h);
api-management Api Management Gateways Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-gateways-overview.md
The following table compares features available in the managed gateway versus th
| [Custom domains](configure-custom-domain.md) | ✔️ | ✔️ | ✔️ |
| [Built-in cache](api-management-howto-cache.md) | ✔️ | ❌ | ❌ |
| [External Redis-compatible cache](api-management-howto-cache-external.md) | ✔️ | ✔️ | ✔️ |
-| [Virtual network injection](virtual-network-concepts.md) | Developer, Premium | ❌ | ✔️<sup>1</sup> |
+| [Virtual network injection](virtual-network-concepts.md) | Developer, Premium | ❌ | ✔️<sup>1,2</sup> |
| [Private endpoints](private-endpoint.md) | ✔️ | ❌ | ❌ |
| [Availability zones](zone-redundancy.md) | Premium | ❌ | ✔️<sup>1</sup> |
| [Multi-region deployment](api-management-howto-deploy-multi-region.md) | Premium | ❌ | ✔️<sup>1</sup> |
-| [CA root certificates](api-management-howto-ca-certificates.md) for certificate validation | ✔️ | ❌ | ✔️<sup>2</sup> |
+| [CA root certificates](api-management-howto-ca-certificates.md) for certificate validation | ✔️ | ❌ | ✔️<sup>3</sup> |
| [Managed domain certificates](configure-custom-domain.md?tabs=managed#domain-certificate-options) | ✔️ | ✔️ | ❌ |
| [TLS settings](api-management-howto-manage-protocols-ciphers.md) | ✔️ | ✔️ | ✔️ |

<sup>1</sup> Depends on how the gateway is deployed, but is the responsibility of the customer.<br/>
-<sup>2</sup> Requires configuration of local CA certificates.<br/>
+<sup>2</sup> Connectivity to the self-hosted gateway v2 [configuration endpoint](self-hosted-gateway-overview.md#fqdn-dependencies) requires DNS resolution of the default endpoint hostname; custom domain name is currently not supported.<br/>
+<sup>3</sup> Requires configuration of local CA certificates.<br/>
### Backend APIs
api-management Api Management Using With Internal Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-using-with-internal-vnet.md
This article explains how to set up VNet connectivity for your API Management in
* Git

> [!NOTE]
-> None of the API Management endpoints are registered on the public DNS. The endpoints remain inaccessible until you [configure DNS](#dns-configuration) for the VNet.
+> * None of the API Management endpoints are registered on the public DNS. The endpoints remain inaccessible until you [configure DNS](#dns-configuration) for the VNet.
+> * To use the self-hosted gateway in this mode, also enable private connectivity to the self-hosted gateway [configuration endpoint](self-hosted-gateway-overview.md#fqdn-dependencies). Currently, API Management doesn't enable configuring a custom domain name for the v2 endpoint.
Use API Management in internal mode to:
api-management How To Self Hosted Gateway On Kubernetes In Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-self-hosted-gateway-on-kubernetes-in-production.md
We recommend setting resource requests to two cores and 2 GiB as a starting poin
## Custom domain names and SSL certificates
-If you use custom domain names for the API Management endpoints, especially if you use a custom domain name for the Management endpoint, you might need to update the value of `config.service.endpoint` in the **\<gateway-name\>.yaml** file to replace the default domain name with the custom domain name. Make sure that the Management endpoint can be accessed from the pod of the self-hosted gateway in the Kubernetes cluster.
+If you use custom domain names for the [API Management endpoints](self-hosted-gateway-overview.md#fqdn-dependencies), especially if you use a custom domain name for the Management endpoint, you might need to update the value of `config.service.endpoint` in the **\<gateway-name\>.yaml** file to replace the default domain name with the custom domain name. Make sure that the Management endpoint can be accessed from the pod of the self-hosted gateway in the Kubernetes cluster.
In this scenario, if the SSL certificate that's used by the Management endpoint isn't signed by a well-known CA certificate, you must make sure that the CA certificate is trusted by the pod of the self-hosted gateway.
api-management Self Hosted Gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-overview.md
To operate properly, each self-hosted gateway needs outbound connectivity on por
| Description | Required for v1 | Required for v2 | Notes |
|:--|:--|:--|:--|
-| Hostname of the configuration endpoint | `<apim-service-name>.management.azure-api.net` | `<apim-service-name>.configuration.azure-api.net` | |
+| Hostname of the configuration endpoint | `<apim-service-name>.management.azure-api.net` | `<apim-service-name>.configuration.azure-api.net` | Connectivity to v2 endpoint requires DNS resolution of the default hostname.<br/><br/>Currently, API Management doesn't enable configuring a custom domain name for the v2 endpoint<sup>1</sup>. |
| Public IP address of the API Management instance | ✔️ | ✔️ | IP addresses of primary location is sufficient. |
-| Public IP addresses of Azure Storage [service tag](../virtual-network/service-tags-overview.md) | ✔️ | Optional<sup>1</sup> | IP addresses must correspond to primary location of API Management instance. |
-| Hostname of Azure Blob Storage account | ✔️ | Optional<sup>1</sup> | Account associated with instance (`<blob-storage-account-name>.blob.core.windows.net`) |
-| Hostname of Azure Table Storage account | ✔️ | Optional<sup>1</sup> | Account associated with instance (`<table-storage-account-name>.table.core.windows.net`) |
-| Endpoints for [Azure Application Insights integration](api-management-howto-app-insights.md) | Optional<sup>2</sup> | Optional<sup>2</sup> | Minimal required endpoints are:<ul><li>`rt.services.visualstudio.com:443`</li><li>`dc.services.visualstudio.com:443`</li><li>`{region}.livediagnostics.monitor.azure.com:443`</li></ul>Learn more in [Azure Monitor docs](../azure-monitor/app/ip-addresses.md#outgoing-ports) |
-| Endpoints for [Event Hubs integration](api-management-howto-log-event-hubs.md) | Optional<sup>2</sup> | Optional<sup>2</sup> | Learn more in [Azure Event Hubs docs](../event-hubs/network-security.md) |
-| Endpoints for [external cache integration](api-management-howto-cache-external.md) | Optional<sup>2</sup> | Optional<sup>2</sup> | This requirement depends on the external cache that is being used |
-
-<sup>1</sup> Only required in v2 when API inspector or quotas are used in policies.<br/>
-<sup>2</sup> Only required when feature is used and requires public IP address, port and hostname information.<br/>
+| Public IP addresses of Azure Storage [service tag](../virtual-network/service-tags-overview.md) | ✔️ | Optional<sup>2</sup> | IP addresses must correspond to primary location of API Management instance. |
+| Hostname of Azure Blob Storage account | ✔️ | Optional<sup>2</sup> | Account associated with instance (`<blob-storage-account-name>.blob.core.windows.net`) |
+| Hostname of Azure Table Storage account | ✔️ | Optional<sup>2</sup> | Account associated with instance (`<table-storage-account-name>.table.core.windows.net`) |
+| Endpoints for [Azure Application Insights integration](api-management-howto-app-insights.md) | Optional<sup>3</sup> | Optional<sup>3</sup> | Minimal required endpoints are:<ul><li>`rt.services.visualstudio.com:443`</li><li>`dc.services.visualstudio.com:443`</li><li>`{region}.livediagnostics.monitor.azure.com:443`</li></ul>Learn more in [Azure Monitor docs](../azure-monitor/app/ip-addresses.md#outgoing-ports) |
+| Endpoints for [Event Hubs integration](api-management-howto-log-event-hubs.md) | Optional<sup>3</sup> | Optional<sup>3</sup> | Learn more in [Azure Event Hubs docs](../event-hubs/network-security.md) |
+| Endpoints for [external cache integration](api-management-howto-cache-external.md) | Optional<sup>3</sup> | Optional<sup>3</sup> | This requirement depends on the external cache that is being used |
+
+<sup>1</sup>For an API Management instance in an internal virtual network, enable private connectivity to the v2 configuration endpoint from the location of the self-hosted gateway, for example, using a private DNS in a peered network.<br/>
+<sup>2</sup>Only required in v2 when API inspector or quotas are used in policies.<br/>
+<sup>3</sup> Only required when feature is used and requires public IP address, port, and hostname information.<br/>
> [!IMPORTANT]
> * DNS hostnames must be resolvable to IP addresses and the corresponding IP addresses must be reachable.
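A quick way to sanity-check these dependencies from the gateway's environment is a resolve-and-connect probe (a generic sketch, not an official tool; hostnames and ports come from the table above):

```python
import socket

def resolve_hostname(hostname: str) -> str:
    """Resolve a gateway dependency hostname to an IPv4 address."""
    return socket.gethostbyname(hostname)

def can_reach(hostname: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to hostname:port succeeds."""
    try:
        with socket.create_connection((hostname, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Running both checks against each required endpoint catches the two failure modes the note distinguishes: a name that doesn't resolve, and an address that resolves but isn't reachable.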
application-gateway Tutorial Ingress Controller Add On Existing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-ingress-controller-add-on-existing.md
az network application-gateway create -n myApplicationGateway -l eastus -g myRes
```

> [!NOTE]
-> application gateway ingress controller (AGIC) add-on **only** supports application gateway v2 SKUs (Standard and WAF), and **not** the application gateway v1 SKUs.
+> The application gateway ingress controller (AGIC) add-on **only** supports application gateway v2 SKUs (Standard and WAF), and **not** the application gateway v1 SKUs.
## Enable the AGIC add-on in existing AKS cluster through Azure CLI
automation Automation Tutorial Runbook Textual Python 3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/learn/automation-tutorial-runbook-textual-python-3.md
Title: Create a Python 3 runbook (preview) in Azure Automation
-description: This article teaches you to create, test, and publish a simple Python 3 runbook (preview) in your Azure Automation account.
+ Title: Create a Python 3.8 runbook (preview) in Azure Automation
+description: This article teaches you to create, test, and publish a simple Python 3.8 runbook (preview) in your Azure Automation account.
Previously updated : 04/28/2021 Last updated : 02/07/2023
-# Tutorial: Create a Python 3 runbook (preview)
+# Tutorial: Create a Python 3.8 runbook (preview)
-This tutorial walks you through the creation of a [Python 3 runbook](../automation-runbook-types.md#python-runbooks) (preview) in Azure Automation. Python runbooks compile under Python 2 and 3. You can directly edit the code of the runbook using the text editor in the Azure portal.
+This tutorial walks you through the creation of a [Python 3.8 runbook](../automation-runbook-types.md#python-runbooks) (preview) in Azure Automation. Python runbooks compile under Python 2.7 and 3.8. You can directly edit the code of the runbook using the text editor in the Azure portal.
> [!div class="checklist"]
> * Create a simple Python runbook
This tutorial walks you through the creation of a [Python 3 runbook](../automati
## Prerequisites
-To complete this tutorial, you need the following:
+To complete this tutorial, you need:
-- Azure subscription. If you don't have one yet, you can [activate your MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/) or sign up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure subscription. If you don't have one yet, you can [activate your MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/) or sign up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- [Automation account](../automation-security-overview.md) to hold the runbook and authenticate to Azure resources. This account must have permission to start and stop the virtual machine. The [Run As account](../automation-security-overview.md#run-as-accounts) is required for this tutorial.
-
-- An Azure virtual machine. During this tutorial, you will start and stop this machine, so it should not be a production VM.
+- An [Automation account](../automation-security-overview.md) to hold the runbook and authenticate to Azure resources using Managed Identities. A managed identity is automatically created for you when you create the Automation account.
+
+- An Azure virtual machine. During this tutorial, you'll start and stop this machine, so it shouldn't be a production VM.
## Create a new runbook
You start by creating a simple runbook that outputs the text *Hello World*.
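A runbook body that produces this output can be as small as a single print statement:

```python
# Minimal Python runbook body: writes Hello World to the job output stream.
message = "Hello World"
print(message)
```

Output written with `print` appears in the job's output pane once the runbook runs, which is how you'll verify the test in the steps below.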
1. In the Azure portal, open your Automation account.
- The Automation account page gives you a quick view of the resources in this account. You should already have some assets. Most of those assets are the modules that are automatically included in a new Automation account. You should also have the Run As account credential asset that's mentioned in the [prerequisites](#prerequisites).
+ The Automation account page gives you a quick view of the resources in this account. You should already have some assets. Most of those assets are the modules that are automatically included in a new Automation account.
+
+ You should also have a managed identity enabled, as mentioned in the [prerequisites](#prerequisites). You can verify this by viewing the **Identity** resource under **Account Settings**.
-2. Select **Runbooks** under **Process Automation** to open the list of runbooks.
+1. Select **Runbooks** under **Process Automation** to open the list of runbooks.
-3. Select **Add a runbook** to create a new runbook.
+1. Select **Create a runbook** to create a new runbook.
-4. Give the runbook the name **MyFirstRunbook-Python**.
+1. Give the runbook the name **MyFirstRunbook-Python**.
-5. Select **Python 3** for **Runbook type**.
+1. Select **Python** for the **Runbook type**.
-6. Select **Create** to create the runbook and open the textual editor.
+1. Select **Python 3.8** for the **Runtime version**.
+
+1. Select **Create** to create the runbook and open the textual editor.
## Add code to the runbook
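A minimal runbook body that produces this tutorial's output is a single print statement (a sketch; the job's output stream shows whatever the runbook prints):

```python
# the simplest runbook body: write one line to the job's output stream
message = "Hello World"
print(message)
```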
Before you publish the runbook to make it available in production, you want to t
1. Select **Test pane** to open the **Test** pane.
-2. Select **Start** to start the test. This should be the only enabled option.
+1. Select **Start** to start the test. This should be the only option enabled.
-3. A [runbook job](../automation-runbook-execution.md) is created and its status displayed.
- The job status starts as **Queued**, indicating that it is waiting for a runbook worker in the cloud to become available. It changes to **Starting** when a worker claims the job, and then **Running** when the runbook actually starts running.
+1. A [runbook job](../automation-runbook-execution.md) is created and its status displayed.
+ The job status starts as **Queued**, indicating that it's waiting for a runbook worker in the cloud to become available. It changes to **Starting** when a worker claims the job, and then **Running** when the runbook actually starts running.
-4. When the runbook job completes, its output is displayed. In this case, you should see `Hello World`.
+1. When the runbook job completes, its output is displayed. In this case, you should see `Hello World`.
-5. Close the **Test** pane to return to the canvas.
+1. Close the **Test** pane to return to the canvas.
## Publish and start the runbook
The runbook that you created is still in draft mode. You need to publish it befo
1. Select **Publish** to publish the runbook and then **Yes** when prompted.
-2. If you scroll left to view the runbook on the **Runbooks** page, you should see an **Authoring Status** of **Published**.
+1. If you close the **MyFirstRunbook-Python** pane, you return to the **Runbooks** page, where you should see an **Authoring Status** of **Published**.
-3. Scroll back to the right to view the pane for **MyFirstRunbook-Python3**.
+1. Select **MyFirstRunbook-Python** in the list to go back to the **MyFirstRunbook-Python** pane.
- The options across the top allow you to start the runbook, view the runbook, or schedule it to start at some time in the future.
+ The options across the top allow you to start the runbook, view the runbook, edit the runbook, schedule it to start at some time in the future, and other actions.
-4. Select **Start** and then select **OK** when the **Start Runbook** pane opens.
+1. Select **Start** and then select **OK** when the **Start Runbook** pane opens.
-5. A **Job** pane is opened for the runbook job that you created. You can close this pane, but let's leave it open so that you can watch the job's progress.
+1. A **Job** pane is opened for the runbook job that you created. You can close this pane, but let's keep it open so that you can watch the job's progress.
-6. The job status is shown in **Job Summary** and matches the statuses that you saw when you tested the runbook.
+1. The job status is shown in the **Status** field under **Essentials**. The values here match the statuses that you saw when you tested the runbook.
-7. Once the runbook status shows **Completed**, select **Output**. The **Output** pane is opened, where you can see `Hello World`.
+1. Once the runbook status shows **Completed**, select the **Output** tab. In the **Output** tab, you can see `Hello World`.
-8. Close the **Output** pane.
+1. Close the **Output** tab.
-9. Select **All Logs** to open the **Streams** pane for the runbook job. You should only see `Hello World` in the Output stream. However, this pane can show other streams for a runbook job, such as **Verbose** and **Error**, if the runbook writes to them.
+1. Select the **All Logs** tab to view the streams for the runbook job. You should only see `Hello World` in the output stream. However, this tab can show other streams for a runbook job, such as **Verbose** and **Error**, if the runbook writes to them.
-10. Close the **Streams** pane and the **Job** pane to return to the **MyFirstRunbook-Python3** pane.
+1. Close the **Jobs** pane to return to the **MyFirstRunbook-Python** pane.
-11. Select **Jobs** to open the **Jobs** page for this runbook. This page lists all jobs created by this runbook. You should only see one job listed since you only ran the job once.
+1. Select the **Jobs** resource to open the **Jobs** page for this runbook. This page lists all jobs created by this runbook. You should only see one job listed, since you only ran the job once.
-12. You can select this job to open the same **Job** pane that you viewed when you started the runbook. This pane allows you to go back in time and view the details of any job that was created for a particular runbook.
+1. You can select this job to open the same **Job** pane that you viewed when you started the runbook. This pane allows you to go back in time and view the details of any job that was created for a particular runbook.
## Add authentication to manage Azure resources
-You've tested and published your runbook, but so far it doesn't do anything useful. You want to have it manage Azure resources.
-To do this, the script has to authenticate using the Run As account credential from your Automation account.
+You've tested and published your runbook, but so far it doesn't do anything useful. You want to have it manage Azure resources. To manage resources, the script has to authenticate.
+
+The recommended way to authenticate is with a **managed identity**. When you create an Azure Automation account, a managed identity is automatically created for you.
+
+To use these samples, [add the following packages](../python-3-packages.md) in the **Python Packages** resource of the Automation account. You can download the WHL files for these packages from the following links.
+
+* [azure-core](https://pypi.org/project/azure-core/#files)
+* [azure-identity](https://pypi.org/project/azure-identity/#files)
+* [azure-mgmt-compute](https://pypi.org/project/azure-mgmt-compute/#files)
+* [azure-mgmt-core](https://pypi.org/project/azure-mgmt-core/#files)
+* [msal](https://pypi.org/project/msal/#files)
+* [typing-extensions](https://pypi.org/project/typing-extensions/#files)
+
+When you add these packages, select a runtime version that matches your runbook.
> [!NOTE]
-> The Automation account must have been created with the Run As account for there to be a Run As certificate.
-> If your Automation account was not created with the Run As account, you can authenticate as described in
-> [Authenticate with the Azure Management Libraries for Python](/azure/developer/python/sdk/authentication-overview) or [create a Run As account](../create-run-as-account.md).
+> The following code was tested with runtime version 3.8.
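If you're unsure which runtime a runbook executes under, printing the interpreter version from the runbook itself settles it (a minimal sketch):

```python
import sys

# report the Python runtime the runbook is executing under
runtime = f"{sys.version_info.major}.{sys.version_info.minor}"
print("Python runtime:", runtime)
```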
+
+### Managed identity
-1. Open the textual editor by selecting **Edit** on the **MyFirstRunbook-Python3** pane.
+To use a managed identity, ensure that it's enabled and has the right role assignment:
+
+* To verify that the managed identity is enabled for the Automation account, go to your **Automation account** > **Account Settings** > **Identity** and check that **Status** is set to **On**.
+* The managed identity must have a role assigned to manage the resource. In this example of managing a virtual machine resource, add the "Virtual Machine Contributor" role on the resource group that contains the virtual machine. For more information, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md)
+
+With the managed identity role configured, you can start adding code.
+
+1. Open the textual editor by selecting **Edit** on the **MyFirstRunbook-Python** pane.
2. Add the following code to authenticate to Azure:
- ```python
- import os
- from azure.mgmt.compute import ComputeManagementClient
- import azure.mgmt.resource
- import automationassets
-
- def get_automation_runas_credential(runas_connection):
- from OpenSSL import crypto
- import binascii
- from msrestazure import azure_active_directory
- import adal
-
- # Get the Azure Automation RunAs service principal certificate
- cert = automationassets.get_automation_certificate("AzureRunAsCertificate")
- pks12_cert = crypto.load_pkcs12(cert)
- pem_pkey = crypto.dump_privatekey(crypto.FILETYPE_PEM,pks12_cert.get_privatekey())
-
- # Get run as connection information for the Azure Automation service principal
- application_id = runas_connection["ApplicationId"]
- thumbprint = runas_connection["CertificateThumbprint"]
- tenant_id = runas_connection["TenantId"]
-
- # Authenticate with service principal certificate
- resource ="https://management.core.windows.net/"
- authority_url = ("https://login.microsoftonline.com/"+tenant_id)
- context = adal.AuthenticationContext(authority_url)
- return azure_active_directory.AdalAuthentication(
- lambda: context.acquire_token_with_client_certificate(
- resource,
- application_id,
- pem_pkey,
- thumbprint)
- )
-
- # Authenticate to Azure using the Azure Automation RunAs service principal
- runas_connection = automationassets.get_automation_connection("AzureRunAsConnection")
- azure_credential = get_automation_runas_credential(runas_connection)
- ```
+```python
+#!/usr/bin/env python3
+import os
+import requests
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.compute import ComputeManagementClient
+
+SUBSCRIPTION_ID = "YOUR_SUBSCRIPTION_ID"
+
+# Acquire a token-producing credential from the Automation account's managed identity
+azure_credential = DefaultAzureCredential()
+
+# Optionally, call the local identity endpoint directly to confirm that a token
+# can be acquired for the Azure Resource Manager audience
+endpoint = os.getenv('IDENTITY_ENDPOINT') + "?resource=https://management.azure.com/"
+identity_header = os.getenv('IDENTITY_HEADER')
+headers = {
+    'X-IDENTITY-HEADER': identity_header,
+    'Metadata': 'true'  # header values must be strings
+}
+response = requests.get(endpoint, headers=headers)
+print(response.text)
+```
## Add code to create Python Compute client and start the VM

To work with Azure VMs, create an instance of the [Azure Compute client for Python](/python/api/azure-mgmt-compute/azure.mgmt.compute.computemanagementclient).
Use the compute client to start the VM. Add the following code to the runbook:

```python
-# Initialize the compute management client with the Run As credential and specify the subscription to work against.
+# Initialize client with the credential and subscription.
compute_client = ComputeManagementClient(
    azure_credential,
-    str(runas_connection["SubscriptionId"])
+    SUBSCRIPTION_ID
)

print('\nStart VM')
-async_vm_start = compute_client.virtual_machines.start(
+async_vm_start = compute_client.virtual_machines.begin_start(
    "MyResourceGroup", "TestVM")
async_vm_start.wait()
+print('\nFinished start.')
```

Where `MyResourceGroup` is the name of the resource group that contains the VM, and `TestVM` is the name of the VM that you want to start.
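`begin_start` returns a poller for a long-running operation, and `.wait()` blocks until it finishes. A toy stand-in (not the Azure SDK) that mimics the shape of that pattern:

```python
# toy poller mimicking the begin_*/wait()/result() shape of Azure SDK
# long-running operations; the real poller repeatedly checks the service's
# operation status over HTTP
class FakePoller:
    def __init__(self, result):
        self._result = result
        self.finished = False

    def wait(self):
        # block until the operation completes (instantaneous in this toy version)
        self.finished = True

    def result(self):
        self.wait()
        return self._result

async_vm_start = FakePoller("VM running")
async_vm_start.wait()
print(async_vm_start.result())
```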
```python
import sys

resource_group_name = str(sys.argv[1])
vm_name = str(sys.argv[2])
```
-This imports the `sys` module, and creates two variables to hold the resource group and VM names. Notice that the element of the argument list, `sys.argv[0]`, is the name of the script, and is not input by the user.
+This code imports the `sys` module and creates two variables to hold the resource group and VM names. Notice that the first element of the argument list, `sys.argv[0]`, is the name of the script and isn't input by the user.
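The indexing can be checked with a simulated argument list (the parameter values below are hypothetical):

```python
import sys

# simulate the runbook being started with two input parameters
sys.argv = ["start_vm.py", "MyResourceGroup", "TestVM"]

script_name = sys.argv[0]                # the script itself, not user input
resource_group_name = str(sys.argv[1])   # first user-supplied parameter
vm_name = str(sys.argv[2])               # second user-supplied parameter

print(script_name, resource_group_name, vm_name)
```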
Now you can modify the last two lines of the runbook to use the input parameter values instead of using hard-coded values:

```python
-async_vm_start = compute_client.virtual_machines.start(
+async_vm_start = compute_client.virtual_machines.begin_start(
    resource_group_name, vm_name)
async_vm_start.wait()
```
azure-app-configuration Pull Key Value Devops Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/pull-key-value-devops-pipeline.md
The [Azure App Configuration](https://marketplace.visualstudio.com/items?itemNam
- App Configuration store - create one for free in the [Azure portal](https://portal.azure.com).
- Azure DevOps project - [create one for free](https://go.microsoft.com/fwlink/?LinkId=2014881)
- Azure App Configuration task - download for free from the [Visual Studio Marketplace](https://marketplace.visualstudio.com/items?itemName=AzureAppConfiguration.azure-app-configuration-task#:~:text=Navigate%20to%20the%20Tasks%20tab,the%20Azure%20App%20Configuration%20instance.).
-- [Node 10](https://nodejs.org/en/blog/release/v10.21.0/) - for users running the task on self-hosted agents.
+- [Node 16](https://nodejs.org/en/blog/release/v16.16.0/) - for users running the task on self-hosted agents.
## Create a service connection
azure-app-configuration Push Kv Devops Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/push-kv-devops-pipeline.md
The [Azure App Configuration Push](https://marketplace.visualstudio.com/items?it
- App Configuration resource - create one for free in the [Azure portal](https://portal.azure.com).
- Azure DevOps project - [create one for free](https://go.microsoft.com/fwlink/?LinkId=2014881)
- Azure App Configuration Push task - download for free from the [Visual Studio Marketplace](https://marketplace.visualstudio.com/items?itemName=AzureAppConfiguration.azure-app-configuration-task-push).
-- [Node 10](https://nodejs.org/en/blog/release/v10.21.0/) - for users running the task on self-hosted agents.
+- [Node 16](https://nodejs.org/en/blog/release/v16.16.0/) - for users running the task on self-hosted agents.
## Create a service connection
azure-arc Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/deploy-cli.md
+
+ Title: Azure Arc resource bridge (preview) deployment command overview
+description: Learn about the Azure CLI commands which can be used to manage your Azure Arc resource bridge (preview) deployment.
+Last updated : 02/06/2023
+# Azure Arc resource bridge (preview) deployment command overview
+
+[Azure CLI](/cli/azure/install-azure-cli) is required to deploy the Azure Arc resource bridge. When deploying Arc resource bridge with a corresponding partner product, the Azure CLI commands may be combined into an automation script, along with additional provider-specific commands. To learn about installing Arc resource bridge with a corresponding partner product, see:
+
+- [Connect VMware vCenter Server to Azure with Arc resource bridge](../vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md)
+- [Connect System Center Virtual Machine Manager (SCVMM) to Azure with Arc resource bridge](../system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc.md#download-the-onboarding-script)
+- [Azure Stack HCI VM Management through Arc resource bridge](/azure-stack/hci/manage/azure-arc-vm-management-prerequisites)
+
+This topic provides an overview of the [Azure CLI commands](/cli/azure/arcappliance) used to manage Arc resource bridge (preview) deployment, in the order in which they're typically used.
+
+## az arcappliance createconfig
+
+Creates the configuration files used by Arc resource bridge. Credentials that are provided during `createconfig`, such as vCenter credentials for VMware vSphere, are stored in a configuration file and locally within Arc resource bridge. These credentials should be a separate user account used only by Arc resource bridge, with permission to view, create, delete, and manage on-premises resources. If the credentials change, then the credentials on the resource bridge should be rotated.
+
+The `createconfig` command features two modes: interactive and non-interactive. Interactive mode provides helpful prompts that explain the parameter and what to pass. To initiate interactive mode, pass only the three required parameters. Non-interactive mode allows you to pass all the parameters needed to create the configuration files without being prompted, which saves time and is useful for automation scripts. Three configuration files are generated: resource.yaml, appliance.yaml and infra.yaml. These files should be kept and stored in a secure location, as they're required for maintenance of Arc resource bridge.
+
+> [!NOTE]
+> Azure Stack HCI and Hybrid AKS use different commands to create the Arc resource bridge configuration files.
+
+## az arcappliance validate
+
+Checks the configuration files for a valid schema, performs cloud and core validations (such as management machine connectivity to required URLs), and validates network settings, including proxy and no-proxy settings.
+
+## az arcappliance prepare
+
+Downloads the OS images from Microsoft and uploads them to the on-premises cloud image gallery to prepare for the creation of the appliance VM.
+
+This command can take up to 30 minutes to complete, depending on the network download speed. Allow the command to complete before continuing with the deployment.
+
+## az arcappliance deploy
+
+Deploys an on-premises instance of Arc resource bridge as an appliance VM, bootstrapped to be a Kubernetes management cluster. Gets all necessary pods into a running state.
+
+## az arcappliance create
+
+Creates Arc resource bridge in Azure as an ARM resource, then establishes the connection between the ARM resource and on-premises appliance VM.
+
+Running this command is the last step in the deployment process.
+
+## az arcappliance show
+
+Gets the ARM resource information for Arc resource bridge. This information helps you monitor the status of the appliance. Successful appliance creation results in `ProvisioningState = Succeeded` and `Status = Running`.
+
+## az arcappliance delete
+
+Deletes the appliance VM and Azure resources. It doesn't clean up the OS image, which remains in the on-premises cloud gallery.
+
+If a deployment fails, you must run this command to clean up the environment before you attempt to deploy again.
+
+## Next steps
+
+- Explore the full list of [Azure CLI commands and required parameters](/cli/azure/arcappliance) for Arc resource bridge.
+- Get [troubleshooting tips for Arc resource bridge](troubleshoot-resource-bridge.md).
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/overview.md
If you are deploying on Azure Stack HCI, the x32 Azure CLI installer can be used
### Supported regions
-Arc resource bridge currently supports the following Azure regions:
+To use Arc resource bridge in a region, both Arc resource bridge and the private cloud product must be supported in that region. For example, to use Arc resource bridge with Azure Stack HCI in East US, both Arc resource bridge and Azure Stack HCI must be supported in East US. Check the private cloud product's documentation for its region availability; it's typically called out in its Arc resource bridge deployment instructions. In some instances, Arc resource bridge may be available in a region where private cloud support isn't yet available.
+
+Arc resource bridge supports the following Azure regions:
* East US
* West Europe
azure-arc Manage Automatic Vm Extension Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-automatic-vm-extension-upgrade.md
If an extension upgrade fails, Azure will try to repair the extension by perform
If you continue to have trouble upgrading an extension, you can [disable automatic extension upgrade](#manage-automatic-extension-upgrade) to prevent the system from trying again while you troubleshoot the issue. You can [enable automatic extension upgrade](#manage-automatic-extension-upgrade) again when you're ready.
+### Timing of automatic extension upgrades
+
+When a new version of a VM extension is published, it becomes available for installation and manual upgrade on Arc-enabled servers. For servers that already have the extension installed and automatic extension upgrade enabled, it may take up to 5 weeks for every server with that extension to get the automatic upgrade. Upgrades are issued in batches across Azure regions and subscriptions, so you may see the extension get upgraded on some of your servers before others. If you need to upgrade an extension immediately, follow the guidance to manually upgrade extensions using the [Azure portal](manage-vm-extensions-portal.md#upgrade-extensions), [Azure PowerShell](manage-vm-extensions-powershell.md#upgrade-extensions), or [Azure CLI](manage-vm-extensions-cli.md#upgrade-extensions).
+
## Supported extensions

Automatic extension upgrade supports the following extensions:
azure-cache-for-redis Cache How To Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-geo-replication.md
Previously updated : 10/20/2022
Last updated : 02/06/2023
Passive geo-replication links together two Premium tier Azure Cache for Redis in
Compare _active-passive_ to _active-active_, where you can write to either side of the pair, and it will synchronize with the other side.
-With passive geo-replication, the cache instances are typically located in different Azure regions, though that isn't required. One instance acts as the primary, and the other as the secondary. The primary handles read and write requests and propagates changes to the secondary.
+With passive geo-replication, the cache instances are typically located in different Azure regions, though that isn't required. One instance acts as the primary, and the other as the secondary. The primary handles read and write requests, and the primary propagates changes to the secondary.
-Failover is not automatic. For more information on how to use failover, see [Initiate a failover from geo-primary to geo-secondary (preview)](#initiate-a-failover-from-geo-primary-to-geo-secondary-preview).
+Failover isn't automatic. For more information on how to use failover, see [Initiate a failover from geo-primary to geo-secondary](#initiate-a-failover-from-geo-primary-to-geo-secondary).
> [!NOTE]
-> Geo-replication is designed as a disaster-recovery solution.
+> Passive geo-replication is designed as a disaster-recovery solution.
>
>
## Scope of availability
Failover is not automatic. For more information on how to use failover, see [Ini
|||||
|Available | No | Yes | Yes |
-_Passive geo-replication_ is only available in the Premium tier of Azure Cache for Redis. The Enterprise and Enterprise Flash tiers also offer geo-replication, but those tiers use a more advanced version called _active geo-replication_.
+_Passive geo-replication_ is only available in the Premium tier of Azure Cache for Redis. The Enterprise and Enterprise Flash tiers also offer geo-replication, but those tiers use a more advanced version called [_active geo-replication_](cache-how-to-active-geo-replication.md).
## Geo-replication prerequisites
After geo-replication is configured, the following restrictions apply to your li
- You can't [Import](cache-how-to-import-export-data.md#import) into the secondary linked cache.
- You can't delete either linked cache, or the resource group that contains them, until you unlink the caches. For more information, see [Why did the operation fail when I tried to delete my linked cache?](#why-did-the-operation-fail-when-i-tried-to-delete-my-linked-cache)
- If the caches are in different regions, network egress costs apply to the data moved across regions. For more information, see [How much does it cost to replicate my data across Azure regions?](#how-much-does-it-cost-to-replicate-my-data-across-azure-regions)
-- Failover is not automatic. You must start the failover from the primary to the secondary inked cache. For more information on how to use failover, see [Initiate a failover from geo-primary to geo-secondary (preview)](#initiate-a-failover-from-geo-primary-to-geo-secondary-preview).
+- Failover isn't automatic. You must start the failover from the primary to the secondary linked cache. For more information on how to use failover, see [Initiate a failover from geo-primary to geo-secondary](#initiate-a-failover-from-geo-primary-to-geo-secondary).
- Private links can't be added to caches that are already geo-replicated. To add a private link to a geo-replicated cache: 1. Unlink the geo-replication. 2. Add a Private Link. 3. Last, relink the geo-replication.
After geo-replication is configured, the following restrictions apply to your li
:::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-confirm-link.png" alt-text="Screenshot showing how to link caches for geo-replication.":::
-1. You can view the progress of the replication process using **Geo-replication** on the left.
+1. You can view the progress of the replication process using **Geo-replication** in the Resource menu.
:::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-linking.png" alt-text="Screenshot showing the current Linking status.":::
- You can also view the linking status on the left, using **Overview**, for both the primary and secondary caches.
+ You can also view the linking status using **Overview** from the Resource menu for both the primary and secondary caches.
:::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-link-status.png" alt-text="Screenshot that highlights how to view the linking status for the primary and secondary caches.":::
- Once the replication process is complete, the **Link status** changes to **Succeeded**.
+ Once the replication process is complete, the **Link provisioning status** changes to **Succeeded**.
:::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-link-successful.png" alt-text="Screenshot showing cache linking status as Succeeded."::: The primary linked cache remains available for use during the linking process. The secondary linked cache isn't available until the linking process completes.
-## Geo-primary URLs (preview)
+## Geo-primary URL
-Once the caches are linked, URLs are generated that always point to the geo-primary cache. If a failover is initiated from the geo-primary to the geo-secondary, the URL remains the same, and the underlying DNS record is updated automatically to point to the new geo-primary.
+Once the caches are linked, a URL is generated for each cache that always points to the geo-primary cache. If a failover is initiated from the geo-primary to the geo-secondary, the URL remains the same, and the underlying DNS record is updated automatically to point to the new geo-primary.
:::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-urls.png" alt-text="Screenshot showing four URLs created by adding geo-replication.":::
-Four URLs are shown:
+Three URLs are shown:
-- **Geo-Primary URL** is a proxy URL with the format of `<cache-1-name>.geo.redis.cache.windows.net`. This URL always has the name of the first cache to be linked, but it always points to whichever cache is the current geo-primary.
-- **Linked cache Geo-Primary URL** is a proxy URL with the format of `<cache-2-name>.geo.redis.cache.windows.net`. This URL always has the name of the second cache to be linked, and it will also always point to whichever cache is the current geo-primary.
-- **Current Geo Primary Cache** is the direct address of the cache that is currently the geo-primary. The address is `redis.cache.windows.net` not `geo.redis.cache.windows.net`. The address listed in this field changes if a failover is initiated.
-- **Current Geo Secondary Cache** is the direct address of the cache that is currently the geo-secondary. The address is `redis.cache.windows.net` not `geo.redis.cache.windows.net`. The address listed in this field changes if a failover is initiated.
+- **Geo-Primary URL** is a proxy URL with the format of `<cachename>.geo.redis.cache.windows.net`. The URL always points to whichever cache in the geo-replication pair is the current geo-primary.
+- **Current Geo Primary Cache** is the direct address of the cache that is currently the geo-primary. The address is `redis.cache.windows.net` not `geo.redis.cache.windows.net`. The address listed in the field changes if a failover is initiated.
+- **Current Geo Secondary Cache** is the direct address of the cache that is currently the geo-secondary. The address is `redis.cache.windows.net` not `geo.redis.cache.windows.net`. The address listed in the field changes if a failover is initiated.
-The goal of the two geo-primary URLs is to make updating the cache address easier on the application side in the event of a failover. Changing the address of either linked cache from `redis.cache.windows.net` to `geo.redis.cache.windows.net` ensures that your application is always pointing to the geo-primary, even if a failover is triggered.
-The URLs for the current geo-primary and current geo-secondary cache are provided in case you'd like to link directly to a cache resource without any automatic routing.
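In application code, the value of the geo-primary URL is that a failover requires no configuration change: the application keeps the same host name while DNS repoints it. A sketch with a hypothetical cache name:

```python
# hypothetical cache name illustrating the proxy vs direct host names
cache_name = "mycache"

# follows whichever cache is currently geo-primary (DNS is updated on failover)
geo_primary_url = f"{cache_name}.geo.redis.cache.windows.net"

# pinned to one specific cache instance; does not follow a failover
direct_url = f"{cache_name}.redis.cache.windows.net"

print(geo_primary_url)
```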
+## Initiate a failover from geo-primary to geo-secondary
-## Initiate a failover from geo-primary to geo-secondary (preview)
-
-With one click, you can trigger a failover from the geo-primary to the geo-secondary.
+You can trigger a failover from the geo-primary to the geo-secondary with a single selection.
:::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-failover.png" alt-text="Screenshot of linked caches with Failover highlighted.":::
Be sure to check the following items:
- If you're using a firewall in either cache, make sure that the firewall settings are similar so you have no connection issues.
- Make sure both caches are using the same port and TLS/SSL settings
-- The geo-primary and geo-secondary caches have different access keys. In the event of a failover being triggered, make sure your application can update the access key it's using to match the new geo-primary.
+- The geo-primary and geo-secondary caches have different access keys. If a failover is triggered, make sure your application can update the access key it's using to match the new geo-primary.
### Failover with minimal data loss
There's no need to run the CLIENT UNPAUSE command as the new geo-primary does re
- [Can I use geo-replication with a Standard or Basic tier cache?](#can-i-use-geo-replication-with-a-standard-or-basic-tier-cache)
- [Is my cache available for use during the linking or unlinking process?](#is-my-cache-available-for-use-during-the-linking-or-unlinking-process)
-- Can I track the health of the geo-replication link?
+- [When can I write to the new geo-primary after initiating failover?](#when-can-i-write-to-the-new-geo-primary-after-initiating-failover)
+- [Can I track the health of the geo-replication link?](#can-i-track-the-health-of-the-geo-replication-link)
- [Can I link more than two caches together?](#can-i-link-more-than-two-caches-together)
- [Can I link two caches from different Azure subscriptions?](#can-i-link-two-caches-from-different-azure-subscriptions)
- [Can I link two caches with different sizes?](#can-i-link-two-caches-with-different-sizes)
No, passive geo-replication is only available in the Premium tier. A more advanc
- The secondary linked cache isn't available until the linking process completes.
- Both caches remain available until the unlinking process completes.
+### When can I write to the new geo-primary after initiating failover?
+
+When the failover process is initiated, the link provisioning status updates to **Deleting**, which indicates that the previous link is being cleaned up. After this cleanup completes, the link provisioning status updates to **Creating**, which indicates that the new geo-primary is up and running and is attempting to re-establish a geo-replication link to the old geo-primary cache. At this point, you can immediately connect to the new geo-primary cache instance for both reads and writes.
+ ### Can I track the health of the geo-replication link?
-Yes, there are several metrics available to help track the status of the geo-replication. These metrics are available in the Azure portal.
+Yes, there are several [metrics available](cache-how-to-monitor.md#list-of-metrics) to help track the status of the geo-replication. These metrics are available in the Azure portal.
- **Geo Replication Healthy** shows the status of the geo-replication link. The link will show up as unhealthy if either the geo-primary or geo-secondary caches are down. This is typically due to standard patching operations, but it could also indicate a failure situation.
- **Geo Replication Connectivity Lag** shows the time since the last successful data synchronization between geo-primary and geo-secondary.
Yes, there are several metrics available to help track the status of the geo-rep
- **Geo Replication Full Sync Event Started** indicates that a full synchronization action has been initiated between the geo-primary and geo-secondary caches. This occurs if standard replication can't keep up with the number of new writes.
- **Geo Replication Full Sync Event Finished** indicates that a full synchronization action has been completed.
+There is also a [pre-built workbook](cache-how-to-monitor.md#organize-with-workbooks) called the **Geo-Replication Dashboard** that includes all of the geo-replication health metrics in one view. Using this view is recommended because it aggregates information that is emitted only from the geo-primary or geo-secondary cache instances.
+ ### Can I link more than two caches together?
-No, you can only link two caches together.
+No, you can only link two caches together when using passive geo-replication. [Active geo-replication](cache-how-to-active-geo-replication.md) supports up to five linked caches.
### Can I link two caches from different Azure subscriptions?
azure-cache-for-redis Cache How To Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-monitor.md
Previously updated : 05/06/2022 Last updated : 02/06/2022 # Monitor Azure Cache for Redis
The types **Count** and **Sum** can be misleading for certain metrics (connec
For non-clustered caches, we recommend using the metrics without the suffix `Instance Based`. For example, to check server load for your cache instance, use the metric `Server Load`.
-In contrast, for clustered caches, we recommend using the metrics with the suffix `Instance Based`, Then, add a split or filter on `ShardId`. For example, to check the server load of shard 1, use the metric "Server Load (Instance Based)", then apply filter `ShardId = 1`.
+In contrast, for clustered caches, we recommend using the metrics with the suffix `Instance Based`. Then, add a split or filter on `ShardId`. For example, to check the server load of shard 1, use the metric "Server Load (Instance Based)", then apply filter `ShardId = 1`.
## List of metrics
In contrast, for clustered caches, we recommend using the metrics with the suffi
> Geo-replication metrics are affected by monthly internal maintenance operations. The Azure Cache for Redis service periodically patches all caches with the latest platform features and improvements. During these updates, each cache node is taken offline, which temporarily disables the geo-replication link. If your geo-replication link is unhealthy, check to see if it was caused by a patching event on either the geo-primary or geo-secondary cache by using **Diagnose and Solve Problems** from the Resource menu in the portal. Depending on the amount of data in the cache, the downtime from patching can take anywhere from a few minutes to an hour. If the geo-replication link is unhealthy for over an hour, [file a support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
>
-- Geo Replication Connectivity Lag (preview)
+> [!NOTE]
+> The [Geo-Replication Dashboard](#organize-with-workbooks) workbook is a simple and easy way to view all Premium-tier geo-replication metrics in the same place. This dashboard will pull together metrics that are only emitted by the geo-primary or geo-secondary, so they can be viewed simultaneously.
+>
+
+- Geo Replication Connectivity Lag
- Depicts the time, in seconds, since the last successful data synchronization between geo-primary & geo-secondary. If the link goes down, this value continues to increase, indicating a problem.
- This metric is only emitted **from the geo-secondary** cache instance. On the geo-primary instance, this metric has no value.
- This metric is only available in the Premium tier for caches with geo-replication enabled.
-- Geo Replication Data Sync Offset (preview)
+- Geo Replication Data Sync Offset
- Depicts the approximate amount of data, in bytes, that has yet to be synchronized to geo-secondary cache.
- - This metric is only emitted **from the geo-primary** cache instance. On the geo-secondary instance, this metric has no value.
+ - This metric is only emitted _from the geo-primary_ cache instance. On the geo-secondary instance, this metric has no value.
- This metric is only available in the Premium tier for caches with geo-replication enabled.
-- Geo Replication Full Sync Event Finished (preview)
+- Geo Replication Full Sync Event Finished
- Depicts the completion of full synchronization between geo-replicated caches. When you see lots of writes on geo-primary, and replication between the two caches can't keep up, then a full sync is needed. A full sync involves copying the complete data from geo-primary to geo-secondary by taking an RDB snapshot rather than a partial sync that occurs on normal instances. See [this page](https://redis.io/docs/manual/replication/#how-redis-replication-works) for a more detailed explanation.
- - This metric reports zero most of the time because geo-replication uses partial resynchronizations for any new data added after the initial full synchronization.
- - This metric is only emitted **from the geo-secondary** cache instance. On the geo-primary instance, this metric has no value.
+ - The metric reports zero most of the time because geo-replication uses partial resynchronizations for any new data added after the initial full synchronization.
+ - This metric is only emitted _from the geo-secondary_ cache instance. On the geo-primary instance, this metric has no value.
- This metric is only available in the Premium tier for caches with geo-replication enabled.
-- Geo Replication Full Sync Event Started (preview)
- - Depicts the start of full synchronization between geo-replicated caches. When there are a lot of writes in geo-primary, and replication between the two caches can't keep up, then a full sync is needed. A full sync involves copying the complete data from geo-primary to geo-secondary by taking an RDB snapshot rather than a partial sync that occurs on normal instances. See [this page](https://redis.io/docs/manual/replication/#how-redis-replication-works) for a more detailed explanation.
- - This metric reports zero most of the time because geo-replication uses partial resynchronizations for any new data added after the initial full synchronization.
- - This metric is only emitted **from the geo-secondary** cache instance. On the geo-primary instance, this metric has no value.
- - This metric is only available in the Premium tier for caches with geo-replication enabled.
+- Geo Replication Full Sync Event Started
+ - Depicts the start of full synchronization between geo-replicated caches. When there are many writes in geo-primary, and replication between the two caches can't keep up, then a full sync is needed. A full sync involves copying the complete data from geo-primary to geo-secondary by taking an RDB snapshot rather than a partial sync that occurs on normal instances. See [this page](https://redis.io/docs/manual/replication/#how-redis-replication-works) for a more detailed explanation.
+ - The metric reports zero most of the time because geo-replication uses partial resynchronizations for any new data added after the initial full synchronization.
+ - The metric is only emitted _from the geo-secondary_ cache instance. On the geo-primary instance, this metric has no value.
+ - The metric is only available in the Premium tier for caches with geo-replication enabled.
- Geo Replication Healthy
- Depicts the status of the geo-replication link between caches. There can be two possible states that the replication link can be in:
  - 0 - disconnected/unhealthy
  - 1 - healthy
- - This metric is only emitted **from the geo-secondary** cache instance. On the geo-primary instance, this metric has no value.
- - This metric is only available in the Premium tier for caches with geo-replication enabled.
+ - The metric is available in the Enterprise, Enterprise Flash tiers, and Premium tier caches with geo-replication enabled.
+ - In caches on the Premium tier, this metric is only emitted *from the geo-secondary* cache instance. On the geo-primary instance, this metric has no value.
- This metric may indicate a disconnected/unhealthy replication status for several reasons, including: monthly patching, host OS updates, network misconfiguration, or failed geo-replication link provisioning.
- - A value of 0 does not mean that data on the geo-replica is lost. It just means that the link between geo-primary and geo-secondary is unhealthy.
+ - A value of 0 doesn't mean that data on the geo-replica is lost. It just means that the link between geo-primary and geo-secondary is unhealthy.
- If the geo-replication link is unhealthy for over an hour, [file a support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
- Gets
For more information about configuring and using Alerts, see [Overview of Alerts
## Organize with workbooks
-Once you've defined a metric, you can send it to a workbook. Workbooks provide a way to organize your metrics into groups that provide the information in coherent way.
+Once you've defined a metric, you can send it to a workbook. Workbooks provide a way to organize your metrics into groups that provide the information in a coherent way. Azure Cache for Redis provides two workbooks by default in the **Azure Cache for Redis Insights** section:
+
+ :::image type="content" source="media/cache-how-to-monitor/cache-monitoring-workbook.png" alt-text="Screenshot showing the workbooks selected in the Resource menu.":::
For information on creating a metric, see [Create your own metrics](#create-your-own-metrics).
+The two workbooks provided are:
+- **Azure Cache For Redis Resource Overview** combines many of the most commonly used metrics so that the health and performance of the cache instance can be viewed at a glance.
+ :::image type="content" source="media/cache-how-to-monitor/cache-monitoring-resource-overview.png" alt-text="Screenshot of graphs showing a resource overview for the cache.":::
+
+- **Geo-Replication Dashboard** pulls geo-replication health and status metrics from both the geo-primary and geo-secondary cache instances to give a complete picture of geo-replication health. Using this dashboard is recommended, as some geo-replication metrics are only emitted from either the geo-primary or geo-secondary.
+ :::image type="content" source="media/cache-how-to-monitor/cache-monitoring-geo-dashboard.png" alt-text="Screenshot showing the geo-replication dashboard with a geo-primary and geo-secondary cache set.":::
+
 ## Next steps
- [Azure Monitor for Azure Cache for Redis](redis-cache-insights-overview.md)
azure-cache-for-redis Cache Moving Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-moving-resources.md
After geo-replication is configured, the following restrictions apply to your li
- You can't [Import](cache-how-to-import-export-data.md#import) into the secondary linked cache.
- You can't delete either linked cache, or the resource group that contains them, until you unlink the caches. For more information, see [Why did the operation fail when I tried to delete my linked cache?](cache-how-to-geo-replication.md#why-did-the-operation-fail-when-i-tried-to-delete-my-linked-cache)
- If the caches are in different regions, network egress costs apply to the data moved across regions. For more information, see [How much does it cost to replicate my data across Azure regions?](cache-how-to-geo-replication.md#how-much-does-it-cost-to-replicate-my-data-across-azure-regions)
-- Failover is not automatic. You must start the failover from the primary to the secondary linked cache. For more information on how to failover a client application, see [Initiate a failover from geo-primary to geo-secondary (preview)](cache-how-to-geo-replication.md#initiate-a-failover-from-geo-primary-to-geo-secondary-preview).
+- Failover is not automatic. You must start the failover from the primary to the secondary linked cache. For more information on how to fail over a client application, see [Initiate a failover from geo-primary to geo-secondary](cache-how-to-geo-replication.md#initiate-a-failover-from-geo-primary-to-geo-secondary).
### Move
azure-cache-for-redis Cache Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-whats-new.md
Previously updated : 10/2/2022 Last updated : 02/06/2023
Several enhancements have been made to the passive geo-replication functionality
- Geo Replication Full Sync Event Finished (preview)
- Geo Replication Full Sync Event Started (preview)
-- Customers can now initiate a failover between geo-primary and geo-replica caches with a single selection or CLI command, eliminating the hassle of manually unlinking and relinking caches. For more information, see [Initiate a failover from geo-primary to geo-secondary (preview)](cache-how-to-geo-replication.md#initiate-a-failover-from-geo-primary-to-geo-secondary-preview).
+- Customers can now initiate a failover between geo-primary and geo-replica caches with a single selection or CLI command, eliminating the hassle of manually unlinking and relinking caches. For more information, see [Initiate a failover from geo-primary to geo-secondary](cache-how-to-geo-replication.md#initiate-a-failover-from-geo-primary-to-geo-secondary).
-- A global cache URL is also now offered that automatically updates their DNS records after geo-failovers are triggered, allowing their application to manage only one cache address. For more information, see [Geo-primary URLs (preview)](cache-how-to-geo-replication.md#geo-primary-urls-preview).
+- A global cache URL is also now offered that automatically updates their DNS records after geo-failovers are triggered, allowing their application to manage only one cache address. For more information, see [Geo-primary URL](cache-how-to-geo-replication.md#geo-primary-url).
## September 2022
azure-fluid-relay Azure Function Token Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/azure-function-token-provider.md
description: How to write a custom token provider as an Azure Function and deplo
Previously updated : 10/05/2021 Last updated : 02/05/2023 fluid.url: https://fluidframework.com/docs/build/tokenproviders/
The complete solution has two pieces:
### Create an endpoint for your TokenProvider using Azure Functions
-Using [Azure Functions](../../azure-functions/functions-overview.md) is a fast way to create such an HTTPS endpoint. The example below implements that pattern in a class called [AzureFunctionTokenProvider](https://fluidframework.com/docs/apis/azure-client/azurefunctiontokenprovider-class). It accepts the URL to your Azure Function, `userId` and`userName`.
+Using [Azure Functions](../../azure-functions/functions-overview.md) is a fast way to create such an HTTPS endpoint.
This example demonstrates how to create your own **HTTPTrigger Azure Function** that fetches the token by passing in your tenant key.
TokenProviders can be implemented in many ways, but must implement two separate
To ensure that the tenant secret key is kept secure, it's stored in a secure backend location and is only accessible from within the Azure Function. To retrieve tokens, you need to make a `GET` or `POST` request to your deployed Azure Function, providing the `tenantID` and `documentId`, and `userID`/`userName`. The Azure Function is responsible for the mapping between the tenant ID and a tenant key secret to appropriately generate and sign the token.
-This example implementation below uses the [axios](https://www.npmjs.com/package/axios) library to make HTTP requests. You can use other libraries or approaches to making an HTTP request from server code.
+The example implementation below handles making these requests to your Azure Function. It uses the [axios](https://www.npmjs.com/package/axios) library to make HTTP requests. You can use other libraries or approaches to making an HTTP request from server code.
```typescript
import { ITokenProvider, ITokenResponse } from "@fluidframework/routerlicious-driver";
export class AzureFunctionTokenProvider implements ITokenProvider {
  }
}
```
+
+### Add efficiency and error handling
+
+The `AzureFunctionTokenProvider` is a simple implementation of `TokenProvider` that should be treated as a starting point when implementing your own custom token provider. For a production-ready token provider, you should consider the various failure scenarios that the token provider needs to handle. For example, the `AzureFunctionTokenProvider` implementation fails to handle network disconnect situations because it doesn't cache the token on the client side.
+
+When the container disconnects, the connection manager attempts to get a new token from the TokenProvider before reconnecting to the container. While the network is disconnected, the API `GET` request made in `fetchOrdererToken` will fail and throw a non-retryable error. This, in turn, leads to the container being disposed and not being able to reconnect even if a network connection is re-established.
+
+A potential solution for this disconnect issue is to cache valid tokens in [Window.localStorage](https://developer.mozilla.org/docs/Web/API/Window/localStorage). With token caching, the container retrieves a valid stored token instead of making an API `GET` request while the network is disconnected. Note that a locally stored token could expire after a certain period of time, and you would still need to make an API request to get a new valid token. In this case, additional error handling and retry logic would be required to prevent the container from disposing after a single failed attempt.
+
+How you choose to implement these improvements is completely up to you and the requirements of your application. Note that with the `localStorage` token solution, you'll also see performance improvements in your application because you're removing a network request on each `getContainer` call.
+
+Token-caching with something like `localStorage` may come with security implications, and it's up to you to decide what solution is appropriate for your application. Whether or not you implement token-caching, you should add error handling and retry logic in `fetchOrdererToken` and `fetchStorageToken` so that the container isn't disposed after a single failed call. Consider, for example, wrapping the call to `getToken` in a `try` block with a `catch` block that retries and throws an error only after a specified number of retries.
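The caching and retry ideas above can be sketched as follows. This is a minimal illustration, not the Fluid Framework's own implementation: the `TokenCache` interface, cache keys, and retry counts are assumptions, and a browser app would back the cache with `Window.localStorage` rather than the in-memory `Map` used here. Expiry checking (for example, decoding the JWT's `exp` claim) is omitted for brevity.

```typescript
// Sketch: token fetching with retries and a cache fallback. TokenCache and
// the retry count are illustrative assumptions; in a browser, back the cache
// with Window.localStorage instead of the in-memory Map used here.
interface TokenCache {
  get(key: string): string | undefined;
  set(key: string, jwt: string): void;
}

const memoryCache: TokenCache = (() => {
  const m = new Map<string, string>();
  return { get: (k) => m.get(k), set: (k, v) => m.set(k, v) };
})();

export async function getTokenWithRetry(
  cacheKey: string,
  fetchToken: () => Promise<string>, // e.g. the Azure Function call in fetchOrdererToken
  cache: TokenCache = memoryCache,
  maxRetries = 3,
): Promise<string> {
  let lastError: unknown = new Error("no attempts made");
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      const jwt = await fetchToken();
      cache.set(cacheKey, jwt); // remember the token for offline reuse
      return jwt;
    } catch (err) {
      lastError = err; // transient failure; try again
    }
  }
  // All attempts failed (for example, the network is down). Fall back to a
  // cached token if one exists; it may still be within its validity window.
  const cached = cache.get(cacheKey);
  if (cached !== undefined) {
    return cached;
  }
  throw lastError;
}
```

In this sketch, `fetchOrdererToken` and `fetchStorageToken` could each delegate to `getTokenWithRetry` with distinct cache keys so that orderer and storage tokens are cached separately.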
+ ## See also - [Add custom data to an auth token](connect-fluid-azure-service.md#adding-custom-data-to-tokens)
azure-fluid-relay Quickstart Dice Roll https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/quickstarts/quickstart-dice-roll.md
You'll also need the following software installed on your computer.
## Getting Started Locally
-First, you'll need to download the sample app from GitHub. Open a new command window and navigate to the folder where you'd like to download the code and use Git to clone the [FluidHelloWorld repo](https://github.com/microsoft/FluidHelloWorld). The cloning process will create a subfolder named FluidHelloWorld with the project files in it.
+First, you'll need to download the sample app from GitHub. Open a new command window and navigate to the folder where you'd like to download the code and use Git to clone the [FluidHelloWorld repo](https://github.com/microsoft/FluidHelloWorld/tree/main-azure) and check out the `main-azure` branch. The cloning process will create a subfolder named FluidHelloWorld with the project files in it.
```cli git clone -b main-azure https://github.com/microsoft/FluidHelloWorld.git
azure-government Documentation Government Overview Jps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-overview-jps.md
recommendations: false Previously updated : 02/01/2023 Last updated : 02/06/2023 # Azure for public safety and justice ## Overview
-Public safety and justice agencies are under mounting pressure to keep communities safe, reduce crime, and improve responsiveness. Cloud computing is transforming the way law enforcement agencies approach their work. It is helping with intelligent policing awareness systems, body camera systems across the country/region, and day-to-day mobile police collaboration.
+Public safety and justice agencies are under mounting pressure to keep communities safe, reduce crime, and improve responsiveness. Cloud computing is transforming the way law enforcement agencies approach their work. It's helping with intelligent policing awareness systems, body camera systems across the country/region, and day-to-day mobile police collaboration.
When they're properly planned and secured, cloud services can deliver powerful new capabilities for public safety and justice agencies. These capabilities include digital evidence management, data analysis, and real-time decision support. Solutions can be delivered on the latest mobile devices. However, not all cloud providers are equal. As law enforcement agencies embrace the cloud, they need a cloud service provider they can trust. The core of the law enforcement mission demands partners who are committed to meeting a full range of security, compliance, and operational needs.
Microsoft's commitment to meeting the applicable CJIS regulatory controls help c
The remainder of this article discusses technologies that you can use to safeguard CJI stored or processed in Azure cloud services. **These technologies can help you establish sole control over CJI that you're responsible for.** > [!NOTE]
-> You are wholly responsible for ensuring your own compliance with all applicable laws and regulations. Information provided in this article does not constitute legal advice, and you should consult your legal advisor for any questions regarding regulatory compliance.
+> You're wholly responsible for ensuring your own compliance with all applicable laws and regulations. Information provided in this article doesn't constitute legal advice, and you should consult your legal advisor for any questions regarding regulatory compliance.
## Location of customer data
Technologies like [Intel Software Guard Extensions](https://software.intel.com/s
## Multi-factor authentication (MFA)
-The CJIS Security Policy v5.9.2 revised multi-factor authentication (MFA) requirements for CJI protection. MFA requires the use of two or more different factors defined as follows:
+The CJIS Security Policy v5.9.2 revised the multi-factor authentication (MFA) requirements for CJI protection. MFA requires the use of two or more different factors defined as follows:
- Something you know, for example, username/password or personal identification number (PIN) - Something you have, for example, a hard token such as a cryptographic key stored on or a one-time password (OTP) transmitted to a specialized hardware device
Moreover, Azure can help you meet and **exceed** your CJIS Security Policy MFA r
Azure Active Directory (Azure AD) supports both authenticator and verifier NIST SP 800-63B AAL3 requirements:
- **Authenticator requirements:** FIDO2 security keys, smartcards, and Windows Hello for Business can help you meet AAL3 requirements, including the underlying FIPS 140 validation requirements. Azure AD support for NIST SP 800-63B AAL3 **exceeds** the CJIS Security Policy MFA requirements.
-- **Verifier requirements:** Azure AD uses the [Windows FIPS 140 Level 1](/windows/security/threat-protection/fips-140-validation) overall validated cryptographic module for all its authentication related cryptographic operations. It is therefore a FIPS 140 compliant verifier.
+- **Verifier requirements:** Azure AD uses the [Windows FIPS 140 Level 1](/windows/security/threat-protection/fips-140-validation) overall validated cryptographic module for all its authentication related cryptographic operations. It's therefore a FIPS 140 compliant verifier.
For more information, see [Azure NIST SP 800-63 documentation](/azure/compliance/offerings/offering-nist-800-63).
azure-monitor Azure Monitor Agent Migration Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration-tools.md
To install DCR Config Generator:
- Windows ARM template and parameter files - if the target workspace contains Windows performance counters or Windows events. - Linux ARM template and parameter files - if the target workspace contains Linux performance counters or Linux Syslog events.
-1. Use the built-in rule association policies to [associate the generated data collection rules with virtual machines](./data-collection-rule-azure-monitor-agent.md#create-a-data-collection-rule) running the new agent.
 If the Log Analytics workspace wasn't [configured to collect data](./log-analytics-agent.md#data-collected) from connected agents, the generated files will be empty. This is a scenario in which the agent was connected to a Log Analytics workspace but wasn't configured to send any data from the host machine.
+
+1. [Deploy the generated ARM template](../../azure-resource-manager/templates/deployment-tutorial-local-template.md) to associate the generated data collection rules with virtual machines running the new agent.
azure-monitor Alerts Manage Alert Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-manage-alert-instances.md
Last updated 08/03/2022
# Manage your alert instances
-The alerts page summarizes all alert instances in all your Azure resources generated in the last 30 days. You can see all types of alerts from multiple subscriptions in a single pane. You can search for a specific alert and manage alert instances.
+The **Alerts** page summarizes all alert instances in all your Azure resources generated in the last 30 days. You can see all types of alerts from multiple subscriptions in a single pane. You can search for a specific alert and manage alert instances.
-There are a few ways to get to the alerts page:
+There are a few ways to get to the **Alerts** page:
- From the home page in the [Azure portal](https://portal.azure.com/), select **Monitor** > **Alerts**.
- :::image type="content" source="media/alerts-managing-alert-instances/alerts-monitor-menu.png" alt-text="Screenshot of the alerts link on the Azure monitor menu. ":::
+ :::image type="content" source="media/alerts-managing-alert-instances/alerts-monitor-menu.png" alt-text="Screenshot that shows Alerts on the Azure Monitor menu. ":::
-- From a specific resource, go to the **Monitoring** section, and choose **Alerts**. The landing page contains the alerts on that specific resource.
+- From a specific resource, go to the **Monitoring** section and select **Alerts**. The page that opens contains the alerts for the specific resource.
- :::image type="content" source="media/alerts-managing-alert-instances/alerts-resource-menu.png" alt-text="Screenshot of the alerts link on the menu of a resource in the Azure portal.":::
+ :::image type="content" source="media/alerts-managing-alert-instances/alerts-resource-menu.png" alt-text="Screenshot that shows Alerts on the menu of a resource in the Azure portal.":::
-## The alerts summary pane
+## Alerts summary pane
-The alerts summary pane summarizes the alerts fired in the last 24 hours. You can filter the list of alert instances by **time range**, **subscription**, **alert condition**, **severity**, and more. If you navigated to the alerts page by selecting a specific alert severity, the list is pre-filtered for that severity.
+The **Alerts** summary pane summarizes the alerts fired in the last 24 hours. You can filter the list of alert instances by **Time range**, **Subscription**, **Alert condition**, **Severity**, and more. If you selected a specific alert severity to open the **Alerts** page, the list is pre-filtered for that severity.
-To see more details about a specific alert instance, select the alert instance to open the **Alert Details** page.
-
+To see more information about a specific alert instance, select the alert instance to open the **Alert details** page.
-## The alerts details page
-The **alerts details** page provides details about the selected alert.
-
+## Alert details page
+
+The **Alert details** page provides more information about the selected alert:
+
+ - To change the user response to the alert, select **Change user response**.
- To see all closed alerts, select the **History** tab. ## Manage your alerts programmatically
-You can query your alerts instances to create custom views outside of the Azure portal, or to analyze your alerts to identify patterns and trends.
-We recommended that you use [Azure Resource Graphs](https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade) with the 'AlertsManagementResources' schema for managing alerts across multiple subscriptions. For a sample query, see [Azure Resource Graph sample queries for Azure Monitor](../resource-graph-samples.md).
+You can query your alerts instances to create custom views outside of the Azure portal or to analyze your alerts to identify patterns and trends.
+
+We recommend that you use [Azure Resource Graph](https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade) with the `AlertsManagementResources` schema to manage alerts across multiple subscriptions. For a sample query, see [Azure Resource Graph sample queries for Azure Monitor](../resource-graph-samples.md).
-You can use Azure Resource Graphs:
-
-You can also use the [Alert Management REST API](/rest/api/monitor/alertsmanagement/alerts) for lower scale querying or to update fired alerts.
+You can use Resource Graph:
+ - With [Azure PowerShell](/powershell/module/az.monitor/).
+ - In the Azure portal.
+
+You can also use the [Alert Management REST API](/rest/api/monitor/alertsmanagement/alerts) for lower-scale querying or to update fired alerts.
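As a sketch of the Resource Graph approach, the helper below assembles a KQL query over the `AlertsManagementResources` schema mentioned above. The `properties.essentials.*` paths are assumptions based on the common alert schema; verify them in Azure Resource Graph Explorer before relying on them.

```typescript
// Sketch: build a Resource Graph (KQL) query over AlertsManagementResources.
// The property paths under properties.essentials are assumptions here;
// confirm them against real results in Azure Resource Graph Explorer.
export function buildAlertQuery(severity?: string): string {
  const lines = [
    "alertsmanagementresources",
    "| where type =~ 'microsoft.alertsmanagement/alerts'",
    "| extend severity = tostring(properties.essentials.severity)",
    "| extend alertState = tostring(properties.essentials.alertState)",
  ];
  if (severity !== undefined) {
    // Optional filter, e.g. 'Sev0' for the most critical alerts
    lines.push(`| where severity =~ '${severity}'`);
  }
  lines.push("| project name, severity, alertState, resourceGroup");
  return lines.join("\n");
}
```

The returned string can be pasted into Resource Graph Explorer in the portal or sent through the Resource Graph SDK or REST API.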
## Next steps
- [Learn about Azure Monitor alerts](./alerts-overview.md)
- [Create a new alert rule](alerts-log.md)
-
azure-monitor Alerts Non Common Schema Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-non-common-schema-definitions.md
Title: Non-common alert schema definitions in Azure Monitor for Test Action Group
-description: Understanding the non-common alert schema definitions for Azure Monitor for Test Action group
+ Title: Non-common alert schema definitions in Azure Monitor for test action group
+description: Understanding the non-common alert schema definitions for Azure Monitor for the test action group feature.
Last updated 01/25/2022
-# Non-common alert schema definitions for Test Action Group (Preview)
+# Non-common alert schema definitions for test action group (preview)
-This article describes the non common alert schema definitions for Azure Monitor, including those for webhooks, Azure Logic Apps, Azure Functions, and Azure Automation runbooks.
+This article describes the non-common alert schema definitions for Azure Monitor, including definitions for:
+- Webhooks
+- Azure Logic Apps
+- Azure Functions
+- Azure Automation runbooks
## What is the non-common alert schema?
-The non-common alert schema lets you customize the consumption experience for alert notifications in Azure today. Historically, the three alert types in Azure today (metric, log, and activity log) have had their own email templates, webhook schemas, etc.
+The non-common alert schema lets you customize the consumption experience for alert notifications in Azure today. Historically, the three alert types in Azure today (metric, log, and activity log) have had their own email templates and webhook schemas.
## Alert context
-### Metric alerts - Static threshold
+See sample values for two metric alerts.
+
+### Metric alerts: Static threshold
**Sample values**+ ```json { "schemaId": "AzureMonitorMetricAlert",
The non-common alert schema lets you customize the consumption experience for al
} ```
-### Metric alerts - Dynamic threshold
+### Metric alerts: Dynamic threshold
+ **Sample values**+ ```json { "schemaId": "AzureMonitorMetricAlert",
The non-common alert schema lets you customize the consumption experience for al
} } ```+ ### Log alerts
-#### `monitoringService` = `Log Alerts V1 – Metric`
+
+See sample values for two log alerts.
+
+#### monitoringService = Log Alerts V1 – Metric
**Sample values**+ ```json { "SubscriptionId": "11111111-1111-1111-1111-111111111111",
The non-common alert schema lets you customize the consumption experience for al
} ```
-#### `monitoringService` = `Log Alerts V1 - Numresults`
+#### monitoringService = Log Alerts V1 - Numresults
**Sample values**+ ```json { "SubscriptionId": "11111111-1111-1111-1111-111111111111",
The non-common alert schema lets you customize the consumption experience for al
### Activity log alerts
-#### `monitoringService` = `Activity Log - Administrative`
+See sample values for four activity log alerts.
+
+#### monitoringService = Activity Log - Administrative
**Sample values**+ ```json { "schemaId": "Microsoft.Insights/activityLogs",
The non-common alert schema lets you customize the consumption experience for al
} ```
-#### `monitoringService` = `ServiceHealth`
+#### monitoringService = Service Health
**Sample values**+ ```json { "schemaId": "Microsoft.Insights/activityLogs",
The non-common alert schema lets you customize the consumption experience for al
} ```
-#### `monitoringService` = `Resource Health`
+#### monitoringService = Resource Health
**Sample values**+ ```json { "schemaId": "Microsoft.Insights/activityLogs",
The non-common alert schema lets you customize the consumption experience for al
} } ```
-#### `monitoringService` = `Actual Cost Budget` or `Forecasted Budget`
+
+#### monitoringService = Actual Cost Budget or Forecasted Budget
**Sample values**+ ```json { "schemaId": "AIP Budget Notification",
azure-monitor Alerts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-overview.md
Title: Overview of Azure Monitor Alerts
+ Title: Overview of Azure Monitor alerts
description: Learn about Azure Monitor alerts, alert rules, action processing rules, and action groups, and how they work together to monitor your system. Last updated 07/19/2022 -+
-# What are Azure Monitor Alerts?
+# What are Azure Monitor alerts?
-Alerts help you detect and address issues before users notice them by proactively notifying you when Azure Monitor data indicates that there may be a problem with your infrastructure or application.
+Alerts help you detect and address issues before users notice them by proactively notifying you when Azure Monitor data indicates there might be a problem with your infrastructure or application.
You can alert on any metric or log data source in the Azure Monitor data platform.
-This diagram shows you how alerts work:
+This diagram shows you how alerts work.
+
+An *alert rule* monitors your telemetry and captures a signal that indicates something is happening on the specified resource. The alert rule captures the signal and checks to see if the signal meets the criteria of the condition. If the conditions are met, an alert is triggered, which initiates the associated action group and updates the state of the alert.
-An **alert rule** monitors your telemetry and captures a signal that indicates that something is happening on the specified resource. The alert rule captures the signal and checks to see if the signal meets the criteria of the condition. If the conditions are met, an alert is triggered, which initiates the associated action group and updates the state of the alert.
-
An alert rule combines:-
-If you're monitoring more than one resource, the condition is evaluated separately for each of the resources and alerts are fired for each resource separately.
-
-Once an alert is triggered, the alert is made up of:
- - Notification methods such as email, SMS, and push notifications.
- - Automation Runbooks
- - Azure functions
- - ITSM incidents
- - Logic Apps
- - Secure webhooks
- - Webhooks
- - Event hubs
-- **Alert conditions** are set by the system. When an alert fires, the alert's monitor condition is set to 'fired', and when the underlying condition that caused the alert to fire clears, the monitor condition is set to 'resolved'.-- The **user response** is set by the user and doesn't change until the user changes it. -
-You can see all alert instances in all your Azure resources generated in the last 30 days on the **[Alerts page](alerts-page.md)** in the Azure portal.
+ - The resources to be monitored.
+ - The signal or telemetry from the resource.
+ - Conditions.
+
+If you're monitoring more than one resource, the condition is evaluated separately for each of the resources. Alerts are fired for each resource separately.
+
+After an alert is triggered, the alert is made up of:
+ - **Alert processing rules**: You can use these rules to apply processing on fired alerts. Alert processing rules modify the fired alerts as they're being fired. You can use alert processing rules to add or suppress action groups, apply filters, or have the rule processed on a predefined schedule.
+ - **Action groups**: These groups can trigger notifications or an automated workflow to let users know that an alert has been triggered. Action groups can include:
+ - Notification methods, such as email, SMS, and push notifications.
+ - Automation runbooks.
+ - Azure functions.
+ - ITSM incidents.
+ - Logic apps.
+ - Secure webhooks.
+ - Webhooks.
+ - Event hubs.
+- **Alert conditions**: These conditions are set by the system. When an alert fires, the alert's monitor condition is set to **fired**. After the underlying condition that caused the alert to fire clears, the monitor condition is set to **resolved**.
+- **User response**: The response is set by the user and doesn't change until the user changes it.
+
+You can see all alert instances in all your Azure resources generated in the last 30 days on the [Alerts page](alerts-page.md) in the Azure portal.
## Types of alerts
-This table provides a brief description of each alert type.
-See [this article](alerts-types.md) for detailed information about each alert type and how to choose which alert type best suits your needs.
+This table provides a brief description of each alert type. For more information about each alert type and how to choose which alert type best suits your needs, see [Types of Azure Monitor alerts](alerts-types.md).
|Alert type|Description| |:|:|
-|[Metric alerts](alerts-types.md#metric-alerts)|Metric alerts evaluate resource metrics at regular intervals. Metrics can be platform metrics, custom metrics, logs from Azure Monitor converted to metrics or Application Insights metrics. Metric alerts have several additional features, such as the ability to apply multiple conditions and dynamic thresholds.|
+|[Metric alerts](alerts-types.md#metric-alerts)|Metric alerts evaluate resource metrics at regular intervals. Metrics can be platform metrics, custom metrics, logs from Azure Monitor converted to metrics, or Application Insights metrics. Metric alerts can also apply multiple conditions and dynamic thresholds.|
|[Log alerts](alerts-types.md#log-alerts)|Log alerts allow users to use a Log Analytics query to evaluate resource logs at a predefined frequency.|
-|[Activity log alerts](alerts-types.md#activity-log-alerts)|Activity log alerts are triggered when a new activity log event occurs that matches defined conditions. **Resource Health** alerts and **Service Health** alerts are activity log alerts that report on your service and resource health.|
+|[Activity log alerts](alerts-types.md#activity-log-alerts)|Activity log alerts are triggered when a new activity log event occurs that matches defined conditions. Resource Health alerts and Service Health alerts are activity log alerts that report on your service and resource health.|
|[Smart detection alerts](alerts-types.md#smart-detection-alerts)|Smart detection on an Application Insights resource automatically warns you of potential performance problems and failure anomalies in your web application. You can migrate smart detection on your Application Insights resource to create alert rules for the different smart detection modules.|
-|[Prometheus alerts (preview)](alerts-types.md#prometheus-alerts-preview)|Prometheus alerts are used for alerting on performance and health of Kubernetes clusters (including AKS). The alert rules are based on PromQL, which is an open source query language.|
+|[Prometheus alerts (preview)](alerts-types.md#prometheus-alerts-preview)|Prometheus alerts are used for alerting on the performance and health of Kubernetes clusters, including Azure Kubernetes Service (AKS). The alert rules are based on PromQL, which is an open-source query language.|
## Out-of-the-box alert rules (preview)
If you don't have alert rules defined for the selected resource, you can [enable
> - AKS resources > - Log Analytics workspaces
-## Azure role-based access control (Azure RBAC) for alerts
+## Azure role-based access control for alerts
You can only access, create, or manage alerts for resources for which you have permissions.
-To create an alert rule, you need to have:
+To create an alert rule, you must have:
+ - Read permission on the target resource of the alert rule.
+ - Write permission on the resource group in which the alert rule is created. If you're creating the alert rule from the Azure portal, the alert rule is created by default in the same resource group in which the target resource resides.
+ - Read permission on any action group associated with the alert rule, if applicable.
-These built-in Azure roles, supported at all Azure Resource Manager scopes, have permissions to and access alerts information and create alert rules:
+These built-in Azure roles, supported at all Azure Resource Manager scopes, can access alerts information and create alert rules:
+ - **Monitoring contributor**: A contributor can create alerts and use resources within their scope.
+ - **Monitoring reader**: A reader can view alerts and read resources within their scope.
-If the target action group or rule location is in a different scope than the two built-in roles, you need to create a user with the appropriate permissions.
+If the target action group or rule location is in a different scope than the two built-in roles, create a user with the appropriate permissions.
-## Alerts and State
+## Alerts and state
-You can configure whether log or metric alerts are stateful or stateless. Activity log alerts are stateless.
+You can configure whether log or metric alerts are stateful or stateless. Activity log alerts are stateless.
- Stateless alerts fire each time the condition is met, even if fired previously. The frequency of notifications for stateless metric alerts differs based on the alert rule's configured frequency:
- - **Alert frequency of less than 5 minutes**: While the condition continues to be met, a notification is sent somewhere between one and six minutes.
- - **Alert frequency of more than 5 minutes**: While the condition continues to be met, a notification is sent between the configured frequency and double the frequency. For example, for an alert rule with a frequency of 15 minutes, a notification is sent somewhere between 15 to 30 minutes.
+ - **Alert frequency of less than 5 minutes**: While the condition continues to be met, a notification is sent sometime between one and six minutes.
+ - **Alert frequency of more than 5 minutes**: While the condition continues to be met, a notification is sent between the configured frequency and double the frequency. For example, for an alert rule with a frequency of 15 minutes, a notification is sent sometime between 15 to 30 minutes.
-- Stateful alerts fire when the condition is met and then don't fire again or trigger any more actions until the conditions are resolved.
-For stateful alerts, the alert is considered resolved when:
+- Stateful alerts fire when the condition is met. They don't fire again or trigger any more actions until the conditions are resolved, as described in this table:
-|Alert type |The alert is resolved when |
-|||
-|Metric alerts|The alert condition isn't met for three consecutive checks.|
-|Log alerts| A log alert is considered resolved when the condition isn't met for a specific time range. The time range differs based on the frequency of the alert:<ul> <li>**1 minute**: The alert condition isn't met for 10 minutes.</li> <li>**5-15 minutes**: The alert condition isn't met for three frequency periods.</li> <li>**15 minutes - 11 hours**: The alert condition isn't met for two frequency periods.</li> <li>**11 to 12 hours**: The alert condition isn't met for one frequency period.</li></ul>|
+ |Alert type |The alert is resolved when |
+ |||
+ |Metric alerts|The alert condition isn't met for three consecutive checks.|
+ |Log alerts| The alert condition isn't met for a specific time range. The time range differs based on the frequency of the alert:<ul> <li>**1 minute**: The alert condition isn't met for 10 minutes.</li> <li>**5 to 15 minutes**: The alert condition isn't met for three frequency periods.</li> <li>**15 minutes to 11 hours**: The alert condition isn't met for two frequency periods.</li> <li>**11 to 12 hours**: The alert condition isn't met for one frequency period.</li></ul>|
-When an alert is considered resolved, the alert rule sends out a resolved notification using webhooks or email, and the monitor state in the Azure portal is set to resolved.
+When an alert is considered resolved, the alert rule sends out a resolved notification by using webhooks or email. The monitor state in the Azure portal is set to **resolved**.
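The log alert resolution tiers in the table above can be sketched as a small function. This is a simplified reading of the table, and the boundary handling between tiers is my assumption, not something the article specifies:

```python
def log_alert_resolve_window(frequency_min: int) -> int:
    """Minutes the condition must stay unmet before a stateful log alert
    is considered resolved, per the frequency tiers described above."""
    if frequency_min == 1:
        return 10                  # 1 minute: fixed 10-minute window
    if frequency_min <= 15:
        return 3 * frequency_min   # 5 to 15 minutes: three frequency periods
    if frequency_min <= 11 * 60:
        return 2 * frequency_min   # 15 minutes to 11 hours: two periods
    return frequency_min           # 11 to 12 hours: one frequency period
```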
## Pricing
-See the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/) for information about pricing.
+For information about pricing, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
## Next steps
azure-monitor It Service Management Connector Secure Webhook Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/it-service-management-connector-secure-webhook-connections.md
Title: IT Service Management Connector - Secure Webhook in Azure Monitor
-description: This article shows you how to connect your ITSM products/services with Secure Webhook in Azure Monitor to centrally monitor and manage ITSM work items.
+ Title: 'IT Service Management Connector: Secure Webhook in Azure Monitor'
+description: This article shows you how to connect your IT Service Management products and services with Secure Webhook in Azure Monitor to centrally monitor and manage ITSM work items.
Last updated 03/30/2022 ms. reviewer: nolavime
ms. reviewer: nolavime
This article shows you how to configure the connection to your IT Service Management (ITSM) product or service by using Secure Webhook.
-Secure Webhook is an updated version of [IT Service Management Connector (ITSMC)](./itsmc-overview.md). Both versions allow you to create work items in an ITSM tool when Azure Monitor sends alerts. The functionality includes metric, log, and Activity Log alerts.
+Secure Webhook is an updated version of [IT Service Management Connector (ITSMC)](./itsmc-overview.md). Both versions allow you to create work items in an ITSM tool when Azure Monitor sends alerts. The functionality includes metric, log, and activity log alerts.
ITSMC uses username and password credentials. Secure Webhook has stronger authentication because it uses Azure Active Directory (Azure AD). Azure AD is Microsoft's cloud-based identity and access management service. It helps users sign in and access internal or external resources. Using Azure AD with ITSM helps to identify Azure alerts (through the Azure AD application ID) that were sent to the external system.
ITSMC uses username and password credentials. Secure Webhook has stronger authen
The Secure Webhook architecture introduces the following new capabilities: * **New action group**: Alerts are sent to the ITSM tool through the Secure Webhook action group, instead of the ITSM action group that ITSMC uses.
-* **Azure AD authentication**: Authentication occurs through Azure AD instead of username/password credentials.
+* **Azure AD authentication**: Authentication occurs through Azure AD instead of username and password credentials.
## Secure Webhook data flow The steps of the Secure Webhook data flow are: 1. Azure Monitor sends an alert that's configured to use Secure Webhook.
-2. The alert payload is sent by a Secure Webhook action to the ITSM tool.
-3. The ITSM application checks with Azure AD if the alert is authorized to enter the ITSM tool.
-4. If the alert is authorized, the application:
-
+1. The alert payload is sent by a Secure Webhook action to the ITSM tool.
+1. The ITSM application checks with Azure AD to determine if the alert is authorized to enter the ITSM tool.
+1. If the alert is authorized, the application:
+ 1. Creates a work item (for example, an incident) in the ITSM tool.
- 2. Binds the ID of the configuration item (CI) to the customer management database (CMDB).
+ 1. Binds the ID of the configuration item to the customer management database.
-![Diagram that shows how the ITSM tool communicates with Azure A D, Azure alerts, and an action group.](media/it-service-management-connector-secure-webhook-connections/secure-export-diagram.png)
+![Diagram that shows how the ITSM tool communicates with Azure Active Directory, Azure alerts, and an action group.](media/it-service-management-connector-secure-webhook-connections/secure-export-diagram.png)
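For orientation, the payload that a Secure Webhook action delivers in step 2 follows the common alert schema. An abbreviated skeleton is shown below; field names follow the common alert schema article, but this particular instance is illustrative, not a real alert:

```json
{
  "schemaId": "azureMonitorCommonAlertSchema",
  "data": {
    "essentials": {
      "alertRule": "example-alert-rule",
      "severity": "Sev3",
      "monitorCondition": "Fired",
      "monitoringService": "Platform"
    },
    "alertContext": { }
  }
}
```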
## Benefits of Secure Webhook The main benefits of the integration are: * **Better authentication**: Azure AD provides more secure authentication without the timeouts that commonly occur in ITSMC.
-* **Alerts resolved in the ITSM tool**: Metric alerts implement "fired" and "resolved" states. When the condition is met, the alert state is "fired." When condition is not met anymore, the alert state is "resolved." In ITSMC, alerts can't be resolved automatically. With Secure Webhook, the resolved state flows to the ITSM tool and so is updated automatically.
-* **[Common alert schema](./alerts-common-schema.md)**: In ITSMC, the schema of the alert payload differs based on the alert type. In Secure Webhook, there's a common schema for all alert types. This common schema contains the CI for all alert types. All alert types will be able to bind their CI with the CMDB.
+* **Alerts resolved in the ITSM tool**: Metric alerts implement **fired** and **resolved** states. When the condition is met, the alert state is fired. When the condition isn't met anymore, the alert state is resolved. In ITSMC, alerts can't be resolved automatically. With Secure Webhook, the resolved state flows to the ITSM tool, so it's updated automatically.
+* [Common alert schema](./alerts-common-schema.md): In ITSMC, the schema of the alert payload differs based on the alert type. In Secure Webhook, there's a common schema for all alert types. This common schema contains the configuration item for all alert types. All alert types will be able to bind their configuration item with the customer management database.
## Next steps
-* [Create ITSM work items from Azure alerts](./itsmc-overview.md)
+[Create ITSM work items from Azure alerts](./itsmc-overview.md)
azure-monitor Itsmc Secure Webhook Connections Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-secure-webhook-connections-servicenow.md
Title: IT Service Management Connector - Secure Webhook in Azure Monitor - Configuration with ServiceNow
-description: This article shows you how to connect your ITSM products/services with ServiceNow on Secure Webhook in Azure Monitor.
+ Title: 'ITSM Connector: Configure ServiceNow for Secure Webhook'
+description: This article shows you how to connect your IT Service Management products and services with ServiceNow and Secure Webhook in Azure Monitor.
Last updated 03/30/2022 - # Connect ServiceNow to Azure Monitor
-The following sections provide details about how to connect your ServiceNow product and Secure Webhook in Azure.
+The following sections provide information about how to connect your ServiceNow product and Secure Webhook in Azure.
## Prerequisites Ensure that you've met the following prerequisites:
-* Azure AD is registered.
-* You have the supported version of The ServiceNow Event Management - ITOM (version New York or later).
-* [Application](https://store.servicenow.com/sn_appstore_store.do#!/store/application/ac4c9c57dbb1d090561b186c1396191a/2.2.0) installed on ServiceNow instance.
+* Azure Active Directory is registered.
+* You have the supported version of ServiceNow Event Management - ITOM (version New York or later).
+* The [application](https://store.servicenow.com/sn_appstore_store.do#!/store/application/ac4c9c57dbb1d090561b186c1396191a/2.2.0) is installed on the ServiceNow instance.
## Configure the ServiceNow connection
-1. Use the link https://(instance name).service-now.com/api/sn_em_connector/em/inbound_event?source=azuremonitor the URI for the secure Webhook definition.
+1. Use the link `https://(instance name).service-now.com/api/sn_em_connector/em/inbound_event?source=azuremonitor` as the URI for the Secure Webhook definition.
-2. Follow the instructions according to the version:
+1. Follow the instructions according to the version:
* [Rome](https://docs.servicenow.com/bundle/rome-it-operations-management/page/product/event-management/concept/azure-integration.html) * [Quebec](https://docs.servicenow.com/bundle/quebec-it-operations-management/page/product/event-management/concept/azure-integration.html) * [Paris](https://docs.servicenow.com/bundle/paris-it-operations-management/page/product/event-management/concept/azure-integration.html)
azure-monitor Availability Standard Tests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-standard-tests.md
Last updated 11/15/2022
# Standard test
-Standard tests are a single request test that is similar to the [URL ping test](monitor-web-app-availability.md) but more advanced. In addition to validating whether an endpoint is responding and measuring the performance, Standard tests also includes SSL certificate validity, proactive lifetime check, HTTP request verb (for example `GET`,`HEAD`,`POST`, etc.), custom headers, and custom data associated with your HTTP request.
+A Standard test is a single request test that's similar to the [URL ping test](monitor-web-app-availability.md) but more advanced. In addition to validating whether an endpoint is responding and measuring the performance, Standard tests also include SSL certificate validity, proactive lifetime check, HTTP request verb (for example, `GET`,`HEAD`, and `POST`), custom headers, and custom data associated with your HTTP request.
-To create an availability test, you need use an existing Application Insights resource or [create an Application Insights resource](create-new-resource.md).
+To create an availability test, you must use an existing Application Insights resource or [create an Application Insights resource](create-new-resource.md).
> [!TIP]
-> If you are currently using other availability tests, like URL ping tests, you may add Standard tests along side the others. If you would like to use Standard tests instead of one of your other tests, add a Standard test and delete your old test.
+> If you're currently using other availability tests, like URL ping tests, you might add Standard tests alongside the others. If you want to use Standard tests instead of one of your other tests, add a Standard test and delete your old test.
## Create a Standard test
-To create a standard test:
+To create a Standard test:
1. Go to your Application Insights resource and select the **Availability** pane. 1. Select **Add Standard test**.
-
- :::image type="content" source="./media/availability-standard-test/standard-test.png" alt-text="Screenshot of Availability pane with add standard test tab open." lightbox="./media/availability-standard-test/standard-test.png":::
-1. Input your test name, URL and other settings (explanation below), then select **Create**.
+ :::image type="content" source="./media/availability-standard-test/standard-test.png" alt-text="Screenshot that shows the Availability pane with the Add Standard test tab open." lightbox="./media/availability-standard-test/standard-test.png":::
+1. Input your test name, URL, and other settings that are described in the following table. Then select **Create**.
-|Setting | Explanation |
-|--|-|
-|**URL** | The URL can be any web page you want to test, but it must be visible from the public internet. The URL can include a query string. So, for example, you can exercise your database a little. If the URL resolves to a redirect, we follow it up to 10 redirects.|
-|**Parse dependent requests**| Test requests images, scripts, style files, and other files that are part of the web page under test. The recorded response time includes the time taken to get these files. The test fails if any of these resources can't be successfully downloaded within the timeout for the whole test. If the option isn't checked, the test only requests the file at the URL you specified. Enabling this option results in a stricter check. The test could fail for cases, which may not be noticeable when manually browsing the site. |
-|**Enable retries**| When the test fails, it's retried after a short interval. A failure is reported only if three successive attempts fail. Subsequent tests are then performed at the usual test frequency. Retry is temporarily suspended until the next success. This rule is applied independently at each test location. **We recommend this option**. On average, about 80% of failures disappear on retry.|
-| **SSL certificate validation test** | You can verify the SSL certificate on your website to make sure it's correctly installed, valid, trusted, and doesn't give any errors to any of your users. |
-| **Proactive lifetime check** | This setting enables you to define a set time period before your SSL certificate expires. Once it expires, your test will fail. |
-|**Test frequency**| Sets how often the test is run from each test location. With a default frequency of five minutes and five test locations, your site is tested on average every minute.|
-|**Test locations**| The places from where our servers send web requests to your URL. **Our minimum number of recommended test locations is five** to ensure that you can distinguish problems in your website from network issues. You can select up to 16 locations.|
-| **Custom headers** | Key value pairs that define the operating parameters. |
-| **HTTP request verb** | Indicate what action you would like to take with your request. |
-| **Request body** | Custom data associated with your HTTP request. You can upload your own files, type in your content, or disable this feature. |
+ |Setting | Description |
+ |--|-|
+ |**URL** | The URL can be any webpage you want to test, but it must be visible from the public internet. The URL can include a query string. So, for example, you can exercise your database a little. If the URL resolves to a redirect, we follow it up to 10 redirects.|
+ |**Parse dependent requests**| Test requests images, scripts, style files, and other files that are part of the webpage under test. The recorded response time includes the time taken to get these files. The test fails if any of these resources can't be successfully downloaded within the timeout for the whole test. If the option isn't selected, the test only requests the file at the URL you specified. Enabling this option results in a stricter check. The test could fail for cases that might not be noticeable when you manually browse the site. |
+ |**Enable retries**| When the test fails, it's retried after a short interval. A failure is reported only if three successive attempts fail. Subsequent tests are then performed at the usual test frequency. Retry is temporarily suspended until the next success. This rule is applied independently at each test location. *We recommend this option*. On average, about 80% of failures disappear on retry.|
+ | **SSL certificate validation test** | You can verify the SSL certificate on your website to make sure it's correctly installed, valid, trusted, and doesn't give any errors to any of your users. |
+ | **Proactive lifetime check** | This setting enables you to define a set time period before your SSL certificate expires. After it expires, your test will fail. |
+ |**Test frequency**| Sets how often the test is run from each test location. With a default frequency of five minutes and five test locations, your site is tested on average every minute.|
+ |**Test locations**| The places from where our servers send web requests to your URL. *Our minimum number of recommended test locations is five* to ensure that you can distinguish problems in your website from network issues. You can select up to 16 locations.|
+ | **Custom headers** | Key-value pairs that define the operating parameters. |
+ | **HTTP request verb** | Indicate what action you want to take with your request. |
+ | **Request body** | Custom data associated with your HTTP request. You can upload your own files, enter your content, or disable this feature. |
## Success criteria
-|Setting| Explanation|
+|Setting| Description|
|-||
-| **Test timeout** |Decrease this value to be alerted about slow responses. The test is counted as a failure if the responses from your site haven't been received within this period. If you selected **Parse dependent requests**, then all the images, style files, scripts, and other dependent resources must have been received within this period.|
-| **HTTP response** | The returned status code that is counted as a success. 200 is the code that indicates that a normal web page has been returned.|
-| **Content match** | A string, like "Welcome!" We test that an exact case-sensitive match occurs in every response. It must be a plain string, without wildcards. Don't forget that if your page content changes you might have to update it. **Only English characters are supported with content match** |
+| **Test timeout** |Decrease this value to be alerted about slow responses. The test is counted as a failure if the responses from your site haven't been received within this period. If you selected **Parse dependent requests**, all the images, style files, scripts, and other dependent resources must have been received within this period.|
+| **HTTP response** | The returned status code that's counted as a success. The number 200 is the code that indicates that a normal webpage has been returned.|
+| **Content match** | A string, like "Welcome!" We test that an exact case-sensitive match occurs in every response. It must be a plain string, without wildcards. Don't forget that if your page content changes, you might have to update it. *Only English characters are supported with content match.* |
## Alerts
-|Setting| Explanation|
+|Setting| Description|
|-|-|
-|**Near-realtime** | We recommend using Near-realtime alerts. Configuring this type of alert is done after your availability test is created. |
+|**Near real time** | We recommend using near real time alerts. Configuring this type of alert is done after your availability test is created. |
|**Alert location threshold**|We recommend a minimum of 3/5 locations. The optimal relationship between alert location threshold and the number of test locations is **alert location threshold** = **number of test locations - 2, with a minimum of five test locations.**|

## Location population tags
-The following population tags can be used for the geo-location attribute when deploying an availability URL ping test using Azure Resource Manager.
+You can use the following population tags for the geo-location attribute when you deploy an availability URL ping test by using Azure Resource Manager.
### Azure Government
-| Display Name | Population Name |
+| Display name | Population name |
|-|-|
| USGov Virginia | usgov-va-azr |
| USGov Arizona | usgov-phx-azr |
The following population tags can be used for the geo-location attribute when de
### Azure China
-| Display Name | Population Name |
+| Display name | Population name |
|-|-|
| China East | mc-cne-azr |
| China East 2 | mc-cne2-azr |
The following population tags can be used for the geo-location attribute when de
#### Azure
-| Display Name | Population Name |
+| Display name | Population name |
|-|-|
| Australia East | emea-au-syd-edge |
| Brazil South | latam-br-gru-edge |
The following population tags can be used for the geo-location attribute when de
## See your availability test results
-Availability test results can be visualized with both line and scatter plot views.
+Availability test results can be visualized with both **Line** and **Scatter Plot** views.
After a few minutes, select **Refresh** to see your test results.
-The scatterplot view shows samples of the test results that have diagnostic test-step detail in them. The test engine stores diagnostic detail for tests that have failures. For successful tests, diagnostic details are stored for a subset of the executions. Hover over any of the green/red dots to see the test, test name, and location.
+The **Scatter Plot** view shows samples of the test results that have diagnostic test-step detail in them. The test engine stores diagnostic detail for tests that have failures. For successful tests, diagnostic details are stored for a subset of the executions. Hover over any of the green/red dots to see the test, test name, and location.
-Select a particular test, location, or reduce the time period to see more results around the time period of interest. Use Search Explorer to see results from all executions, or use Analytics queries to run custom reports on this data.
+Select a particular test or location. Or you can reduce the time period to see more results around the time period of interest. Use Search Explorer to see results from all executions. Or you can use Log Analytics queries to run custom reports on this data.
## Inspect and edit tests
-To edit, temporarily disable, or delete a test, select the ellipses next to a test name. It may take up to 20 minutes for configuration changes to propagate to all test agents after a change is made.
+To edit, temporarily disable, or delete a test, select the ellipses next to a test name. It might take up to 20 minutes for configuration changes to propagate to all test agents after a change is made.
You might want to disable availability tests or the alert rules associated with them while you're performing maintenance on your service.
You might want to disable availability tests or the alert rules associated with
Select a red dot. From an availability test result, you can see the transaction details across all components. Here you can:
-* Review the troubleshooting report to determine what may have caused your test to fail but your application is still available.
+* Review the troubleshooting report to determine what might have caused your test to fail but your application is still available.
* Inspect the response received from your server.
* Diagnose failure with correlated server-side telemetry collected while processing the failed availability test.
* Log an issue or work item in Git or Azure Boards to track the problem. The bug will contain a link to this event.
* Open the web test result in Visual Studio.
-To learn more about the end to end transaction diagnostics experience, visit the [transaction diagnostics documentation](./transaction-diagnostics.md).
+To learn more about the end-to-end transaction diagnostics experience, see the [transaction diagnostics documentation](./transaction-diagnostics.md).
-Select on the exception row to see the details of the server-side exception that caused the synthetic availability test to fail. You can also get the [debug snapshot](./snapshot-debugger.md) for richer code level diagnostics.
+Select the exception row to see the details of the server-side exception that caused the synthetic availability test to fail. You can also get the [debug snapshot](./snapshot-debugger.md) for richer code-level diagnostics.
-In addition to the raw results, you can also view two key Availability metrics in [Metrics Explorer](../essentials/metrics-getting-started.md):
+In addition to the raw results, you can also view two key availability metrics in [metrics explorer](../essentials/metrics-getting-started.md):
-* Availability: Percentage of the tests that were successful, across all test executions.
-* Test Duration: Average test duration across all test executions.
+* **Availability**: Percentage of the tests that were successful across all test executions.
+* **Test Duration**: Average test duration across all test executions.
## Next steps
-* [Availability Alerts](availability-alerts.md)
+* [Availability alerts](availability-alerts.md)
* [Multi-step web tests](availability-multistep.md)
* [Troubleshooting](troubleshoot-availability.md)
-* [Web Tests Azure Resource Manager template](/azure/templates/microsoft.insights/webtests?tabs=json)
+* [Web tests Azure Resource Manager template](/azure/templates/microsoft.insights/webtests?tabs=json)
azure-monitor Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-ad-authentication.md
# Azure AD authentication for Application Insights
-Application Insights now supports [Azure Active Directory (Azure AD) authentication](../../active-directory/authentication/overview-authentication.md#what-is-azure-active-directory-authentication). By using Azure AD, you can ensure that only authenticated telemetry is ingested in your Application Insights resources.
+Application Insights now supports [Azure Active Directory (Azure AD) authentication](../../active-directory/authentication/overview-authentication.md#what-is-azure-active-directory-authentication). By using Azure AD, you can ensure that only authenticated telemetry is ingested in your Application Insights resources.
-Using various authentication systems can be cumbersome and risky because it's difficult to manage credentials at scale. You can now choose to [opt-out of local authentication](#disable-local-authentication) to ensure only telemetry exclusively authenticated using [Managed Identities](../../active-directory/managed-identities-azure-resources/overview.md) and [Azure Active Directory](../../active-directory/fundamentals/active-directory-whatis.md) is ingested in your resource. This feature is a step to enhance the security and reliability of the telemetry used to make both critical operational ([alerting](../alerts/alerts-overview.md#what-are-azure-monitor-alerts), [autoscale](../autoscale/autoscale-overview.md#overview-of-autoscale-in-microsoft-azure), etc.) and business decisions.
+Using various authentication systems can be cumbersome and risky because it's difficult to manage credentials at scale. You can now choose to [opt out of local authentication](#disable-local-authentication) to ensure only telemetry exclusively authenticated by using [managed identities](../../active-directory/managed-identities-azure-resources/overview.md) and [Azure AD](../../active-directory/fundamentals/active-directory-whatis.md) is ingested in your resource. This feature is a step to enhance the security and reliability of the telemetry used to make critical operational ([alerting](../alerts/alerts-overview.md#what-are-azure-monitor-alerts) and [autoscale](../autoscale/autoscale-overview.md#overview-of-autoscale-in-microsoft-azure)) and business decisions.
## Prerequisites
-The following are prerequisites to enable Azure AD authenticated ingestion.
+The following prerequisites enable Azure AD authenticated ingestion. You need to:
-- Must be in public cloud
-- Familiarity with:
- - [Managed identity](../../active-directory/managed-identities-azure-resources/overview.md).
+- Be in the public cloud.
+- Have familiarity with:
+ - [Managed identity](../../active-directory/managed-identities-azure-resources/overview.md).
- [Service principal](../../active-directory/develop/howto-create-service-principal-portal.md).
- - [Assigning Azure roles](../../role-based-access-control/role-assignments-portal.md).
-- You have an "Owner" role to the resource group to grant access using [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
+ - [Assigning Azure roles](../../role-based-access-control/role-assignments-portal.md).
+- Have an Owner role to the resource group to grant access by using [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
- Understand the [unsupported scenarios](#unsupported-scenarios).

## Unsupported scenarios
-The following SDK's and features are unsupported for use with Azure AD authenticated ingestion.
+The following SDKs and features are unsupported for use with Azure AD authenticated ingestion:
-- [Application Insights Java 2.x SDK](deprecated-java-2x.md#monitor-dependencies-caught-exceptions-and-method-execution-times-in-java-web-apps)<br>
- Azure AD authentication is only available for Application Insights Java Agent >=3.2.0.
-- [ApplicationInsights JavaScript Web SDK](javascript.md).
+- [Application Insights Java 2.x SDK](deprecated-java-2x.md#monitor-dependencies-caught-exceptions-and-method-execution-times-in-java-web-apps).<br>
+ Azure AD authentication is only available for Application Insights Java Agent greater than or equal to 3.2.0.
+- [ApplicationInsights JavaScript web SDK](javascript.md).
- [Application Insights OpenCensus Python SDK](opencensus-python.md) with Python version 3.4 and 3.5.
-
-- [Certificate/secret based Azure AD](../../active-directory/authentication/active-directory-certificate-based-authentication-get-started.md) isn't recommended for production. Use Managed Identities instead.
-- On-by-default Codeless monitoring (for languages) for App Service, VM/Virtual machine scale sets, Azure Functions etc.
+- [Certificate/secret-based Azure AD](../../active-directory/authentication/active-directory-certificate-based-authentication-get-started.md) isn't recommended for production. Use managed identities instead.
+- On-by-default codeless monitoring (for languages) for Azure App Service, Azure Virtual Machines/Azure Virtual Machine Scale Sets, and Azure Functions.
- [Availability tests](availability-overview.md).
- [Profiler](profiler-overview.md).
-## Configuring and enabling Azure AD based authentication
+## Configure and enable Azure AD-based authentication
-1. Create an identity, if you already don't have one, using either managed identity or service principal:
+1. If you don't already have an identity, create one by using either a managed identity or a service principal.
- 1. Using managed identity (Recommended):
+ 1. We recommend using a managed identity:
- [Setup a managed identity for your Azure Service](../../active-directory/managed-identities-azure-resources/services-support-managed-identities.md) (VM, App Service etc.).
+ [Set up a managed identity for your Azure service](../../active-directory/managed-identities-azure-resources/services-support-managed-identities.md) (Virtual Machines or App Service).
- 1. Using service principal (Not Recommended):
+ 1. We don't recommend using a service principal:
For more information on how to create an Azure AD application and service principal that can access resources, see [Create a service principal](../../active-directory/develop/howto-create-service-principal-portal.md).
-1. Assign role to the Azure Service.
+1. Assign a role to the Azure service.
- Follow the steps in [Assign Azure roles](../../role-based-access-control/role-assignments-portal.md) to add the "Monitoring Metrics Publisher" role from the target Application Insights resource to the Azure resource from which the telemetry is sent.
+ Follow the steps in [Assign Azure roles](../../role-based-access-control/role-assignments-portal.md) to add the Monitoring Metrics Publisher role from the target Application Insights resource to the Azure resource from which the telemetry is sent.
> [!NOTE]
- > Although role "Monitoring Metrics Publisher" says metrics, it will publish all telemetry to the App Insights resource.
+ > Although the Monitoring Metrics Publisher role says "metrics," it will publish all telemetry to the Application Insights resource.
-1. Follow the configuration guidance per language below.
+1. Follow the configuration guidance for the language you're using.
### [.NET](#tab/net)
The following SDK's and features are unsupported for use with Azure AD authentic
Application Insights .NET SDK supports the credential classes provided by [Azure Identity](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/identity/Azure.Identity#credential-classes).

-- `DefaultAzureCredential` is recommended for local development.
-- `ManagedIdentityCredential` is recommended for system-assigned and user-assigned managed identities.
+- We recommend `DefaultAzureCredential` for local development.
+- We recommend `ManagedIdentityCredential` for system-assigned and user-assigned managed identities.
- For system-assigned, use the default constructor without parameters.
- - For user-assigned, provide the clientId to the constructor.
-- `ClientSecretCredential` is recommended for service principals.
- - Provide the tenantId, clientId, and clientSecret to the constructor.
+ - For user-assigned, provide the client ID to the constructor.
+- We recommend `ClientSecretCredential` for service principals.
+ - Provide the tenant ID, client ID, and client secret to the constructor.
-Below is an example of manually creating and configuring a `TelemetryConfiguration` using .NET:
+The following example shows how to manually create and configure `TelemetryConfiguration` by using .NET:
```csharp
TelemetryConfiguration.Active.ConnectionString = "InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://xxxx.applicationinsights.azure.com/";
var credential = new DefaultAzureCredential();
TelemetryConfiguration.Active.SetAzureTokenCredential(credential);
```
-Below is an example of configuring the `TelemetryConfiguration` using .NET Core:
+The following example shows how to configure `TelemetryConfiguration` by using .NET Core:
+
```csharp
services.Configure<TelemetryConfiguration>(config => {
services.AddApplicationInsightsTelemetry(new ApplicationInsightsServiceOptions
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]

### [Node.js](#tab/nodejs)
-
+
> [!NOTE]
> Support for Azure AD in the Application Insights Node.js SDK is included starting with [version 2.1.0-beta.1](https://www.npmjs.com/package/applicationinsights/v/2.1.0-beta.1).
appInsights.defaultClient.config.aadTokenCredential = credential;
### [Java](#tab/java)

> [!NOTE]
-> Support for Azure AD in the Application Insights Java agent is included starting with [Java 3.2.0-BETA](https://github.com/microsoft/ApplicationInsights-Java/releases/tag/3.2.0-BETA).
+> Support for Azure AD in the Application Insights Java agent is included starting with [Java 3.2.0-BETA](https://github.com/microsoft/ApplicationInsights-Java/releases/tag/3.2.0-BETA).
-1. [Configure your application with the Java agent.](java-in-process-agent.md#get-started)
+1. [Configure your application with the Java agent](java-in-process-agent.md#get-started).
> [!IMPORTANT]
- > Use the full connection string which includes "IngestionEndpoint" while configuring your app with Java agent. For example `InstrumentationKey=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX;IngestionEndpoint=https://XXXX.applicationinsights.azure.com/`.
+ > Use the full connection string, which includes `IngestionEndpoint`, when you configure your app with the Java agent. For example, use `InstrumentationKey=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX;IngestionEndpoint=https://XXXX.applicationinsights.azure.com/`.
- > [!NOTE]
- > For more information about migrating from 2.X SDK to 3.X Java agent, see [Upgrading from Application Insights Java 2.x SDK](java-standalone-upgrade-from-2x.md).
+1. Add the JSON configuration to the *ApplicationInsights.json* configuration file depending on the authentication you're using. We recommend using managed identities.
-1. Add the json configuration to ApplicationInsights.json configuration file depending on the authentication being used by you. We recommend users to use managed identities.
+> [!NOTE]
+> For more information about migrating from the 2.X SDK to the 3.X Java agent, see [Upgrading from Application Insights Java 2.x SDK](java-standalone-upgrade-from-2x.md).
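For orientation, a minimal *ApplicationInsights.json* that enables managed-identity authentication might look like the following sketch. The field names (`authentication`, `enabled`, `type`) and the `SAMI` type value are assumptions to verify against the Java agent configuration reference for your agent version:

```JSON
{
  "connectionString": "InstrumentationKey=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX;IngestionEndpoint=https://XXXX.applicationinsights.azure.com/",
  "authentication": {
    "enabled": true,
    "type": "SAMI"
  }
}
```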
-#### System-assigned Managed Identity
+#### System-assigned managed identity
-Below is an example of how to configure Java agent to use system-assigned managed identity for authentication with Azure AD.
+The following example shows how to configure the Java agent to use system-assigned managed identity for authentication with Azure AD.
```JSON
{
Below is an example of how to configure Java agent to use system-assigned manage
#### User-assigned managed identity
-Below is an example of how to configure Java agent to use user-assigned managed identity for authentication with Azure AD.
+The following example shows how to configure the Java agent to use user-assigned managed identity for authentication with Azure AD.
```JSON
{
Below is an example of how to configure Java agent to use user-assigned managed
}
}
```

#### Client secret
-Below is an example of how to configure Java agent to use service principal for authentication with Azure AD. We recommend users to use this type of authentication only during development. The ultimate goal of adding authentication feature is to eliminate secrets.
+The following example shows how to configure the Java agent to use a service principal for authentication with Azure AD. We recommend using this type of authentication only during development. The ultimate goal of adding the authentication feature is to eliminate secrets.
```JSON
{
Below is an example of how to configure Java agent to use service principal for
}
}
```

[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]

### [Python](#tab/python)

> [!NOTE]
-> Azure AD authentication is only available for Python v2.7, v3.6 and v3.7. Support for Azure AD in the Application Insights Opencensus Python SDK
+> Azure AD authentication is only available for Python v2.7, v3.6, and v3.7. Support for Azure AD in the Application Insights Opencensus Python SDK
is included starting with beta version [opencensus-ext-azure 1.1b0](https://pypi.org/project/opencensus-ext-azure/1.1b0/).
-Construct the appropriate [credentials](/python/api/overview/azure/identity-readme#credentials) and pass it into the constructor of the Azure Monitor exporter. Make sure your connection string is set up with the instrumentation key and ingestion endpoint of your resource.
+Construct the appropriate [credentials](/python/api/overview/azure/identity-readme#credentials) and pass them into the constructor of the Azure Monitor exporter. Make sure your connection string is set up with the instrumentation key and ingestion endpoint of your resource.
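As an illustrative sanity check (a sketch only; the exporters parse the connection string for you), the connection string is a semicolon-separated list of `key=value` pairs containing the standard `InstrumentationKey` and `IngestionEndpoint` fields:

```python
def parse_connection_string(conn_str):
    """Split an Application Insights connection string into its key/value parts."""
    parts = {}
    for segment in conn_str.split(";"):
        if segment:  # skip empty segments from trailing semicolons
            key, _, value = segment.partition("=")
            parts[key] = value
    return parts

cs = ("InstrumentationKey=00000000-0000-0000-0000-000000000000;"
      "IngestionEndpoint=https://xxxx.applicationinsights.azure.com/")
parsed = parse_connection_string(cs)
print(parsed["IngestionEndpoint"])  # prints: https://xxxx.applicationinsights.azure.com/
```

If either field is missing from the result, the exporter can't be pointed at the right ingestion endpoint for Azure AD authenticated telemetry.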
-Below are the following types of authentication that are supported by the `Opencensus` Azure Monitor exporters. Managed identities are recommended in production environments.
+The following types of authentication are supported by the `Opencensus` Azure Monitor exporters. We recommend using managed identities in production environments.
#### System-assigned managed identity
tracer = Tracer(
## Disable local authentication
-After the Azure AD authentication is enabled, you can choose to disable local authentication. This configuration will allow you to ingest telemetry authenticated exclusively by Azure AD and impacts data access (for example, through API Keys).
+After the Azure AD authentication is enabled, you can choose to disable local authentication. This configuration allows you to ingest telemetry authenticated exclusively by Azure AD and affects data access (for example, through API keys).
-You can disable local authentication by using the Azure portal, Azure Policy, or programmatically.
+You can disable local authentication by using the Azure portal or Azure Policy or programmatically.
### Azure portal
-1. From your Application Insights resource, select **Properties** under the *Configure* heading in the left-hand menu. Then select **Enabled (click to change)** if the local authentication is enabled.
+1. From your Application Insights resource, select **Properties** under the **Configure** heading in the menu on the left. Select **Enabled (click to change)** if the local authentication is enabled.
- :::image type="content" source="./media/azure-ad-authentication/enabled.png" alt-text="Screenshot of Properties under the *Configure* selected and enabled (select to change) local authentication button.":::
+ :::image type="content" source="./media/azure-ad-authentication/enabled.png" alt-text="Screenshot that shows Properties under the Configure section and the Enabled (click to change) local authentication button.":::
1. Select **Disabled** and apply changes.
- :::image type="content" source="./media/azure-ad-authentication/disable.png" alt-text="Screenshot of local authentication with the enabled/disabled button highlighted.":::
+ :::image type="content" source="./media/azure-ad-authentication/disable.png" alt-text="Screenshot that shows local authentication with the Enabled/Disabled button.":::
-1. Once your resource has disabled local authentication, you'll see the corresponding info in the **Overview** pane.
+1. After your resource has disabled local authentication, you'll see the corresponding information in the **Overview** pane.
- :::image type="content" source="./media/azure-ad-authentication/overview.png" alt-text="Screenshot of overview tab with the disabled (select to change) highlighted.":::
+ :::image type="content" source="./media/azure-ad-authentication/overview.png" alt-text="Screenshot that shows the Overview tab with the Disabled (click to change) local authentication button.":::
-### Azure Policy
+### Azure Policy
-Azure Policy for 'DisableLocalAuth' will deny from users to create a new Application Insights resource without this property setting to 'true'. The policy name is 'Application Insights components should block non-AAD auth ingestion'.
+Azure Policy for `DisableLocalAuth` will deny users the ability to create a new Application Insights resource without this property set to `true`. The policy name is `Application Insights components should block non-AAD auth ingestion`.
To apply this policy definition to your subscription, [create a new policy assignment and assign the policy](../../governance/policy/assign-policy-portal.md).
-Below is the policy template definition:
+The following example shows the policy template definition:
+
```JSON
{
  "properties": {
Below is the policy template definition:
}
```
-### Programmatic enablement
+### Programmatic enablement
-Property `DisableLocalAuth` is used to disable any local authentication on your Application Insights resource. When set to `true`, this property enforces that Azure AD authentication must be used for all access.
+The property `DisableLocalAuth` is used to disable any local authentication on your Application Insights resource. When this property is set to `true`, it enforces that Azure AD authentication must be used for all access.
-Below is an example Azure Resource Manager template that you can use to create a workspace-based Application Insights resource with local auth disabled.
+The following example shows the Azure Resource Manager template you can use to create a workspace-based Application Insights resource with `LocalAuth` disabled.
-```JSON
+```JSON
{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
Below is an example Azure Resource Manager template that you can use to create a
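The essential piece of such a template is the `DisableLocalAuth` property on the `Microsoft.Insights/components` resource. The following fragment is a hedged sketch; verify the property casing and API version against the Microsoft.Insights/components template reference before using it:

```JSON
{
  "type": "Microsoft.Insights/components",
  "apiVersion": "2020-02-02",
  "name": "[parameters('name')]",
  "location": "[parameters('regionId')]",
  "kind": "web",
  "properties": {
    "Application_Type": "web",
    "WorkspaceResourceId": "[parameters('workspaceResourceId')]",
    "DisableLocalAuth": true
  }
}
```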
## Troubleshooting
-This section provides distinct troubleshooting scenarios and steps that users can take to resolve any issue before they raise a support ticket.
+This section provides distinct troubleshooting scenarios and steps that you can take to resolve an issue before you raise a support ticket.
### Ingestion HTTP errors
-The ingestion service will return specific errors, regardless of the SDK language. Network traffic can be collected using a tool such as Fiddler. You should filter traffic to the IngestionEndpoint set in the Connection String.
+The ingestion service will return specific errors, regardless of the SDK language. Network traffic can be collected by using a tool such as Fiddler. You should filter traffic to the ingestion endpoint set in the connection string.
-#### HTTP/1.1 400 Authentication not supported
+#### HTTP/1.1 400 Authentication not supported
-This error indicates that the resource has been configured for Azure AD only. The SDK hasn't been correctly configured and is sending to the incorrect API.
+This error indicates that the resource is configured for Azure AD only. The SDK hasn't been correctly configured and is sending to the incorrect API.
> [!NOTE]
-> "v2/track" does not support Azure AD. When the SDK is correctly configured, telemetry will be sent to "v2.1/track".
+> "v2/track" doesn't support Azure AD. When the SDK is correctly configured, telemetry will be sent to "v2.1/track".
-Next steps should be to review the SDK configuration.
+Next, you should review the SDK configuration.
#### HTTP/1.1 401 Authorization required
-This error indicates that the SDK has been correctly configured, but was unable to acquire a valid token. This error may indicate an issue with Azure Active Directory.
+This error indicates that the SDK is correctly configured but it's unable to acquire a valid token. This error might indicate an issue with Azure AD.
-Next steps should be to identify exceptions in the SDK logs or network errors from Azure Identity.
+Next, you should identify exceptions in the SDK logs or network errors from Azure Identity.
-#### HTTP/1.1 403 Unauthorized
+#### HTTP/1.1 403 Unauthorized
-This error indicates that the SDK has been configured with credentials that haven't been given permission to the Application Insights resource or subscription.
+This error indicates that the SDK is configured with credentials that haven't been given permission to the Application Insights resource or subscription.
-Next steps should be to review the Application Insights resource's access control. The SDK must be configured with a credential that has been granted the "Monitoring Metrics Publisher" role.
+Next, you should review the Application Insights resource's access control. The SDK must be configured with a credential that's been granted the Monitoring Metrics Publisher role.
-### Language specific troubleshooting
+### Language-specific troubleshooting
### [.NET](#tab/net)
-#### Event Source
+#### Event source
-The Application Insights .NET SDK emits error logs using event source. To learn more about collecting event source logs visit, [Troubleshooting no data- collect logs with PerfView](asp-net-troubleshoot-no-data.md#PerfView).
+The Application Insights .NET SDK emits error logs by using the event source. To learn more about collecting event source logs, see [Troubleshooting no data - collect logs with PerfView](asp-net-troubleshoot-no-data.md#PerfView).
-If the SDK fails to get a token, the exception message is logged as:
-`Failed to get AAD Token. Error message: `
+If the SDK fails to get a token, the exception message is logged as
+`Failed to get AAD Token. Error message: `.
### [Node.js](#tab/nodejs)
-Internal logs could be turned on using the following setup. Once enabled, error logs will be shown in the console including any error related to Azure AD integration. For example, failure to generate the token when wrong credentials are supplied or errors when ingestion endpoint fails to authenticate using the provided credentials.
+Internal logs could be turned on by using the following setup. After they're enabled, error logs will be shown in the console, including any error related to Azure AD integration. Examples include failure to generate the token when the wrong credentials are supplied or errors when the ingestion endpoint fails to authenticate by using the provided credentials.
```javascript
let appInsights = require("applicationinsights");
appInsights.setup("InstrumentationKey=00000000-0000-0000-0000-000000000000;Inges
#### HTTP traffic
-You can inspect network traffic using a tool like Fiddler. To enable the traffic to tunnel through fiddler either add the following proxy settings in configuration file:
+You can inspect network traffic by using a tool like Fiddler. To enable the traffic to tunnel through Fiddler, either add the following proxy settings in the configuration file:
```JSON
"proxy": {
You can inspect network traffic using a tool like Fiddler. To enable the traffic
}
```
-Or add following jvm args while running your application:`-Djava.net.useSystemProxies=true -Dhttps.proxyHost=localhost -Dhttps.proxyPort=8888`
-
-If Azure AD is enabled in the agent, outbound traffic will include the HTTP Header "Authorization".
-
+Or add the following JVM args while running your application: `-Djava.net.useSystemProxies=true -Dhttps.proxyHost=localhost -Dhttps.proxyPort=8888`
-#### 401 Unauthorized
+If Azure AD is enabled in the agent, outbound traffic will include the HTTP header `Authorization`.
-If the following WARN message is seen in the log file `WARN c.m.a.TelemetryChannel - Failed to send telemetry with status code: 401, please check your credentials`, it indicates the agent wasn't successful in sending telemetry. You've probably not enabled Azure AD authentication on the agent, but your Application Insights resource is configured with `DisableLocalAuth: true`. Make sure you're passing in a valid credential and that it has permission to access your Application Insights resource.
+#### 401 Unauthorized
+If the following WARN message is seen in the log file `WARN c.m.a.TelemetryChannel - Failed to send telemetry with status code: 401, please check your credentials`, it indicates the agent wasn't successful in sending telemetry. You probably haven't enabled Azure AD authentication on the agent, but your Application Insights resource is configured with `DisableLocalAuth: true`. Make sure you're passing in a valid credential and that it has permission to access your Application Insights resource.
-If using fiddler, you might see the following response header: `HTTP/1.1 401 Unauthorized - please provide the valid authorization token`.
-
+If you're using Fiddler, you might see the response header `HTTP/1.1 401 Unauthorized - please provide the valid authorization token`.
#### CredentialUnavailableException
-If the following exception is seen in the log file `com.azure.identity.CredentialUnavailableException: ManagedIdentityCredential authentication unavailable. Connection to IMDS endpoint cannot be established`, it indicates the agent wasn't successful in acquiring the access token. The probable reason might be you've provided invalid `clientId` in your User Assigned Managed Identity configuration
-
+If the following exception is seen in the log file `com.azure.identity.CredentialUnavailableException: ManagedIdentityCredential authentication unavailable. Connection to IMDS endpoint cannot be established`, it indicates the agent wasn't successful in acquiring the access token. The probable reason is that you've provided an invalid client ID in your User-Assigned Managed Identity configuration.
#### Failed to send telemetry
-If the following WARN message is seen in the log file, `WARN c.m.a.TelemetryChannel - Failed to send telemetry with status code: 403, please check your credentials`, it indicates the agent wasn't successful in sending telemetry. This warning might be because of the provided credentials don't grant the access to ingest the telemetry into the component
+If the following WARN message is seen in the log file `WARN c.m.a.TelemetryChannel - Failed to send telemetry with status code: 403, please check your credentials`, it indicates the agent wasn't successful in sending telemetry. This warning might be because the provided credentials don't grant access to ingest the telemetry into the component.
-If using fiddler, you might see the following response header: `HTTP/1.1 403 Forbidden - provided credentials do not grant the access to ingest the telemetry into the component`.
+If you're using Fiddler, you might see the response header `HTTP/1.1 403 Forbidden - provided credentials do not grant the access to ingest the telemetry into the component`.
-Root cause might be one of the following reasons:
-- You've created the resource with System-assigned managed identity enabled or you might have associated the User-assigned identity with the resource but forgot to add the `Monitoring Metrics Publisher` role to the resource (if using SAMI) or User-assigned identity (if using UAMI).
-- You've provided the right credentials to get the access tokens, but the credentials don't belong to the right Application Insights resource. Make sure you see your resource (vm, app service etc.) or user-assigned identity with `Monitoring Metrics Publisher` roles in your Application Insights resource.
+The root cause might be one of the following reasons:
+- You've created the resource with system-assigned managed identity enabled or you might have associated the user-assigned identity with the resource but forgot to add the Monitoring Metrics Publisher role to the resource (if using SAMI) or user-assigned identity (if using UAMI).
+- You've provided the right credentials to get the access tokens, but the credentials don't belong to the right Application Insights resource. Make sure you see your resource (VM or app service) or user-assigned identity with Monitoring Metrics Publisher roles in your Application Insights resource.
-#### Invalid TenantId
+#### Invalid Tenant ID
-If the following exception is seen in the log file `com.microsoft.aad.msal4j.MsalServiceException: Specified tenant identifier <TENANT-ID> is neither a valid DNS name, nor a valid external domain.`, it indicates the agent wasn't successful in acquiring the access token. The probable reason might be you've provided invalid/wrong `tenantId` in your client secret configuration.
+If the following exception is seen in the log file `com.microsoft.aad.msal4j.MsalServiceException: Specified tenant identifier <TENANT-ID> is neither a valid DNS name, nor a valid external domain.`, it indicates the agent wasn't successful in acquiring the access token. The probable reason is that you've provided an invalid or the wrong `tenantId` in your client secret configuration.
#### Invalid client secret
-If the following exception is seen in the log file `com.microsoft.aad.msal4j.MsalServiceException: Invalid client secret is provided`, it indicates the agent wasn't successful in acquiring the access token. The probable reason might be you've provided invalid `clientSecret` in your client secret configuration.
-
+If the following exception is seen in the log file `com.microsoft.aad.msal4j.MsalServiceException: Invalid client secret is provided`, it indicates the agent wasn't successful in acquiring the access token. The probable reason is that you've provided an invalid client secret in your client secret configuration.
-#### Invalid ClientId
+#### Invalid Client ID
-If the following exception is seen in the log file `com.microsoft.aad.msal4j.MsalServiceException: Application with identifier <CLIENT_ID> was not found in the directory`, it indicates the agent wasn't successful in acquiring the access token. The probable reason might be you've provided invalid/wrong "clientId" in your client secret configuration
+If the following exception is seen in the log file `com.microsoft.aad.msal4j.MsalServiceException: Application with identifier <CLIENT_ID> was not found in the directory`, it indicates the agent wasn't successful in acquiring the access token. The probable reason is that you've provided an invalid or the wrong client ID in your client secret configuration.
- This scenario can occur if the application hasn't been installed by the administrator of the tenant or consented to by any user in the tenant. You may have sent your authentication request to the wrong tenant.
+ This scenario can occur if the application hasn't been installed by the administrator of the tenant or consented to by any user in the tenant. You might have sent your authentication request to the wrong tenant.
### [Python](#tab/python)

#### Error starts with "credential error" (with no status code)
-Something is incorrect about the credential you're using and the client isn't able to obtain a token for authorization. It's due to lacking the required data for the state. An example would be passing in a system ManagedIdentityCredential but the resource isn't configured to use system-managed identity.
+Something is incorrect about the credential you're using and the client isn't able to obtain a token for authorization. The credential is missing data that's required for its state. An example would be passing in a system `ManagedIdentityCredential` when the resource isn't configured to use a system-assigned managed identity.
#### Error starts with "authentication error" (with no status code)
-Client failed to authenticate with the given credential. Usually occurs when the credential used doesn't have correct role assignments.
+The client failed to authenticate with the given credential. This error usually occurs when the credential used doesn't have the correct role assignments.
#### I'm getting a status code 400 in my error logs
You're probably missing a credential or your credential is set to `None`, but your Application Insights resource is configured with `DisableLocalAuth: true`.
#### I'm getting a status code 403 in my error logs
-Usually occurs when the provided credentials don't grant access to ingest telemetry for the Application Insights resource. Make sure your AI resource has the correct role assignments.
+This error usually occurs when the provided credentials don't grant access to ingest telemetry for the Application Insights resource. Make sure your Application Insights resource has the correct role assignments.
## Next steps
-* [Monitor your telemetry in the portal](overview-dashboard.md).
-* [Diagnose with Live Metrics Stream](live-stream.md).
+* [Monitor your telemetry in the portal](overview-dashboard.md)
+* [Diagnose with Live Metrics Stream](live-stream.md)
azure-monitor Data Model Request Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-request-telemetry.md
Title: Data model for request telemetry - Azure Application Insights
-description: Application Insights data model for request telemetry
+ Title: Data model for request telemetry - Application Insights
+description: This article describes the Application Insights data model for request telemetry.
Last updated 01/07/2019
# Request telemetry: Application Insights data model
-A request telemetry item (in [Application Insights](./app-insights-overview.md)) represents the logical sequence of execution triggered by an external request to your application. Every request execution is identified by unique `ID` and `url` containing all the execution parameters. You can group requests by logical `name` and define the `source` of this request. Code execution can result in `success` or `fail` and has a certain `duration`. Both success and failure executions may be grouped further by `resultCode`. Start time for the request telemetry defined on the envelope level.
+A request telemetry item in [Application Insights](./app-insights-overview.md) represents the logical sequence of execution triggered by an external request to your application. Every request execution is identified by a unique `id` and `url` that contain all the execution parameters.
-Request telemetry supports the standard extensibility model using custom `properties` and `measurements`.
+You can group requests by logical `name` and define the `source` of this request. Code execution can result in `success` or `fail` and has a certain `duration`. Both success and failure executions can be grouped further by `resultCode`. Start time for the request telemetry is defined on the envelope level.
+
+Request telemetry supports the standard extensibility model by using custom `properties` and `measurements`.
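As a rough sketch of how the fields described in the following sections fit together, here's an illustrative request item. The values and the plain Python dict representation are only for illustration; the SDKs construct and send these items for you:

```python
# Illustrative shape of a request telemetry item. Field names follow the
# data model described in this article; all values are made up.
request_item = {
    "name": "GET /values/{id}",        # low-cardinality request name
    "id": "a1b2c3d4e5f6",              # globally unique, up to 128 chars
    "url": "https://example.com/values/42?verbose=true",
    "source": "203.0.113.7",           # e.g. the caller's IP address
    "duration": "00.00:00:00.125000",  # DD.HH:MM:SS.MMMMMM format
    "responseCode": "200",             # HTTP status code as a string
    "success": True,                   # required field
    "properties": {"region": "westus"},    # custom string properties
    "measurements": {"queueTime": 2.5},    # custom numeric measurements
}
print(sorted(request_item))
```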
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]

## Name
-Name of the request represents code path taken to process the request. Low cardinality value to allow better grouping of requests. For HTTP requests it represents the HTTP method and URL path template like `GET /values/{id}` without the actual `id` value.
+The name of the request represents the code path taken to process the request. A low cardinality value allows for better grouping of requests. For HTTP requests, it represents the HTTP method and URL path template like `GET /values/{id}` without the actual `id` value.
-Application Insights web SDK sends request name "as is" with regards to letter case. Grouping on UI is case-sensitive so `GET /Home/Index` is counted separately from `GET /home/INDEX` even though often they result in the same controller and action execution. The reason for that is that urls in general are [case-sensitive](https://www.w3.org/TR/WD-html40-970708/htmlweb.html). You may want to see if all `404` happened for the urls typed in uppercase. You can read more on request name collection by ASP.NET Web SDK in the [blog post](https://apmtips.com/posts/2015-02-23-request-name-and-url/).
+The Application Insights web SDK sends a request name "as is" with regard to letter case. Grouping on the UI is case sensitive, so `GET /Home/Index` is counted separately from `GET /home/INDEX` even though often they result in the same controller and action execution. The reason for that is that URLs in general are [case sensitive](https://www.w3.org/TR/WD-html40-970708/htmlweb.html). You might want to see if all `404` errors happened for URLs typed in uppercase. You can read more about request name collection by the ASP.NET web SDK in the [blog post](https://apmtips.com/posts/2015-02-23-request-name-and-url/).
-Max length: 1024 characters
+**Maximum length**: 1,024 characters
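One way to keep request names low cardinality is to collapse variable path segments into a template. The normalizer below is a hypothetical illustration, not what the SDK does internally:

```python
import re

def to_request_name(method: str, path: str) -> str:
    # Replace purely numeric path segments with {id} so that
    # GET /values/42 and GET /values/7 group under one request name.
    template = re.sub(r"/\d+(?=/|$)", "/{id}", path)
    return f"{method} {template}"

print(to_request_name("GET", "/values/42"))          # GET /values/{id}
print(to_request_name("GET", "/values/42/history"))  # GET /values/{id}/history
```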
## ID
-Identifier of a request call instance. Used for correlation between request and other telemetry items. ID should be globally unique. For more information, see [correlation](./correlation.md) page.
+ID is the identifier of a request call instance. It's used for correlation between the request and other telemetry items. The ID should be globally unique. For more information, see [Telemetry correlation in Application Insights](./correlation.md).
-Max length: 128 characters
+**Maximum length**: 128 characters
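For illustration only, any globally unique string within the 128-character limit satisfies the constraint; the SDKs normally generate correlation-compatible IDs for you:

```python
import uuid

def new_request_id() -> str:
    # Hypothetical generator: a UUID is globally unique and its hex
    # form (32 characters) is well under the 128-character limit.
    return uuid.uuid4().hex

rid = new_request_id()
print(len(rid))  # 32
```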
-## Url
+## URL
-Request URL with all query string parameters.
+URL is the request URL with all query string parameters.
-Max length: 2048 characters
+**Maximum length**: 2,048 characters
## Source
-Source of the request. Examples are the instrumentation key of the caller or the ip address of the caller. For more information, see [correlation](./correlation.md) page.
+Source is the source of the request. Examples are the instrumentation key of the caller or the IP address of the caller. For more information, see [Telemetry correlation in Application Insights](./correlation.md).
-Max length: 1024 characters
+**Maximum length**: 1,024 characters
## Duration
-Request duration in format: `DD.HH:MM:SS.MMMMMM`. Must be positive and less than `1000` days. This field is required as request telemetry represents the operation with the beginning and the end.
+The request duration is formatted as `DD.HH:MM:SS.MMMMMM`. It must be positive and less than `1000` days. This field is required because request telemetry represents the operation with the beginning and the end.
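As a sketch of the `DD.HH:MM:SS.MMMMMM` layout, this hypothetical formatter produces it from a Python `timedelta` (the SDKs format durations for you):

```python
from datetime import timedelta

def format_duration(d: timedelta) -> str:
    # Hypothetical formatter for the DD.HH:MM:SS.MMMMMM layout.
    if d < timedelta(0) or d >= timedelta(days=1000):
        raise ValueError("duration must be positive and less than 1000 days")
    total_us = d // timedelta(microseconds=1)   # whole microseconds
    us = total_us % 1_000_000
    total_s = total_us // 1_000_000
    return (f"{total_s // 86400:02d}."
            f"{(total_s // 3600) % 24:02d}:"
            f"{(total_s // 60) % 60:02d}:"
            f"{total_s % 60:02d}.{us:06d}")

print(format_duration(timedelta(seconds=1, milliseconds=250)))
# → 00.00:00:01.250000
```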
## Response code
-Result of a request execution. HTTP status code for HTTP requests. It may be `HRESULT` value or exception type for other request types.
+The response code is the result of a request execution. It's the HTTP status code for HTTP requests. It might be an `HRESULT` value or an exception type for other request types.
-Max length: 1024 characters
+**Maximum length**: 1,024 characters
## Success
-Indication of successful or unsuccessful call. This field is required. When not set explicitly to `false` - a request is considered to be successful. Set this value to `false` if operation was interrupted by exception or returned error result code.
+Success indicates whether a call was successful or unsuccessful. This field is required. When this field isn't set explicitly to `false`, the request is considered successful. Set this value to `false` if the operation was interrupted by an exception or returned an error result code.
+
+For web applications, Application Insights defines a request as successful when the response code is less than `400` or equal to `401`. However, there are cases when this default mapping doesn't match the semantics of the application.
-For the web applications, Application Insights define a request as successful when the response code is less than `400` or equal to `401`. However there are cases when this default mapping does not match the semantic of the application. Response code `404` may indicate "no records", which can be part of regular flow. It also may indicate a broken link. For the broken links, you can even implement more advanced logic. You can mark broken links as failures only when those links are located on the same site by analyzing url referrer. Or mark them as failures when accessed from the company's mobile application. Similarly `301` and `302` indicates failure when accessed from the client that doesn't support redirect.
+Response code `404` might indicate "no records," which can be part of regular flow. It also might indicate a broken link. For broken links, you can implement more advanced logic. You can mark broken links as failures only when those links are located on the same site by analyzing the URL referrer. Or you can mark them as failures when they're accessed from the company's mobile application. Similarly, `301` and `302` indicate failure when they're accessed from the client that doesn't support redirect.
-Partially accepted content `206` may indicate a failure of an overall request. For instance, Application Insights endpoint receives a batch of telemetry items as a single request. It returns `206` when some items in the batch were not processed successfully. Increasing rate of `206` indicates a problem that needs to be investigated. Similar logic applies to `207` Multi-Status where the success may be the worst of separate response codes.
+Partially accepted content `206` might indicate a failure of an overall request. For instance, an Application Insights endpoint might receive a batch of telemetry items as a single request. It returns `206` when some items in the batch weren't processed successfully. An increasing rate of `206` indicates a problem that needs to be investigated. Similar logic applies to `207` Multi-Status where the success might be the worst of separate response codes.
-You can read more on request result code and status code in the [blog post](https://apmtips.com/posts/2016-12-03-request-success-and-response-code/).
+You can read more about the request result code and status code in the [blog post](https://apmtips.com/posts/2016-12-03-request-success-and-response-code/).
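The default success mapping, plus one possible referrer-based override for `404`, can be sketched as follows. The override rule and the `example.com` host are illustrative assumptions, not SDK behavior:

```python
from typing import Optional

def is_success(response_code: int,
               referrer_host: Optional[str] = None,
               site_host: str = "example.com") -> bool:
    # Default Application Insights mapping for web apps: a request is
    # successful when the status code is below 400 or equals 401.
    if response_code == 404:
        # Illustrative custom rule: count 404 as a failure (broken link)
        # only when the referrer is a page on the same site.
        return referrer_host != site_host
    return response_code < 400 or response_code == 401

print(is_success(200))   # True
print(is_success(401))   # True
print(is_success(500))   # False
print(is_success(404, referrer_host="example.com"))  # False: broken internal link
print(is_success(404))                               # True: typed or external URL
```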
## Custom properties
You can read more on request result code and status code in the [blog post](http
## Next steps
-- [Write custom request telemetry](./api-custom-events-metrics.md#trackrequest)
-- See [data model](data-model.md) for Application Insights types and data model.
-- Learn how to [configure ASP.NET Core](./asp-net.md) application with Application Insights.
-- Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights.
-
+- [Write custom request telemetry](./api-custom-events-metrics.md#trackrequest).
+- See the [data model](data-model.md) for Application Insights types and data models.
+- Learn how to [configure an ASP.NET Core](./asp-net.md) application with Application Insights.
+- Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights.
azure-monitor Live Stream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/live-stream.md
If you want to monitor a particular server role instance, you can filter by serv
Live Metrics custom filters allow you to control which of your application's telemetry is streamed to the Live Metrics view in the Azure portal. The filter criteria are sent to the apps that are instrumented with the Application Insights SDK. The filter value could potentially contain sensitive information, such as the customer ID. To keep this value secured and prevent potential disclosure to unauthorized applications, you have two options:

-- **Recommended:** Secure the Live Metrics channel by using [Azure Active Directory (Azure AD) authentication](./azure-ad-authentication.md#configuring-and-enabling-azure-ad-based-authentication).
+- **Recommended:** Secure the Live Metrics channel by using [Azure Active Directory (Azure AD) authentication](./azure-ad-authentication.md#configure-and-enable-azure-ad-based-authentication).
- **Legacy (no longer recommended):** Set up an authenticated channel by configuring a secret API key as explained in the "Legacy option" section.

> [!NOTE]
azure-monitor Tutorial Runtime Exceptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-runtime-exceptions.md
Title: Diagnose run-time exceptions using Azure Application Insights | Microsoft Docs
-description: Tutorial to find and diagnose run-time exceptions in your application using Azure Application Insights.
+ Title: Diagnose runtime exceptions by using Application Insights | Microsoft Docs
+description: Tutorial to find and diagnose runtime exceptions in your application by using Application Insights.
Last updated 09/19/2017
-# Find and diagnose run-time exceptions with Azure Application Insights
+# Find and diagnose runtime exceptions with Application Insights
-Azure Application Insights collects telemetry from your application to help identify and diagnose run-time exceptions. This tutorial takes you through this process with your application. You learn how to:
+Application Insights collects telemetry from your application to help identify and diagnose runtime exceptions. This tutorial takes you through this process with your application. You learn how to:
> [!div class="checklist"]
-> * Modify your project to enable exception tracking
-> * Identify exceptions for different components of your application
-> * View details of an exception
-> * Download a snapshot of the exception to Visual Studio for debugging
-> * Analyze details of failed requests using query language
-> * Create a new work item to correct the faulty code
-
+> * Modify your project to enable exception tracking.
+> * Identify exceptions for different components of your application.
+> * View details of an exception.
+> * Download a snapshot of the exception to Visual Studio for debugging.
+> * Analyze details of failed requests by using query language.
+> * Create a new work item to correct the faulty code.
## Prerequisites
To complete this tutorial:
- ASP.NET and web development
- Azure development
- Download and install the [Visual Studio Snapshot Debugger](https://aka.ms/snapshotdebugger).
-- Enable [Visual Studio Snapshot Debugger](../app/snapshot-debugger.md)
-- Deploy a .NET application to Azure and [enable the Application Insights SDK](../app/asp-net.md).
-- The tutorial tracks the identification of an exception in your application, so modify your code in your development or test environment to generate an exception.
-
-## Log in to Azure
-Log in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+- Enable the [Visual Studio Snapshot Debugger](../app/snapshot-debugger.md).
+- Deploy a .NET application to Azure and [enable the Application Insights SDK](../app/asp-net.md).
+- Modify your code in your development or test environment to generate an exception because the tutorial tracks the identification of an exception in your application.
+## Sign in to Azure
+Sign in to the [Azure portal](https://portal.azure.com).
## Analyze failures
-Application Insights collects any failures in your application and lets you view their frequency across different operations to help you focus your efforts on those with the highest impact. You can then drill down on details of these failures to identify root cause.
+Application Insights collects any failures in your application. It lets you view their frequency across different operations to help you focus your efforts on those issues with the highest impact. You can then drill down on details of these failures to identify the root cause.
-1. Select **Application Insights** and then your subscription.
-2. To open the **Failures** panel either select **Failures** under the **Investigate** menu or click the **Failed requests** graph.
+1. Select **Application Insights** and then select your subscription.
+1. To open the **Failures** pane, either select **Failures** under the **Investigate** menu or select the **Failed requests** graph.
- ![Failed requests](media/tutorial-runtime-exceptions/failed-requests.png)
+ ![Screenshot that shows failed requests.](media/tutorial-runtime-exceptions/failed-requests.png)
-3. The **Failed requests** panel shows the count of failed requests and the number of users affected for each operation for the application. By sorting this information by user you can identify those failures that most impact users. In this example, the **GET Employees/Create** and **GET Customers/Details** are likely candidates to investigate because of their large number of failures and impacted users. Selecting an operation shows further information about this operation in the right panel.
+1. The **Failed requests** pane shows the count of failed requests and the number of users affected for each operation for the application. By sorting this information by user, you can identify those failures that most affect users. In this example, **GET Employees/Create** and **GET Customers/Details** are likely candidates to investigate because of their large number of failures and affected users. Selecting an operation shows more information about this operation in the right pane.
- ![Failed requests panel](media/tutorial-runtime-exceptions/failed-requests-blade.png)
+ ![Screenshot that shows the Failed requests pane.](media/tutorial-runtime-exceptions/failed-requests-blade.png)
-4. Reduce the time window to zoom in on the period where the failure rate shows a spike.
+1. Reduce the time window to zoom in on the period where the failure rate shows a spike.
- ![Failed requests window](media/tutorial-runtime-exceptions/failed-requests-window.png)
+ ![Screenshot that shows the Failed requests window.](media/tutorial-runtime-exceptions/failed-requests-window.png)
-5. See the related samples by clicking on the button with the number of filtered results. The "suggested" samples have related telemetry from all components, even if sampling may have been in effect in any of them. Click on a search result to see the details of the failure.
+1. See the related samples by selecting the button with the number of filtered results. The **Suggested** samples have related telemetry from all components, even if sampling might have been in effect in any of them. Select a search result to see the details of the failure.
- ![Failed request samples](media/tutorial-runtime-exceptions/failed-requests-search.png)
+ ![Screenshot that shows the Failed request samples.](media/tutorial-runtime-exceptions/failed-requests-search.png)
-6. The details of the failed request shows the Gantt chart which shows that there were two dependency failures in this transaction, which also attributed to over 50% of the total duration of the transaction. This experience presents all telemetry, across components of a distributed application that are related to this operation ID. [Learn more about the new experience](../app/transaction-diagnostics.md). You can select any of the items to see its details on the right side.
+1. The details of the failed request show a Gantt chart indicating two dependency failures in this transaction, which also contributed to more than 50% of the total duration of the transaction. This experience presents all telemetry across components of a distributed application that are related to this operation ID. To learn more about the new experience, see [Unified cross-component transaction diagnostics](../app/transaction-diagnostics.md). You can select any of the items to see its details on the right side.
- ![Failed request details](media/tutorial-runtime-exceptions/failed-request-details.png)
+ ![Screenshot that shows Failed request details.](media/tutorial-runtime-exceptions/failed-request-details.png)
-7. The operations detail also shows a FormatException which appears to have caused the failure. You can see that it's due to an invalid zip code. You can open the debug snapshot to see code level debug information in Visual Studio.
+1. The operations detail also shows a format exception, which appears to have caused the failure. You can see that it's because of an invalid Zip Code. You can open the debug snapshot to see code-level debug information in Visual Studio.
- ![Exception details](media/tutorial-runtime-exceptions/failed-requests-exception.png)
+ ![Screenshot that shows exception details.](media/tutorial-runtime-exceptions/failed-requests-exception.png)
## Identify failing code
-The Snapshot Debugger collects snapshots of the most frequent exceptions in your application to assist you in diagnosing its root cause in production. You can view debug snapshots in the portal to see the call stack and inspect variables at each call stack frame. Afterwards, you have the option to debug the source code by downloading the snapshot and opening it in Visual Studio 2019 Enterprise.
-
-1. In the properties of the exception, click **Open debug snapshot**.
-2. The **Debug Snapshot** panel opens with the call stack for the request. Click any method to view the values of all local variables at the time of the request. Starting from the top method in this example, we can see local variables that have no value.
+The Snapshot Debugger collects snapshots of the most frequent exceptions in your application to assist you in diagnosing its root cause in production. You can view debug snapshots in the portal to see the call stack and inspect variables at each call stack frame. Afterward, you can debug the source code by downloading the snapshot and opening it in Visual Studio 2019 Enterprise.
- ![Debug snapshot](media/tutorial-runtime-exceptions/debug-snapshot-01.png)
+1. In the properties of the exception, select **Open debug snapshot**.
+1. The **Debug Snapshot** pane opens with the call stack for the request. Select any method to view the values of all local variables at the time of the request. Starting from the top method in this example, you can see local variables that have no value.
-3. The first call that has valid values is **ValidZipCode**, and we can see that a zip code was provided with letters that isn't able to be translated into an integer. This appears to be the error in the code that needs to be corrected.
+ ![Screenshot that shows the Debug Snapshot pane.](media/tutorial-runtime-exceptions/debug-snapshot-01.png)
- ![Screenshot that shows an error in the code that needs to be corrected. ](media/tutorial-runtime-exceptions/debug-snapshot-02.png)
+1. The first call that has valid values is **ValidZipCode**. You can see that a Zip Code was provided with letters that can't be translated into an integer. This issue appears to be the error in the code that must be corrected.
-4. You then have the option to download this snapshot into Visual Studio where we can locate the actual code that needs to be corrected. To do so, click **Download Snapshot**.
-5. The snapshot is loaded into Visual Studio.
-6. You can now run a debug session in Visual Studio Enterprise that quickly identifies the line of code that caused the exception.
+ ![Screenshot that shows an error in the code that must be corrected.](media/tutorial-runtime-exceptions/debug-snapshot-02.png)
- ![Exception in code](media/tutorial-runtime-exceptions/exception-code.png)
+1. You can then download this snapshot into Visual Studio where you can locate the actual code that must be corrected. To do so, select **Download Snapshot**.
+1. The snapshot is loaded into Visual Studio.
+1. You can now run a debug session in Visual Studio Enterprise that quickly identifies the line of code that caused the exception.
+ ![Screenshot that shows an exception in the code.](media/tutorial-runtime-exceptions/exception-code.png)
## Use analytics data
-All data collected by Application Insights is stored in Azure Log Analytics, which provides a rich query language that allows you to analyze the data in a variety of ways. We can use this data to analyze the requests that generated the exception we're researching.
+All data collected by Application Insights is stored in Azure Log Analytics, which provides a rich query language that you can use to analyze the data in various ways. You can use this data to analyze the requests that generated the exception you're researching.
-1. Click the CodeLens information above the code to view telemetry provided by Application Insights.
+1. Select the CodeLens information above the code to view telemetry provided by Application Insights.
- ![Code](media/tutorial-runtime-exceptions/codelens.png)
+ ![Screenshot that shows code in CodeLens.](media/tutorial-runtime-exceptions/codelens.png)
-1. Click **Analyze impact** to open Application Insights Analytics. It's populated with several queries that provide details on failed requests such as impacted users, browsers, and regions.<br><br>![Screenshot shows Application Insights window which includes several queries.](media/tutorial-runtime-exceptions/analytics.png)<br>
+1. Select **Analyze impact** to open Application Insights Analytics. It's populated with several queries that provide details on failed requests, such as affected users, browsers, and regions.<br><br>
-## Add work item
-If you connect Application Insights to a tracking system such as Azure DevOps or GitHub, you can create a work item directly from Application Insights.
+ ![Screenshot that shows Application Insights window that includes several queries.](media/tutorial-runtime-exceptions/analytics.png)
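The queries that **Analyze impact** opens are ordinary Log Analytics queries. As a sketch, a query over the standard Application Insights `requests` table might look like the following (the 24-hour window and the breakdown columns are illustrative choices, not the exact queries the portal generates):

```kusto
// Failed requests in the last 24 hours, broken down by browser and region
requests
| where timestamp > ago(24h)
| where success == false
| summarize failedCount = count() by client_Browser, client_CountryOrRegion
| order by failedCount desc
```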
-1. Return to the **Exception Properties** panel in Application Insights.
-2. Click **New Work Item**.
-3. The **New Work Item** panel opens with details about the exception already populated. You can add any additional information before saving it.
+## Add a work item
+If you connect Application Insights to a tracking system, such as Azure DevOps or GitHub, you can create a work item directly from Application Insights.
- ![New Work Item](media/tutorial-runtime-exceptions/new-work-item.png)
+1. Return to the **Exception Properties** pane in Application Insights.
+1. Select **New Work Item**.
+1. The **New Work Item** pane opens with details about the exception already populated. You can add more information before you save it.
+
+ ![Screenshot that shows the New Work Item pane.](media/tutorial-runtime-exceptions/new-work-item.png)
## Next steps
-Now that you've learned how to identify run-time exceptions, advance to the next tutorial to learn how to identify and diagnose performance issues.
+Now that you've learned how to identify runtime exceptions, advance to the next tutorial to learn how to identify and diagnose performance issues.
> [!div class="nextstepaction"]
> [Identify performance issues](./tutorial-performance.md)
azure-monitor Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/visual-studio.md
Title: Debug in Visual Studio with Azure Application Insights
-description: Web app performance analysis and diagnostics during debugging and in production.
+ Title: Debug in Visual Studio with Application Insights
+description: Learn about web app performance analysis and diagnostics during debugging and in production.
Last updated 03/17/2017
-# Debug your applications with Azure Application Insights in Visual Studio
-In Visual Studio (2015 and later), you can analyze performance and diagnose issues in your ASP.NET web app both in debugging and in production, using telemetry from [Azure Application Insights](./app-insights-overview.md).
+# Debug your applications with Application Insights in Visual Studio
+In Visual Studio 2015 and later, you can analyze performance and diagnose issues in your ASP.NET web app both in debugging and in production by using telemetry from [Application Insights](./app-insights-overview.md).
-If you created your ASP.NET web app using Visual Studio 2017 or later, it already has the Application Insights SDK. Otherwise, if you haven't done so already, [add Application Insights to your app](./asp-net.md).
+If you created your ASP.NET web app by using Visual Studio 2017 or later, it already has the Application Insights SDK. Otherwise, if you haven't done so already, [add Application Insights to your app](./asp-net.md).
-To monitor your app when it's in live production, you normally view the Application Insights telemetry in the [Azure portal](https://portal.azure.com), where you can set alerts and apply powerful monitoring tools. But for debugging, you can also search and analyze the telemetry in Visual Studio. You can use Visual Studio to analyze telemetry both from your production site and from debugging runs on your development machine. In the latter case, you can analyze debugging runs even if you haven't yet configured the SDK to send telemetry to the Azure portal.
+To monitor your app when it's in live production, you normally view the Application Insights telemetry in the [Azure portal](https://portal.azure.com), where you can set alerts and apply powerful monitoring tools. But for debugging, you can also search and analyze the telemetry in Visual Studio.
+
+You can use Visual Studio to analyze telemetry both from your production site and from debugging runs on your development machine. In the latter case, you can analyze debugging runs even if you haven't yet configured the SDK to send telemetry to the Azure portal.
## <a name="run"></a> Debug your project

Run your web app in local debug mode by using F5. Open different pages to generate some telemetry.
-In Visual Studio, you see a count of the events that have been logged by the Application Insights module in your project.
+In Visual Studio, you see a count of the events that were logged by the Application Insights module in your project.
-![In Visual Studio, the Application Insights button shows during debugging.](./media/visual-studio/appinsights-09eventcount.png)
+![Screenshot that shows the Application Insights button in Visual Studio during debugging.](./media/visual-studio/appinsights-09eventcount.png)
-Click this button to search your telemetry.
+Select the **Application Insights** button to search your telemetry.
-## Application Insights search
-The Application Insights Search window shows events that have been logged. (If you signed in to Azure when you set up Application Insights, you can search the same events in the Azure portal.)
+## Application Insights Search
+The **Application Insights Search** window shows logged events. If you signed in to Azure when you set up Application Insights, you can search the same events in the Azure portal. Right-click the project and select **Application Insights** > **Search**.
-![Right-click the project and choose Application Insights, Search](./media/visual-studio/34.png)
+![Screenshot that shows the Application Insights Search window.](./media/visual-studio/34.png)
-> [!NOTE]
-> After you select or deselect filters, click the Search button at the end of the text search field.
+> [!NOTE]
+> After you select or clear filters, select **Search** at the end of the text search field.
>
+The free text search works on any fields in the events. For example, you can search for part of the URL of a page. You can also search for the value of a property, such as a client's city, or specific words in a trace log.
-The free text search works on any fields in the events. For example, search for part of the URL of a page; or the value of a property such as client city; or specific words in a trace log.
-
-Click any event to see its detailed properties.
+Select any event to see its detailed properties.
For requests to your web app, you can click through to the code.
-![Under Request Details, click through to the code](./media/visual-studio/31.png)
+![Screenshot that shows clicking through to the code under Request Details.](./media/visual-studio/31.png)
You can also open related items to help diagnose failed requests or exceptions.
-![Under Request Details, scroll down to related items](./media/visual-studio/41.png)
+![Screenshot that shows scrolling down to related items under Request Details.](./media/visual-studio/41.png)
## View exceptions and failed requests
-Exception reports show in the Search window. (In some older types of ASP.NET application, you have to [set up exception monitoring](./asp-net-exceptions.md) to see exceptions that are handled by the framework.)
+Exception reports show in the **Search** window. In some older types of ASP.NET application, you have to [set up exception monitoring](./asp-net-exceptions.md) to see exceptions that are handled by the framework.
-Click an exception to get a stack trace. If the code of the app is open in Visual Studio, you can click through from the stack trace to the relevant line of the code.
+Select an exception to get a stack trace. If the code of the app is open in Visual Studio, you can click through from the stack trace to the relevant line of the code.
-![Screenshot shows the About object in a stack trace.](./media/visual-studio/17.png)
+![Screenshot that shows the About object in a stack trace.](./media/visual-studio/17.png)
## View request and exception summaries in the code
-In the Code Lens line above each handler method, you see a count of the requests and exceptions logged by Application Insights in the past 24 h.
+In the CodeLens line above each handler method, you see a count of the requests and exceptions logged by Application Insights in the past 24 hours.
-![Screenshot shows an exception in a context dialog box.](./media/visual-studio/21.png)
+![Screenshot that shows an exception in a context dialog.](./media/visual-studio/21.png)
-> [!NOTE]
-> Code Lens shows Application Insights data only if you have [configured your app to send telemetry to the Application Insights portal](./asp-net.md).
+> [!NOTE]
+> CodeLens shows Application Insights data only if you've [configured your app to send telemetry to the Application Insights portal](./asp-net.md).
>
-[More about Application Insights in Code Lens](./visual-studio-codelens.md)
+For more information, see [Application Insights telemetry in Visual Studio CodeLens](./visual-studio-codelens.md).
## Local monitoring
-(From Visual Studio 2015 Update 2) If you haven't configured the SDK to send telemetry to the Application Insights portal (so that there is no instrumentation key in ApplicationInsights.config) then the diagnostics window displays telemetry from your latest debugging session.
+From Visual Studio 2015 Update 2: If you haven't configured the SDK to send telemetry to the Application Insights portal (so there's no instrumentation key in ApplicationInsights.config), the diagnostics window displays telemetry from your latest debugging session.
-This is desirable if you have already published a previous version of your app. You don't want the telemetry from your debugging sessions to be mixed up with the telemetry on the Application Insights portal from the published app.
+This is desirable if you've already published a previous version of your app. You don't want the telemetry from your debugging sessions to be mixed up with the telemetry on the Application Insights portal from the published app.
-It's also useful if you have some [custom telemetry](./api-custom-events-metrics.md) that you want to debug before sending telemetry to the portal.
+It's also useful if you have some [custom telemetry](./api-custom-events-metrics.md) that you want to debug before you send telemetry to the portal.
-* *At first, I fully configured Application Insights to send telemetry to the portal. But now I'd like to see the telemetry only in Visual Studio.*
-
- * In the Search window's Settings, there's an option to search local diagnostics even if your app sends telemetry to the portal.
+For example, at first you might have fully configured Application Insights to send telemetry to the portal. But now you want to see the telemetry only in Visual Studio:
+
+ * In the **Search** window's settings, there's an option to search local diagnostics even if your app sends telemetry to the portal.
* To stop telemetry being sent to the portal, comment out the line `<instrumentationkey>...` from ApplicationInsights.config. When you're ready to send telemetry to the portal again, uncomment it.

[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]

## Next steps
- * **[Working with the Application Insights portal](./overview-dashboard.md)**. View dashboards, powerful diagnostic and analytic tools, alerts, a live dependency map of your application, and exported telemetry data.
-
+ [Work with the Application Insights portal](./overview-dashboard.md) where you can view dashboards, use powerful diagnostic and analytic tools, get alerts, see a live dependency map of your application, and view exported telemetry data.
azure-monitor Container Insights Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-syslog.md
Container Insights offers the ability to collect Syslog events from Linux nodes
Use the following command in Azure CLI to enable syslog collection when you create a new AKS cluster.
+### Using Azure CLI commands
+```azurecli
+az aks create -g syslog-rg -n new-cluster --enable-managed-identity --node-count 1 --enable-addons monitoring --enable-msi-auth-for-monitoring --enable-syslog --generate-ssh-key
+```
Use the following command in Azure CLI to enable syslog collection on an existing AKS cluster.

```azurecli
az aks enable-addons -a monitoring --enable-msi-auth-for-monitoring --enable-syslog -g syslog-rg -n existing-cluster
```
+### Using ARM templates
+
+You can also use ARM templates to enable syslog collection.
+
+1. Download the template in the [GitHub content file](https://aka.ms/aks-enable-monitoring-msi-onboarding-template-file) and save it as **existingClusterOnboarding.json**.
+
+1. Download the parameter file in the [GitHub content file](https://aka.ms/aks-enable-monitoring-msi-onboarding-template-parameter-file) and save it as **existingClusterParam.json**.
+
+1. Edit the values in the parameter file:
+
+ - `aksResourceId`: Use the values on the **AKS Overview** page for the AKS cluster.
+ - `aksResourceLocation`: Use the values on the **AKS Overview** page for the AKS cluster.
+ - `workspaceResourceId`: Use the resource ID of your Log Analytics workspace.
 - `resourceTagValues`: Match the existing tag values specified for the existing Container insights extension data collection rule (DCR) of the cluster, and the name of the DCR. The name is *MSCI-\<clusterName\>-\<clusterRegion\>*, and this resource is created in the AKS cluster's resource group. If this is the first time onboarding, you can set arbitrary tag values.
 - `enableSyslog`: Set to `true`.
 - `syslogLevels`: Array of syslog levels to collect. The default collects all levels.
 - `syslogFacilities`: Array of syslog facilities to collect. The default collects all facilities.
+
+> [!NOTE]
+> Customization of syslog levels and facilities is currently available only via ARM templates.
+
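As an illustration, the syslog-related entries inside the `parameters` section of the downloaded parameter file might look like the following. This is a sketch: the exact level and facility names to use are the ones listed in the parameter file you downloaded, and the values shown here are arbitrary examples.

```json
{
  "enableSyslog": { "value": true },
  "syslogLevels": { "value": [ "err", "crit", "alert", "emerg" ] },
  "syslogFacilities": { "value": [ "kern", "daemon", "auth", "syslog" ] }
}
```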
+### Deploy the template
+
+Deploy the template with the parameter file by using any valid method for deploying Resource Manager templates. For examples of different methods, see [Deploy the sample templates](../resource-manager-samples.md#deploy-the-sample-templates).
+
+#### Deploy with Azure PowerShell
+
+```powershell
+New-AzResourceGroupDeployment -Name OnboardCluster -ResourceGroupName <ResourceGroupName> -TemplateFile .\existingClusterOnboarding.json -TemplateParameterFile .\existingClusterParam.json
+```
+
+The configuration change can take a few minutes to complete. When it finishes, a message similar to the following example includes the result:
+
+```output
+provisioningState : Succeeded
+```
+
+#### Deploy with Azure CLI
+
+```azurecli
+az login
+az account set --subscription "Subscription Name"
+az deployment group create --resource-group <ResourceGroupName> --template-file ./existingClusterOnboarding.json --parameters @./existingClusterParam.json
+```
+
+The configuration change can take a few minutes to complete. When it finishes, a message similar to the following example includes the result:
+
+```output
+provisioningState : Succeeded
+```
## How to access Syslog data
Select the minimum log level for each facility that you want to collect.
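Once collection is enabled, Syslog records land in the `Syslog` table of your Log Analytics workspace. As a sketch, they can be queried like this (the column names follow the standard `Syslog` table schema; the one-hour window and error-level filter are illustrative):

```kusto
// Recent error-level syslog events per node and facility
Syslog
| where TimeGenerated > ago(1h)
| where SeverityLevel == "err"
| summarize count() by Computer, Facility
```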
- Read more about [Syslog record properties](/azure/azure-monitor/reference/tables/syslog)
azure-monitor Manage Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/manage-access.md
Grant access to all tables except the _SecurityAlert_ table:
> [!NOTE]
> Tables created by the [Logs ingestion API](../essentials/../logs/logs-ingestion-api-overview.md) don't yet support table-level RBAC.
- You can't grant access to individual custom log tables, but you can grant access to all custom logs. To create a role with access to all custom log tables, create a custom role by using the following actions:
+ You can't grant access to individual custom log tables at the table level, but you can grant access to all custom log tables. To create a role with access to all custom log tables, create a custom role by using the following actions:
``` "Actions": [
azure-monitor Profiler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler.md
To enable Azure AD for profiles ingestion:
b. [User-Assigned Managed identity documentation](../../app-service/overview-managed-identity.md?tabs=portal%2chttp#add-a-user-assigned-identity)
-1. [Configure and enable Azure AD](../app/azure-ad-authentication.md?tabs=net#configuring-and-enabling-azure-ad-based-authentication) in your Application Insights resource.
+1. [Configure and enable Azure AD](../app/azure-ad-authentication.md?tabs=net#configure-and-enable-azure-ad-based-authentication) in your Application Insights resource.
1. Add the following application setting to let the Profiler agent know which managed identity to use:
azure-monitor Snapshot Debugger App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-app-service.md
To turn-on Azure AD for snapshot ingestion:
1. For User-Assigned Managed identity, see the following [documentation](../../app-service/overview-managed-identity.md?tabs=portal%2chttp#add-a-user-assigned-identity).
-1. Configure and turn on Azure AD in your Application Insights resource. For more information, see the following [documentation](../app/azure-ad-authentication.md?tabs=net#configuring-and-enabling-azure-ad-based-authentication)
+1. Configure and turn on Azure AD in your Application Insights resource. For more information, see the following [documentation](../app/azure-ad-authentication.md?tabs=net#configure-and-enable-azure-ad-based-authentication)
1. Add the following application setting, used to let Snapshot Debugger agent know which managed identity to use: For System-Assigned Identity:
azure-vmware Concepts Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-networking.md
Title: Concepts - Network interconnectivity
description: Learn about key aspects and use cases of networking and interconnectivity in Azure VMware Solution. Previously updated : 2/4/2023 Last updated : 2/7/2023
The diagram below shows the basic network interconnectivity established at the t
- Outbound access from VMs on the private cloud to Azure services. - Inbound access of workloads running in the private cloud.
-When connecting **production** Azure VMware Solution private clouds to an Azure virtual network, an ExpressRoute virtual network gateway with the Ultra Performance Gateway SKU should be used with FastPath enabled to achieve 10Gbps connectivity. Less critical environments can use the Standard or High Performance Gateway SKUs for slower network performance.
+> [!IMPORTANT]
+> When connecting **production** Azure VMware Solution private clouds to an Azure virtual network, an ExpressRoute virtual network gateway with the Ultra Performance Gateway SKU should be used with FastPath enabled to achieve 10Gbps connectivity. Less critical environments can use the Standard or High Performance Gateway SKUs for slower network performance.
:::image type="content" source="media/concepts/adjacency-overview-drawing-single.png" alt-text="Diagram showing the basic network interconnectivity established at the time of an Azure VMware Solution private cloud deployment." border="false":::
center-sap-solutions Register Existing System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/register-existing-system.md
In this how-to guide, you'll learn how to register an existing SAP system with *
- Check that you're trying to register a [supported SAP system configuration](#supported-systems) - Check that your Azure account has **Contributor** role access on the subscription or resource groups where you have the SAP system resources. - Register the **Microsoft.Workloads** Resource Provider in the subscription where you have the SAP system.-- A **User-assigned managed identity** which has **Virtual Machine Contributor** role access to the Compute resource group and **Reader** role access to the Network resource group of the SAP system. Azure Center for SAP solutions service uses this identity to discover your SAP system resources and register the system as a VIS resource.
+- A **User-assigned managed identity** which has **Virtual Machine Contributor** role access to the Compute resource group and **Reader** and **Tag Contributor** role access to the Network resource group of the SAP system. Azure Center for SAP solutions service uses this identity to discover your SAP system resources and register the system as a VIS resource.
- Make sure each virtual machine (VM) in the SAP system is currently running on Azure. These VMs include: - The ABAP SAP Central Services (ASCS) Server instance - The Application Server instance or instances
The following SAP system configurations aren't supported in Azure Center for SAP
## Enable resource permissions
-When you register an existing SAP system as a VIS, Azure Center for SAP solutions service needs a **User-assigned managed identity** which has **Virtual Machine Contributor** role access to the Compute resource groups and **Reader** role access to the Network resource groups of the SAP system. Before you register an SAP system with Azure Center for SAP solutions, either [create a new user-assigned managed identity or update role access for an existing managed identity](#setup-user-assigned-managed-identity).
+When you register an existing SAP system as a VIS, Azure Center for SAP solutions service needs a **User-assigned managed identity** which has **Virtual Machine Contributor** role access to the Compute resource groups and **Reader** and **Tag Contributor** role access to the Network resource groups of the SAP system. Before you register an SAP system with Azure Center for SAP solutions, either [create a new user-assigned managed identity or update role access for an existing managed identity](#setup-user-assigned-managed-identity).
Azure Center for SAP solutions uses this user-assigned managed identity to install VM extensions on the ASCS, Application Server and DB VMs. This step allows Azure Center for SAP solutions to discover the SAP system components, and other SAP system metadata. Azure Center for SAP solutions also needs this user-assigned managed identity to enable SAP system monitoring and management capabilities.
Azure Center for SAP solutions uses this user-assigned managed identity to insta
To provide permissions to the SAP system resources to a user-assigned managed identity: 1. [Create a new user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity) if needed or use an existing one.
-1. [Assign **Virtual Machine Contributor** role access](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#manage-access-to-user-assigned-managed-identities) to the user-assigned managed identity on the resource group(s) which have the Virtual Machines of the SAP system and **Reader** role on the resource group(s) which have the Network components on the SAP system resources exist.
+1. [Assign **Virtual Machine Contributor** role access](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#manage-access-to-user-assigned-managed-identities) to the user-assigned managed identity on the resource group(s) which have the Virtual Machines of the SAP system and **Reader** and **Tag Contributor** role on the resource group(s) which have the Network components of the SAP system.
1. Once the permissions are assigned, this managed identity can be used in Azure Center for SAP solutions to register and manage SAP systems.
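As a sketch, the role assignments in step 2 can also be made with Azure CLI. The principal ID, subscription ID, and resource group names below are placeholders you substitute with your own values:

```azurecli
# Compute resource group: Virtual Machine Contributor
az role assignment create --assignee <identity-principal-id> --role "Virtual Machine Contributor" --scope /subscriptions/<subscription-id>/resourceGroups/<compute-rg>

# Network resource group: Reader and Tag Contributor
az role assignment create --assignee <identity-principal-id> --role "Reader" --scope /subscriptions/<subscription-id>/resourceGroups/<network-rg>
az role assignment create --assignee <identity-principal-id> --role "Tag Contributor" --scope /subscriptions/<subscription-id>/resourceGroups/<network-rg>
```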
+> [!NOTE]
+> The user-assigned managed identity requires the **Tag Contributor** role on the Network resources of the SAP system to enable [Cost Analysis](view-cost-analysis.md) at the SAP SID level.
+ ## Register SAP system

To register an existing SAP system in Azure Center for SAP solutions:
To register an existing SAP system in Azure Center for SAP solutions:
1. For **SAP product**, select the SAP system product from the drop-down menu.
1. For **Environment**, select the environment type from the drop-down menu. For example, production or non-production environments.
1. For **Managed identity source**, select the **Use existing user-assigned managed identity** option.
- 1. For **Managed identity name**, select a **User-assigned managed identity** which has **Virtual Machine Contributor** and **Reader** role access to the [respective resources of this SAP system.](#enable-resource-permissions)
+ 1. For **Managed identity name**, select a **User-assigned managed identity** which has **Virtual Machine Contributor**, **Reader** and **Tag Contributor** role access to the [respective resources of this SAP system.](#enable-resource-permissions)
1. Select **Review + register** to discover the SAP system and begin the registration process.

   :::image type="content" source="media/register-existing-system/registration-page.png" alt-text="Screenshot of Azure Center for SAP solutions registration page, highlighting mandatory fields to identify the existing SAP system." lightbox="media/register-existing-system/registration-page.png":::
The process of registering an SAP system in Azure Center for SAP solutions might
- Command to start up sapstartsrv process on SAP VMs: /usr/sap/hostctrl/exe/hostexecstart -start - At least one Application Server and the Database aren't running for the SAP system that you chose. Make sure the Application Servers and Database VMs are in the **Running** state. - The user trying to register the SAP system doesn't have **Contributor** role permissions. For more information, see the [prerequisites for registering an SAP system](#prerequisites).-- The user-assigned managed identity doesn't have **Virtual Machine Contributor** role access to the Compute resources and **Reader** role access to the Network resource groups of the SAP system. For more information, see [how to enable Azure Center for SAP solutions resource permissions](#enable-resource-permissions).
+- The user-assigned managed identity doesn't have **Virtual Machine Contributor** role access to the Compute resources and **Reader** and **Tag Contributor** role access to the Network resource groups of the SAP system. For more information, see [how to enable Azure Center for SAP solutions resource permissions](#enable-resource-permissions).
There's also a known issue with registering *S/4HANA 2021* version SAP systems. You might receive the error message: **Failed to discover details from the Db VM**. This error happens when the Database identifier is incorrectly configured on the SAP system. One possible cause is that the Application Server profile parameter `rsdb/dbid` has an incorrect identifier for the HANA Database. To fix the error:
cognitive-services Understand Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/understand-embeddings.md
An embedding is a special format of data representation that can be easily utili
## Embedding models
-Different Azure OpenAI embedding models are specifically created to be good at a particular task. **Similarity embeddings** are good at capturing semantic similarity between two or more pieces of text. **Text search embeddings** help measure long documents are relevant to a short query. **Code search embeddings** are useful for embedding code snippets and embedding nature language search queries.
+Different Azure OpenAI embedding models are specifically created to be good at a particular task. **Similarity embeddings** are good at capturing semantic similarity between two or more pieces of text. **Text search embeddings** help measure whether long documents are relevant to a short query. **Code search embeddings** are useful for embedding code snippets and embedding natural language search queries.
Embeddings make it easier to do machine learning on large inputs representing words by capturing the semantic similarities in a vector space. Therefore, we can use embeddings to determine if two text chunks are semantically related or similar, and provide a score to assess similarity.
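To make the similarity scoring concrete, here's a minimal sketch of cosine similarity over two embedding vectors in plain Python. The three-dimensional "embeddings" are toy values for illustration; real Azure OpenAI embeddings have far more dimensions:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between vectors a and b: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings"
cat = [0.9, 0.1, 0.0]
kitten = [0.85, 0.15, 0.05]
car = [0.0, 0.2, 0.9]

# Semantically closer pairs score higher
print(cosine_similarity(cat, kitten) > cosine_similarity(cat, car))  # prints True
```

The score ranges from -1 to 1, and a higher value means the two text chunks are more semantically related.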
Azure OpenAI embeddings rely on cosine similarity to compute similarity between
## Next steps
-Learn more about using Azure OpenAI and embeddings to perform document search with our [embeddings tutorial](../tutorials/embeddings.md).
+Learn more about using Azure OpenAI and embeddings to perform document search with our [embeddings tutorial](../tutorials/embeddings.md).
communication-services Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/security.md
In this article, you'll learn about the security measures and frameworks implemented by Microsoft Teams and Azure Communication Services to provide a secure collaboration environment. The products implement data encryption, secure real-time communication, two-factor authentication, user authentication, and authorization to prevent common security threats. The security frameworks for these services are based on industry standards and best practices. ## Microsoft Teams
-Microsoft Teams handles security using a combination of technologies and processes to mitigate common security threats and provide a secure collaboration environment. Teams implement multiple layers of security, including data encryption in transit and at rest, secure real-time communication through Microsoft's global network, and two-factor authentication for added protection. The security framework for Teams is built on the Microsoft Security Development Lifecycle (SDL), a comprehensive and standardized approach to software security covering all stages of development. Teams also undergo regular security assessments and audits to ensure that the platform meets industry standards for security and privacy. Additionally, Teams integrates with Microsoft's suite of security products and services, such as Azure Active Directory, to provide customers with a comprehensive security solution. You can learn here more about [security in Microsoft Teams](/microsoftteams/teams-security-guide.md).
+Microsoft Teams handles security using a combination of technologies and processes to mitigate common security threats and provide a secure collaboration environment. Teams implement multiple layers of security, including data encryption in transit and at rest, secure real-time communication through Microsoft's global network, and two-factor authentication for added protection. The security framework for Teams is built on the Microsoft Security Development Lifecycle (SDL), a comprehensive and standardized approach to software security covering all stages of development. Teams also undergo regular security assessments and audits to ensure that the platform meets industry standards for security and privacy. Additionally, Teams integrates with Microsoft's suite of security products and services, such as Azure Active Directory, to provide customers with a comprehensive security solution. You can learn here more about [security in Microsoft Teams](/microsoftteams/teams-security-guide). Additionally, you can find more about Microsoft's [resiliency and continuity here](/compliance/assurance/assurance-data-resiliency-overview).
Additionally, Microsoft Teams provides several policies and tenant configurations to control Teams external users joining and in-meeting experience. Teams administrators can use settings in the Microsoft Teams admin center or PowerShell to control whether Teams external users can join Teams meetings, bypass lobby, start a meeting, participate in chat, or default role assignment. You can learn more about the [policies here](./teams-administration.md).
communication-services Teams Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/teams-administration.md
Teams administrators have the following policies to control the experience for T
| [Let anonymous people join a meeting](/microsoftteams/meeting-settings-in-teams#allow-anonymous-users-to-join-meetings) | per-organizer | If disabled, Teams external users can't join Teams meeting | ✔️ |
| [Let anonymous people start a meeting](/microsoftteams/meeting-settings-in-teams#allow-anonymous-users-to-join-meetings)| per-organizer | If enabled, Teams external users can start a Teams meeting without Teams user | ✔️ |
| [Automatically admit people](/microsoftteams/meeting-policies-participants-and-guests#automatically-admit-people) | per-organizer | If set to "Everyone", Teams external users can bypass lobby. Otherwise, Teams external users have to wait in the lobby until an authenticated user admits them.| ✔️ |
-| [Who can present in meetings](/microsoftteams/meeting-policies-in-teams-general#designated-presenter-role-mode) | per-user | Controls who in the Teams meeting can share screen | ❌ |
+| [Who can present in meetings](/microsoftteams/meeting-policies-in-teams-general#designated-presenter-role-mode) | per-user | If set to "Everyone", Teams external users join Teams meeting as presenters. Otherwise, they join as attendees. | ✔️ |
| [Blocked anonymous join client types](/powershell/module/skype/set-csteamsmeetingpolicy) | per-organizer | If the property "BlockedAnonymousJoinClientTypes" is set to "Teams" or "Null", Teams external users can join the Teams meeting via Azure Communication Services | ✔️ |

Your custom application should consider user authentication and other security measures to protect Teams meetings. Be mindful of the security implications of enabling anonymous users to join meetings. Use the [Teams security guide](/microsoftteams/teams-security-guide#addressing-threats-to-teams-meetings) to configure capabilities available to anonymous users.
communication-services Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user/security.md
In this article, you'll learn about the security measures and frameworks implemented by Microsoft Teams, Azure Communication Services, and Azure Active Directory to provide a secure collaboration environment. The products implement data encryption, secure real-time communication, two-factor authentication, user authentication, and authorization to prevent common security threats. The security frameworks for these services are based on industry standards and best practices.

## Microsoft Teams
-Microsoft Teams handles security using a combination of technologies and processes to mitigate common security threats and provide a secure collaboration environment. Teams implement multiple layers of security, including data encryption in transit and at rest, secure real-time communication through Microsoft's global network, and two-factor authentication for added protection. The security framework for Teams is built on the Microsoft Security Development Lifecycle (SDL), a comprehensive and standardized approach to software security covering all stages of development. Teams also undergo regular security assessments and audits to ensure that the platform meets industry standards for security and privacy. Additionally, Teams integrates with Microsoft's suite of security products and services, such as Azure Active Directory, to provide customers with a comprehensive security solution. You can learn here more about [security in Microsoft Teams](/microsoftteams/teams-security-guide.md).
+Microsoft Teams handles security using a combination of technologies and processes to mitigate common security threats and provide a secure collaboration environment. Teams implements multiple layers of security, including data encryption in transit and at rest, secure real-time communication through Microsoft's global network, and two-factor authentication for added protection. The security framework for Teams is built on the Microsoft Security Development Lifecycle (SDL), a comprehensive and standardized approach to software security covering all stages of development. Teams also undergoes regular security assessments and audits to ensure that the platform meets industry standards for security and privacy. Additionally, Teams integrates with Microsoft's suite of security products and services, such as Azure Active Directory, to provide customers with a comprehensive security solution. You can learn more about [security in Microsoft Teams](/microsoftteams/teams-security-guide). Additionally, you can read more about Microsoft's [resiliency and continuity](/compliance/assurance/assurance-data-resiliency-overview).
## Azure Communication Services

Azure Communication Services handles security by implementing various security measures to prevent and mitigate common security threats. These measures include data encryption in transit and at rest, secure real-time communication through Microsoft's global network, and authentication mechanisms to verify the identity of users. The security framework for Azure Communication Services is based on industry standards and best practices. Azure also undergoes regular security assessments and audits to ensure that the platform meets industry standards for security and privacy. Additionally, Azure Communication Services integrates with other Azure security services, such as Azure Active Directory, to provide customers with a comprehensive security solution. Customers can also control access to the services and manage their security settings through the Azure portal. You can learn more about the [Azure security baseline](/security/benchmark/azure/baselines/azure-communication-services-security-baseline?toc=/azure/communication-services/toc.json), and about the security of [call flows](../../call-flows.md) and [call flow topologies](../../detailed-call-flows.md).

## Azure Active Directory
-Azure Active Directory provides a range of security features for Microsoft Teams to help handle common security threats and provide a secure collaboration environment. Azure AD helps to secure user authentication and authorization, allowing administrators to manage user access to Teams and other applications through a single, centralized platform. Azure AD also integrates with Teams to provide multi-factor authentication and conditional access policies, which can be used to enforce security policies and control access to sensitive information. The security framework for Azure Active Directory is based on the Microsoft Security Development Lifecycle (SDL), a comprehensive and standardized approach to software security that covers all stages of development. Azure AD undergoes regular security assessments and audits to ensure that the platform meets industry standards for security and privacy. Additionally, Azure AD integrates with other Azure security services, such as Azure Information Protection, to provide customers with a comprehensive security solution. You can learn here more about [Azure identity management security](/azure/security/fundamentals/identity-management-overview.md).
+Azure Active Directory provides a range of security features for Microsoft Teams to help handle common security threats and provide a secure collaboration environment. Azure AD helps to secure user authentication and authorization, allowing administrators to manage user access to Teams and other applications through a single, centralized platform. Azure AD also integrates with Teams to provide multi-factor authentication and conditional access policies, which can be used to enforce security policies and control access to sensitive information. The security framework for Azure Active Directory is based on the Microsoft Security Development Lifecycle (SDL), a comprehensive and standardized approach to software security that covers all stages of development. Azure AD undergoes regular security assessments and audits to ensure that the platform meets industry standards for security and privacy. Additionally, Azure AD integrates with other Azure security services, such as Azure Information Protection, to provide customers with a comprehensive security solution. You can learn more about [Azure identity management security](/azure/security/fundamentals/identity-management-overview).
communication-services Emergency Calling Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/emergency-calling-concept.md
## Overview
-Azure Communication Calling SDK can be used to add Enhanced 911 dialing and Public Safety Answering Point (PSAP) call-back support to your applications in the United States (US) & Puerto Rico. The capability to dial 911 and receive a call-back may be a requirement for your application. Verify the E911 requirements with your legal counsel.
+The Azure Communication Services Calling SDK can be used to add enhanced emergency dialing and Public Safety Answering Point (PSAP) call-back support to your applications in the United States (US), Puerto Rico (PR), the United Kingdom (GB), and Canada (CA). The capability to dial 911 (in US, PR, and CA) or 999 or 112 (in GB) and receive a call-back may be a requirement for your application. Verify the emergency calling requirements with your legal counsel.
-Calls to 911 are routed over the Microsoft network. Microsoft assigns a temporary phone number as the Call Line Identity (CLI) when 911 calls from the US & Puerto Rico are placed. Microsoft temporarily maintains a mapping of the phone number to the caller's identity. If there's a call-back from the PSAP, we route the call directly to the originating 911 caller. The caller can accept incoming PSAP call even if inbound calling is disabled.
+Calls to an emergency number are routed over the Microsoft network. Microsoft assigns a temporary phone number as the Call Line Identity (CLI) when an emergency call is placed from the US, PR, GB, or CA. Microsoft temporarily maintains a mapping of the phone number to the caller's identity. If there is a call-back from the PSAP, we route the call directly to the originating caller. The caller can accept the incoming PSAP call even if inbound calling is disabled.
-The service is available for Microsoft phone numbers. It requires that the Azure resource from where the 911 call originates has a Microsoft-issued phone number enabled with outbound dialing (also referred to as ΓÇÿmake calls').
+The service is available for Microsoft phone numbers. It requires that the Azure resource from where the emergency call originates has a Microsoft-issued phone number enabled with outbound dialing (also referred to as 'make calls').
-Azure Communication Services direct routing is currently in public preview and not intended for production workloads. So E911 dialing is out of scope for Azure Communication Services direct routing.
+Azure Communication Services direct routing is currently in public preview and not intended for production workloads, so emergency dialing is out of scope for Azure Communication Services direct routing.
## The call flow
-1. An Azure Communication Services user identity dials 911 using the Calling SDK from the USA or Puerto Rico
+1. An Azure Communication Services user identity dials an emergency number using the Calling SDK from a supported country
1. Microsoft validates the Azure resource has a Microsoft phone number enabled for outbound dialing
-1. Microsoft Azure Communication Services 911 service replaces the userΓÇÖs phone number `alternateCallerId` with a temporary unique phone number. This number allocation remains in place for at least 60 minutes from the time that 911 is first dialed
+1. The Microsoft Azure Communication Services emergency service replaces the user's phone number `alternateCallerId` with a temporary unique phone number. This number allocation remains in place for at least 60 minutes from the time the emergency number is first dialed
1. Microsoft maintains a temporary record (for approximately 60 minutes) mapping the user's identity to the unique phone number
-1. The 911 call will be first routed to a call center where an agent will request the callerΓÇÖs address
-1. The call center will then route the call to the appropriate PSAP in the USA or Puerto Rico
-1. If the 911 call is unexpectedly dropped, the PSAP then makes a call-back to the user
-1. On receiving the call-back within 60 minutes, Microsoft will route the inbound call directly to the user identity, which initiated the 911 call
+1. The emergency call will first be routed to a call center where an agent will request the caller's address
+1. The call center will then route the call to the appropriate PSAP for the caller's region
+1. If the emergency call is unexpectedly dropped, the PSAP then makes a call-back to the user
+1. On receiving the call-back within 60 minutes, Microsoft will route the inbound call directly to the user identity that initiated the emergency call
## Enabling Emergency calling
-Emergency dialing is automatically enabled for all users of the Azure Communication Client Calling SDK with an acquired Microsoft telephone number that is enabled for outbound dialing in the Azure resource. To use E911 with Microsoft phone numbers, follow the steps:
+Emergency dialing is automatically enabled for all users of the Azure Communication Services Calling SDK with an acquired Microsoft telephone number that is enabled for outbound dialing in the Azure resource. To use emergency calling with Microsoft phone numbers, follow these steps:
1. Acquire a Microsoft phone number in the Azure resource of the client application (at least one of the numbers in the Azure resource must have the ability to 'Make Calls')
Emergency dialing is automatically enabled for all users of the Azure Communicat
1. Microsoft uses the ISO 3166-1 alpha-2 standard
- 1. Microsoft supports a country US and Puerto Rico ISO codes for 911 dialing
+ 1. Microsoft supports the US, PR, GB, and CA ISO country codes for emergency number dialing
1. If the country code isn't provided to the SDK, the IP address is used to determine the country of the caller
- 1. If the IP address can't provide reliable geo-location, for example the user is on a Virtual Private Network, it's required to set the ISO Code of the calling country using the API in the Azure Communication Services Calling SDK. See example in the E911 quick start
+ 1. If the IP address can't provide a reliable geo-location, for example when the user is on a Virtual Private Network, it's required to set the ISO code of the calling country using the API in the Azure Communication Services Calling SDK. See the example in the emergency calling quickstart
1. If users are dialing from a US territory (for example Guam, US Virgin Islands, Northern Marianas, or American Samoa), it's required to set the ISO code to the US
- 1. If the caller is outside of the US and Puerto Rico, the call to 911 won't be permitted
+ 1. If the caller is outside of the supported countries, the call to the emergency number won't be permitted
-1. When testing your application dial 933 instead of 911. 933 is enabled for testing purposes; the recorded message will confirm the phone number the emergency call originates from. You should hear a temporary number assigned by Microsoft, which isn't the `alternateCallerId` provided by the application
+1. When testing your application in the US, dial 933 instead of 911. 933 is enabled for testing purposes; the recorded message will confirm the phone number the emergency call originates from. You should hear a temporary number assigned by Microsoft, which isn't the `alternateCallerId` provided by the application
-1. Ensure your application supports [receiving an incoming call](../../how-tos/calling-sdk/manage-calls.md#receive-an-incoming-call) so call-backs from the PSAP are appropriately routed to the originator of the 911 call. To test inbound calling is working correctly, place inbound VoIP calls to the user of the Calling SDK
+1. Ensure your application supports [receiving an incoming call](../../how-tos/calling-sdk/manage-calls.md#receive-an-incoming-call) so call-backs from the PSAP are appropriately routed to the originator of the emergency call. To test inbound calling is working correctly, place inbound VoIP calls to the user of the Calling SDK
The Emergency service is temporarily free to use for Azure Communication Services customers within reasonable use; however, billing for the service will be enabled in 2022. Calls to 911 are capped at 10 concurrent calls per Azure resource.
cosmos-db Bulk Executor Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/bulk-executor-dotnet.md
Title: Ingest data in bulk in Azure Cosmos DB for Gremlin by using a bulk executor library description: Learn how to use a bulk executor library to massively import graph data into an Azure Cosmos DB for Gremlin container. -+ Last updated 05/10/2022
cosmos-db Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/cli-samples.md
Title: Azure CLI Samples for Azure Cosmos DB for Gremlin
description: Azure CLI Samples for Azure Cosmos DB for Gremlin -+ Last updated 08/19/2022
cosmos-db Execution Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/execution-profile.md
Title: Use the execution profile to evaluate queries in Azure Cosmos DB for Grem
description: Learn how to troubleshoot and improve your Gremlin queries using the execution profile step. -+ Last updated 03/27/2019
cosmos-db Find Request Unit Charge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/find-request-unit-charge.md
Title: Find request unit (RU) charge for Gremlin API queries in Azure Cosmos DB description: Learn how to find the request unit (RU) charge for Gremlin queries executed against an Azure Cosmos container. You can use the Azure portal, .NET, Java drivers to find the RU charge. -+ Last updated 10/14/2020
cosmos-db Modeling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/modeling.md
Title: 'Graph data modeling for Azure Cosmos DB for Gremlin' description: Learn how to model a graph database by using Azure Cosmos DB for Gremlin. This article describes when to use a graph database and best practices to model entities and relationships. -+ Last updated 12/02/2019
cosmos-db Quickstart Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/quickstart-console.md
Title: 'Query with Azure Cosmos DB for Gremlin using TinkerPop Gremlin Console: Tutorial' description: An Azure Cosmos DB quickstart to creates vertices, edges, and queries using the Azure Cosmos DB for Gremlin. -+ Last updated 07/10/2020
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/quickstart-dotnet.md
description: Presents a .NET Framework/Core code sample you can use to connect t
-+ ms.devlang: csharp Last updated 05/02/2020
cosmos-db Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/quickstart-java.md
Title: Build a graph database with Java in Azure Cosmos DB description: Presents a Java code sample you can use to connect to and query graph data in Azure Cosmos DB using Gremlin. -+ ms.devlang: java Last updated 03/26/2019
cosmos-db Quickstart Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/quickstart-nodejs.md
Title: Build an Azure Cosmos DB Node.js application by using Gremlin API description: Presents a Node.js code sample you can use to connect to and query Azure Cosmos DB -+ ms.devlang: javascript Last updated 06/05/2019
cosmos-db Quickstart Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/quickstart-php.md
Title: 'Quickstart: Gremlin API with PHP - Azure Cosmos DB' description: Follow this quickstart to run a PHP console application that populates an Azure Cosmos DB for Gremlin database in the Azure portal. -+ ms.devlang: php Last updated 06/29/2022
cosmos-db Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/quickstart-python.md
Title: 'Quickstart: Gremlin API with Python - Azure Cosmos DB' description: This quickstart shows how to use the Azure Cosmos DB for Gremlin to create a console application with the Azure portal and Python -+ ms.devlang: python Last updated 03/29/2021
cosmos-db How To Dotnet Manage Collections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-dotnet-manage-collections.md
Here are some quick rules when naming a collection:
Use an instance of the **Collection** class to access the collection on the server. -- [MongoClient.Database.Collection](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html)
+- [MongoClient.Database.Collection](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/T_MongoDB_Driver_MongoCollection.htm)
The following code snippets assume you've already created your [client connection](how-to-dotnet-get-started.md#create-mongoclient-with-connection-string).
The following code snippets assume you've already created your [client connectio
To create a collection, insert a document into the collection. -- [MongoClient.Database.Collection](https://mongodb.github.io/node-mongodb-native/4.5/classes/Db.html#collection)-- [MongoClient.Database.Collection.InsertOne](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#insertOne)-- [MongoClient.Database.Collection.InsertMany](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#insertMany)
+- [MongoClient.Database.Collection](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/T_MongoDB_Driver_MongoCollection.htm)
+- [MongoClient.Database.Collection.InsertOne](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_IMongoCollection_1_InsertOne_1.htm)
+- [MongoClient.Database.Collection.InsertMany](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_IMongoCollection_1_InsertMany_1.htm)
:::code language="csharp" source="~/azure-cosmos-mongodb-dotnet/110-manage-collections/program.cs" id="create_collection"::: ## Drop a collection -- [MongoClient.Db.dropCollection](https://mongodb.github.io/node-mongodb-native/4.7/classes/Db.html#dropCollection)
+- [MongoClient.Db.dropCollection](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/M_MongoDB_Driver_IMongoDatabase_DropCollection_3.htm)
Drop the collection from the database to remove it permanently. However, the next insert or update operation that accesses the collection will create a new collection with that name.
Drop the collection from the database to remove it permanently. However, the nex
An index is used by the MongoDB query engine to improve performance to database queries. -- [MongoClient.Database.Collection.indexes](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#indexes)
+- [MongoClient.Database.Collection.indexes](https://mongodb.github.io/mongo-csharp-driver/2.17/apidocs/html/P_MongoDB_Driver_IMongoCollection_1_Indexes.htm)
:::code language="csharp" source="~/azure-cosmos-mongodb-dotnet/110-manage-collections/program.cs" id="get_indexes":::
cosmos-db Tutorial Aggregation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-aggregation.md
+
+ Title: Getting Started with Aggregation Pipeline
+description: Learn how to get started with Cosmos DB for MongoDB aggregation pipeline for advanced data analysis and manipulation.
+++++ Last updated : 01/24/2023+++
+# Getting Started with Aggregation Pipeline
+
+The aggregation pipeline is a powerful tool that allows developers to perform advanced data analysis and manipulation on their collections. The pipeline is a sequence of data processing operations, which are performed on the input documents to produce a computed output. The pipeline stages process the input documents and pass the result to the next stage. Each stage performs a specific operation on the data, such as filtering, grouping, sorting, and transforming.
+
+## Basic Syntax
+
+The basic syntax for an aggregation pipeline is as follows:
+
+```javascript
+db.collection.aggregate([ { stage1 }, { stage2 }, ... { stageN }])
+```
+
+Here, `db.collection` is the MongoDB collection you want to perform the aggregation on, and `stage1`, `stage2`, ..., `stageN` are the pipeline stages you want to apply.
+
+## Sample Stages
+
+Cosmos DB for MongoDB provides a wide range of stages that you can use in your pipeline, including:
+
+* `$match`: Filters the documents to pass only those that match the specified condition.
+* `$project`: Transforms the documents to a new form by adding, removing, or updating fields.
+* `$group`: Groups documents by one or more fields and performs various aggregate functions on the grouped data.
+* `$sort`: Sorts the documents based on the specified fields.
+* `$skip`: Skips the specified number of documents.
+* `$limit`: Limits the number of documents passed to the next stage.
+* `$unwind`: Deconstructs an array field from the input documents to output a document for each element.
+
+To view all available stages, see [supported features](feature-support-42.md).
+
+## Examples
+
+Here are some examples of how you can use the aggregation pipeline to perform various operations on your data:
+
+Filtering: To filter documents that have a "quantity" field greater than 20, you can use the following pipeline:
+```javascript
+db.collection.aggregate([
+ { $match: { quantity: { $gt: 20 } } }
+])
+```
+
+Grouping: To group documents by the "category" field and calculate the total "quantity" for each group, you can use the following pipeline:
+```javascript
+db.collection.aggregate([
+ { $group: { _id: "$category", totalQuantity: { $sum: "$quantity" } } }
+])
+```
+
+Sorting: To sort documents by the "price" field in descending order, you can use the following pipeline:
+```javascript
+db.collection.aggregate([
+ { $sort: { price: -1 } }
+])
+```
+
+Transforming: To add a new field "discount" to documents that have a "price" greater than 100, you can use the following pipeline:
+
+```javascript
+db.collection.aggregate([
+ { $project: { item: 1, price: 1, discount: { $cond: [{ $gt: ["$price", 100] }, 10, 0 ] } } }
+])
+```
+
+Unwinding: To separate all the subdocuments from the array field 'tags' and create a new document for each value, you can use the following pipeline:
+```javascript
+db.collection.aggregate([
+ { $unwind: "$tags" }
+])
+```
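A rough mental model for `$unwind`, sketched in plain JavaScript over an in-memory array (illustrative only; the real stage runs server-side, and the sample documents are made up):

```javascript
// Approximates { $unwind: "$tags" }: one output document per array element,
// with the array field replaced by the single element.
const docs = [
  { _id: 1, item: "shirt", tags: ["red", "cotton"] },
  { _id: 2, item: "mug", tags: ["ceramic"] },
];

// Empty or missing arrays produce no output, matching $unwind's default.
const unwound = docs.flatMap((doc) =>
  (doc.tags ?? []).map((tag) => ({ ...doc, tags: tag }))
);

console.log(unwound);
// [ { _id: 1, item: 'shirt', tags: 'red' },
//   { _id: 1, item: 'shirt', tags: 'cotton' },
//   { _id: 2, item: 'mug', tags: 'ceramic' } ]
```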
+
+## Example with multiple stages
+
+```javascript
+db.sales.aggregate([
+ { $match: { date: { $gte: "2021-01-01", $lt: "2021-03-01" } } },
+ { $group: { _id: "$category", totalSales: { $sum: "$sales" } } },
+ { $sort: { totalSales: -1 } },
+ { $limit: 5 }
+])
+```
+
+In this example, we are using a sample collection called "sales", which has documents with the following fields: "date", "category", and "sales".
+
+The first stage, `{ $match: { date: { $gte: "2021-01-01", $lt: "2021-03-01" } } }`, filters the documents by the "date" field, only passing documents with a date between January 1, 2021 and February 28, 2021. The dates use the "YYYY-MM-DD" string format.
+
+The second stage, `{ $group: { _id: "$category", totalSales: { $sum: "$sales" } } }`, groups the documents by the "category" field and calculates the total sales for each group.
+
+The third stage, `{ $sort: { totalSales: -1 } }`, sorts the documents by the "totalSales" field in descending order.
+
+The fourth stage, `{ $limit: 5 }`, limits the number of documents passed to the next stage to only the top 5.
+
+As a result, the pipeline will return the top 5 categories by total sales for the specified date range.
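To see what each stage contributes, the same top-categories computation can be sketched in plain JavaScript over an in-memory array (illustrative only; the sample data is made up, and the real pipeline executes server-side):

```javascript
const sales = [
  { date: "2021-01-15", category: "books", sales: 120 },
  { date: "2021-02-10", category: "games", sales: 300 },
  { date: "2021-02-20", category: "books", sales: 80 },
  { date: "2021-03-05", category: "music", sales: 500 }, // outside the range
];

// $match: keep only documents in the date range (ISO date strings sort lexically)
const matched = sales.filter(
  (d) => d.date >= "2021-01-01" && d.date < "2021-03-01"
);

// $group: total sales per category
const totals = {};
for (const d of matched) {
  totals[d.category] = (totals[d.category] ?? 0) + d.sales;
}

// $sort (descending) + $limit 5
const top5 = Object.entries(totals)
  .map(([category, totalSales]) => ({ _id: category, totalSales }))
  .sort((a, b) => b.totalSales - a.totalSales)
  .slice(0, 5);

console.log(top5);
// [ { _id: 'games', totalSales: 300 }, { _id: 'books', totalSales: 200 } ]
```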
+
+## Next steps
+
+- Learn how to [use Studio 3T](connect-using-mongochef.md) with Azure Cosmos DB for MongoDB.
+- Learn how to [use Robo 3T](connect-using-robomongo.md) with Azure Cosmos DB for MongoDB.
+- Explore MongoDB [samples](nodejs-console-app.md) with Azure Cosmos DB for MongoDB.
+- Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+ - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md).
+ - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md).
cosmos-db Tutorial Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-delete.md
+
+ Title: Deleting Data in Cosmos DB for MongoDB
+description: Learn how to get started with deleting data in Cosmos DB for MongoDB.
+++++ Last updated : 01/24/2023+++
+# Deleting Data in Cosmos DB for MongoDB
+
+One of the most basic operations is deleting data in a collection. In this guide, we will cover everything you need to know about deleting data using the Mongo Shell (Mongosh).
+
+## Understanding the `deleteOne()` and `deleteMany()` Methods
+
+The most common way to delete data in MongoDB is to delete individual documents from a collection. You can do this using the `deleteOne()` or `deleteMany()` method.
+
+The `deleteOne()` method deletes a single document that matches a specific filter from a collection. For example, to delete a user with the name "John Doe" from the "users" collection, you would use the following command:
+
+```javascript
+db.users.deleteOne({ "name": "John Doe" })
+```
+
+The `deleteMany()` method, on the other hand, deletes all documents in a collection that match a specific filter. For example, to delete all users with an age less than 30 from the "users" collection, you would use the following command:
+
+```javascript
+db.users.deleteMany({ "age": { $lt: 30 } })
+```
+
+Both of these methods return an object with the following properties:
+
+* `deletedCount`: The number of documents deleted.
+* `acknowledged`: This property will be `true`.
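The shape of that result object can be illustrated with a small plain-JavaScript mock (the `fakeDeleteMany` helper and its data are hypothetical, purely to show how you might act on `deletedCount`):

```javascript
// Hypothetical in-memory stand-in for deleteMany(), returning an object
// with the same properties as the real result.
function fakeDeleteMany(docs, predicate) {
  const remaining = docs.filter((d) => !predicate(d));
  return { acknowledged: true, deletedCount: docs.length - remaining.length };
}

const users = [
  { name: "Ann", age: 25 },
  { name: "Bob", age: 42 },
  { name: "Cal", age: 19 },
];

// Mirrors db.users.deleteMany({ age: { $lt: 30 } })
const res = fakeDeleteMany(users, (u) => u.age < 30);
console.log(res); // { acknowledged: true, deletedCount: 2 }

// A typical check after a delete:
if (res.deletedCount === 0) {
  console.log("No documents matched the filter.");
}
```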
+
+## Deleting a Collection
+
+To delete an entire collection, use the drop() method. For example, if you wanted to delete the "users" collection, you would use the following command:
+
+```javascript
+db.users.drop()
+```
+
+This will delete the "users" collection and all of its documents permanently.
+
+## Next steps
+
+- Learn how to [use Studio 3T](connect-using-mongochef.md) with Azure Cosmos DB for MongoDB.
+- Learn how to [use Robo 3T](connect-using-robomongo.md) with Azure Cosmos DB for MongoDB.
+- Explore MongoDB [samples](nodejs-console-app.md) with Azure Cosmos DB for MongoDB.
+- Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+ - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md).
+ - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md).
cosmos-db Tutorial Insert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-insert.md
+
+ Title: Getting Started with Inserting data into Cosmos DB for MongoDB
+description: Learn how to get started with inserting data into Cosmos DB for MongoDB.
+++++ Last updated : 01/24/2023+++
+# Inserting Data into Cosmos DB for MongoDB
+One of the most basic operations is inserting data into a collection. In this guide, we will cover everything you need to know about inserting data using the Mongo Shell (Mongosh).
+
+## Inserting a Single Document
+
+The most basic way to insert data into MongoDB is to insert a single document. To do this, you can use the `db.collection.insertOne()` method. The `insertOne()` method takes a single document as its argument and inserts it into the specified collection. Here's an example of how you might use this method:
+
+```javascript
+db.myCollection.insertOne({
+ name: "John Smith",
+ age: 30,
+ address: "123 Main St"
+});
+```
+
+In this example, we're inserting a document into the "myCollection" collection with the following fields: "name", "age", and "address". Once the command is executed, you'll see `acknowledged: true` and `insertedId: ObjectId("5f5d5f5f5f5f5f5f5f5f5f5f")` in the output, where `insertedId` is the unique identifier generated by MongoDB for the inserted document.
+
+## Inserting Multiple Documents
+
+In many cases, you'll need to insert multiple documents at once. To do this, you can use the `db.collection.insertMany()` method. The `insertMany()` method takes an array of documents as its argument and inserts them into the specified collection. Here's an example:
+
+```javascript
+db.myCollection.insertMany([
+ {name: "Jane Doe", age: 25, address: "456 Park Ave"},
+ {name: "Bob Smith", age: 35, address: "789 Elm St"},
+ {name: "Sally Johnson", age: 40, address: "111 Oak St"}
+]);
+```
+
+In this example, we're inserting three documents into the `myCollection` collection, each with the same fields as the previous example: `name`, `age`, and `address`. The `insertMany()` method returns `acknowledged: true` and `insertedIds`, an array of the unique identifiers that MongoDB generated for each inserted document.
+
+## Inserting with Options
+
+Both insertOne() and insertMany() accept an optional second argument, which can be used to specify options for the insert operation. For example, to set the "ordered" option to false, you can use the following code:
+
+```javascript
+db.myCollection.insertMany([
+ {name: "Jane Doe", age: 25, address: "456 Park Ave"},
+ {name: "Bob Smith", age: 35, address: "789 Elm St"},
+ {name: "Sally Johnson", age: 40, address: "111 Oak St"}
+], {ordered: false});
+```
+
+This tells MongoDB to insert the documents in an unordered fashion: if one document fails to insert, MongoDB continues with the remaining documents instead of stopping. Unordered inserts are recommended for write performance in Azure Cosmos DB for MongoDB.
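+
+To see why this matters, here's a sketch in which one document in the batch is invalid (a duplicate `_id`, a hypothetical setup chosen just for illustration):
+
+```javascript
+db.myCollection.insertMany([
+  { _id: 1, name: "Jane Doe" },
+  { _id: 1, name: "Bob Smith" },    // fails: duplicate _id
+  { _id: 2, name: "Sally Johnson" }
+], { ordered: false });
+// With ordered: false, the first and third documents are still inserted.
+// With the default ordered: true, the operation stops at the failure,
+// so only the first document would be inserted.
+```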
+
+## Next steps
+
+- Learn how to [use Studio 3T](connect-using-mongochef.md) with Azure Cosmos DB for MongoDB.
+- Learn how to [use Robo 3T](connect-using-robomongo.md) with Azure Cosmos DB for MongoDB.
+- Explore MongoDB [samples](nodejs-console-app.md) with Azure Cosmos DB for MongoDB.
+- Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+ - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md).
+ - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md).
cosmos-db Tutorial Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-update.md
+
+ Title: Updating Data in Cosmos DB for MongoDB
+description: Learn how to get started with updating data in Cosmos DB for MongoDB.
+++++ Last updated : 01/24/2023+++
+# Updating Data in Cosmos DB for MongoDB
+
+One of the most basic operations is updating data in a collection. In this guide, we cover how to update data using the MongoDB shell (mongosh).
+
+## Using the updateOne() Method
+
+The updateOne() method updates the first document that matches a specified filter. The method takes two parameters:
+
+- `filter`: A document that specifies the criteria for the update. The filter is used to match the documents in the collection that should be updated, and it must be a valid query document.
+- `update`: A document that specifies the update operations to perform on the matching document, and it must be a valid update document.
+
+```javascript
+db.collection.updateOne(
+ <filter>,
+ <update>
+)
+```
+
+For example, to update the name of a customer with _id equal to 1, you can use the following command:
+
+```javascript
+db.customers.updateOne(
+ { _id: 1 },
+ { $set: { name: "Jane Smith" } }
+)
+```
+
+In the above example, `db.customers` is the collection name, `{ _id: 1 }` is the filter that matches the first document whose `_id` equals 1, and `{ $set: { name: "Jane Smith" } }` is the update operation that sets the `name` field of the matched document to "Jane Smith".
+
+You can also use other update operators, such as `$inc`, `$mul`, `$rename`, and `$unset`, to update the data.
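+
+For example, `$inc` increments a numeric field instead of replacing it. This sketch assumes the matched customer document has a numeric `age` field:
+
+```javascript
+db.customers.updateOne(
+  { _id: 1 },
+  { $inc: { age: 1 } }   // add 1 to the current value of age
+)
+```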
+
+## updateMany() Method
+
+The updateMany() method updates all documents that match a specified filter. The method takes two parameters:
+
+- `filter`: A document that specifies the criteria for the update. The filter is used to match the documents in the collection that should be updated, and it must be a valid query document.
+- `update`: A document that specifies the update operations to perform on the matching documents, and it must be a valid update document.
+
+```javascript
+db.collection.updateMany(
+ <filter>,
+ <update>
+)
+```
+
+For example, to update the name of all customers that live in "New York", you can use the following command:
+
+```javascript
+db.customers.updateMany(
+ { city: "New York" },
+ { $set: { name: "Jane Smith" } }
+)
+```
+
+In the above example, `db.customers` is the collection name, `{ city: "New York" }` is the filter that matches all documents whose `city` field equals "New York", and `{ $set: { name: "Jane Smith" } }` is the update operation that sets the `name` field of all matched documents to "Jane Smith".
+
+You can also use other update operators, such as `$inc`, `$mul`, `$rename`, and `$unset`, to update the data.
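+
+As another sketch, `$unset` removes a field from every matched document. This example assumes the customer documents carry a `discountCode` field, which is hypothetical:
+
+```javascript
+db.customers.updateMany(
+  { city: "New York" },
+  { $unset: { discountCode: "" } }   // remove the discountCode field entirely
+)
+```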
+
+## Next steps
+
+- Learn how to [use Studio 3T](connect-using-mongochef.md) with Azure Cosmos DB for MongoDB.
+- Learn how to [use Robo 3T](connect-using-robomongo.md) with Azure Cosmos DB for MongoDB.
+- Explore MongoDB [samples](nodejs-console-app.md) with Azure Cosmos DB for MongoDB.
+- Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+ - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md).
+ - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md).
cosmos-db Manage With Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/manage-with-terraform.md
Title: Create and manage Azure Cosmos DB with terraform
description: Use terraform to create and configure Azure Cosmos DB for NoSQL -+ Last updated 09/16/2022
cosmos-db Quickstart Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-terraform.md
tags: azure-resource-manager, terraform -+ Last updated 09/22/2022
cosmos-db Samples Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/samples-terraform.md
Title: Terraform samples for Azure Cosmos DB for NoSQL
description: Use Terraform to create and configure Azure Cosmos DB for NoSQL. -+ Last updated 09/16/2022
data-factory Data Flow Flowlet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-flowlet.md
Last updated 08/04/2022
[!INCLUDE[data-flow-preamble](includes/data-flow-preamble.md)]
-Use the flowlet transformation to run a previously create mapping data flow flowlet. For an overview of flowlets see [Flowlets in mapping data flow | Microsoft Docs](concepts-data-flow-flowlet.md)
+Use the flowlet transformation to run a previously created mapping data flow flowlet. For an overview of flowlets, see [Flowlets in mapping data flow | Microsoft Docs](concepts-data-flow-flowlet.md)
> [!NOTE] > The flowlet transformation in Azure Data Factory and Synapse Analytics pipelines is currently in public preview
databox-online Azure Stack Edge Mini R Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-mini-r-overview.md
For a discussion of considerations for choosing a region for the Azure Stack Edg
## Next steps -- Review the [Azure Stack Edge Mini R system requirements](azure-stack-edge-gpu-system-requirements.md).
+- Review the [Azure Stack Edge Mini R system requirements](azure-stack-edge-mini-r-system-requirements.md).
defender-for-cloud Iac Vulnerabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/iac-vulnerabilities.md
Once you have set up the Microsoft Security DevOps GitHub action or Azure DevOps
## View details and remediation information on IaC rules included with Microsoft Security DevOps
-### PowerShell-based rules
+The IaC scanning tools included with Microsoft Security DevOps are [Template Analyzer](https://github.com/Azure/template-analyzer) (which contains [PSRule](https://aka.ms/ps-rule-azure)) and [Terrascan](https://github.com/tenable/terrascan).
-Information about the PowerShell-based rules included by our integration with [PSRule for Azure](https://aka.ms/ps-rule-azure/rules). The tool will only evaluate the rules under the [Security pillar](https://azure.github.io/PSRule.Rules.Azure/en/rules/module/#security) unless the option `--include-non-security-rules` is used.
+Template Analyzer runs rules on ARM and Bicep templates. You can learn more about [Template Analyzer's rules and remediation details](https://github.com/Azure/template-analyzer/blob/main/docs/built-in-bpa-rules.md#built-in-rules).
-> [!NOTE]
-> PowerShell-based rules are included by our integration with [PSRule for Azure](https://aka.ms/ps-rule-azure/rules). The tool will evaluate all rules under the [Security pillar](https://azure.github.io/PSRule.Rules.Azure/en/rules/module/#security).
-
-### JSON-Based Rules:
-
-JSON-based rules for ARM templates and bicep files are provided by [Template-Analyzer](https://github.com/Azure/template-analyzer#template-best-practice-analyzer-bpa). Below are details on template-analyzer's rules and remediation details.
-
-> [!NOTE]
-> Severity levels are scaled from 1 to 3. Where 1 = High, 2 = Medium, 3 = Low.
-
-#### TA-000001: Diagnostic logs in App Services should be enabled
-
-Audits the enabling of diagnostic logs on the app. This enables you to recreate activity trails for investigation purposes if a security incident occurs or your network is compromised.
-
-**Recommendation**: To [enable diagnostic logging](../app-service/troubleshoot-diagnostic-logs.md), in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json), add (or update) the *detailedErrorLoggingEnabled*, *httpLoggingEnabled*, and *requestTracingEnabled* properties, setting their values to `true`.
-
-**Severity level**: 2
-
-#### TA-000002: Remote debugging should be turned off for API Apps
-
-Remote debugging requires inbound ports to be opened on an API app. These ports become easy targets for compromise from various internet-based attacks. If you no longer need to use remote debugging, it should be turned off.
-
-**Recommendation**: To disable remote debugging, in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), remove the *remoteDebuggingEnabled* property or update its value to `false`.
-
-**Severity level**: 3
-
-#### TA-000003: FTPS only should be required in your API App
-
-Enable FTPS enforcement for enhanced security.
-
-**Recommendation**: To [enforce FTPS](../app-service/deploy-ftp.md?tabs=portal#enforce-ftps) in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), add (or update) the *ftpsState* property, setting its value to `"FtpsOnly"` or `"Disabled"` if you don't need FTPS enabled.
-
-**Severity level**: 1
-
-#### TA-000004: API App Should Only Be Accessible Over HTTPS
-
-API apps should require HTTPS to ensure connections are made to the expected server and data in transit is protected from network layer eavesdropping attacks.
-
-**Recommendation**: To [use HTTPS to ensure, server/service authentication and protect data in transit from network layer eavesdropping attacks](../app-service/configure-ssl-bindings.md#enforce-https) in the [Microsoft.Web/Sites resource properties](/azure/templates/microsoft.web/sites?tabs=json#siteproperties-object), add (or update) the *httpsOnly* property, setting its value to `true`.
-
-**Severity level**: 2
-
-#### TA-000005: Latest TLS version should be used in your API App
-
-API apps should require the latest TLS version.
-
-**Recommendation**: To [enforce the latest TLS version](../app-service/configure-ssl-bindings.md#enforce-tls-versions) in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), add (or update) the *minTlsVersion* property, setting its value to `1.2`.
-
-**Severity level**: 1
-
-#### TA-000006: CORS shouldn't allow every resource to access your API App
-
-Cross-Origin Resource Sharing (CORS) shouldn't allow all domains to access your API app. Allow only required domains to interact with your API app.
-
-**Recommendation**: To allow only required domains to interact with your API app, in the [Microsoft.Web/sites/config resource cors settings object](/azure/templates/microsoft.web/sites/config-web?tabs=json#corssettings-object), add (or update) the *allowedOrigins* property, setting its value to an array of allowed origins. Ensure it isn't* set to "*" (asterisks allows all origins).
-
-**Severity level**: 3
-
-#### TA-000007: Managed identity should be used in your API App
-
-For enhanced authentication security, use a managed identity. On Azure, managed identities eliminate the need for developers to have to manage credentials by providing an identity for the Azure resource in Azure AD and using it to obtain Azure Active Directory (Azure AD) tokens.
-
-**Recommendation**: To [use Managed Identity](../app-service/overview-managed-identity.md?tabs=dotnet), in the [Microsoft.Web/sites resource managed identity property](/azure/templates/microsoft.web/sites?tabs=json#ManagedServiceIdentity), add (or update) the *type* property, setting its value to `"SystemAssigned"` or `"UserAssigned"` and providing any necessary identifiers for the identity if necessary.
-
-**Severity level**: 2
-
-#### TA-000008: Remote debugging should be turned off for Function Apps
-
-Remote debugging requires inbound ports to be opened on a function app. These ports become easy targets for compromise from various internet-based attacks. If you no longer need to use remote debugging, it should be turned off.
-
-**Recommendation**: To disable remote debugging, in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), remove the *remoteDebuggingEnabled* property or update its value to `false`.
-
-**Severity level**: 3
-
-#### TA-000009: FTPS only should be required in your Function App
-
-Enable FTPS enforcement for enhanced security.
-
-**Recommendation**: To [enforce FTPS](../app-service/deploy-ftp.md?tabs=portal#enforce-ftps), in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), add (or update) the *ftpsState* property, setting its value to `"FtpsOnly"` or `"Disabled"` if you don't need FTPS enabled.
-
-**Severity level**: 1
-
-#### TA-000010: Function App Should Only Be Accessible Over HTTPS
-
-Function apps should require HTTPS to ensure connections are made to the expected server and data in transit is protected from network layer eavesdropping attacks.
-
-**Recommendation**: To [use HTTPS to ensure, server/service authentication and protect data in transit from network layer eavesdropping attacks](../app-service/configure-ssl-bindings.md#enforce-https), in the [Microsoft.Web/Sites resource properties](/azure/templates/microsoft.web/sites?tabs=json#siteproperties-object), add (or update) the *httpsOnly* property, setting its value to `true`.
-
-**Severity level**: 2
-
-#### TA-000011: Latest TLS version should be used in your Function App
-
-Function apps should require the latest TLS version.
-
-**Recommendation**: To [enforce the latest TLS version](../app-service/configure-ssl-bindings.md#enforce-tls-versions), in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), add (or update) the *minTlsVersion* property, setting its value to `1.2`.
-
-**Severity level**: 1
-
-#### TA-000012: CORS shouldn't allow every resource to access your Function Apps
-
-Cross-Origin Resource Sharing (CORS) shouldn't allow all domains to access your function app. Allow only required domains to interact with your function app.
-
-**Recommendation**: To allow only required domains to interact with your function app, in the [Microsoft.Web/sites/config resource cors settings object](/azure/templates/microsoft.web/sites/config-web?tabs=json#corssettings-object), add (or update) the *allowedOrigins* property, setting its value to an array of allowed origins. Ensure it isn't* set to "*" (asterisks allows all origins).
-
-**Severity level**: 3
-
-#### TA-000013: Managed identity should be used in your Function App
-
-For enhanced authentication security, use a managed identity. On Azure, managed identities eliminate the need for developers to have to manage credentials by providing an identity for the Azure resource in Azure AD and using it to obtain Azure Active Directory (Azure AD) tokens.
-
-**Recommendation**: To [use Managed Identity](../app-service/overview-managed-identity.md?tabs=dotnet), in the [Microsoft.Web/sites resource managed identity property](/azure/templates/microsoft.web/sites?tabs=json#ManagedServiceIdentity), add (or update) the *type* property, setting its value to `"SystemAssigned"` or `"UserAssigned"` and providing any necessary identifiers for the identity if necessary.
-
-**Severity level**: 2
-
-#### TA-000014: Remote debugging should be turned off for Web Applications
-
-Remote debugging requires inbound ports to be opened on a web application. These ports become easy targets for compromise from various internet-based attacks. If you no longer need to use remote debugging, it should be turned off.
-
-**Recommendation**: To disable remote debugging, in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), remove the *remoteDebuggingEnabled* property or update its value to `false`.
-
-**Severity level**: 3
-
-#### TA-000015: FTPS only should be required in your Web App
-
-Enable FTPS enforcement for enhanced security.
-
-**Recommendation**: To [enforce FTPS](../app-service/deploy-ftp.md?tabs=portal#enforce-ftps), in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), add (or update) the *ftpsState* property, setting its value to `"FtpsOnly"` or `"Disabled"` if you don't need FTPS enabled.
-
-**Severity level**: 1
-
-#### TA-000016: Web Application Should Only Be Accessible Over HTTPS
-
-Web apps should require HTTPS to ensure connections are made to the expected server and data in transit is protected from network layer eavesdropping attacks.
-
-**Recommendation**: To [use HTTPS to ensure server/service authentication and protect data in transit from network layer eavesdropping attacks](../app-service/configure-ssl-bindings.md#enforce-https), in the [Microsoft.Web/Sites resource properties](/azure/templates/microsoft.web/sites?tabs=json#siteproperties-object), add (or update) the *httpsOnly* property, setting its value to `true`.
-
-**Severity level**: 2
-
-#### TA-000017: Latest TLS version should be used in your Web App
-
-Web apps should require the latest TLS version.
-
-**Recommendation**:
-To [enforce the latest TLS version](../app-service/configure-ssl-bindings.md#enforce-tls-versions), in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), add (or update) the *minTlsVersion* property, setting its value to `1.2`.
-
-**Severity level**: 1
-
-#### TA-000018: CORS shouldn't allow every resource to access your Web Applications
-
-Cross-Origin Resource Sharing (CORS) shouldn't allow all domains to access your Web application. Allow only required domains to interact with your web app.
-
-**Recommendation**: To allow only required domains to interact with your web app, in the [Microsoft.Web/sites/config resource cors settings object](/azure/templates/microsoft.web/sites/config-web?tabs=json#corssettings-object), add (or update) the *allowedOrigins* property, setting its value to an array of allowed origins. Ensure it isn't* set to "*" (asterisks allows all origins).
-
-**Severity level**: 3
-
-#### TA-000019: Managed identity should be used in your Web App
-
-For enhanced authentication security, use a managed identity. On Azure, managed identities eliminate the need for developers to have to manage credentials by providing an identity for the Azure resource in Azure AD and using it to obtain Azure Active Directory (Azure AD) tokens.
-
-**Recommendation**: To [use Managed Identity](../app-service/overview-managed-identity.md?tabs=dotnet), in the [Microsoft.Web/sites resource managed identity property](/azure/templates/microsoft.web/sites?tabs=json#ManagedServiceIdentity), add (or update) the *type* property, setting its value to `"SystemAssigned"` or `"UserAssigned"` and providing any necessary identifiers for the identity if necessary.
-
-**Severity level**: 2
-
-#### TA-000020: Audit usage of custom RBAC roles
-
-Audit built-in roles such as 'Owner, Contributer, Reader' instead of custom RBAC roles, which are error prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling.
-
-**Recommendation**: [Use built-in roles such as 'Owner, Contributer, Reader' instead of custom RBAC roles](../role-based-access-control/built-in-roles.md)
-
-**Severity level**: 3
-
-#### TA-000021: Automation account variables should be encrypted
-
-It's important to enable encryption of Automation account variable assets when storing sensitive data. This step can only be taken at creation time. If you have Automation Account Variables storing sensitive data that aren't already encrypted, then you'll need to delete them and recreate them as encrypted variables. To apply encryption of the Automation account variable assets, in Azure PowerShell - run [the following command](/powershell/module/az.automation/set-azautomationvariable?view=azps-5.4.0&viewFallbackFrom=azps-1.4.0): `Set-AzAutomationVariable -AutomationAccountName '{AutomationAccountName}' -Encrypted $true -Name '{VariableName}' -ResourceGroupName '{ResourceGroupName}' -Value '{Value}'`
-
-**Recommendation**: [Enable encryption of Automation account variable assets](../automation/shared-resources/variables.md?tabs=azure-powershell)
-
-**Severity level**: 1
-
-#### TA-000022: Only secure connections to your Azure Cache for Redis should be enabled
-
-Enable only connections via SSL to Redis Cache. Use of secure connections ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking.
-
-**Recommendation**: To [enable only connections via SSL to Redis Cache](/security/benchmark/azure/baselines/azure-cache-for-redis-security-baseline?toc=/azure/azure-cache-for-redis/TOC.json#44-encrypt-all-sensitive-information-in-transit), in the [Microsoft.Cache/Redis resource properties](/azure/templates/microsoft.cache/redis?tabs=json#rediscreateproperties-object), update the value of the *enableNonSslPort* property from `true` to `false` or remove the property from the template as the default value is `false`.
-
-**Severity level**: 1
-
-#### TA-000023: Authorized IP ranges should be defined on Kubernetes Services
-
-To ensure that only applications from allowed networks, machines, or subnets can access your cluster, restrict access to your Kubernetes Service Management API server. It's recommended to limit access to authorized IP ranges to ensure that only applications from allowed networks can access the cluster.
-
-**Recommendation**: [Restrict access by defining authorized IP ranges](../aks/api-server-authorized-ip-ranges.md) or [set up your API servers as private clusters](../aks/private-clusters.md)
-
-**Severity level**: 1
-
-#### TA-000024: Role-Based Access Control (RBAC) should be used on Kubernetes Services
-
-To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. To Use Role-Based Access Control (RBAC) you must recreate your Kubernetes Service cluster and enable RBAC during the creation process.
-
-**Recommendation**: [Enable RBAC in Kubernetes clusters](../aks/operator-best-practices-identity.md#use-azure-rbac)
-
-**Severity level**: 1
-
-#### TA-000025: Kubernetes Services should be upgraded to a non-vulnerable Kubernetes version
-
-Upgrade your Kubernetes service cluster to a later Kubernetes version to protect against known vulnerabilities in your current Kubernetes version. [Vulnerability CVE-2019-9946](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9946) has been patched in Kubernetes versions 1.11.9+, 1.12.7+, 1.13.5+, and 1.14.0+. Running on older versions could mean you aren't using latest security classes. Usage of such old classes and types can make your application vulnerable.
-
-**Recommendation**: To [upgrade Kubernetes service clusters](../aks/upgrade-cluster.md), in the [Microsoft.ContainerService/managedClusters resource properties](/azure/templates/microsoft.containerservice/managedclusters?tabs=json#managedclusterproperties-object), update the *kubernetesVersion* property, setting its value to one of the following versions (making sure to specify the minor version number): 1.11.9+, 1.12.7+, 1.13.5+, or 1.14.0+.
-
-**Severity level**: 1
-
-#### TA-000026: Service Fabric clusters should only use Azure Active Directory for client authentication
-
-Service Fabric clusters should only use Azure Active Directory for client authentication. A Service Fabric cluster offers several entry points to its management functionality, including the web-based Service Fabric Explorer, Visual Studio and PowerShell. Access to the cluster must be controlled using AAD.
-
-**Recommendation**: [Enable AAD client authentication on your Service Fabric clusters](../service-fabric/service-fabric-cluster-creation-setup-aad.md)
-
-**Severity level**: 1
-
-#### TA-000027: Transparent Data Encryption on SQL databases should be enabled
-
-Transparent data encryption should be enabled to protect data-at-rest and meet compliance requirements.
-
-**Recommendation**: To [enable transparent data encryption](/azure/azure-sql/database/transparent-data-encryption-tde-overview?tabs=azure-portal), in the [Microsoft.Sql/servers/databases/transparentDataEncryption resource properties](/azure/templates/microsoft.sql/servers/databases/transparentdataencryption?tabs=json), add (or update) the value of the *state* property to `enabled`.
-
-**Severity level**: 3
-
-#### TA-000028: SQL servers with auditing to storage account destination should be configured with 90 days retention or higher
-
-Set the data retention for your SQL Server's auditing to storage account destination to at least 90 days.
-
-**Recommendation**: For incident investigation purposes, we recommend setting the data retention for your SQL Server's auditing to storage account destination to at least 90 days, in the [Microsoft.Sql/servers/auditingSettings resource properties](/azure/templates/microsoft.sql/2020-11-01-preview/servers/auditingsettings?tabs=json#serverblobauditingpolicyproperties-object), using the *retentionDays* property. Confirm that you're meeting the necessary retention rules for the regions in which you're operating. This is sometimes required for compliance with regulatory standards.
-
-**Severity level**: 3
-
-#### TA-000029: Azure API Management APIs should use encrypted protocols only
-
-Set the protocols property to only include HTTPS.
-
-**Recommendation**: To use encrypted protocols only, add (or update) the *protocols* property in the [Microsoft.ApiManagement/service/apis resource properties](/azure/templates/microsoft.apimanagement/service/apis?tabs=json), to only include HTTPS. Allowing any other protocols (for example, HTTP, WS) is insecure.
-
-**Severity level**: 1
+Terrascan runs rules on ARM, CloudFormation, Docker, Helm, Kubernetes, Kustomize, and Terraform templates. You can learn more about the [Terrascan rules](https://runterrascan.io/docs/policies/).
## Learn more -- Learn more about the [Template Best Practice Analyzer](https://github.com/Azure/template-analyzer).
+- Learn more about [Template Analyzer](https://github.com/Azure/template-analyzer).
+- Learn more about [PSRule](https://aka.ms/ps-rule-azure).
+- Learn more about [Terrascan](https://runterrascan.io/).
In this tutorial you learned how to configure the Microsoft Security DevOps GitHub Action and Azure DevOps Extension to scan for Infrastructure as Code (IaC) security misconfigurations and how to view the results.
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 02/01/2023 Last updated : 02/07/2023 # What's new in Microsoft Defender for Cloud?
The policy [`Vulnerability Assessment settings for SQL server should contain an
The Defender for SQL vulnerability assessment email report is still available and existing email configurations haven't changed.
-## Recommendation to enable diagnostic logs for Virtual Machine Scale Sets has been deprecated
+### Recommendation to enable diagnostic logs for Virtual Machine Scale Sets has been deprecated
The recommendation `Diagnostic logs in Virtual Machine Scale Sets should be enabled` has been deprecated.
defender-for-cloud Supported Machines Endpoint Solutions Clouds Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-servers.md
The **tabs** below show the features of Microsoft Defender for Cloud that are av
| Third-party vulnerability assessment | ✔ | ✔ | No | | [Network security assessment](protect-network-resources.md) | ✔ | - | No | - ### [**Linux machines**](#tab/features-linux) | **Feature** | **Azure Virtual Machines and [Virtual Machine Scale Sets with Flexible orchestration](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration)** | **Azure Arc-enabled machines** | **Defender for Servers required** |
The **tabs** below show the features of Microsoft Defender for Cloud that are av
| Disk encryption assessment | ✔</br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md#unsupported-scenarios)) | ✔</br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md#unsupported-scenarios)) | | Third-party vulnerability assessment | - | - | | [Network security assessment](protect-network-resources.md) | - | - |-
+| [Cloud security explorer](how-to-manage-cloud-security-explorer.md) | ✔ | - |
For information about when recommendations are generated for each of these solut
| - [Microsoft Defender for Azure Cosmos DB](concept-defender-for-cosmos.md) | Public Preview | Not Available | Not Available | | - [Kubernetes workload protection](./kubernetes-workload-protections.md) | GA | GA | GA | | - [Bi-directional alert synchronization with Sentinel](../sentinel/connect-azure-security-center.md) | Public Preview | Not Available | Not Available |
+| - [Cloud security explorer](how-to-manage-cloud-security-explorer.md) | Preview | Not Available | Not Available |
| **Microsoft Defender for Servers features** <sup>[7](#footnote7)</sup> | | | | | - [Just-in-time VM access](./just-in-time-access-usage.md) | GA | GA | GA | | - [File Integrity Monitoring](./file-integrity-monitoring-overview.md) | GA | GA | GA |
defender-for-iot How To Manage Individual Sensors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-individual-sensors.md
This section describes how to ensure connection between the sensor and the on-pr
1. Sign in to the on-premises management console.
-2. Select **System Settings**.
+1. Select **System Settings**.
-3. In the **Sensor Setup – Connection String** section, copy the automatically generated connection string.
+1. In the **Sensor Setup – Connection String** section, copy the automatically generated connection string.
:::image type="content" source="media/how-to-manage-individual-sensors/connection-string-screen.png" alt-text="Screenshot of the Connection string screen.":::
-4. Sign in to the sensor console.
+1. Sign in to the sensor console.
-5. On the left pane, select **System Settings**.
+1. On the left pane, select **System Settings**.
-6. Select **Management Console Connection**.
+1. Select **Management Console Connection**.
:::image type="content" source="media/how-to-manage-individual-sensors/management-console-connection-screen.png" alt-text="Screenshot of the Management Console Connection dialog box.":::
-7. Paste the connection string in the **Connection string** box and select **Connect**.
+1. Paste the connection string in the **Connection string** box and select **Connect**.
-8. In the on-premises management console, in the **Site Management** window, assign the sensor to a site and zone.
+1. In the on-premises management console, in the **Site Management** window, assign the sensor to a site and zone.
Continue with additional settings, such as [adding users](how-to-create-and-manage-users.md), [setting up an SMTP server](how-to-manage-individual-sensors.md#configure-smtp-settings), [forwarding alert rules](how-to-forward-alert-information-to-partners.md), and more. For more information, see [Activate and set up your on-premises management console](how-to-activate-and-set-up-your-on-premises-management-console.md).
If you create a new IP address, you might be required to sign in again.
1. On the side menu, select **System Settings**.
-2. In the **System Settings** window, select **Network**.
+1. In the **System Settings** window, select **Network**.
-3. Set the parameters:
+1. Set the parameters:
| Parameter | Description | |--|--|
If you create a new IP address, you might be required to sign in again.
| Hostname | The sensor hostname | | Proxy | Proxy host and port name |
-4. Select **Save**.
+1. Select **Save**.
## Synchronize time zones on the sensor
You can configure the sensor's time and region so that all the users see the sam
1. On the side menu, select **System settings** > **Basic**, > **Time & Region**.
-3. Set the parameters and select **Save**.
+1. Set the parameters and select **Save**.
## Set up backup and restore files
Sensor backup files are automatically named through the following format: `<sens
Get the folder path, username, and password required to access the SMB server.
-2. In the sensor, make a directory for the backups:
+1. In the sensor, make a directory for the backups:
- `sudo mkdir /<backup_folder_name_on_cyberx_server>` - `sudo chmod 777 /<backup_folder_name_on_cyberx_server>/`
-3. Edit `fstab`:
+1. Edit `fstab`:
- `sudo nano /etc/fstab` - `add - //<server_IP>/<folder_path> /<backup_folder_name_on_cyberx_server> cifs rw,credentials=/etc/samba/user,vers=X.X,uid=cyberx,gid=cyberx,file_mode=0777,dir_mode=0777 0 0`
-4. Edit and create credentials to share for the SMB server:
+1. Edit and create credentials to share for the SMB server:
`sudo nano /etc/samba/user`
-5. Add:
+1. Add:
- `username=<user name>` - `password=<password>`
-6. Mount the directory:
+1. Mount the directory:
`sudo mount -a`
-7. Configure a backup directory to the shared folder on the Defender for IoT sensor:
+1. Configure a backup directory to the shared folder on the Defender for IoT sensor:
- `sudo nano /var/cyberx/properties/backup.properties`
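The `fstab` entry above has several positional fields that are easy to get wrong. As a sanity check, here's a minimal Python sketch (a hypothetical helper, not part of Defender for IoT) that composes the CIFS mount line from the same placeholders used in the steps; the server IP, share name, and SMB version below are illustrative values only:

```python
def cifs_fstab_entry(server_ip: str, folder_path: str, backup_dir: str,
                     smb_version: str = "3.0") -> str:
    """Compose an /etc/fstab line for the SMB backup share.

    fstab fields are: <device> <mount point> <fs type> <options> <dump> <pass>.
    The uid/gid/mode options mirror the values shown in the procedure above.
    """
    options = (f"rw,credentials=/etc/samba/user,vers={smb_version},"
               f"uid=cyberx,gid=cyberx,file_mode=0777,dir_mode=0777")
    return f"//{server_ip}/{folder_path} /{backup_dir} cifs {options} 0 0"

# Illustrative placeholders, not real infrastructure values.
entry = cifs_fstab_entry("192.168.1.10", "sensor-backups", "backups")
print(entry)
```

Pasting the generated line into `/etc/fstab` and running `sudo mount -a` follows the same flow as the steps above.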
For more information about forwarding rules, see [Forward alert information](how
When troubleshooting, you may want to examine data recorded by a specific PCAP file. To do so, you can upload a PCAP file to your sensor console and replay the data recorded.
-To view the PCAP player in your sensor console, you'll first need to configure the relevant advanced configuration option.
+The **Play PCAP** option is enabled by default in the sensor console's settings.
Maximum size for uploaded files is 2 GB.
-**To show the PCAP player in your sensor console**:
-
-1. On your sensor console, go to **System settings > Sensor management > Advanced Configurations**.
-
-1. In the **Advanced configurations** pane, select the **Pcaps** category.
-
-1. In the configurations displayed, change `enabled=0` to `enabled=1`, and select **Save**.
-
-The **Play PCAP** option is now available in the sensor console's settings, under: **System settings > Basic > Play PCAP**.
- **To upload and play a PCAP file**: 1. On your sensor console, select **System settings > Basic > Play PCAP**.
-1. In the **PCAP PLAYER** pane, select **Upload** and then navigate to and select the file you want to upload.
+1. In the **PCAP PLAYER** pane, select **Upload**, and then browse to and select one or more files to upload.
1. Select **Play** to play your PCAP file, or **Play All** to play all PCAP files currently loaded. + > [!TIP] > Select **Clear All** to clear the sensor of all PCAP files loaded.
To access system properties:
1. Sign in to the on-premises management console or the sensor.
-2. Select **System Settings**.
+1. Select **System Settings**.
-3. Select **System Properties** from the **General** section.
+1. Select **System Properties** from the **General** section.
## Download a diagnostics log for support
Use Defender for IoT data mining reports on an OT network sensor to retrieve for
- Event timeline data - Log files
-Each type of data has a different retention period and maximum capacity. For more information see [Create data mining queries](how-to-create-data-mining-queries.md) and [Data retention across Microsoft Defender for IoT](references-data-retention.md).
+Each type of data has a different retention period and maximum capacity. For more information, see [Create data mining queries](how-to-create-data-mining-queries.md) and [Data retention across Microsoft Defender for IoT](references-data-retention.md).
## Clearing sensor data
defender-for-iot How To Manage Sensors On The Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-sensors-on-the-cloud.md
Sensors that you've on-boarded to Defender for IoT are listed on the Defender fo
Use the options on the **Sites and sensors** page and a sensor details page to do any of the following tasks. If you're on the **Sites and sensors** page, select multiple sensors to apply your actions in bulk using toolbar options. For individual sensors, use the **Sites and sensors** toolbar options, the **...** options menu at the right of a sensor row, or the options on a sensor details page.
+### OT sensor updates
+ |Task |Description | ||| |:::image type="icon" source="medi). |
+|:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-edit.png" border="false"::: **Edit automatic threat intelligence updates** | Individual, OT sensors only. <br><br>Available from the **...** options menu or a sensor details page. <br><br>Select **Edit** and then toggle the **Automatic Threat Intelligence Updates (Preview)** option on or off as needed. Select **Submit** to save your changes. |
|:::image type="icon" source="medi#download-and-apply-a-new-activation-file) |
-|:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-recover.png" border="false"::: **Recover a password** | Individual, OT sensors only. <br><br>Available from the **...** options menu or a sensor details page. Enter the secret identifier obtained on the sensor's sign-in screen. |
-|:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-export.png" border="false"::: **Export sensor data** | Available from the **Sites and sensors** toolbar only, to download a CSV file with details about all the sensors listed. |
+
+### Sensor deployment and access
+
+|Task |Description |
+|||
+|:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-recover.png" border="false"::: **Recover an OT sensor password** | Individual, OT sensors only. <br><br>Available from the **...** options menu or a sensor details page. Enter the secret identifier obtained on the sensor's sign-in screen. |
+| **Recover an on-premises management console password** | Available from the **Sites and sensors** toolbar **More actions** menu. <br><br>For more information, see [Manage the on-premises management console](how-to-manage-the-on-premises-management-console.md). |
|:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-export.png" border="false"::: **Download an activation file** | Individual, OT sensors only. <br><br>Available from the **...** options menu or a sensor details page. | |:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-edit.png" border="false"::: **Edit a sensor zone** | For individual sensors only, from the **...** options menu or a sensor details page. <br><br>Select **Edit**, and then select a new zone from the **Zone** menu or select **Create new zone**. Select **Submit** to save your changes. |
+| **Download SNMP MIB file** | Available from the **Sites and sensors** toolbar **More actions** menu. <br><br>For more information, see [Set up OT sensor health monitoring via SNMP](how-to-set-up-snmp-mib-monitoring.md).|
|:::image type="icon" source="medi#install-enterprise-iot-sensor-software). |
-|:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-edit.png" border="false"::: **Edit automatic threat intelligence updates** | Individual, OT sensors only. <br><br>Available from the **...** options menu or a sensor details page. <br><br>Select **Edit** and then toggle the **Automatic Threat Intelligence Updates (Preview)** option on or off as needed. Select **Submit** to save your changes. |
-|:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-delete.png" border="false"::: **Delete a sensor** | For individual sensors only, from the **...** options menu or a sensor details page. |
-| :::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-diagnostics.png" border="false"::: **Send diagnostic files to support** | Individual, locally managed OT sensors only. <br><br>Available from the **...** options menu. <br><br>For more information, see [Upload a diagnostics log for support](#upload-a-diagnostics-log-for-support).|
-| **Download SNMP MIB file** | Available from the **Sites and sensors** toolbar **More actions** menu. <br><br>For more information, see [Set up SNMP MIB monitoring](how-to-set-up-snmp-mib-monitoring.md).|
-| **Recover an on-premises management console password** | Available from the **Sites and sensors** toolbar **More actions** menu. <br><br>For more information, see [Manage the on-premises management console](how-to-manage-the-on-premises-management-console.md). |
-| **Define OT network sensor settings** (Preview) | Define selected sensor settings for one or more cloud-connected OT network sensors. For more information, see [Define and view OT sensor settings from the Azure portal (Public preview)](configure-sensor-settings-portal.md). <br><br>Other settings are also available directly from the [OT sensor console](how-to-manage-individual-sensors.md), or the [on-premises management console](how-to-manage-sensors-from-the-on-premises-management-console.md).|
|<a name="endpoint"></a> **Download endpoint details** (Public preview) | Available from the **Sites and sensors** toolbar **More actions** menu, for OT sensor versions 22.x only. <br><br>Download the list of endpoints that must be enabled as secure endpoints from OT network sensors. Make sure that HTTPS traffic is enabled over port 443 to the listed endpoints for your sensor to connect to Azure. Outbound allow rules are defined once for all OT sensors onboarded to the same subscription.<br><br>To enable this option, select a sensor with a supported software version, or a site with one or more sensors with supported versions. |
+| **Define OT network sensor settings** (Preview) | Define selected sensor settings for one or more cloud-connected OT network sensors. For more information, see [Define and view OT sensor settings from the Azure portal (Public preview)](configure-sensor-settings-portal.md). <br><br>Other settings are also available directly from the [OT sensor console](how-to-manage-individual-sensors.md), or the [on-premises management console](how-to-manage-sensors-from-the-on-premises-management-console.md).|
+
+### Sensor maintenance and troubleshooting
+|Task |Description |
+|||
+|:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-export.png" border="false"::: **Export sensor data** | Available from the **Sites and sensors** toolbar only, to download a CSV file with details about all the sensors listed. |
+|:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-delete.png" border="false"::: **Delete a sensor** | For individual sensors only, from the **...** options menu or a sensor details page. |
+| :::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-diagnostics.png" border="false"::: **Send diagnostic files to support** | Individual, locally managed OT sensors only. <br><br>Available from the **...** options menu. <br><br>For more information, see [Upload a diagnostics log for support](#upload-a-diagnostics-log-for-support).|
## Retrieve forensics data stored on the sensor
defender-for-iot Update Ot Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/update-ot-software.md
On-premises management software is backwards compatible, and can connect to sens
For more information, see [Update an on-premises management console](#update-an-on-premises-management-console).
+### Select an update method
+
+Select one of the following tabs, depending on how you've chosen to update your OT sensor software.
+ # [From the Azure portal (Public preview)](#tab/portal) This procedure describes how to send a software version update to one or more OT sensors, and then run the updates remotely from the Azure portal. Bulk updates are supported for up to 10 sensors at a time.
deployment-environments How To Create Access Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-create-access-environments.md
Complete the following steps in the Azure CLI to create an environment and confi
```azurecli az devcenter dev environment create --dev-center-name <devcenter-name>
- --project-name <project-name> --environment-type <environment-type-name>
+ --project-name <project-name> --environment-name <name> --environment-type <environment-type-name>
--catalog-item-name <catalog-item-name> --catalog-name <catalog-name> ```
Complete the following steps in the Azure CLI to create an environment and confi
```json $params = "{ 'name': 'firstMsi', 'location': 'northeurope' }" az devcenter dev environment create --dev-center-name <devcenter-name>
- --project-name <project-name> -n <name> --environment-type <environment-type-name>
+ --project-name <project-name> --environment-name <name> --environment-type <environment-type-name>
--catalog-item-name <catalog-item-name> --catalog-name <catalog-name> --parameters $params ```
energy-data-services Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/release-notes.md
Microsoft Energy Data Services is updated on an ongoing basis. To stay up to dat
You can use a managed identity to authenticate to any [service that supports Azure AD (Active Directory) authentication](../active-directory/managed-identities-azure-resources/services-azure-active-directory-support.md) with Microsoft Energy Data Services. For example, you can write a script in an Azure Function to ingest data into Microsoft Energy Data Services. You can now connect to Microsoft Energy Data Services from other Azure services by using a system-assigned or user-assigned managed identity. [Learn more.]( ../energy-data-services/how-to-use-managed-identity.md)
+### Availability zone support
+
+Availability Zones are physically separate locations within an Azure region, made up of one or more datacenters equipped with independent power, cooling, and networking. Availability Zones provide in-region high availability and protection against local disasters. Microsoft Energy Data Services Preview supports zone-redundant instances by default, and no setup is required by the customer. [Learn more.](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=energy-data-services&regions=all)
<hr width=100%>
frontdoor Apex Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/apex-domain.md
+
+ Title: 'Apex domains in Azure Front Door'
+description: Learn about apex domains when using Azure Front Door.
+++++ Last updated : 02/07/2023+++
+# Apex domains in Azure Front Door
+
+Apex domains, also called *root domains* or *naked domains*, are at the root of a DNS zone and don't contain subdomains. For example, `contoso.com` is an apex domain.
+
+Azure Front Door supports apex domains, but they require special consideration. This article describes how apex domains work in Azure Front Door.
+
+To add a root or apex domain to your Azure Front Door profile, see [Onboard a root or apex domain on your Azure Front Door profile](front-door-how-to-onboard-apex-domain.md).
+
+## DNS CNAME flattening
+
+The DNS protocol prevents the assignment of CNAME records at the zone apex. For example, if your domain is `contoso.com`, you can create a CNAME record for `myapplication.contoso.com`, but you can't create a CNAME record for `contoso.com` itself.
+
+Azure Front Door doesn't expose the frontend public IP address associated with your Azure Front Door endpoint. So, you can't map an apex domain to an Azure Front Door IP address.
+
+> [!WARNING]
+> Don't create an A record with the public IP address of your Azure Front Door endpoint. Your Azure Front Door endpoint's public IP address might change and we don't provide any guarantees that it will remain the same.
+
+You can resolve this problem by using alias records in Azure DNS. Unlike CNAME records, alias records can be created at the zone apex. You can point a zone apex record to an Azure Front Door profile that has public endpoints. Multiple application owners can point to the same Azure Front Door endpoint that's used for any other domain within their DNS zone. For example, `contoso.com` and `www.contoso.com` can point to the same Azure Front Door endpoint.
+
+Mapping your apex or root domain to your Azure Front Door profile uses *CNAME flattening*, sometimes called *DNS chasing*. CNAME flattening is a process in which a DNS provider recursively resolves CNAME entries until it reaches an IP address. Azure DNS supports this functionality for Azure Front Door endpoints.
+
+> [!NOTE]
+> Other DNS providers also support CNAME flattening or DNS chasing. However, we recommend using Azure DNS to host your apex domains.
+
+## TXT record validation
+
+To validate a domain, you need to create a DNS TXT record. The name of the TXT record must be of the form `_dnsauth.{subdomain}`. Azure Front Door provides a unique value for your TXT record when you start to add the domain to Azure Front Door.
+
+For example, suppose you want to use the apex domain `contoso.com` with Azure Front Door. First, you should add the domain to your Azure Front Door profile, and note the TXT record value that you need to use. Then, you should configure a DNS record with the following properties:
+
+| Property | Value |
+|-|-|
+| Record name | `_dnsauth` |
+| Record value | *use the value provided by Azure Front Door* |
+| Time to live (TTL) | 1 hour |
+
+## Azure Front Door-managed TLS certificate rotation
+
+When you use an Azure Front Door-managed certificate, Azure Front Door attempts to rotate (renew) the certificate automatically. Before it does so, Azure Front Door checks whether the DNS CNAME record still points to the Azure Front Door endpoint. Apex domains don't have a CNAME record pointing to an Azure Front Door endpoint, so automatic rotation of the managed certificate fails until domain ownership is revalidated.
+
+Select the **Pending revalidation** link and then select the **Regenerate** button to regenerate the TXT token. After that, add the TXT token to the DNS provider settings.
+
+> [!NOTE]
+> Azure Front Door's DNS TXT records for domain name validation need to be updated when the certificate is renewed. When you see the *Pending revalidation* domain validation state, ensure that you generate a new TXT record and update your DNS server.
+
+## Next steps
+
+To add a root or apex domain to your Azure Front Door profile, see [Onboard a root or apex domain on your Azure Front Door profile](front-door-how-to-onboard-apex-domain.md).
frontdoor Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/domain.md
+
+ Title: 'Domains in Azure Front Door'
+description: Learn about custom domains when using Azure Front Door.
+++++ Last updated : 02/07/2023+++
+# Domains in Azure Front Door
+
+A *domain* represents a custom domain name that Azure Front Door uses to receive your application's traffic. Azure Front Door supports adding three types of domain names:
+
+- **Subdomains** are the most common type of custom domain name. An example subdomain is `myapplication.contoso.com`.
+- **Apex domains** don't contain a subdomain. An example apex domain is `contoso.com`. For more information about using apex domains with Azure Front Door, see [Apex domains](./apex-domain.md).
+- **Wildcard domains** allow traffic to be received for any subdomain. An example wildcard domain is `*.contoso.com`. For more information about using wildcard domains with Azure Front Door, see [Wildcard domains](./front-door-wildcard-domain.md).
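+The three domain types above can be distinguished mechanically. The following is a minimal sketch (a hypothetical helper, not an Azure API); it assumes the registrable apex domain has exactly two labels, such as `contoso.com`, whereas real code would need to consult the Public Suffix List to find the true zone cut:

```python
def classify_domain(name: str) -> str:
    """Classify a custom domain name into the three types Azure Front Door accepts.

    Assumption: the apex (registrable) domain has exactly two labels.
    """
    if name.startswith("*."):
        return "wildcard"
    labels = name.split(".")
    return "apex" if len(labels) == 2 else "subdomain"

print(classify_domain("*.contoso.com"))              # wildcard
print(classify_domain("contoso.com"))                # apex
print(classify_domain("myapplication.contoso.com"))  # subdomain
```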
+
+Domains are added to your Azure Front Door profile. You can use a domain in multiple routes within an endpoint, if you use different paths in each route.
+
+To learn how to add a custom domain to your Azure Front Door profile, see [Configure a custom domain on Azure Front Door using the Azure portal](standard-premium/how-to-add-custom-domain.md).
+
+## DNS configuration
+
+When you add a domain to your Azure Front Door profile, you configure two records in your DNS server:
+
+* A DNS TXT record, which is required to validate ownership of your domain name. For more information on the DNS TXT records, see [Domain validation](#domain-validation).
+* A DNS CNAME record, which controls the flow of internet traffic to Azure Front Door.
+
+> [!TIP]
+> You can add a domain name to your Azure Front Door profile before making any DNS changes. This approach can be helpful if you need to set up your Azure Front Door configuration ahead of time, or if a separate team changes your DNS records.
+>
+> You can also add your DNS TXT record to validate your domain ownership before you add the CNAME record to control traffic flow. This approach can be useful to avoid experiencing migration downtime if you have an application already in production.
+
+## Domain validation
+
+All domains added to Azure Front Door must be validated. Validation helps to protect you from accidental misconfiguration, and also helps to protect other people from domain spoofing. In some situations, domains can be *pre-validated* by another Azure service. Otherwise, you need to follow the Azure Front Door domain validation process to prove your ownership of the domain name.
+
+* **Azure pre-validated domains** are domains that have been validated by another supported Azure service. If you onboard and validate a domain to another Azure service, and then configure Azure Front Door later, you might work with a pre-validated domain. You don't need to validate the domain through Azure Front Door when you use this type of domain.
+
+ > [!NOTE]
+ > Azure Front Door currently only accepts pre-validated domains that have been configured with [Azure Static Web Apps](https://azure.microsoft.com/products/app-service/static/).
+
+* **Non-Azure validated domains** are domains that aren't validated by a supported Azure service. This domain type can be hosted with any DNS service, including [Azure DNS](https://azure.microsoft.com/products/dns/), and requires that domain ownership is validated by Azure Front Door.
+
+### TXT record validation
+
+To validate a domain, you need to create a DNS TXT record. The name of the TXT record must be of the form `_dnsauth.{subdomain}`. Azure Front Door provides a unique value for your TXT record when you start to add the domain to Azure Front Door.
+
+For example, suppose you want to use the custom subdomain `myapplication.contoso.com` with Azure Front Door. First, you should add the domain to your Azure Front Door profile, and note the TXT record value that you need to use. Then, you should configure a DNS record with the following properties:
+
+| Property | Value |
+|-|-|
+| Record name | `_dnsauth.myapplication` |
+| Record value | *use the value provided by Azure Front Door* |
+| Time to live (TTL) | 1 hour |
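+The record-name rule above (and the apex variant, where the name is just `_dnsauth`) can be sketched as a small helper; `dnsauth_record_name` is a hypothetical function for illustration, not an Azure API:

```python
def dnsauth_record_name(custom_domain: str, zone_apex: str) -> str:
    """Derive the name of the validation TXT record to create in the DNS zone."""
    if custom_domain == zone_apex:
        # Apex domains use the bare _dnsauth record name.
        return "_dnsauth"
    if not custom_domain.endswith("." + zone_apex):
        raise ValueError(f"{custom_domain} is not in zone {zone_apex}")
    subdomain = custom_domain[: -(len(zone_apex) + 1)]
    return f"_dnsauth.{subdomain}"

print(dnsauth_record_name("myapplication.contoso.com", "contoso.com"))  # _dnsauth.myapplication
```

The record value is always the unique token that Azure Front Door generates when you add the domain.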
+
+After your domain has been validated successfully, you can safely delete the TXT record from your DNS server.
+
+For more information on adding a DNS TXT record for a custom domain, see [Configure a custom domain on Azure Front Door using the Azure portal](standard-premium/how-to-add-custom-domain.md).
+
+### Domain validation states
+
+The following table lists the validation states that a domain might show.
+
+| Domain validation state | Description and actions |
+|--|--|
+| Submitting | The custom domain is being created. <br /><br /> Wait until the domain resource is ready. |
+| Pending | The DNS TXT record value has been generated, and Azure Front Door is ready for you to add the DNS TXT record. <br /><br /> Add the DNS TXT record to your DNS provider and wait for the validation to complete. If the status remains **Pending** even after the TXT record has been updated with the DNS provider, select **Regenerate** to refresh the TXT record then add the TXT record to your DNS provider again. |
+| Pending re-validation | The managed certificate is less than 45 days from expiring. <br /><br /> If you have a CNAME record already pointing to the Azure Front Door endpoint, no action is required for certificate renewal. If the custom domain is pointed to another CNAME record, select the **Pending re-validation** status, and then select **Regenerate** on the *Validate the custom domain* page. Lastly, select **Add** if you're using Azure DNS or manually add the TXT record with your own DNS provider's DNS management. |
+| Refreshing validation token | A domain goes into a *Refreshing Validation Token* state for a brief period after the **Regenerate** button is selected. Once a new TXT record value is issued, the state will change to **Pending**. <br /> No action is required. |
+| Approved | The domain has been successfully validated, and Azure Front Door can accept traffic that uses this domain. <br /><br /> No action is required. |
+| Rejected | The certificate provider/authority has rejected the issuance for the managed certificate. For example, the domain name might be invalid. <br /><br /> Select the **Rejected** link and then select **Regenerate** on the *Validate the custom domain* page, as shown in the screenshots below this table. Then, select **Add** to add the TXT record in the DNS provider. |
+| Timeout | The TXT record wasn't added to your DNS provider within seven days, or an invalid DNS TXT record was added. <br /><br /> Select the **Timeout** link and then select **Regenerate** on the *Validate the custom domain* page. Then select **Add** to add a new TXT record to the DNS provider. Ensure that you use the updated value. |
+| Internal error | An unknown error occurred. <br /><br /> Retry validation by selecting the **Refresh** or **Regenerate** button. If you're still experiencing issues, submit a support request to Azure support. |
+
+> [!NOTE]
+> - The default TTL for TXT records is 1 hour. When you need to regenerate the TXT record for re-validation, please pay attention to the TTL for the previous TXT record. If it doesn't expire, the validation will fail until the previous TXT record expires.
+> - If the **Regenerate** button doesn't work, delete and recreate the domain.
+> - If the domain state doesn't reflect as expected, select the **Refresh** button.
+
+## HTTPS for custom domains
+
+By using the HTTPS protocol on your custom domain, you ensure your sensitive data is delivered securely with TLS/SSL encryption when it's sent across the internet. When a client, like a web browser, is connected to a website by using HTTPS, the client validates the website's security certificate, and ensures it was issued by a legitimate certificate authority. This process provides security and protects your web applications from attacks.
+
+Azure Front Door supports using HTTPS with your own domains, and offloads transport layer security (TLS) certificate management from your origin servers. When you use custom domains, you can either use Azure-managed TLS certificates (recommended), or you can purchase and use your own TLS certificates.
+
+For more information on how Azure Front Door works with TLS, see [End-to-end TLS with Azure Front Door](end-to-end-tls.md).
+
+### Azure Front Door-managed TLS certificates
+
+Azure Front Door can automatically manage TLS certificates for subdomains and apex domains. When you use managed certificates, you don't need to create keys or certificate signing requests, and you don't need to upload, store, or install the certificates. Additionally, Azure Front Door can automatically rotate (renew) managed certificates without any human intervention. This process avoids downtime caused by a failure to renew your TLS certificates in time.
+
+Azure Front Door's certificates are issued by our partner certification authority, DigiCert.
+
+The process of generating, issuing, and installing a managed TLS certificate can take from several minutes to an hour to complete, and occasionally it can take longer.
+
+#### Domain types
+
+The following table summarizes the features available with managed TLS certificates when you use different types of domains:
+
+| Consideration | Subdomain | Apex domain | Wildcard domain |
+|-|-|-|-|
+| Managed TLS certificates available | Yes | Yes | No |
+| Managed TLS certificates are rotated automatically | Yes | See below | No |
+
+When you use Azure Front Door-managed TLS certificates with apex domains, the automated certificate rotation might require you to revalidate your domain ownership. For more information, see [Apex domains in Azure Front Door](apex-domain.md#azure-front-door-managed-tls-certificate-rotation).
+
+### Customer-managed TLS certificates
+
+Sometimes, you might need to provide your own TLS certificates. Common scenarios for providing your own certificates include:
+
+* Your organization requires you to use certificates issued by a specific certification authority.
+* You want Azure Key Vault to issue your certificate by using a partner certification authority.
+* You need to use a TLS certificate that a client application recognizes.
+* You need to use the same TLS certificate on multiple systems.
+* You use [wildcard domains](front-door-wildcard-domain.md). Azure Front Door doesn't provide managed certificates for wildcard domains.
+
+#### Certificate requirements
+
+When you create your TLS/SSL certificate, you must create a complete certificate chain with an allowed certificate authority (CA) that's part of the [Microsoft Trusted CA List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT); the root CA must be on that list. If you use a non-allowed CA, your request will be rejected. If a certificate without a complete chain is presented, the requests that involve that certificate aren't guaranteed to work as expected.
+
+The common name (CN) of the certificate must match the domain configured in Azure Front Door.
+
+Azure Front Door doesn't support certificates with elliptic curve (EC) cryptography algorithms.
+
+#### Import a certificate to Azure Key Vault
+
+Custom TLS certificates must be imported into Azure Key Vault before you can use them with Azure Front Door. To learn how to import a certificate to a key vault, see [Tutorial: Import a certificate in Azure Key Vault](../key-vault/certificates/tutorial-import-certificate.md).
+
+The key vault must be in the same Azure subscription as your Azure Front Door profile.
+
+> [!WARNING]
+> Azure Front Door only supports key vaults in the same subscription as the Front Door profile. Choosing a key vault under a different subscription than your Azure Front Door profile will result in a failure.
+
+Certificates must be uploaded as a **certificate** object, rather than a **secret**.
+
+#### Grant access to Azure Front Door
+
+Azure Front Door needs to access your key vault to read your certificate. You need to configure both the key vault's network firewall and the vault's access control.
+
+If your key vault has network access restrictions enabled, you must configure your key vault to allow trusted Microsoft services to bypass the firewall.
+
+There are two ways you can configure access control on your key vault:
+
+* Azure Front Door can use a managed identity to access your key vault. You can use this approach when your key vault uses Azure Active Directory (Azure AD) authentication. For more information, see [Use managed identities with Azure Front Door Standard/Premium](managed-identity.md).
+* Alternatively, you can grant Azure Front Door's service principal access to your key vault. You can use this approach when you use vault access policies.
+
+#### Add your custom certificate to Azure Front Door
+
+After you've imported your certificate to a key vault, create an Azure Front Door secret resource, which is a reference to the certificate you added to your key vault.
+
+Then, configure your domain to use the Azure Front Door secret for its TLS certificate.
+
+For a guided walkthrough of these steps, see [Configure HTTPS on an Azure Front Door custom domain using the Azure portal](standard-premium/how-to-configure-https-custom-domain.md#using-your-own-certificate).
+
+### Switch between certificate types
+
+You can change a domain between using an Azure Front Door-managed certificate and a user-managed certificate.
+
+* It might take up to an hour for the new certificate to be deployed when you switch between certificate types.
+* If your domain state is *Approved*, switching the certificate type between a user-managed and a managed certificate won't cause any downtime.
+* When switching to a managed certificate, Azure Front Door continues to use the previous certificate until the domain ownership is re-validated and the domain state becomes *Approved*.
+* If you switch from BYOC to managed certificate, domain re-validation is required. If you switch from managed certificate to BYOC, you're not required to re-validate the domain.
+
+### Certificate renewal
+
+#### Renew Azure Front Door-managed certificates
+
+For most custom domains, Azure Front Door automatically renews (rotates) managed certificates when they're close to expiry, and you don't need to do anything.
+
+However, Azure Front Door won't automatically rotate certificates in the following scenarios:
+
+* The custom domain's CNAME record is pointing to other DNS records.
+* The custom domain points to the Azure Front Door endpoint through a chain. For example, if your DNS record points to Azure Traffic Manager, which in turn resolves to Azure Front Door, the CNAME chain is `contoso.com` CNAME to `contoso.trafficmanager.net`, which is a CNAME to `contoso.z01.azurefd.net`. Azure Front Door can't verify the whole chain.
+* The custom domain uses an A record. We recommend you always use a CNAME record to point to Azure Front Door.
+* The custom domain is an [apex domain](apex-domain.md) and uses CNAME flattening.
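The direct-CNAME requirement behind these scenarios can be sketched with a mock lookup table. This is a hedged illustration, not a real DNS client; the `MOCK_DNS` table and all names in it are examples:

```python
# Hypothetical sketch: automatic certificate rotation requires the custom
# domain's CNAME to point directly at the Azure Front Door endpoint.
# MOCK_DNS stands in for real DNS resolution; all names are illustrative.
MOCK_DNS = {
    "www.contoso.com": "contoso.z01.azurefd.net",         # direct CNAME: rotation works
    "shop.contoso.com": "contoso.trafficmanager.net",     # chained through Traffic Manager
    "contoso.trafficmanager.net": "contoso.z01.azurefd.net",
}

def points_directly_at_afd(domain: str, afd_endpoint: str) -> bool:
    """True only when the domain's CNAME target is the AFD endpoint itself."""
    return MOCK_DNS.get(domain) == afd_endpoint
```

In this sketch, `shop.contoso.com` fails the check even though its chain eventually reaches Azure Front Door, which mirrors the chained-CNAME scenario above.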
+
+If one of the scenarios above applies to your custom domain, then 45 days before the managed certificate expires, the domain validation state becomes *Pending Revalidation*. The *Pending Revalidation* state indicates that you need to create a new DNS TXT record to revalidate your domain ownership.
+
+> [!NOTE]
+> DNS TXT records expire after seven days. If you previously added a domain validation TXT record to your DNS server, you need to replace it with a new TXT record. Ensure you use the new value, otherwise the domain validation process will fail.
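As a quick sketch of this timeline, the revalidation window opens 45 days before certificate expiry, and each TXT record lives for seven days (the dates used below are illustrative):

```python
from datetime import date, timedelta

def revalidation_start(cert_expiry: date) -> date:
    # The domain state becomes Pending Revalidation 45 days before expiry.
    return cert_expiry - timedelta(days=45)

def txt_record_expiry(created: date) -> date:
    # DNS TXT validation records expire after seven days.
    return created + timedelta(days=7)
```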
+
+If your domain can't be validated, the domain validation state becomes *Rejected*. This state indicates that the certificate authority has rejected the request for reissuing a managed certificate.
+
+For more information on the domain validation states, see [Domain validation states](#domain-validation-states).
+
+#### Renew Azure-managed certificates for domains pre-validated by other Azure services
+
+Azure-managed certificates are automatically rotated by the Azure service that validates the domain.
+
+#### <a name="rotate-own-certificate"></a>Renew customer-managed TLS certificates
+
+When you update the certificate in your key vault, Azure Front Door can automatically detect and use the updated certificate. For this functionality to work, set the secret version to 'Latest' when you configure your certificate in Azure Front Door.
+
+If you select a specific version of your certificate, you have to reselect the new version manually when you update your certificate.
+
+It takes up to 72 hours for the new version of the certificate/secret to be automatically deployed.
+
+If you want to change the secret version from 'Latest' to a specified version, or vice versa, add a new certificate.
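The version-selection behavior described above can be modeled in a few lines. This is a simplified sketch, not the actual Azure Front Door implementation:

```python
def effective_certificate_version(configured: str, versions: list[str]) -> str:
    """versions is ordered oldest to newest, as Key Vault would list them.

    With 'Latest', Azure Front Door picks up new versions automatically;
    a pinned version stays in use until you reselect it manually.
    """
    return versions[-1] if configured == "Latest" else configured
```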
+
+## Security policies
+
+You can use Azure Front Door's web application firewall (WAF) to scan requests to your application for threats, and to enforce other security requirements.
+
+To use the WAF with a custom domain, use an Azure Front Door security policy resource. A security policy associates a domain with a WAF policy. You can optionally create multiple security policies so that you can use different WAF policies with different domains.
+
+## Next steps
+
+To learn how to add a custom domain to your Azure Front Door profile, see [Configure a custom domain on Azure Front Door using the Azure portal](standard-premium/how-to-add-custom-domain.md).
frontdoor End To End Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/end-to-end-tls.md
Previously updated : 03/14/2022 Last updated : 02/07/2023
+zone_pivot_groups: front-door-tiers
# End-to-end TLS with Azure Front Door
-Transport Layer Security (TLS), previously known as Secure Sockets Layer (SSL), is the standard security technology for establishing an encrypted link between a web server and a browser. This link ensures that all data passed between the web server and the web browser remain private and encrypted.
+Transport Layer Security (TLS), previously known as Secure Sockets Layer (SSL), is the standard security technology for establishing an encrypted link between a web server and a client, like a web browser. This link ensures that all data passed between the server and the client remain private and encrypted.
-To meet your security or compliance requirements, Azure Front Door (AFD) supports end-to-end TLS encryption. Front Door TLS/SSL offload terminates the TLS connection, decrypts the traffic at the Azure Front Door, and re-encrypts the traffic before forwarding it to the backend. Since connections to the backend happen over the public IP, it is highly recommended you configure HTTPS as the forwarding protocol on your Azure Front Door to enforce end-to-end TLS encryption from the client to the backend. TLS/SSL offload is also supported if you deploy a private backend with AFD Premium using the [PrivateLink](private-link.md) feature.
+To meet your security or compliance requirements, Azure Front Door supports end-to-end TLS encryption. Front Door TLS/SSL offload terminates the TLS connection, decrypts the traffic at the Azure Front Door, and re-encrypts the traffic before forwarding it to the origin. When connections to the origin use the origin's public IP address, it's a good security practice to configure HTTPS as the forwarding protocol on your Azure Front Door. By using HTTPS as the forwarding protocol, you can enforce end-to-end TLS encryption for the entire processing of the request from the client to the origin. TLS/SSL offload is also supported if you deploy a private origin with Azure Front Door Premium using the [Private Link](private-link.md) feature.
++
+This article explains how Azure Front Door works with TLS connections. For more information about how to use TLS certificates with your own custom domains, see [HTTPS for custom domains](domain.md#https-for-custom-domains). To learn how to configure a TLS certificate on your own custom domain, see [Configure a custom domain on Azure Front Door using the Azure portal](standard-premium/how-to-add-custom-domain.md).
+ ## End-to-end TLS encryption
-End-to-end TLS allows you to secure sensitive data while in transit to the backend while benefiting from Azure Front Door features like global load balancing and caching. Some of the features also include URL-based routing, TCP split, caching on edge location closest to the clients, and customizing HTTP requests at the edge.
+End-to-end TLS allows you to secure sensitive data while in transit to the origin while benefiting from Azure Front Door features like global load balancing and caching. Some of the features also include URL-based routing, TCP split, caching on edge location closest to the clients, and customizing HTTP requests at the edge.
-Azure Front Door offloads the TLS sessions at the edge and decrypts client requests. It then applies the configured routing rules to route the requests to the appropriate backend in the backend pool. Azure Front Door then starts a new TLS connection to the backend and re-encrypts all data using the backend's certificate before transmitting the request to the backend. Any response from the backend is encrypted through the same process back to the end user. You can configure your Azure Front Door to use HTTPS as the forwarding protocol to enable end-to-end TLS.
+Azure Front Door offloads the TLS sessions at the edge and decrypts client requests. It then applies the configured routing rules to route the requests to the appropriate origin in the origin group. Azure Front Door then starts a new TLS connection to the origin and re-encrypts all data using the origin's certificate before transmitting the request to the origin. Any response from the origin is encrypted through the same process back to the end user. You can configure your Azure Front Door to use HTTPS as the forwarding protocol to enable end-to-end TLS.
## Supported TLS versions
Azure Front Door supports three versions of the TLS protocol: TLS versions 1.0, 1.1, and 1.2.
Although Azure Front Door supports TLS 1.2, which introduced client/mutual authentication in RFC 5246, currently, Azure Front Door doesn't support client/mutual authentication.
-You can configure the minimum TLS version in Azure Front Door in the custom domain HTTPS settings using the Azure portal or theΓÇ»[Azure REST API](/rest/api/frontdoorservice/frontdoor/frontdoors/createorupdate#minimumtlsversion). Currently, you can choose between 1.0 and 1.2. As such, specifying TLS 1.2 as the minimum version controls the minimum acceptable TLS version Azure Front Door will accept from a client. When Azure Front Door initiates TLS traffic to the backend, it will attempt to negotiate the best TLS version that the backend can reliably and consistently accept.
+You can configure the minimum TLS version in Azure Front Door in the custom domain HTTPS settings using the Azure portal or the [Azure REST API](/rest/api/frontdoorservice/frontdoor/frontdoors/createorupdate#minimumtlsversion). Currently, you can choose between 1.0 and 1.2. As such, specifying TLS 1.2 as the minimum version controls the minimum acceptable TLS version Azure Front Door will accept from a client. When Azure Front Door initiates TLS traffic to the origin, it will attempt to negotiate the best TLS version that the origin can reliably and consistently accept.
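The same idea can be seen with Python's standard `ssl` module, which enforces a minimum TLS version on a connection context. This is an analogy to the Front Door setting, not Front Door code:

```python
import ssl

# Create a server-side TLS context and refuse TLS 1.0/1.1 clients,
# analogous to setting a minimum TLS version of 1.2 on Azure Front Door.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```

Handshakes from clients that only offer earlier protocol versions then fail during negotiation.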
## Supported certificates
Certificates from internal CAs or self-signed certificates aren't allowed.
OCSP stapling is supported by default in Azure Front Door and no configuration is required.
-## Backend TLS connection (Azure Front Door to backend)
+## <a name="backend-tls-connection-azure-front-door-to-origin"></a> Origin TLS connection (Azure Front Door to origin)
-For HTTPS connections, Azure Front Door expects that your backend presents a certificate from a valid Certificate Authority (CA) with subject name(s) matching the backend *hostname*. As an example, if your backend hostname is set to `myapp-centralus.contosonews.net` and the certificate that your backend presents during the TLS handshake doesn't have `myapp-centralus.contosonews.net` or `*.contosonews.net` in the subject name, then Azure Front Door will refuse the connection and as a result an error.
+For HTTPS connections, Azure Front Door expects that your origin presents a certificate from a valid certificate authority (CA) with a subject name matching the origin *hostname*. As an example, if your origin hostname is set to `myapp-centralus.contosonews.net` and the certificate that your origin presents during the TLS handshake doesn't have `myapp-centralus.contosonews.net` or `*.contosonews.net` in the subject name, then Azure Front Door refuses the connection and the client sees an error.
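A simplified model of that subject-name check follows. Real TLS stacks implement RFC 6125 matching; this sketch only handles exact matches and single-label wildcards:

```python
def subject_matches(hostname: str, subject_names: list[str]) -> bool:
    """Return True when the certificate covers the origin hostname."""
    host = hostname.lower()
    for name in subject_names:
        name = name.lower()
        if name == host:
            return True
        if name.startswith("*."):
            # A wildcard covers exactly one additional label on the left.
            label, _, rest = host.partition(".")
            if label and name[1:] == "." + rest:
                return True
    return False
```

With the example above, a certificate for `*.contosonews.net` satisfies the check for `myapp-centralus.contosonews.net`, while a certificate for an unrelated name causes Azure Front Door to refuse the connection.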
> [!NOTE]
> The certificate must have a complete certificate chain with leaf and intermediate certificates. The root CA must be part of the [Microsoft Trusted CA List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT). If a certificate without a complete chain is presented, the requests that involve that certificate aren't guaranteed to work as expected.
-From a security standpoint, Microsoft doesn't recommend disabling certificate subject name check. In certain use cases such as for testing, as a work-around to resolve failing HTTPS connection, you can disable certificate subject name check for your Azure Front Door. Note that the origin still needs to present a certificate with a valid trusted chain, but doesn't have to match the origin host name. The option to disable this feature is different for each Azure Front Door tier:
+In certain use cases, such as testing, you can disable the certificate subject name check for your Azure Front Door as a workaround to resolve a failing HTTPS connection. Note that the origin still needs to present a certificate with a valid trusted chain, but it doesn't have to match the origin host name.
++
+In Azure Front Door Standard and Premium, you can configure an origin to disable the certificate subject name check.
+++
+In Azure Front Door (classic), you can disable the certificate subject name check by changing the Azure Front Door settings in the Azure portal. You can also configure the check by using the backend pool's settings in the Azure Front Door APIs.
++
+> [!NOTE]
+> From a security standpoint, Microsoft doesn't recommend disabling the certificate subject name check.
+
+## Frontend TLS connection (client to Azure Front Door)
+
+To enable the HTTPS protocol for secure delivery of contents on an Azure Front Door custom domain, you can choose to use a certificate that is managed by Azure Front Door or use your own certificate.
-* Azure Front Door Standard and Premium - it is present in the origin settings.
-* Azure Front Door (classic) - it is present under the Azure Front Door settings in the Azure portal and in the Backend PoolsSettings in the Azure Front Door API.
-## Frontend TLS connection (Client to Front Door)
+For more information, see [HTTPS for custom domains](domain.md#https-for-custom-domains).
-To enable the HTTPS protocol for secure delivery of contents on an Azure Front Door custom domain, you can choose to use a certificate that is managed by Azure Front Door or use your own certificate.
-* Azure Front Door managed certificate provides a standard TLS/SSL certificate via DigiCert and is stored in Azure Front Door's Key Vault.
-* If you choose to use your own certificate, you can onboard a certificate from a supported CA that can be a standard TLS, extended validation certificate, or even a wildcard certificate.
+Azure Front Door's managed certificate provides a standard TLS/SSL certificate via DigiCert and is stored in Azure Front Door's Key Vault.
-* Self-signed certificates aren't supported. LearnΓÇ»[how to enable HTTPS for a custom domain](front-door-custom-domain-https.md).
+If you choose to use your own certificate, you can onboard a certificate from a supported CA that can be a standard TLS, extended validation certificate, or even a wildcard certificate. Self-signed certificates aren't supported. Learn [how to enable HTTPS for a custom domain](front-door-custom-domain-https.md).
+ ### Certificate autorotation
For your own custom TLS/SSL certificate:
1. If a specific version is selected, autorotation isn't supported. You'll have to reselect the new version manually to rotate the certificate. It takes up to 24 hours for the new version of the certificate/secret to be deployed.
- You'll need to ensure that the service principal for Front Door has access to the key vault. Refer to how to grant access to your key vault. The updated certificate rollout operation by Azure Front Door won't cause any production down time provided the subject name or subject alternate name (SAN) for the certificate didn't changed.
+ You'll need to ensure that the service principal for Front Door has access to the key vault. Refer to how to grant access to your key vault. The updated certificate rollout operation by Azure Front Door won't cause any production downtime, as long as the subject name or subject alternate name (SAN) for the certificate hasn't changed.
## Supported cipher suites
Using custom domains with TLS 1.0/1.1 enabled, the following cipher suites are supported:

* TLS_DHE_RSA_WITH_AES_256_GCM_SHA384

Azure Front Door doesn't support configuring specific cipher suites.

## Next steps
+* [Understand custom domains](domain.md) on Azure Front Door.
+* [Configure a custom domain on Azure Front Door using the Azure portal](standard-premium/how-to-add-custom-domain.md).
+* [Configure a custom domain](front-door-custom-domain.md) for Azure Front Door.
+* [Enable HTTPS for a custom domain](front-door-custom-domain-https.md)
frontdoor Front Door Custom Domain Https https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-custom-domain-https.md
# Tutorial: Configure HTTPS on a Front Door (classic) custom domain
-This tutorial shows how to enable the HTTPS protocol for a custom domain that's associated with your Front Door (classic) under the frontend hosts section. By using the HTTPS protocol on your custom domain (for example, https:\//www.contoso.com), you ensure that your sensitive data is delivered securely via TLS/SSL encryption when it's sent across the internet. When your web browser is connected to a web site via HTTPS, it validates the web site's security certificate and verifies it's issued by a legitimate certificate authority. This process provides security and protects your web applications from attacks.
+This tutorial shows how to enable the HTTPS protocol for a custom domain that's associated with your Front Door (classic) under the frontend hosts section. By using the HTTPS protocol on your custom domain (for example, `https://www.contoso.com`), you ensure that your sensitive data is delivered securely via TLS/SSL encryption when it's sent across the internet. When your web browser is connected to a web site via HTTPS, it validates the web site's security certificate and verifies it's issued by a legitimate certificate authority. This process provides security and protects your web applications from attacks.
Azure Front Door supports HTTPS on a Front Door default hostname by default. For example, if you create a Front Door (such as `https://contoso.azurefd.net`), HTTPS is automatically enabled for requests made to `https://contoso.azurefd.net`. However, once you onboard the custom domain `www.contoso.com`, you'll also need to enable HTTPS for this frontend host.
To enable HTTPS on a custom domain, follow these steps:
5. Continue to [Validate the domain](#validate-the-domain).

> [!NOTE]
-> * For AFD managed certificates, DigiCertΓÇÖs 64 character limit is enforced. Validation will fail if that limit is exceeded.
+> * For Azure Front Door-managed certificates, DigiCert's 64 character limit is enforced. Validation will fail if that limit is exceeded.
> * Enabling HTTPS via Front Door managed certificate is not supported for apex/root domains (example: contoso.com). You can use your own certificate for this scenario. Please continue with Option 2 for further details.

### Option 2: Use your own certificate
-You can use your own certificate to enable the HTTPS feature. This process is done through an integration with Azure Key Vault, which allows you to store your certificates securely. Azure Front Door uses this secure mechanism to get your certificate and it requires a few extra steps. When you create your TLS/SSL certificate, you must create a complete certificate chain with an allowed certificate authority (CA) that is part of the [Microsoft Trusted CA List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT). If you use a non-allowed CA, your request will be rejected. If a certificate without complete chain is presented, the requests which involve that certificate are not guaranteed to work as expected.
+You can use your own certificate to enable the HTTPS feature. This process is done through an integration with Azure Key Vault, which allows you to store your certificates securely. Azure Front Door uses this secure mechanism to get your certificate and it requires a few extra steps. When you create your TLS/SSL certificate, you must create a complete certificate chain with an allowed certificate authority (CA) that is part of the [Microsoft Trusted CA List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT). If you use a non-allowed CA, your request will be rejected. If a certificate without complete chain is presented, the requests that involve that certificate are not guaranteed to work as expected.
#### Prepare your key vault and certificate
frontdoor Front Door How To Onboard Apex Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-how-to-onboard-apex-domain.md
Title: Onboard a root or apex domain to an existing Front Door - Azure portal
+ Title: Onboard a root or apex domain to Azure Front Door
description: Learn how to onboard a root or apex domain to an existing Front Door using the Azure portal. Previously updated : 05/31/2022 Last updated : 02/07/2023 zone_pivot_groups: front-door-tiers
-# Onboard a root or apex domain on your Front Door
+# Onboard a root or apex domain to Azure Front Door
-Azure Front Door supports adding custom domain to Front Door profile. This is done by adding DNS TXT record for domain ownership validation and creating a CNAME record in your DNS configuration to route DNS queries for the custom domain to Azure Front Door endpoint. For apex domain, DNS TXT will continue to be used for domain validation. However, the DNS protocol prevents the assignment of CNAME records at the zone apex. For example, if your domain is `contoso.com`; you can create CNAME records for `somelabel.contoso.com`; but you can't create CNAME for `contoso.com` itself. Front Door doesn't expose the frontend IP address associated with your Front Door profile. So you can't map your apex domain to an IP address if your intent is to onboard it to Azure Front Door.
+Azure Front Door uses CNAME records to validate domain ownership for the onboarding of custom domains. Azure Front Door doesn't expose the frontend IP address associated with your Front Door profile. So you can't map your apex domain to an IP address if your intent is to onboard it to Azure Front Door.
+The DNS protocol prevents the assignment of CNAME records at the zone apex. For example, if your domain is `contoso.com`, you can create CNAME records for `somelabel.contoso.com`, but you can't create a CNAME record for `contoso.com` itself. This restriction presents a problem for application owners who have load-balanced applications behind Azure Front Door. Since using a Front Door profile requires creation of a CNAME record, it isn't possible to point at the Front Door profile from the zone apex.
+This problem can be resolved by using alias records in Azure DNS. Unlike CNAME records, alias records are created at the zone apex. Application owners can use them to point their zone apex record to a Front Door profile that has public endpoints. Application owners point to the same Front Door profile that's used for any other domain within their DNS zone. For example, `contoso.com` and `www.contoso.com` can point to the same Front Door profile.
-Azure Front Door uses CNAME records to validate domain ownership for onboarding of custom domains. Front Door doesn't expose the frontend IP address associated with your Front Door profile. So you can't map your apex domain to an IP address if your intent is to onboard it to Azure Front Door.
+Mapping your apex or root domain to your Front Door profile requires *CNAME flattening* or *DNS chasing*, which is where the DNS provider recursively resolves CNAME entries until it resolves an IP address. This functionality is supported by Azure DNS for Azure Front Door endpoints.
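CNAME flattening can be mocked in a few lines. The zone table and IP address below are illustrative; real resolvers like Azure DNS perform this recursion internally:

```python
# Mock zone data: an alias at the apex, a CNAME for www, and the final
# address record for the Front Door endpoint. All values are examples.
MOCK_ZONE = {
    "contoso.com": ("ALIAS", "contoso.z01.azurefd.net"),
    "www.contoso.com": ("CNAME", "contoso.z01.azurefd.net"),
    "contoso.z01.azurefd.net": ("A", "203.0.113.10"),
}

def flatten(name: str) -> str:
    """Chase CNAME/ALIAS records until an address (A) record is reached."""
    record_type, value = MOCK_ZONE[name]
    return value if record_type == "A" else flatten(value)
```

Both the apex alias and the `www` CNAME resolve to the same final address, which is why an alias record lets the zone apex reach the Front Door endpoint.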
-The DNS protocol prevents the assignment of CNAME records at the zone apex. For example, if your domain is `contoso.com`; you can create CNAME records for `somelabel.contoso.com`; but you can't create CNAME for `contoso.com` itself. This restriction presents a problem for application owners who have load-balanced applications behind Azure Front Door. Since using a Front Door profile requires creation of a CNAME record, it isn't possible to point at the Front Door profile from the zone apex.
+> [!NOTE]
+> There are other DNS providers that also support CNAME flattening or DNS chasing. However, we recommend using Azure DNS to host your domains.
+
+You can use the Azure portal to onboard an apex domain on your Azure Front Door and enable HTTPS on it by associating it with a TLS certificate. Apex domains are also referred as *root* or *naked* domains.
::: zone-end
-This problem can be resolved by using alias records in Azure DNS. Unlike CNAME records, alias records are created at the zone apex. Application owners can use it to point their zone apex record to a Front Door profile that has public endpoints. Application owners point to the same Front Door profile that's used for any other domain within their DNS zone. For example, `contoso.com` and `www.contoso.com` can point to the same Front Door profile.
-Mapping your apex or root domain to your Front Door profile basically requires CNAME flattening or DNS chasing. A mechanism where the DNS provider recursively resolves the CNAME entry until it hits an IP address. This functionality is supported by Azure DNS for Front Door endpoints.
+Apex domains are at the root of a DNS zone and don't contain subdomains. For example, `contoso.com` is an apex domain. Azure Front Door supports adding apex domains when you use Azure DNS. For more information about apex domains, see [Domains in Azure Front Door](domain.md).
-> [!NOTE]
-> There are other DNS providers as well that support CNAME flattening or DNS chasing, however, Azure Front Door recommends using Azure DNS for its customers for hosting their domains.
+You can use the Azure portal to onboard an apex domain on your Azure Front Door profile, and you can enable HTTPS on it by associating it with a TLS certificate.
-You can use the Azure portal to onboard an apex domain on your Front Door and enable HTTPS on it by associating it with a certificate for TLS termination. Apex domains are also referred as root or naked domains.
::: zone pivot="front-door-standard-premium"
-## Onboard the custom domain to your Front Door
+## Onboard the custom domain to your Azure Front Door profile
-1. Select **Domains** from under *Settings* on the left side pane for your Front Door profile and then select **+ Add** to add a new custom domain.
+1. Select **Domains** from under *Settings* on the left side pane for your Azure Front Door profile and then select **+ Add** to add a new custom domain.
:::image type="content" source="./media/front-door-apex-domain/add-domain.png" alt-text="Screenshot of adding a new domain to Front Door profile.":::
You can use the Azure portal to onboard an apex domain on your Front Door and en
- If you're using another DNS provider, manually create a new TXT record of name `_dnsauth.<your_subdomain>` with the record value as shown on the page.
-1. Close the *Validate the custom domain* page and return to the *Domains* page for the Front Door profile. You should see the *Validation state* change from **Pending** to **Approved**. If not, wait up to 10 minutes for changes to reflect. If your validation doesn't get approved make sure your TXT record is correct and name servers are configured correctly if you're using Azure DNS.
+1. Close the *Validate the custom domain* page and return to the *Domains* page for the Front Door profile. You should see the *Validation state* change from **Pending** to **Approved**. If not, wait up to 10 minutes for changes to reflect. If your validation doesn't get approved, make sure your TXT record is correct and name servers are configured correctly if you're using Azure DNS.
:::image type="content" source="./media/front-door-apex-domain/validation-approved.png" alt-text="Screenshot of new custom domain passing validation.":::
You can use the Azure portal to onboard an apex domain on your Front Door and en
Follow the guidance for [configuring HTTPS for your custom domain](standard-premium/how-to-configure-https-custom-domain.md) to enable HTTPS for your apex domain.
-## Managed certificate renewal for apex domain
-
-Front Door managed certificates will automatically rotate certificates only if the domain CNAME is pointed to Front Door endpoint. Since the APEX domain doesn't have a CNAME record pointing to Front Door endpoint, the auto-rotation for managed certificate will fail until domain ownership is re-validated. The validation column will become `Pending-revalidation` 45 days before the managed certificate expires. Select the **Pending-revalidation** link and then select the **Regenerate** button to regenerate the TXT token. After that, add the TXT token to the DNS provider settings.
- ::: zone-end ::: zone pivot="front-door-classic"
frontdoor Front Door Wildcard Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-wildcard-domain.md
na Previously updated : 03/17/2022 Last updated : 02/07/2023 zone_pivot_groups: front-door-tiers
-# Wildcard domains
+# Wildcard domains in Azure Front Door
-Besides apex domains and subdomains, you can also map a wildcard domain to your front-end hosts or custom domains for your Azure Front Door profile. Having wildcard domains in your Azure Front Door configuration simplifies traffic routing behavior for multiple subdomains for an API, application, or website from the same routing rule. You don't need to modify the configuration to add or specify each subdomain separately. As an example, you can define the routing for `customer1.contoso.com`, `customer2.contoso.com`, and `customerN.contoso.com` by using the same routing rule and adding the wildcard domain `*.contoso.com`.
+Wildcard domains allow Azure Front Door to receive traffic for any subdomain of a top-level domain. An example wildcard domain is `*.contoso.com`.
-Key scenarios that are improved with support for wildcard domains include:
+By using wildcard domains, you can simplify the configuration of your Azure Front Door profile. You don't need to modify the configuration to add or specify each subdomain separately. For example, you can define the routing for `customer1.contoso.com`, `customer2.contoso.com`, and `customerN.contoso.com` by using the same route and adding the wildcard domain `*.contoso.com`.
-- You don't need to onboard each subdomain in your Azure Front Door profile and then enable HTTPS to bind a certificate for each subdomain.-- You're no longer required to change your production Azure Front Door configuration if an application adds a new subdomain. Previously, you had to add the subdomain, bind a certificate to it, attach a web application firewall (WAF) policy, and then add the domain to different routing rules.
+Wildcard domains give you several advantages, including:
+
+- You don't need to onboard each subdomain in your Azure Front Door profile. For example, suppose you create new subdomains for every customer, and route all customers' requests to a single origin group. Whenever you add a new customer, Azure Front Door understands how to route traffic to your origin group even though the subdomain hasn't been explicitly configured.
+- You don't need to generate a new TLS certificate, or manage any subdomain-specific HTTPS settings, to bind a certificate for each subdomain.
+- You can use a single web application firewall (WAF) policy for all of your subdomains.
+
+Commonly, wildcard domains are used to support software as a service (SaaS) solutions, and other multitenant applications. When you build these application types, you need to give special consideration to how you route traffic to your origin servers. For more information, see [Use Azure Front Door in a multitenant solution](/azure/architecture/guide/multitenant/service/front-door).
> [!NOTE]
-> Currently, adding wildcard domains through Azure DNS is only supported via API, PowerShell, and the Azure CLI. Support for adding and managing wildcard domains in the Azure portal isn't available.
+> When you use Azure DNS to manage your domain's DNS records, you need to configure wildcard domains by using the Azure Resource Manager API, Bicep, PowerShell, or the Azure CLI. Support for adding and managing Azure DNS wildcard domains in the Azure portal isn't available.
::: zone pivot="front-door-standard-premium" ## Add a wildcard domain and certificate binding
-You can add a wildcard domain following guidance in [add a custom domain](standard-premium/how-to-add-custom-domain.md) for subdomains.
+You can add a wildcard domain following similar steps to those for subdomains. For more information about adding a subdomain to Azure Front Door, see [Configure a custom domain on Azure Front Door using the Azure portal](standard-premium/how-to-add-custom-domain.md).
> [!NOTE]
> * Azure DNS supports wildcard records.
-> * Cache purge for wildcard domain is not supported, you have to specify a subdomain for cache purge.
-
-You can add as many single-level subdomains of the wildcard as you would like. For example, for the wildcard domain *.contoso.com, you can add subdomains in the form of image.contosto.com, cart.contoso.com, etc. Subdomains like www.image.contoso.com aren't a single-level subdomain of *.contoso.com. This functionality might be required for:
-
-* Defining a different route for a subdomain than the rest of the domains (from the wildcard domain).
-
-* Set up a different WAF policy for a specific subdomain.
+> * You can't [purge the Azure Front Door cache](front-door-caching.md#cache-purge) for a wildcard domain. You must specify a subdomain when purging the cache.
To accept HTTPS traffic on your wildcard domain, you must enable HTTPS on the wildcard domain. The certificate binding for a wildcard domain requires a wildcard certificate. That is, the subject name of the certificate should also contain the wildcard domain.

> [!NOTE]
> * Currently, only the option to use your own custom SSL certificate is available for enabling HTTPS on wildcard domains. Azure Front Door managed certificates can't be used for wildcard domains.
> * You can choose to use the same wildcard certificate from Azure Key Vault or from Azure Front Door managed certificates for subdomains.
-> * If you want to add a subdomain of the wildcard domain that's already validated in the Azure Front Door Standard or Premium profile, the domain validation is automatically approved if it uses the same use your own custom SSL certificate.
+> * If you want to add a subdomain of the wildcard domain that's already validated in the Azure Front Door Standard or Premium profile, the domain validation is automatically approved.
> * If a wildcard domain is validated and already added to one profile, a single-level subdomain can still be added to another profile as long as it is also validated.
+## Define a subdomain explicitly
+
+You can add as many single-level subdomains of the wildcard as you would like. For example, for the wildcard domain `*.contoso.com`, you can also add subdomains to your Azure Front Door profile for `image.contoso.com`, `cart.contoso.com`, and so forth. The configuration that you explicitly specify for the subdomain takes precedence over the configuration of the wildcard domain.
+
+You might need to explicitly add subdomains in these situations:
+
+* You need to define a different route for a specific subdomain than for the rest of the domains covered by the wildcard domain. For example, your customers might use subdomains like `customer1.contoso.com`, `customer2.contoso.com`, and so forth, and these subdomains should all be routed to your main application servers. However, you might also want to route `images.contoso.com` to an Azure Storage blob container.
+* You need to use a different WAF policy for a specific subdomain.
+
+Subdomains like `www.image.contoso.com` aren't a single-level subdomain of `*.contoso.com`.
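
The single-level rule above can be sketched with a small matcher (a hypothetical helper for illustration only, not part of any Azure SDK):

```python
def wildcard_matches(wildcard: str, hostname: str) -> bool:
    """Return True if hostname is a single-level subdomain of the
    wildcard domain: '*.contoso.com' matches 'cart.contoso.com' but
    not 'www.image.contoso.com' or the apex 'contoso.com' itself."""
    if not wildcard.startswith("*."):
        return False
    base = wildcard[2:]                      # 'contoso.com'
    if not hostname.endswith("." + base):
        return False
    prefix = hostname[: -len(base) - 1]      # label(s) before the base
    # Exactly one extra label: no dots allowed in the prefix.
    return prefix != "" and "." not in prefix

print(wildcard_matches("*.contoso.com", "cart.contoso.com"))       # True
print(wildcard_matches("*.contoso.com", "www.image.contoso.com"))  # False
print(wildcard_matches("*.contoso.com", "contoso.com"))            # False
```

This mirrors standard wildcard-certificate matching as well: the `*` label never spans more than one DNS label, which is why a domain like `www.image.contoso.com` needs its own entry.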
+ ::: zone-end ::: zone pivot="front-door-classic"
WAF policies can be attached to wildcard domains, similar to other domains. A di
If you don't want a WAF policy to run for a subdomain, you can create an empty WAF policy with no managed or custom rulesets. +
+## Routes
+
+When configuring a route, you can select a wildcard domain as an origin. You can also have different route behavior for wildcard domains and subdomains. Azure Front Door chooses the most specific match for the domain across different routes. For more information, see [How requests are matched to a routing rule](front-door-route-matching.md).
+
+> [!IMPORTANT]
+> You must have matching path patterns across your routes, or your clients will see failures.
+>
+> For example, suppose you have two routes:
+> - Route 1 (`*.foo.com/*` mapped to origin group A)
+> - Route 2 (`bar.foo.com/somePath/*` mapped to origin group B)
+> If a request arrives for `bar.foo.com/anotherPath/*`, Azure Front Door selects route 2 based on a more specific domain match, only to find no matching path patterns across the routes.
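
The failure mode described in the note can be illustrated with a simplified sketch (hypothetical code, not the actual Front Door implementation; the real matching algorithm is documented in the route-matching article):

```python
# Two-stage matching: the winning route is chosen by the most specific
# *domain* match first; the request path is then compared only against
# that route's path pattern, with no fallback to the wildcard route.
def pick_route(host, path, routes):
    """routes: list of (domain_pattern, path_pattern, origin_group)."""
    def matches_domain(pattern):
        if pattern.startswith("*."):
            return host.endswith(pattern[1:])   # '*.foo.com' -> '.foo.com'
        return host == pattern

    def specificity(pattern):
        # Any exact domain beats a wildcard domain.
        return -1 if pattern.startswith("*.") else len(pattern)

    candidates = [r for r in routes if matches_domain(r[0])]
    if not candidates:
        return None
    best = max(candidates, key=lambda r: specificity(r[0]))
    prefix = best[1].rstrip("*")
    return best[2] if path.startswith(prefix) else None

routes = [
    ("*.foo.com", "/*", "origin-group-A"),
    ("bar.foo.com", "/somePath/*", "origin-group-B"),
]
print(pick_route("bar.foo.com", "/somePath/page", routes))     # origin-group-B
print(pick_route("bar.foo.com", "/anotherPath/page", routes))  # None (failure)
```

Because `bar.foo.com` wins the domain match outright, a request for `/anotherPath/page` never falls back to the wildcard route, which is why the path patterns must agree across routes.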
+++ ## Routing rules
-When configuring a routing rule, you can select a wildcard domain as a front-end host. You can also have different route behavior for wildcard domains and subdomains. As described in [How Azure Front Door does route matching](front-door-route-matching.md), the most specific match for the domain across different routing rules is chosen at runtime.
+When configuring a routing rule, you can select a wildcard domain as a front-end host. You can also have different route behavior for wildcard domains and subdomains. Azure Front Door chooses the most specific match for the domain across different routes. For more information, see [How requests are matched to a routing rule](front-door-route-matching.md).
> [!IMPORTANT]
-> You must have matching path patterns across your routing rules, or your clients will see failures. For example, you have two routing rules like Route 1 (`*.foo.com/*` mapped to back-end pool A) and Route 2 (`/bar.foo.com/somePath/*` mapped to back-end pool B). Then, a request arrives for `bar.foo.com/anotherPath/*`. Azure Front Door selects Route 2 based on a more specific domain match, only to find no matching path patterns across the routes.
+> You must have matching path patterns across your routes, or your clients will see failures.
+>
+> For example, suppose you have two routing rules:
+> - Route 1 (`*.foo.com/*` mapped to backend pool A)
+> - Route 2 (`bar.foo.com/somePath/*` mapped to backend pool B)
+> If a request arrives for `bar.foo.com/anotherPath/*`, Azure Front Door selects route 2 based on a more specific domain match, only to find no matching path patterns across the routes.
+ ## Next steps
frontdoor Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/scenarios.md
na Previously updated : 12/08/2022 Last updated : 02/07/2023
Front Door provides several features to help to accelerate the performance of yo
Front Door's security capabilities help to protect your application servers from several different types of threats. - **End-to-end TLS:** Front Door supports end-to-end TLS encryption. Front Door TLS/SSL offload terminates the TLS connection, decrypts the traffic at the Azure Front Door, and re-encrypts the traffic before forwarding it to the backend.-- **Managed TLS certificates:** Front Door can [issue and manage certificates](standard-premium/how-to-configure-https-custom-domain.md#afd-managed-certificates-for-non-azure-pre-validated-domain), ensuring that your applications are protected by strong encryption and trust.
+- **Managed TLS certificates:** Front Door can [issue and manage certificates](domain.md#https-for-custom-domains), ensuring that your applications are protected by strong encryption and trust.
- **Custom TLS certificates:** If you need to bring your own TLS certificates, Front Door enables you to use a [managed identity to access the key vault](managed-identity.md) that contains the certificate. - **Web application firewall:** Front Door's web application firewall (WAF) provides a range of security capabilities to your application. [Managed rule sets](../web-application-firewall/afds/waf-front-door-drs.md) scan incoming requests for suspicious content. [Bot protection rules](../web-application-firewall/afds/afds-overview.md#bot-protection-rule-set) identify and respond to traffic from bots. [Geo-filtering](../web-application-firewall/afds/waf-front-door-geo-filtering.md) and [rate limiting](../web-application-firewall/afds/waf-front-door-rate-limit.md) features protect your application servers from unexpected traffic. - **DDoS protection:** Because of Front Door's architecture, it can also absorb large [distributed denial of service (DDoS) attacks](front-door-ddos.md) and prevent the traffic from reaching your application.
Front Door can help you to reduce the cost of running your Azure solution.
Front Door can help to reduce the operational burden of running a modern internet application, and enable you to make some kinds of changes to your solution without modifying your applications. -- **Managed TLS certificates:** Front Door can [issue and manage certificates](standard-premium/how-to-configure-https-custom-domain.md#afd-managed-certificates-for-non-azure-pre-validated-domain). This feature means you don't need to manage certificate renewals, and you reduce the likelihood of an outage that's caused by using an invalid or expired TLS certificate.
+- **Managed TLS certificates:** Front Door can [issue and manage certificates](domain.md#https-for-custom-domains). This feature means you don't need to manage certificate renewals, and you reduce the likelihood of an outage that's caused by using an invalid or expired TLS certificate.
- **Wildcard TLS certificates:** Front Door's support for [wildcard domains](front-door-wildcard-domain.md), including DNS and TLS certificates, enables you to use multiple hostnames without reconfiguring Front Door for each subdomain. - **HTTP/2:** Front Door can help you to modernize your legacy applications with [HTTP/2 support](front-door-http2.md) without modifying your application servers. - **Rules engine:** The Front Door [rules engine](front-door-rules-engine.md) enables you to change the internal architecture of your solution without affecting your clients.
frontdoor How To Add Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-add-custom-domain.md
Previously updated : 09/06/2022 Last updated : 02/07/2023 #Customer intent: As a website owner, I want to add a custom domain to my Front Door configuration so that my users can use my custom domain to access my content.
When you use Azure Front Door for application delivery, a custom domain is neces
After you create an Azure Front Door Standard/Premium profile, the default frontend host will have a subdomain of `azurefd.net`. This subdomain gets included in the URL when Azure Front Door Standard/Premium delivers content from your backend by default. For example, `https://contoso-frontend.azurefd.net/activeusers.htm`. For your convenience, Azure Front Door provides the option of associating a custom domain with the default host. With this option, you deliver your content with a custom domain in your URL instead of an Azure Front Door owned domain name. For example, `https://www.contoso.com/photo.png`.
-Azure Front Door supports two types of domains, non-Azure validated domain and Azure pre-validated domain. Azure managed certificate and customer certificate are supported for both types. For more information, see [Configure HTTPS on a custom domain](how-to-configure-https-custom-domain.md).
-
-* **Azure pre-validated domains** - are domains validated by another Azure service. This domain type is used when you onboard and validated a domain to an Azure service, and then configured the Azure service behind an Azure Front Door. You don't need to validate the domain through the Azure Front Door when you use this type of domain.
-
- > [!NOTE]
- > Currently Azure pre-validated domain only supports domain validated by Static Web App.
-
-* **Non-Azure validated domains** - are domains that aren't validated by any Azure service. This domain type can be hosted with any DNS service and requires domain ownership validation with Azure Front Door.
- ## Prerequisites
-* Before you can complete the steps in this tutorial, you must first create a Front Door. For more information, see [Quickstart: Create a Front Door Standard/Premium](create-front-door-portal.md).
+* Before you can complete the steps in this tutorial, you must first create an Azure Front Door profile. For more information, see [Quickstart: Create a Front Door Standard/Premium](create-front-door-portal.md).
* If you don't already have a custom domain, you must first purchase one with a domain provider. For example, see [Buy a custom domain name](../../app-service/manage-custom-dns-buy-domain.md).
-* If you're using Azure to host your [DNS domains](../../dns/dns-overview.md), you must delegate the domain provider's domain name system (DNS) to an Azure DNS. For more information, see [Delegate a domain to Azure DNS](../../dns/dns-delegate-domain-azure-dns.md). Otherwise, if you're using a domain provider to handle your DNS domain, you must manually validate the domain by entering prompted DNS TXT records.
+* If you're using Azure to host your [DNS domains](../../dns/dns-overview.md), you must delegate the domain provider's domain name system (DNS) to Azure DNS. For more information, see [Delegate a domain to Azure DNS](../../dns/dns-delegate-domain-azure-dns.md). Otherwise, if you're using a domain provider to handle your DNS domain, you must manually validate the domain by entering prompted DNS TXT records.
## Add a new custom domain > [!NOTE] > If a custom domain is validated in an Azure Front Door or a Microsoft CDN profile already, then it can't be added to another profile.
-A custom domain is configured on the **Domains** page of the Front Door profile. A custom domain can be set up and validated prior to endpoint association. A custom domain and its subdomains can only be associated with a single endpoint at a time. However, you can use different subdomains from the same custom domain for different Front Door profiles. You may also map custom domains with different subdomains to the same Front Door endpoint.
+A custom domain is configured on the **Domains** page of the Azure Front Door profile. A custom domain can be set up and validated prior to endpoint association. A custom domain and its subdomains can only be associated with a single endpoint at a time. However, you can use different subdomains from the same custom domain for different Azure Front Door profiles. You may also map custom domains with different subdomains to the same Azure Front Door endpoint.
1. Select **Domains** under settings for your Azure Front Door profile and then select **+ Add** button.
A custom domain is configured on the **Domains** page of the Front Door profile.
:::image type="content" source="../media/how-to-add-custom-domain/validation-state-submitting.png" alt-text="Screenshot of domain validation state submitting."::: > [!NOTE]
- > An Azure pre-validated domain will have a validation state of **Pending** and will automatically change to **Approved** after a few minutes. Once validation gets approved, skip to [**Associate the custom domain to your Front Door endpoint**](#associate-the-custom-domain-to-your-front-door-endpoint) and complete the remaining steps.
+ > An Azure pre-validated domain will have a validation state of **Pending** and will automatically change to **Approved** after a few minutes. Once validation gets approved, skip to [**Associate the custom domain to your Front Door endpoint**](#associate-the-custom-domain-with-your-azure-front-door-endpoint) and complete the remaining steps.
The validation state will change to **Pending** after a few minutes.
A custom domain is configured on the **Domains** page of the Front Door profile.
:::image type="content" source="../media/how-to-add-custom-domain/provisioned-approved-status.png" alt-text="Screenshot of provisioned and approved status.":::
-### Domain validation state
-
-| Domain validation state | Description and actions |
-|--|--|
-| Approved | This status means the domain has been successfully validated. |
-| Internal error | If you see this error, retry validation by selecting the **Refresh** or **Regenerate** button. If you're still experiencing issues, submit a support request to Azure support. |
-| Pending | A domain goes to pending state once the DNS TXT record challenge is generated. Add the DNS TXT record to your DNS provider and wait for the validation to complete. If the status remains **Pending** even after the TXT record has been updated with the DNS provider, select **Regenerate** to refresh the TXT record then add the TXT record to your DNS provider again. |
-| Pending re-validation | This status occurs when the managed certificate is less than 45 days from expiring. If you have a CNAME record already pointing to the Azure Front Door endpoint, no action is required for certificate renewal. If the custom domain is pointed to another CNAME record, select the **Pending re-validation**, and then select **Regenerate** on the *Validate the custom domain* page. Lastly, select **Add** if you're using Azure DNS or manually add the TXT record with your own DNS provider's DNS management. |
-| Refreshing validation token | A domain goes into a *Refreshing Validation Token* state for a brief period after the **Regenerate** button is selected. Once a new TXT record challenge is issued, the state will change to **Pending**. |
-| Rejected | This status occurs when the certificate provider/authority rejects the issuance of the managed certificate, for example, when the domain is invalid. Select the **Rejected** link and then select **Regenerate** on the *Validate the custom domain* page, as shown in the screenshots below this table. Then select **Add** to add the TXT record in the DNS provider. |
-| Submitting | When a new custom domain is added and being created, the validation state becomes Submitting. |
-| Timeout | The domain validation state will change from *Pending* to *Timeout* if the TXT record isn't added to your DNS provider within seven days. You'll also see a *Timeout* state if an invalid DNS TXT record has been added. Select the **Timeout** link and then select **Regenerate** on the *Validate the custom domain* page. Then select **Add** to add the TXT record to the DNS provider. |
-
-> [!NOTE]
-> 1. The default TTL for TXT record is 1 hour. When you need to regenerate the TXT record for re-validation, please pay attention to the TTL for the previous TXT record. If it doesn't expire, the validation will fail until the previous TXT record expires.
-> 2. If the **Regenerate** button doesn't work, delete and recreate the domain.
-> 3. If the domain state doesn't reflect as expected, select the **Refresh** button.
+For more information about domain validation states, see [Domains in Azure Front Door](../domain.md#domain-validation).
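
As an illustration of why validation can remain **Pending**, approval requires the currently expected challenge token to be visible in the domain's TXT record set; if resolvers still serve a stale, cached token, validation fails until the old record's TTL expires. A minimal sketch (hypothetical helper, not an Azure API):

```python
def txt_validation_passes(expected_token, observed_txt_records):
    """Approve only when the challenge token Azure Front Door currently
    expects appears among the TXT values that resolvers return for the
    _dnsauth.<your-domain> record."""
    return expected_token in observed_txt_records

# Fresh token published and visible -> validation can move to Approved.
print(txt_validation_passes("new-token", ["new-token"]))  # True
# Resolvers still serve the cached, pre-regeneration token (previous
# TTL not yet expired) -> validation stays Pending until it expires.
print(txt_validation_passes("new-token", ["old-token"]))  # False
```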
-## Associate the custom domain to your Front Door endpoint
+## Associate the custom domain with your Azure Front Door endpoint
After you validate your custom domain, you can associate it to your Azure Front Door Standard/Premium endpoint.
frontdoor How To Configure Https Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-configure-https-custom-domain.md
Previously updated : 06/06/2022 Last updated : 02/07/2023 #Customer intent: As a website owner, I want to add a custom domain to my Front Door configuration so that my users can use my custom domain to access my content.
# Configure HTTPS on an Azure Front Door custom domain using the Azure portal
+Azure Front Door enables secure TLS delivery to your applications by default when you use your own custom domains. To learn more about custom domains, including how custom domains work with HTTPS, see [Domains in Azure Front Door](../domain.md).
-Azure Front Door enables secure TLS delivery to your applications by default when a custom domain is added. By using the HTTPS protocol on your custom domain, you ensure your sensitive data get delivered securely with TLS/SSL encryption when it's sent across the internet. When your web browser is connected to a web site via HTTPS, it validates the web site's security certificate, and verifies it gets issued by a legitimate certificate authority. This process provides security and protects your web applications from attacks.
-
-Azure Front Door supports Azure managed certificate and customer-managed certificates.
-
-* A non-Azure validated domain requires domain ownership validation. The managed certificate (AFD managed) is issued and managed by Azure Front Door. Azure Front Door by default automatically enables HTTPS to all your custom domains using Azure managed certificates. No extra steps are required for getting an AFD managed certificate. A certificate is created during the domain validation process.
-
-* An Azure pre-validated domain doesn't require domain validation because it's already validated by another Azure service. The managed certificate (Azure managed) is issued and managed by the Azure service. No extra steps are required for getting an Azure managed certificate. Azure Front Door doesn't issue a new managed certificate for this scenario and instead will reuse the managed certificate issued by the Azure service. For supported Azure service for pre-validated domain, refer to [custom domain](how-to-add-custom-domain.md).
-
-* For both scenarios, you can bring your own certificate.
+Azure Front Door supports Azure-managed certificates and customer-managed certificates. In this article, you'll learn how to configure both types of certificates for your Azure Front Door custom domains.
## Prerequisites
Azure Front Door supports Azure managed certificate and customer-managed certifi
* If you're using Azure to host your [DNS domains](../../dns/dns-overview.md), you must delegate the domain provider's domain name system (DNS) to an Azure DNS. For more information, see [Delegate a domain to Azure DNS](../../dns/dns-delegate-domain-azure-dns.md). Otherwise, if you're using a domain provider to handle your DNS domain, you must manually validate the domain by entering prompted DNS TXT records.
-## AFD managed certificates for Non-Azure pre-validated domain
+## Azure Front Door-managed certificates for non-Azure pre-validated domains
+
+Follow the steps below if you have your own domain, and the domain is not already associated with [another Azure service that pre-validates domains for Azure Front Door](../domain.md#domain-validation).
1. Select **Domains** under settings for your Azure Front Door profile and then select **+ Add** to add a new domain.
Azure Front Door supports Azure managed certificate and customer-managed certifi
| DNS management | Select **Azure managed DNS (Recommended)** | | DNS zone | Select the **Azure DNS zone** that host the custom domain. | | Custom domain | Select an existing domain or add a new domain. |
- | HTTPS | Select **AFD managed (Recommended)** |
+ | HTTPS | Select **AFD Managed (Recommended)** |
1. Validate and associate the custom domain to an endpoint by following the steps in enabling [custom domain](how-to-add-custom-domain.md).
-1. Once the custom domain gets associated to an endpoint successfully, an AFD managed certificate gets deployed to Front Door. This process may take from several minutes to an hour to complete.
+1. After the custom domain is associated with an endpoint successfully, Azure Front Door generates a certificate and deploys it. This process may take from several minutes to an hour to complete.
+
+## Azure-managed certificates for Azure pre-validated domains
-## Azure managed certificates for Azure pre-validated domain
+Follow the steps below if you have your own domain, and the domain is associated with [another Azure service that pre-validates domains for Azure Front Door](../domain.md#domain-validation).
1. Select **Domains** under settings for your Azure Front Door profile and then select **+ Add** to add a new domain.
Azure Front Door supports Azure managed certificate and customer-managed certifi
## Using your own certificate
-You can also choose to use your own TLS certificate. When you create your TLS/SSL certificate, you must create a complete certificate chain with an allowed certificate authority (CA) that is part of the [Microsoft Trusted CA List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT). If you use a non-allowed CA, your request will be rejected. The root CA must be part of the [Microsoft Trusted CA List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT). If a certificate without complete chain is presented, the requests that involve that certificate aren't guaranteed to work as expected. This certificate must be imported into an Azure Key Vault before you can use it with Azure Front Door Standard/Premium. See how to [import a certificate](../../key-vault/certificates/tutorial-import-certificate.md) to Azure Key Vault.
+You can also choose to use your own TLS certificate. Your TLS certificate must meet certain requirements. For more information, see [Certificate requirements](../domain.md?pivot=front-door-standard-premium#certificate-requirements).
#### Prepare your key vault and certificate -- You must have a key vault in the same Azure subscription as your Azure Front Door Standard/Premium profile. Create a key vault if you don't have one.-
- > [!WARNING]
- > Azure Front Door currently only supports key vaults in the same subscription as the Front Door profile. Choosing a key vault under a different subscription than your Azure Front Door Standard/Premium profile will result in a failure.
--- If your key vault has network access restrictions enabled, you must configure your key vault to allow trusted Microsoft services to bypass the firewall.--- Your key vault must be configured to use the *Key Vault access policy* permission model.--- If you already have a certificate, you can upload it to your key vault. Otherwise, create a new certificate directly through Azure Key Vault from one of the partner certificate authorities (CAs) that Azure Key Vault integrates with. Upload your certificate as a **certificate** object, rather than a **secret**.
+If you already have a certificate, you can upload it to your key vault. Otherwise, create a new certificate directly through Azure Key Vault from one of the partner certificate authorities (CAs) that Azure Key Vault integrates with.
> [!NOTE]
-> Front Door doesn't support certificates with elliptic curve (EC) cryptography algorithms. The certificate must have a complete certificate chain with leaf and intermediate certificates, and root CA must be part of the [Microsoft Trusted CA List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT).
+> Front Door doesn't support certificates with elliptic curve (EC) cryptography algorithms. Also, your certificate must have a complete certificate chain with leaf and intermediate certificates, and the root certification authority (CA) must be part of the [Microsoft Trusted CA List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT).
#### Register Azure Front Door Register the service principal for Azure Front Door as an app in your Azure Active Directory (Azure AD) by using Azure PowerShell or the Azure CLI. > [!NOTE]
-> * This action requires you to have Global Administrator permissions in Azure AD. The registration only needs to be performed **once per Azure AD tenant**.
+> * This action requires you to have *Global Administrator* permissions in Azure AD. The registration only needs to be performed **once per Azure AD tenant**.
> * The Application Id of **205478c0-bd83-4e1b-a9d6-db63a3e1e1c8** is predefined by Azure for Front Door Standard and Premium tier across all Azure tenants and subscriptions. Azure Front Door (Classic) has a different Application Id.
-##### Azure PowerShell
+# [Azure PowerShell](#tab/powershell)
1. If needed, install [Azure PowerShell](/powershell/azure/install-az-ps) in PowerShell on your local machine.
-2. In PowerShell, run the following command:
+1. Use PowerShell to run the following command:
```azurepowershell-interactive New-AzADServicePrincipal -ApplicationId '205478c0-bd83-4e1b-a9d6-db63a3e1e1c8' ```
-##### Azure CLI
+# [Azure CLI](#tab/cli)
-1. If need, install [Azure CLI](/cli/azure/install-azure-cli) on your local machine.
+1. If needed, install [Azure CLI](/cli/azure/install-azure-cli) on your local machine.
-2. In CLI, run the following command:
+1. Use the Azure CLI to run the following command:
```azurecli-interactive az ad sp create --id 205478c0-bd83-4e1b-a9d6-db63a3e1e1c8 ``` ++ #### Grant Azure Front Door access to your key vault Grant Azure Front Door permission to access the certificates in your Azure Key Vault account.
Azure Front Door can now access this key vault and the certificates it contains.
1. On the **Add certificate** page, select the checkbox for the certificate you want to add to Azure Front Door Standard/Premium.
-1. When you select a certificate, you must [select the certificate version](#rotate-own-certificate). If you select **Latest**, Azure Front Door will automatically update whenever the certificate is rotated (renewed). Alternatively, you can select a specific certificate version if you prefer to manage certificate rotation yourself.
+1. When you select a certificate, you must [select the certificate version](../domain.md#rotate-own-certificate). If you select **Latest**, Azure Front Door will automatically update whenever the certificate is rotated (renewed). Alternatively, you can select a specific certificate version if you prefer to manage certificate rotation yourself.
Leave the version selection as "Latest" and select **Add**.
Azure Front Door can now access this key vault and the certificates it contains.
1. Follow the on-screen steps to validate the certificate. Then associate the newly created custom domain with an endpoint as outlined in the [creating a custom domain](how-to-add-custom-domain.md) guide.
-## Certificate renewal and changing certificate types
+## Switch between certificate types
-### AFD managed certificate for Non-Azure pre-validated domain
+You can change a domain between using an Azure Front Door-managed certificate and a customer-managed certificate. For more information, see [Domains in Azure Front Door](../domain.md#switch-between-certificate-types).
-AFD managed certificates are automatically rotated when your custom domain uses a CNAME record that points to an Azure Front Door Standard or Premium endpoint.
-
-Front Door won't automatically rotate certificates in the following scenarios:
-
-* The custom domain CNAME record is pointing to other DNS resources.
-* The custom domain points to the Azure Front Door through a long chain. For example, if you put Azure Traffic Manager before Azure Front Door, the CNAME chain is `contoso.com` CNAME in `contoso.trafficmanager.net` CNAME in `contoso.z01.azurefd.net`.
-
-The domain validation state will become *Pending Revalidation* 45 days before the managed certificate expires, or *Rejected* if the managed certificate issuance is rejected by the certificate authority. Refer to [Add a custom domain](how-to-add-custom-domain.md#domain-validation-state) for actions for each of the domain states.
-
-### Azure managed certificate for Azure pre-validated domain
-
-Azure managed certificates are automatically rotated by the Azure service that validates the domain.
-
-### <a name="rotate-own-certificate"></a>Use your own certificate
-
-In order for the certificate to automatically be rotated to the latest version when a newer version of the certificate is available in your key vault, set the secret version to 'Latest'. If a specific version is selected, you have to reselect the new version manually for certificate rotation. It takes up to 72 hours for the new version of the certificate/secret to be automatically deployed.
-
-If you want to change the secret version from 'Latest' to a specified version or vice versa, add a new certificate.
-
-## How to switch between certificate types
-
-1. You can change an existing Azure managed certificate to a user-managed certificate by selecting the certificate state to open the **Certificate details** page.
+1. Select the certificate state to open the **Certificate details** page.
:::image type="content" source="../media/how-to-configure-https-custom-domain/domain-certificate.png" alt-text="Screenshot of certificate state on domains landing page.":::
-1. On the **Certificate details** page, you can change between *Azure managed* and
-*Bring Your Own Certificate (BYOC)*. Then follow the same steps as earlier to choose a certificate. Select **Update** to change the associated certificate with a domain.
+1. On the **Certificate details** page, you can change between *Azure managed* and *Bring Your Own Certificate (BYOC)*.
- > [!NOTE]
- > * It may take up to an hour for the new certificate to be deployed when you switch between certificate types.
- > * If your domain state is Approved, switching the certificate type between BYOC and managed certificate won't have any downtime. When switching to managed certificate, unless the domain ownership is re-validated and the domain state becomes Approved, you will continue to be served by the previous certificate.
- > * If you switch from BYOC to managed certificate, domain re-validation is required. If you switch from managed certificate to BYOC, you're not required to re-validate the domain.
- >
+ If you select *Bring Your Own Certificate (BYOC)*, follow the steps described above to select a certificate.
+
+1. Select **Update** to change the certificate associated with the domain.
:::image type="content" source="../media/how-to-configure-https-custom-domain/certificate-details-page.png" alt-text="Screenshot of certificate details page.":::
hdinsight Hdinsight Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-private-link.md
Previously updated : 08/30/2022 Last updated : 02/02/2023 # Enable Private Link on an HDInsight cluster
To create the private endpoints:
| Virtual network | hdi-privlink-client-vnet | | Subnet | default |
+ :::image type="content" source="media/hdinsight-private-link/basic-tab-private-endpoint.png" alt-text="Diagram of the Private Link basic tab.":::
+ :::image type="content" source="media/hdinsight-private-link/resource-tab-private-endpoint.png" alt-text="Diagram of the Private Link resource tab.":::
+ :::image type="content" source="media/hdinsight-private-link/virtual-network-tab-private-endpoint.png" alt-text="Diagram of the Private Link virtual network tab.":::
+ :::image type="content" source="media/hdinsight-private-link/dns-tab-private-endpoint.png" alt-text="Diagram of the Private Link DNS endpoint tab.":::
+ :::image type="content" source="media/hdinsight-private-link/tag-tab-private-endpoint.png" alt-text="Diagram of the Private Link tag tab.":::
+ :::image type="content" source="media/hdinsight-private-link/review-tab-private-endpoint.png" alt-text="Diagram of the Private Link review tab.":::
+
4. Repeat the process to create another private endpoint for SSH access using the following configurations: | Config | Value |
To create the private endpoints:
> [!IMPORTANT] > If you're using a KafkaRestProxy HDInsight cluster, then follow these extra steps to [Enable Private Endpoints](./enable-private-link-on-kafka-rest-proxy-hdi-cluster.md#create-private-endpoints). >
-
-
+ Once the private endpoints are created, you're done with this phase of the setup. If you didn't make a note of the private IP addresses assigned to the endpoints, follow the steps below: 1. Open the client VNET in the Azure portal.
-2. Click the 'Overview' tab.
-3. You should see both the Ambari and ssh Network interfaces listed and their private IP Addresses.
-4. Make a note of these IP addresses because they are required to connect to the cluster and properly configure DNS.
+1. Click the 'Overview' tab.
+1. You should see both the Ambari and SSH network interfaces listed with their private IP addresses.
+1. Make a note of these IP addresses because they are required to connect to the cluster and properly configure DNS.
## <a name="ConfigureDNS"></a>Step 6: Configure DNS to connect over private endpoints
To configure DNS resolution through a Private DNS zone:
| | -- | | Name | privatelink.azurehdinsight.net |
+ :::image type="content" source="media/hdinsight-private-link/private-dns-zone.png" alt-text="Diagram of the private DNS zone.":::
+
2. Add a Record set to the Private DNS zone for Ambari. | Config | Value |
To configure DNS resolution through a Private DNS zone:
| TTL | 1 | | TTL unit | Hours | | IP Address | Private IP of private endpoint for Ambari access |
-
+
+ :::image type="content" source="media/hdinsight-private-link/private-dns-zone-add-record.png" alt-text="Diagram of adding a record to the private DNS zone.":::
+
3. Add a Record set to the Private DNS zone for SSH. | Config | Value |
To configure DNS resolution through a Private DNS zone:
| TTL | 1 | | TTL unit | Hours | | IP Address | Private IP of private endpoint for SSH access |
+
+ :::image type="content" source="media/hdinsight-private-link/private-dns-zone-add-ssh-record.png" alt-text="Diagram of adding an SSH record to the private DNS zone.":::
> [!IMPORTANT] > If you are using a KafkaRestProxy HDInsight cluster, then follow these extra steps to [Configure DNS to connect over private endpoint](./enable-private-link-on-kafka-rest-proxy-hdi-cluster.md#configure-dns-to-connect-over-private-endpoints).
To configure DNS resolution through a Private DNS zone:
1. Click the 'Add' button. 1. Fill in the details: Link name, Subscription, and Virtual Network 1. Click **Save**.
+
+ :::image type="content" source="media/hdinsight-private-link/virtual-network-link.png" alt-text="Diagram of the virtual network link.":::
## <a name="CheckConnectivity"></a>Step 7: Check cluster connectivity
iot-central Howto Query With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-query-with-rest-api.md
The following limits apply in the `SELECT` clause:
- There's no wildcard operator. - You can't have more than 15 items in the select list. - In a single query, you either select telemetry or properties but not both. A property query can include both reported properties and cloud properties.-- A property-based query will return a maximum of 1,000 records. A telemetry data-based query can return up to 10,000 records.
+- A property-based query returns a maximum of 1,000 records.
+- A telemetry-based query returns a maximum of 10,000 records.
### Aliases
Use the `TOP` to limit the number of results the query returns. For example, the
} ```
-If you don't use `TOP`, the query returns a maximum of 10,000 results for a telemetry data-based query and 1,000 for a property-based query.
+If you don't use `TOP`, the query returns a maximum of 10,000 results for a telemetry-based query and a maximum of 1,000 results for a property-based query.
To sort the results before `TOP` limits the number of results, use [ORDER BY](#order-by-clause).
The following limits apply in the `WHERE` clause:
- You can use a maximum of 10 operators in a single query. - In a telemetry query, the `WHERE` clause can only contain telemetry and device metadata filters. - In a property query, the `WHERE` clause can only contain reported properties, cloud properties, and device metadata filters.-- In a telemetry query, you can retrieve up to 10,000 records. In property query, you can retrieve up to 1,000 records.
+- In a telemetry query, you can retrieve up to 10,000 records.
+- In a property query, you can retrieve up to 1,000 records.
## Aggregations and GROUP BY clause
The current limits for queries are:
- No more than 15 items in the `SELECT` clause list. - No more than 10 logical operations in the `WHERE` clause.-- Queries return a maximum of 10,000 records. - The maximum length of a query string is 350 characters. - You can't use the wildcard (`*`) in the `SELECT` clause list.-- Telemetry-based query can retrieve up to 10,000 records. -- Property-based query can retrieve up to 1,000 records.
+- Telemetry-based queries can retrieve up to 10,000 records.
+- Property-based queries can retrieve up to 1,000 records.
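As a reference point for the limits above, a telemetry-based query could be posted to the IoT Central query REST API with a request body like the following sketch (the device template ID `dtmi:sample:thermostat;1` and the `temperature` telemetry name are illustrative placeholders, not from this article):

```json
{
  "query": "SELECT TOP 10 $id, $ts, temperature FROM dtmi:sample:thermostat;1 WHERE temperature > 20 ORDER BY $ts DESC"
}
```

This query stays within the limits: fewer than 15 select items, a single `WHERE` filter, and `TOP` well under the 10,000-record telemetry cap.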
## Next steps
load-testing Reference Test Config Yaml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/reference-test-config-yaml.md
A test configuration uses the following keys:
| Key | Type | Default value | Description | | -- | -- | -- | - | | `version` | string | | Version of the YAML configuration file that the service uses. Currently, the only valid value is `v0.1`. |
-| `testId` | string | | *Required*. Id of the test to run. For a new test, enter an Id with characters [a-z0-9_-]. For an existing test, you can get the test Id from the test details page in Azure portal. This field was called `testName` earlier, which has been deprecated. You can still run existing tests with `testName`field. |
+| `testId` | string | | *Required*. Id of the test to run. The test ID must be between 2 and 50 characters. For a new test, enter an Id with characters [a-z0-9_-]. For an existing test, you can get the test Id from the test details page in the Azure portal. This field was called `testName` earlier, which has been deprecated. You can still run existing tests with the `testName` field. |
| `displayName` | string | | Display name of the test. This will be shown in the list of tests in Azure portal. If not provided, testId is used as the display name. | | `testPlan` | string | | *Required*. Relative path to the Apache JMeter test script to run. | | `engineInstances` | integer | | *Required*. Number of parallel instances of the test engine to execute the provided test plan. You can update this property to increase the amount of load that the service can generate. | | `configurationFiles` | array | | List of relevant configuration files or other files that you reference in the Apache JMeter script. For example, a CSV data set file, images, or any other data file. These files will be uploaded to the Azure Load Testing resource alongside the test script. If the files are in a subfolder on your local machine, use file paths that are relative to the location of the test script. <BR><BR>Azure Load Testing currently doesn't support the use of file paths in the JMX file. When you reference an external file in the test script, make sure to only specify the file name. |
-| `description` | string | | Short description of the test. |
+| `description` | string | | Short description of the test. The description has a maximum length of 100 characters. |
| `subnetId` | string | | Resource ID of the subnet for testing privately hosted endpoints (VNET injection). This subnet will host the injected test engine VMs. For more information, see [how to load test privately hosted endpoints](./how-to-test-private-endpoint.md). | | `failureCriteria` | object | | Criteria that indicate when a test should fail. The structure of a fail criterion is: `Request: Aggregate_function (client_metric) condition threshold`. For more information on the supported values, see [Define load test fail criteria](./how-to-define-test-criteria.md#load-test-fail-criteria). | | `properties` | object | | List of properties to configure the load test. |
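Putting several of these keys together, a minimal test configuration file might look like the following sketch (the IDs, file names, and fail-criteria thresholds are illustrative):

```yaml
version: v0.1
testId: sample-web-test
displayName: Sample web endpoint load test
description: Smoke test for the web endpoint
testPlan: SampleTest.jmx
engineInstances: 1
configurationFiles:
  - testdata.csv
failureCriteria:
  - avg(response_time_ms) > 300
  - percentage(error) > 50
```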
logic-apps Secure Single Tenant Workflow Virtual Network Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/secure-single-tenant-workflow-virtual-network-private-endpoint.md
To secure outbound traffic from your logic app, you can integrate your logic app
| Destination port | Direction | Protocol | Source / Destination | Purpose | |-|--|-|-||
- | 443 | Outbound | TCP | Private endpoint / Storage account | Storage account |
- | 445 | Outbound | TCP | Private endpoint / Subnet integrated with Standard logic app | Server Message Block (SMB) File Share |
+ | 443 | Outbound | TCP | Subnet integrated with Standard logic app / Storage account | Storage account |
+ | 445 | Outbound | TCP | Subnet integrated with Standard logic app / Storage account | Server Message Block (SMB) File Share |
- For Azure-hosted managed connectors to work, you need to have an uninterrupted connection to the managed API service. With virtual network integration, make sure that no firewall or network security policy blocks these connections. If your virtual network uses a network security group (NSG), user-defined route table (UDR), or a firewall, make sure that the virtual network allows outbound connections to [all managed connector IP addresses](/connectors/common/outbound-ip-addresses#azure-logic-apps) in the corresponding region. Otherwise, Azure-managed connectors won't work.
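One way to express the outbound rules in the table above is as a network security group rule in an ARM template; a sketch under the assumption that the `Storage` service tag is an acceptable destination (the rule name and priority are illustrative):

```json
{
  "name": "AllowStorageOutbound",
  "properties": {
    "priority": 100,
    "direction": "Outbound",
    "access": "Allow",
    "protocol": "Tcp",
    "sourceAddressPrefix": "VirtualNetwork",
    "sourcePortRange": "*",
    "destinationAddressPrefix": "Storage",
    "destinationPortRanges": ["443", "445"]
  }
}
```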
machine-learning How To Identity Based Service Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-identity-based-service-authentication.md
The following steps outline how to set up data access with user identity for tra
1. Grant data access and create data store as described above for CLI.
-1. Submit a training job with identity parameter set to [azure.ai.ml.UserIdentity](/python/api/azure-ai-ml/azure.ai.ml.useridentity). This parameter setting enables the job to access data on behalf of user submitting the job.
+1. Submit a training job with identity parameter set to [azure.ai.ml.UserIdentityConfiguration](/python/api/azure-ai-ml/azure.ai.ml.useridentityconfiguration). This parameter setting enables the job to access data on behalf of user submitting the job.
```python from azure.ai.ml import command from azure.ai.ml.entities import Data, UriReference from azure.ai.ml import Input from azure.ai.ml.constants import AssetTypes
- from azure.ai.ml import UserIdentity
+ from azure.ai.ml import UserIdentityConfiguration
# Specify the data location my_job_inputs = {
The following steps outline how to set up data access with user identity for tra
inputs=my_job_inputs, environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:9", compute="<my-compute-cluster-name>",
- identity= UserIdentity()
+ identity= UserIdentityConfiguration()
) # submit the command returned_job = ml_client.jobs.create_or_update(job)
marketplace View Acquisitions Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/view-acquisitions-report.md
Title: View acquisitions report
-description: Analyze your app or add-in performance and see funnel and acquisitions metrics.
+description: Analyze your app or add-in performance and see your funnel and acquisitions metrics.
Last updated 01/10/2022
# View the Acquisitions report in the dashboard
-<! [
-[image](https://user-images.githubusercontent.com/62076972/134597753-6fd281d2-9cdd-4ba0-b45d-645ca18e22d1.png)
-]() >
-The _Acquisitions report_ in the Partner Center dashboard lets you see who has acquired and installed your add-in, app, or visual, and shows info about how customers have arrived at your Microsoft AppSource listing.
+The Acquisitions report in the [Partner Center dashboard](https://partner.microsoft.com/dashboard/home) lets you see who has acquired and installed your add-in, app, or visual, and shows info about how customers have arrived at your Microsoft AppSource listing.
In this report, an acquisition means a new customer has obtained a license to your solution (whether you charged money or you've offered it for free). If your solution supports multi-seat acquisitions, such as site license purchases, these will also be detailed and displayed.
The SLA for Acquisitions data is currently 4 days.
:::image type="content" source="./media/office-store-workspaces/insights-tile.png" alt-text="Illustrates the Insights tile on the Partner Center home page.":::
-1. In the left-menu, select **Acquistions**.
+1. In the left menu, select **Acquisitions**.
<a name="BKMK_Edit"> </a> ## Apply filters
In this chart, a channel refers to the method in which a customer arrived at y
- Other - The customer followed an external link (without any custom campaign ID) from a website to your app's listing or the customer followed a link from a search engine to your app's listing.
-A page view means that a customer viewed your solutions's Microsoft AppSource listing page. This includes views by people who aren't signed in. Some customers have opted out of providing this information to Microsoft.
+A page view means that a customer viewed your solution's Microsoft AppSource listing page. This includes views by people who aren't signed in. Some customers have opted out of providing this information to Microsoft.
> [!NOTE] > Customers can arrive at your app's listing by clicking a custom campaign not created by you. We stamp every page view within a session with the campaign ID from which the customer first landed on Microsoft AppSource. We then attribute conversions to that campaign ID for all acquisitions within 24 hours. Because of this, you might see a higher number of total conversions than the total conversions for your campaign IDs, and you might have conversions or add-on conversions that have zero page views.
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md
This article summarizes new releases and features in Azure Database for MySQL -
> [!NOTE] > This article contains references to the term slave, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. +
+## February 2023
+
+- **Minor version upgrade for Azure Database for MySQL - Flexible server to 8.0.31**
+
+ After this month's deployment, Azure Database for MySQL - Flexible Server 8.0 will be running on minor version 8.0.31. To learn more about the changes in this minor version, see [Changes in MySQL 8.0.31 (2022-10-11, General Availability)](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-31.html).
+
+- **Known issues**
+
+ Upgrade option unavailable in the portal: Due to technical issues after this month's deployment, the Major Version Upgrade feature has been temporarily disabled in the portal. We apologize for any inconvenience. Our team is working on a solution, and the issue will be resolved in the next deployment cycle. If you require immediate assistance with a major version upgrade, please open a [support ticket](https://azure.microsoft.com/support/create-ticket/) and we will assist you.
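After the deployment, the running minor version can be confirmed with `SELECT VERSION();` on the server. If you script that check, compare dotted version strings numerically rather than lexically; a small illustrative helper (not part of any Azure tooling):

```python
def meets_minimum(reported: str, minimum: str) -> bool:
    """Compare dotted version strings numerically, e.g. '8.0.31' >= '8.0.28'."""
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(reported) >= parse(minimum)

# Numeric comparison avoids the lexical trap where "8.0.9" > "8.0.31".
print(meets_minimum("8.0.31", "8.0.31"))  # True
print(meets_minimum("8.0.9", "8.0.31"))   # False
```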
++ ## December 2022 - **New Replication Metrics**
mysql Concepts Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-connectivity-architecture.md
The following table lists the gateway IP addresses of the Azure Database for MyS
| Germany North | 51.116.56.0 | | | | Germany North East | 51.5.144.179 | | | | Germany West Central | 51.116.152.0 | | |
-| India Central | 104.211.96.159 | | |
+| India Central | 20.192.96.33 | 104.211.96.159 | |
| India South | 104.211.224.146 | | | | India West | 104.211.160.80 | | | | Japan East | 40.79.192.23, 40.79.184.8 | 13.78.61.196 | |
The following table lists the gateway IP addresses of the Azure Database for MyS
| UAE Central | 20.37.72.64 | | | | UAE North | 65.52.248.0 | | | | UK South | 51.140.144.32, 51.105.64.0 | 51.140.184.11 | |
-| UK West | 51.141.8.11 | | |
+| UK West | 51.140.208.98 | 51.141.8.11 | |
| West Central US | 13.78.145.25, 52.161.100.158 | | | | West Europe | 13.69.105.208, 104.40.169.187 | 40.68.37.158 | 191.237.232.75 | | West US | 13.86.216.212, 13.86.217.212 | 104.42.238.205 | 23.99.34.75 |
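When a region gains a new gateway IP (as India Central and UK West do in the table above), client firewall allow-lists should cover both the new and the previous addresses during the transition. A minimal sketch of that check (the helper function is hypothetical; the region data is a subset copied from the table):

```python
# Current and previous gateway IPs per region (subset of the table above).
GATEWAY_IPS = {
    "India Central": {"20.192.96.33", "104.211.96.159"},
    "UK West": {"51.140.208.98", "51.141.8.11"},
}

def allow_list_covers(region: str, allow_list: set) -> bool:
    """True only if every current and previous gateway IP for the region is allowed."""
    return GATEWAY_IPS[region].issubset(allow_list)

print(allow_list_covers("UK West", {"51.140.208.98", "51.141.8.11"}))  # True
print(allow_list_covers("India Central", {"104.211.96.159"}))          # False
```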
network-watcher Azure Monitor Agent With Connection Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/azure-monitor-agent-with-connection-monitor.md
Title: Monitor network connectivity by using Azure Monitor Agent
+ Title: Monitor network connectivity using Azure Monitor Agent
description: This article describes how to monitor network connectivity in Connection Monitor by using Azure Monitor Agent.
#Customer intent: I need to monitor a connection by using Azure Monitor Agent.
-# Monitor network connectivity by using Azure Monitor Agent with Connection Monitor
+# Monitor network connectivity using Azure Monitor Agent with connection monitor
Connection Monitor supports the Azure Monitor Agent extension, which eliminates any dependency on the legacy Log Analytics agent.
network-watcher Connection Monitor Connected Machine Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-connected-machine-agent.md
Title: Install the Azure Connected Machine agent for Connection Monitor
+ Title: Install the Azure Connected Machine agent for connection monitor
description: This article describes how to install Azure Connected Machine agent
network-watcher Connection Monitor Create Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-create-using-portal.md
Title: Create a connection monitor - Azure portal
-description: This article describes how to create a monitor in Connection Monitor by using the Azure portal.
+description: Learn how to create a monitor in Azure Network Watcher connection monitor using the Azure portal.
Last updated 11/05/2022
#Customer intent: I need to create a connection monitor to monitor communication between one VM and another.
-# Create a monitor in Connection Monitor by using the Azure portal
+# Create an Azure Network Watcher connection monitor using the Azure portal
This article describes how to create a monitor in Connection Monitor by using the Azure portal. Connection Monitor supports hybrid and Azure cloud deployments.
network-watcher Connection Monitor Create Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-create-using-powershell.md
Title: Create a connection monitor - PowerShell
description: Learn how to create a connection monitor by using PowerShell. Last updated 01/07/2021 #Customer intent: I need to create a connection monitor by using PowerShell to monitor communication between one VM and another.
-# Create a connection monitor by using PowerShell
+
+# Create an Azure Network Watcher connection monitor using PowerShell
> [!IMPORTANT] > Starting 1 July 2021, you will not be able to add new tests in an existing workspace or enable a new workspace in Network Performance Monitor. You will also not be able to add new connection monitors in Connection Monitor (classic). You can continue to use the tests and connection monitors created prior to 1 July 2021. To minimize service disruption to your current workloads, [migrate your tests from Network Performance Monitor ](migrate-to-connection-monitor-from-network-performance-monitor.md) or [migrate from Connection Monitor (classic)](migrate-to-connection-monitor-from-connection-monitor-classic.md) to the new Connection Monitor in Azure Network Watcher before 29 February 2024.
network-watcher Connection Monitor Create Using Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-create-using-template.md
Title: Create Connection Monitor - ARM template
+ Title: Create connection monitor - ARM template
-description: Learn how to create Connection Monitor using the ARMClient.
+description: Learn how to create Azure Network Watcher connection monitor using the ARMClient.
-+ Last updated 02/08/2021 #Customer intent: I need to create a connection monitor to monitor communication between one VM and another.
-# Create a Connection Monitor using the ARM template
+
+# Create an Azure Network Watcher connection monitor using ARM template
> [!IMPORTANT] > Starting 1 July 2021, you'll not be able to add new tests in an existing workspace or enable a new workspace in Network Performance Monitor. You'll also not be able to add new connection monitors in Connection Monitor (classic). You can continue to use the tests and connection monitors created prior to 1 July 2021. To minimize service disruption to your current workloads, [migrate your tests from Network Performance Monitor ](migrate-to-connection-monitor-from-network-performance-monitor.md) or [migrate from Connection Monitor (classic)](migrate-to-connection-monitor-from-connection-monitor-classic.md) to the new Connection Monitor in Azure Network Watcher before 29 February 2024.
network-watcher Connection Monitor Install Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-install-azure-monitor-agent.md
Title: Install Azure Monitor Agent for Connection Monitor
+ Title: Install Azure Monitor Agent for connection monitor
description: This article describes how to install Azure Monitor Agent.
network-watcher Connection Monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-overview.md
Title: Connection Monitor in Azure | Microsoft Docs
-description: Learn how to use Connection Monitor to monitor network communication in a distributed environment.
+ Title: Connection monitor
+
+description: Learn how to use Azure Network Watcher connection monitor to monitor network communication in a distributed environment.
tags: azure-resource-manager --++ Last updated 10/04/2022 #Customer intent: I need to monitor communication between one VM and another. If the communication fails, I need to know why so that I can resolve the problem.
-# Monitor network connectivity by using Connection Monitor
+
+# Azure Network Watcher connection monitor
> [!IMPORTANT] > As of July 1, 2021, you can no longer add new tests in an existing workspace or enable a new workspace in Network Performance Monitor (NPM). You're also no longer able to add new connection monitors in Connection Monitor (Classic). You can continue to use the tests and connection monitors that you've created prior to July 1, 2021.
network-watcher Connection Monitor Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-schema.md
Title: Azure Network Watcher Connection Monitor schemas | Microsoft Docs
-description: Understand the Tests data schema and the Path data schema of Azure Network Watcher Connection Monitor.
+ Title: Connection monitor schemas
+
+description: Understand the tests data schema and the path data schema of Azure Network Watcher connection monitor.
--++ Last updated 08/14/2021
network-watcher Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/data-residency.md
Title: Data residency for Azure Network Watcher | Microsoft Docs
+ Title: Data residency for Azure Network Watcher
description: This article will help you understand data residency for the Azure Network Watcher service. --++ Last updated 06/16/2021
network-watcher Diagnose Communication Problem Between Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-communication-problem-between-networks.md
Title: 'Tutorial - Diagnose communication problem between networks using the Azure portal'
+ Title: 'Tutorial: Diagnose communication problem between networks using the Azure portal'
description: In this tutorial, learn how to diagnose a communication problem between an Azure virtual network connected to an on-premises, or other virtual network, through an Azure virtual network gateway, using Network Watcher's VPN diagnostics capability. -
-# Customer intent: I need to determine why resources in a virtual network can't communicate with resources in a different network.
- -+ Last updated 01/07/2021
+# Customer intent: I need to determine why resources in a virtual network can't communicate with resources in a different network.
# Tutorial: Diagnose a communication problem between networks using the Azure portal
network-watcher Diagnose Vm Network Routing Problem Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-routing-problem-cli.md
documentationcenter: network-watcher tags: azure-resource-manager
-# Customer intent: I need to diagnose virtual machine (VM) network routing problem that prevents communication to different destinations.
network-watcher
Last updated 03/18/2022
+# Customer intent: I need to diagnose virtual machine (VM) network routing problem that prevents communication to different destinations.
# Diagnose a virtual machine network routing problem - Azure CLI
network-watcher Diagnose Vm Network Routing Problem Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-routing-problem-powershell.md
documentationcenter: network-watcher tags: azure-resource-manager
-# Customer intent: I need to diagnose virtual machine (VM) network routing problem that prevents communication to different destinations.
- network-watcher
Last updated 01/07/2021 -
+# Customer intent: I need to diagnose a virtual machine (VM) network routing problem that prevents communication to different destinations.
# Diagnose a virtual machine network routing problem - Azure PowerShell
network-watcher Diagnose Vm Network Routing Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-routing-problem.md
documentationcenter: network-watcher tags: azure-resource-manager
-# Customer intent: I need to diagnose virtual machine (VM) network routing problem that prevents communication to different destinations.
- network-watcher
Last updated 01/07/2021 -
+# Customer intent: I need to diagnose a virtual machine (VM) network routing problem that prevents communication to different destinations.
# Tutorial: Diagnose a virtual machine network routing problem using the Azure portal
network-watcher Enable Network Watcher Flow Log Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/enable-network-watcher-flow-log-settings.md
Title: Enable Azure Network Watcher description: Learn how to enable Network Watcher. --++ Last updated 05/30/2022
network-watcher Migrate To Connection Monitor From Connection Monitor Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/migrate-to-connection-monitor-from-connection-monitor-classic.md
Title: Migrate to Connection Monitor from Connection Monitor (classic)
description: Learn how to migrate to Connection Monitor from Connection Monitor (classic). -+ Last updated 06/30/2021
network-watcher Migrate To Connection Monitor From Network Performance Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/migrate-to-connection-monitor-from-network-performance-monitor.md
description: Learn how to migrate to Connection Monitor from Network Performance
-+ Last updated 06/30/2021
network-watcher Network Insights Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-insights-troubleshooting.md
Title: Azure Monitor Network Insights troubleshooting description: Troubleshooting steps for issues that may arise while using Network insights-+
network-watcher Network Watcher Alert Triggered Packet Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-alert-triggered-packet-capture.md
Title: Use packet capture to do proactive network monitoring with alerts - Azure Functions description: This article describes how to create an alert triggered packet capture with Azure Network Watcher ms.assetid: 75e6e7c4-b3ba-4173-8815-b00d7d824e11 Last updated 01/09/2023
network-watcher Network Watcher Analyze Nsg Flow Logs Graylog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-analyze-nsg-flow-logs-graylog.md
Title: Analyze Azure network security group flow logs - Graylog | Microsoft Docs
+ Title: Analyze Azure network security group flow logs - Graylog
description: Learn how to manage and analyze network security group flow logs in Azure using Network Watcher and Graylog. tags: azure-resource-manager Last updated 07/03/2021
network-watcher Network Watcher Connectivity Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-connectivity-cli.md
Title: Troubleshoot connections - Azure CLI
description: Learn how to use the connection troubleshoot capability of Azure Network Watcher using the Azure CLI. --++ Last updated 01/07/2021
network-watcher Network Watcher Connectivity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-connectivity-overview.md
Title: Introduction to Azure Network Watcher Connection Troubleshoot | Microsoft Docs
-description: This page provides an overview of the Network Watcher connection troubleshooting capability
+ Title: Introduction to connection troubleshoot
+
+description: This page provides an overview of Azure Network Watcher connection troubleshoot capability.
--++ Last updated 11/10/2022
-# Introduction to connection troubleshoot in Azure Network Watcher
+# Introduction to Azure Network Watcher connection troubleshoot
The connection troubleshoot feature of Network Watcher provides the capability to check a direct TCP connection from a virtual machine (VM) to another VM, a fully qualified domain name (FQDN), a URI, or an IPv4 address. Network scenarios are complex; they're implemented using network security groups, firewalls, user-defined routes, and resources provided by Azure. Complex configurations make troubleshooting connectivity issues challenging. Network Watcher helps reduce the amount of time it takes to find and detect connectivity issues. The results returned can provide insights into whether a connectivity issue is due to a platform or a user configuration issue. Connectivity can be checked with [PowerShell](network-watcher-connectivity-powershell.md), [Azure CLI](network-watcher-connectivity-cli.md), and [REST API](network-watcher-connectivity-rest.md).
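The check at the core of connection troubleshoot, a direct TCP connection attempt to a destination, can be sketched locally with Python's standard library. `check_tcp_connectivity` is a hypothetical helper for illustration, not part of any Azure SDK:

```python
import socket

def check_tcp_connectivity(host: str, port: int, timeout: float = 5.0) -> bool:
    """Attempt a direct TCP connection, similar in spirit to the probe
    that connection troubleshoot runs from a source VM. Returns True
    when the TCP handshake completes within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Refused, timed out, or unreachable -- the connection failed.
        return False
```

The real feature goes further, reporting per-hop latency and the platform or configuration issue that blocked the flow; this sketch only answers whether the handshake succeeds.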
network-watcher Network Watcher Connectivity Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-connectivity-portal.md
Title: Troubleshoot connections - Azure portal
description: Learn how to use the connection troubleshoot capability of Azure Network Watcher using the Azure portal. --++ Last updated 01/04/2021
network-watcher Network Watcher Connectivity Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-connectivity-powershell.md
Title: Troubleshoot connections - Azure PowerShell
description: Learn how to use the connection troubleshoot capability of Azure Network Watcher using PowerShell. --++ Last updated 01/07/2021
network-watcher Network Watcher Connectivity Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-connectivity-rest.md
Title: Troubleshoot connections - Azure REST API
description: Learn how to use the connection troubleshoot capability of Azure Network Watcher using the Azure REST API. -+ Last updated 01/07/2021
network-watcher Network Watcher Deep Packet Inspection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-deep-packet-inspection.md
Title: Packet inspection with Azure Network Watcher | Microsoft Docs
-description: This article describes how to use Network Watcher to perform deep packet inspection collected from a VM
+ Title: Packet inspection with Azure Network Watcher
+description: This article describes how to use Azure Network Watcher to perform deep packet inspection collected from a VM.
ms.assetid: 7b907d00-9c35-40f5-a61e-beb7b782276f -+ Last updated 01/07/2021
network-watcher Network Watcher Delete Nsg Flow Log Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-delete-nsg-flow-log-blobs.md
Title: Delete storage blobs for network security group flow logs in Azure Network Watcher | Microsoft Docs
+ Title: Delete storage blobs for network security group flow logs in Azure Network Watcher
description: This article explains how to delete the network security group flow log storage blobs that are outside their retention policy period in Azure Network Watcher. -+ Last updated 01/07/2021 - # Delete network security group flow log storage blobs in Network Watcher
network-watcher Network Watcher Diagnose On Premises Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-diagnose-on-premises-connectivity.md
Title: Diagnose on-premises connectivity via VPN gateway
description: This article describes how to diagnose on-premises connectivity via VPN gateway with Azure Network Watcher resource troubleshooting. ms.assetid: aeffbf3d-fd19-4d61-831d-a7114f7534f9 -+ Last updated 01/20/2021
network-watcher Network Watcher Intrusion Detection Open Source Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-intrusion-detection-open-source-tools.md
Title: Perform network intrusion detection with open source tools
description: This article describes how to use Azure Network Watcher and open source tools to perform network intrusion detection ms.assetid: 0f043f08-19e1-4125-98b0-3e335ba69681 -+ Last updated 09/15/2022
network-watcher Network Watcher Ip Flow Verify Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-ip-flow-verify-overview.md
Title: Introduction to IP flow verify in Azure Network Watcher | Microsoft Docs
-description: This page provides an overview of the Network Watcher IP flow verify capability
+ Title: Introduction to IP flow verify
+
+description: This page provides an overview of Azure Network Watcher IP flow verify capability.
--++ Last updated 10/04/2022
-# Introduction to IP flow verify in Azure Network Watcher
+# Introduction to Azure Network Watcher IP flow verify
IP flow verify checks if a packet is allowed or denied to or from a virtual machine. The information consists of direction, protocol, local IP, remote IP, local port, and remote port. If the packet is denied by a security group, the name of the rule that denied the packet is returned. While any source or destination IP can be chosen, IP flow verify helps administrators quickly diagnose connectivity issues from or to the internet and from or to the on-premises environment.
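How IP flow verify resolves a flow to an allow/deny decision plus the name of the deciding rule can be illustrated with a simplified sketch. Real NSG evaluation also matches direction, local address and port, service tags, and application security groups; the `Rule` shape below is hypothetical and covers only remote address, port, and protocol:

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass
class Rule:
    name: str
    priority: int        # lower number evaluates first
    access: str          # "Allow" or "Deny"
    protocol: str        # "TCP", "UDP", or "*"
    remote_prefix: str   # e.g. "0.0.0.0/0"
    remote_ports: range  # e.g. range(443, 444)

def verify_ip_flow(rules, protocol, remote_ip, remote_port):
    """Return (access, rule_name) for the first matching rule in
    priority order, mimicking how IP flow verify reports the rule
    that allowed or denied the packet."""
    for rule in sorted(rules, key=lambda r: r.priority):
        if rule.protocol not in ("*", protocol):
            continue
        if ip_address(remote_ip) not in ip_network(rule.remote_prefix):
            continue
        if remote_port not in rule.remote_ports:
            continue
        return rule.access, rule.name
    # No rule matched; NSGs end in a catch-all deny.
    return "Deny", "DefaultDeny"
```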
network-watcher Network Watcher Monitor With Azure Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-monitor-with-azure-automation.md
- Title: Troubleshoot and monitor VPN gateways - Azure Automation description: This article describes how to diagnose On-premises connectivity with Azure Automation and Network Watcher -+ Last updated 11/20/2020 - # Monitor VPN gateways with Network Watcher troubleshooting
network-watcher Network Watcher Next Hop Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-next-hop-overview.md
Title: Introduction to next hop in Azure Network Watcher | Microsoft Docs
-description: This article provides an overview of the Network Watcher next hop capability.
+ Title: Introduction to Azure Network Watcher next hop
+description: Learn about Azure Network Watcher next hop capability that you can use to diagnose virtual machine routing problems.
ms.assetid: febf7bca-e0b7-41d5-838f-a5a40ebc5aac --++ Last updated 01/29/2020
-# Use next hop to diagnose virtual machine routing problems
+# Introduction to Azure Network Watcher next hop
Traffic from a virtual machine (VM) is sent to a destination based on the effective routes associated with a network interface (NIC). Next hop gets the next hop type and IP address of a packet from a specific VM and NIC. Knowing the next hop helps you determine whether traffic is being directed to the intended destination, or whether the traffic is being sent nowhere. Improper configuration of routes, where traffic is directed to an on-premises location or a virtual appliance, can lead to connectivity issues. Next hop also returns the route table associated with the next hop. If the route is defined as a user-defined route, that route is returned. Otherwise, next hop returns **System Route**.
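The route selection that determines a packet's next hop is longest-prefix matching over the NIC's effective routes, which can be sketched with Python's `ipaddress` module. This is a simplification: Azure also applies tie-breaking between user-defined, BGP, and system routes of equal prefix length, which this sketch ignores:

```python
from ipaddress import ip_address, ip_network

def next_hop(effective_routes, destination):
    """Pick the route whose prefix most specifically matches the
    destination (longest-prefix match). Each route is a dict with a
    'prefix' and a 'next_hop_type' key; returns None when nothing
    matches (traffic would be dropped)."""
    dest = ip_address(destination)
    matches = [r for r in effective_routes
               if dest in ip_network(r["prefix"])]
    if not matches:
        return None
    return max(matches, key=lambda r: ip_network(r["prefix"]).prefixlen)
```

For example, a destination inside a /24 user-defined route wins over the broader /16 virtual network route and the 0.0.0.0/0 internet route.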
network-watcher Network Watcher Nsg Auditing Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-auditing-powershell.md
Title: Automate NSG auditing - Security group view
description: This page provides instructions on how to configure auditing of a Network Security Group - -+ Last updated 03/01/2022 - # Automate NSG auditing with Azure Network Watcher Security group view
network-watcher Network Watcher Nsg Flow Logging Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-azure-resource-manager.md
Title: Network Watcher - Create NSG flow logs using an Azure Resource Manager template description: Use an Azure Resource Manager template and PowerShell to easily set up NSG Flow Logs. tags: azure-resource-manager -+ Last updated 02/09/2022
network-watcher Network Watcher Nsg Flow Logging Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-cli.md
-+ Last updated 12/09/2021
network-watcher Network Watcher Nsg Flow Logging Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-overview.md
Title: Introduction to flow logging for NSGs
description: This article explains how to use the NSG flow logs feature of Azure Network Watcher. --++ Last updated 10/06/2022
network-watcher Network Watcher Nsg Flow Logging Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-powershell.md
Title: Manage NSG Flow logs - Azure PowerShell
+ Title: Manage NSG flow logs - Azure PowerShell
-description: This page explains how to manage Network Security Group Flow logs in Azure Network Watcher with Azure PowerShell
+description: This page explains how to manage network security group flow logs in Azure Network Watcher using Azure PowerShell.
-+ Last updated 12/24/2021
-# Configuring Network Security Group Flow logs with Azure PowerShell
+# Configure network security group flow logs using Azure PowerShell
> [!div class="op_single_selector"] > - [Azure portal](network-watcher-nsg-flow-logging-portal.md)
network-watcher Network Watcher Nsg Flow Logging Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-rest.md
Title: Manage NSG flow logs - Azure REST API
description: This page explains how to manage Network Security Group flow logs in Azure Network Watcher with REST API -+ Last updated 07/13/2021 - -
-# Configuring Network Security Group flow logs using REST API
+# Configure network security group flow logs using REST API
> [!div class="op_single_selector"] > - [Azure portal](network-watcher-nsg-flow-logging-portal.md)
network-watcher Network Watcher Nsg Grafana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-grafana.md
Title: Manage NSG Flow Logs using Grafana
description: Manage and analyze Network Security Group Flow Logs in Azure using Network Watcher and Grafana. tags: azure-resource-manager Last updated 09/15/2022
-# Manage and analyze Network Security Group flow logs using Network Watcher and Grafana
+# Manage and analyze network security group flow logs using Network Watcher and Grafana
[Network Security Group (NSG) flow logs](network-watcher-nsg-flow-logging-overview.md) provide information that can be used to understand ingress and egress IP traffic on network interfaces. These flow logs show outbound and inbound flows on a per NSG rule basis, the NIC the flow applies to, 5-tuple information about the flow (source/destination IP, source/destination port, protocol), and whether the traffic was allowed or denied.
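Each flow in the log is a comma-separated tuple; a minimal parser shows the field order of the version 2 schema (the sample tuple in the test uses made-up values, and begin records leave the traffic counters empty):

```python
def parse_flow_tuple(raw: str) -> dict:
    """Split one NSG flow log version 2 tuple into named fields:
    timestamp, 5-tuple, direction (I/O), decision (A/D), flow state
    (B/C/E), then packet/byte counters in each direction."""
    keys = ["timestamp", "src_ip", "dst_ip", "src_port", "dst_port",
            "protocol", "direction", "decision", "flow_state",
            "packets_src_to_dst", "bytes_src_to_dst",
            "packets_dst_to_src", "bytes_dst_to_src"]
    return dict(zip(keys, raw.split(",")))
```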
network-watcher Network Watcher Packet Capture Manage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-manage-cli.md
Title: Manage packet captures with Azure Network Watcher - Azure CLI | Microsoft Docs
-description: This page explains how to manage the packet capture feature of Network Watcher using the Azure CLI
+ Title: Manage packet captures in VMs with Azure Network Watcher - Azure CLI
+description: Learn how to manage packet captures in virtual machines with the packet capture feature of Network Watcher using the Azure CLI.
ms.assetid: cb0c1d10-f7f2-4c34-b08c-f73452430be8 -+ Last updated 12/09/2021
network-watcher Network Watcher Packet Capture Manage Portal Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-manage-portal-vmss.md
Title: Manage packet captures in Virtual machine scale sets with Azure Network Watcher - Azure portal
+ Title: Manage packet captures in virtual machine scale sets - Azure portal
-description: Learn how to manage the packet capture feature of Network Watcher in virtual machine scale set using the Azure portal.
+description: Learn how to manage packet captures in virtual machine scale sets with the packet capture feature of Network Watcher using the Azure portal.
-+ Last updated 06/07/2022
network-watcher Network Watcher Packet Capture Manage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-manage-portal.md
Title: Manage packet captures in VMs with Network Watcher - Azure portal-
+ Title: Manage packet captures in VMs with Azure Network Watcher - Azure portal
description: Learn how to manage packet captures in virtual machines with the packet capture feature of Network Watcher using the Azure portal. -+ Last updated 01/04/2023
network-watcher Network Watcher Packet Capture Manage Powershell Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-manage-powershell-vmss.md
Title: Manage packet captures in Virtual machine scale sets - Azure PowerShell
+ Title: Manage packet captures in virtual machine scale sets - Azure PowerShell
-description: This page explains how to manage the packet capture feature of Network Watcher in virtual machine scale set using PowerShell
+description: Learn how to manage packet captures in virtual machine scale sets with the packet capture feature of Network Watcher using PowerShell.
-+ Last updated 06/07/2022
network-watcher Network Watcher Packet Capture Manage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-manage-powershell.md
Title: Manage packet captures - Azure PowerShell-
-description: This page explains how to manage the packet capture feature of Network Watcher using PowerShell
+ Title: Manage packet captures in VMs with Azure Network Watcher - Azure PowerShell
+description: Learn how to manage packet captures in virtual machines with the packet capture feature of Network Watcher using PowerShell.
-+ Last updated 02/01/2021
network-watcher Network Watcher Packet Capture Manage Rest Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-manage-rest-vmss.md
Title: Manage packet captures in Virtual machine scale sets with Azure Network Watcher- REST API | Microsoft Docs
-description: This page explains how to manage the packet capture feature of virtual machine scale set in Network Watcher using Azure REST API
+ Title: Manage packet captures in virtual machine scale sets - REST API
+
+description: Learn how to manage packet captures in virtual machine scale sets with the packet capture feature of Network Watcher using Azure REST API.
-+ Last updated 10/04/2022
network-watcher Network Watcher Packet Capture Manage Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-manage-rest.md
Title: Manage packet captures with Azure Network Watcher - REST API | Microsoft Docs
-description: This page explains how to manage the packet capture feature of Network Watcher using Azure REST API
+ Title: Manage packet captures in VMs with Azure Network Watcher - REST API
+description: Learn how to manage packet captures in virtual machines with the packet capture feature of Network Watcher using Azure REST API.
-+ Last updated 05/28/2021 - # Manage packet captures with Azure Network Watcher using Azure REST API
network-watcher Network Watcher Packet Capture Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-overview.md
Title: Introduction to Packet capture in Azure Network Watcher | Microsoft Docs
-description: This page provides an overview of the Network Watcher packet capture's capability
+ Title: Introduction to packet capture in Azure Network Watcher
+description: Learn about the Network Watcher packet capture capability.
--++ Last updated 06/07/2022
-# Introduction to variable packet capture in Azure Network Watcher
+# Introduction to packet capture in Azure Network Watcher
> [!Important]
-> Packet capture is now also available for **Virtual Machine Scale Sets**. To check it out, visit [Manage packet capture in the Azure portal for VMSS](network-watcher-packet-capture-manage-portal-vmss.md).
+> Packet capture is now also available for **virtual machine scale sets**. To check it out, visit [Manage packet captures in virtual machine scale sets with Azure Network Watcher using the portal](network-watcher-packet-capture-manage-portal-vmss.md).
Network Watcher variable packet capture allows you to create packet capture sessions to track traffic to and from a virtual machine. Packet capture helps to diagnose network anomalies both reactively and proactively. Other uses include gathering network statistics, gaining information on network intrusions, debugging client-server communications, and more.
network-watcher Network Watcher Read Nsg Flow Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-read-nsg-flow-logs.md
Title: Read NSG flow logs | Microsoft Docs
-description: Learn how to use Azure PowerShell to parse Network Security Group flow logs, which are created hourly and updated every few minutes in Azure Network Watcher.
+ Title: Read NSG flow logs
+description: Learn how to use Azure PowerShell to parse network security group flow logs, which are created hourly and updated every few minutes in Azure Network Watcher.
-+ Last updated 02/09/2021
network-watcher Network Watcher Security Group View Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-security-group-view-cli.md
Title: Analyze network security with Security Group View - Azure CLI
description: This article describes how to use the Azure CLI to analyze a virtual machine's security with Security Group View. -+ Last updated 12/09/2021
network-watcher Network Watcher Security Group View Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-security-group-view-overview.md
Title: Introduction to Effective security rules view in Azure Network Watcher | Microsoft Docs
-description: This page provides an overview of the Network Watcher - Effective security rules view capability
+ Title: Introduction to effective security rules view in Azure Network Watcher
+description: Learn about Azure Network Watcher effective security rules view capability.
--++ Last updated 03/18/2022
network-watcher Network Watcher Security Group View Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-security-group-view-powershell.md
Title: Analyze network security - Security Group View - Azure PowerShell
description: This article describes how to use PowerShell to analyze a virtual machine's security with Security Group View. -+ Last updated 11/20/2020
network-watcher Network Watcher Security Group View Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-security-group-view-rest.md
Title: Analyze network security - Security Group View - Azure REST API
description: This article describes how to use the Azure REST API to analyze a virtual machine's security with Security Group View. -+ Last updated 03/01/2022
network-watcher Network Watcher Troubleshoot Manage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-troubleshoot-manage-cli.md
Title: Troubleshoot Azure VNET Gateway and Connections - Azure CLI
+ Title: Troubleshoot Azure VNet gateway and connections - Azure CLI
-description: This page explains how to use the Azure Network Watcher troubleshoot Azure CLI
+description: This page explains how to use the Azure Network Watcher troubleshoot capability using the Azure CLI.
--++ Last updated 07/25/2022 -
-# Troubleshoot Virtual Network Gateway and Connections using Azure Network Watcher Azure CLI
+# Troubleshoot virtual network gateway and connections with Azure Network Watcher using Azure CLI
> [!div class="op_single_selector"] > - [Portal](diagnose-communication-problem-between-networks.md)
network-watcher Network Watcher Troubleshoot Manage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-troubleshoot-manage-powershell.md
Title: Troubleshoot Azure VNet gateway and connections - Azure PowerShell
-description: This page explains how to use the Azure Network Watcher troubleshoot PowerShell cmdlet
+description: This page explains how to use the Azure Network Watcher troubleshoot capability using PowerShell.
--++ Last updated 11/22/2022
-# Troubleshoot Virtual Network Gateway and Connections using Azure Network Watcher PowerShell
+# Troubleshoot virtual network gateway and connections with Azure Network Watcher using PowerShell
> [!div class="op_single_selector"] > - [Portal](diagnose-communication-problem-between-networks.md)
network-watcher Network Watcher Troubleshoot Manage Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-troubleshoot-manage-rest.md
Title: Troubleshoot VNET Gateway and Connections - Azure REST API
+ Title: Troubleshoot VNet gateway and connections - Azure REST API
-description: This page explains how to troubleshoot Virtual Network Gateways and Connections with Azure Network Watcher using REST
+description: This page explains how to troubleshoot virtual network gateway and connections with Azure Network Watcher using REST API.
--++ Last updated 01/07/2021 -
-# Troubleshoot Virtual Network gateway and Connections using Azure Network Watcher
+# Troubleshoot virtual network gateway and connections with Azure Network Watcher using REST API
> [!div class="op_single_selector"] > - [Portal](diagnose-communication-problem-between-networks.md)
network-watcher Network Watcher Troubleshoot Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-troubleshoot-overview.md
Title: Introduction to resource troubleshooting
+ Title: Introduction to VPN troubleshoot
-description: This page provides an overview of the Network Watcher resource troubleshooting capabilities
+description: This page provides an overview of Azure Network Watcher VPN troubleshoot capability.
--++ Last updated 03/31/2022
-# Introduction to resource troubleshooting in Azure Network Watcher
+# Introduction to virtual network gateway troubleshooting in Azure Network Watcher
-Virtual Network Gateways provide connectivity between on-premises resources and other virtual networks within Azure. Monitoring gateways and their connections are critical to ensuring communication is not broken. Network Watcher provides the capability to troubleshoot gateways and connections. The capability can be called through the portal, PowerShell, Azure CLI, or REST API. When called, Network Watcher diagnoses the health of the gateway, or connection, and returns the appropriate results. The request is a long running transaction. The results are returned once the diagnosis is complete.
+Virtual network gateways provide connectivity between on-premises resources and other virtual networks within Azure. Monitoring gateways and their connections is critical to ensuring communication is not broken. Network Watcher provides the capability to troubleshoot gateways and connections. The capability can be called through the portal, PowerShell, Azure CLI, or REST API. When called, Network Watcher diagnoses the health of the gateway or connection and returns the appropriate results. The request is a long-running transaction; the results are returned once the diagnosis is complete.
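The long-running-transaction pattern described above (start the diagnosis, then poll until results are ready) can be sketched generically; `poll_status` is a placeholder for whatever status call a client would make, not a real SDK method:

```python
import time

def wait_for_diagnosis(poll_status, interval=1.0, timeout=30.0):
    """Poll a long-running diagnosis until it leaves the 'Running'
    state, returning the terminal status string. Raises TimeoutError
    if no terminal state is reached within the timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = poll_status()
        if status != "Running":
            return status
        time.sleep(interval)
    raise TimeoutError("diagnosis did not complete in time")
```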
![Screenshot shows Network Watcher VPN Diagnostics.][2]
network-watcher Network Watcher Using Open Source Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-using-open-source-tools.md
Title: Visualize network traffic patterns with open source tools
description: This page describes how to use Network Watcher packet capture with Capanalysis to visualize traffic patterns to and from your VMs. -+ Last updated 02/25/2021
network-watcher Network Watcher Visualize Nsg Flow Logs Open Source Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-visualize-nsg-flow-logs-open-source-tools.md
Title: Visualize NSG flow logs - Elastic Stack
description: Manage and analyze Network Security Group Flow Logs in Azure using Network Watcher and Elastic Stack. -+ Last updated 09/15/2022
network-watcher Network Watcher Visualize Nsg Flow Logs Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-visualize-nsg-flow-logs-power-bi.md
Title: Visualizing Azure NSG flow logs - Power BI
description: Learn how to use Power BI to visualize Network Security Group flow logs to allow you to view information about IP traffic in Azure Network Watcher. -+ Last updated 06/23/2021
network-watcher Nsg Flow Logs Policy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/nsg-flow-logs-policy-portal.md
Title: QuickStart - Deploy and manage NSG Flow Logs using Azure Policy
description: This article explains how to use the built-in policies to manage the deployment of NSG flow logs - --++ Last updated 02/09/2022
network-watcher Quickstart Configure Network Security Group Flow Logs From Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/quickstart-configure-network-security-group-flow-logs-from-arm-template.md
Title: 'Quickstart: Configure Network Watcher network security group flow logs by using an Azure Resource Manager template (ARM template)'
-description: Learn how to enable network security group (NSG) flow logs programmatically by using an Azure Resource Manager template (ARM template) and Azure PowerShell.
+ Title: 'Quickstart: Configure network security group flow logs using an ARM template'
+description: Learn how to enable network security group (NSG) flow logs programmatically using an Azure Resource Manager (ARM) template and Azure PowerShell.
#Customer intent: I need to enable the network security group flow logs by using an Azure Resource Manager template.
-# Quickstart: Configure network security group flow logs by using an ARM template
+# Quickstart: Configure network security group flow logs using an Azure Resource Manager (ARM) template
-In this quickstart, you learn how to enable [network security group (NSG) flow logs](network-watcher-nsg-flow-logging-overview.md) by using an [Azure Resource Manager](../azure-resource-manager/management/overview.md) template (ARM template) and Azure PowerShell.
+In this quickstart, you learn how to enable [network security group (NSG) flow logs](network-watcher-nsg-flow-logging-overview.md) using an [Azure Resource Manager](../azure-resource-manager/management/overview.md) (ARM) template and Azure PowerShell.
[!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]
network-watcher Quickstart Configure Network Security Group Flow Logs From Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/quickstart-configure-network-security-group-flow-logs-from-bicep.md
Title: 'Quickstart: Configure Network Watcher network security group flow logs by using a Bicep file'
-description: Learn how to enable network security group (NSG) flow logs programmatically by using Bicep and Azure PowerShell.
+ Title: 'Quickstart: Configure Network Watcher network security group flow logs using a Bicep file'
+description: Learn how to enable network security group (NSG) flow logs programmatically using Bicep and Azure PowerShell.
#Customer intent: I need to enable the network security group flow logs by using a Bicep file.
-# Quickstart: Configure network security group flow logs by using a Bicep file
+# Quickstart: Configure network security group flow logs using a Bicep file
In this quickstart, you learn how to enable [network security group (NSG) flow logs](network-watcher-nsg-flow-logging-overview.md) by using a Bicep file.
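A minimal sketch of the flow logs resource a Bicep file might declare — the symbolic references `nsg` and `storage` (assumed to be existing resources declared elsewhere in the file) and the API version are illustrative assumptions:

```bicep
resource flowLog 'Microsoft.Network/networkWatchers/flowLogs@2022-05-01' = {
  name: 'NetworkWatcher_eastus/myFlowLog'
  location: 'eastus'
  properties: {
    targetResourceId: nsg.id
    storageId: storage.id
    enabled: true
    retentionPolicy: {
      days: 7
      enabled: true
    }
  }
}
```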
network-watcher Required Rbac Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/required-rbac-permissions.md
description: Learn which Azure role-based access control permissions are require
-+ Last updated 10/07/2022 - # Azure role-based access control permissions required to use Network Watcher capabilities
network-watcher Resource Move https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/resource-move.md
Title: Move Azure Network Watcher resources | Microsoft Docs
+ Title: Move Azure Network Watcher resources
description: Move Azure Network Watcher resources across regions --++ Last updated 06/10/2021
network-watcher Supported Region Traffic Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/supported-region-traffic-analytics.md
Title: Azure Traffic Analytics supported regions
-description: This article provides the list of Traffic Analytics supported regions.
+ Title: Traffic analytics supported regions
+
+description: This article provides the list of Azure Network Watcher traffic analytics supported regions.
-+ Last updated 06/15/2022
-# Supported regions: NSG
+
+# Azure Network Watcher traffic analytics supported regions
This article provides the list of regions supported by traffic analytics, for both NSGs and Log Analytics workspaces.
+## Supported regions: NSG
+ You can use traffic analytics for NSGs in any of the following supported regions: :::row::: :::column span="":::
network-watcher Traffic Analytics Policy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/traffic-analytics-policy-portal.md
- Title: Deploy and manage Traffic Analytics using Azure Policy
+ Title: Deploy and manage traffic analytics using Azure Policy
-description: This article explains how to use the built-in policies to manage the deployment of Traffic Analytics
+description: This article explains how to use Azure built-in policies to manage the deployment of traffic analytics.
--++ Last updated 02/09/2022
-# Deploy and manage Traffic Analytics using Azure Policy
+# Deploy and manage Azure Network Watcher traffic analytics using Azure Policy
Azure Policy helps to enforce organizational standards and to assess compliance at scale. Common use cases for Azure Policy include implementing governance for resource consistency, regulatory compliance, security, cost, and management. In this article, we cover three built-in policies available for [Traffic Analytics](./traffic-analytics.md) to manage your setup.
network-watcher Traffic Analytics Schema Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/traffic-analytics-schema-update.md
 Title: Azure Traffic Analytics schema update - March 2020
+ Title: Traffic analytics schema update - March 2020
+ description: Sample queries with new fields in the Traffic Analytics schema. Use these three examples to replace the deprecated fields with the new ones. -+ Last updated 06/20/2022
network-watcher Traffic Analytics Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/traffic-analytics-schema.md
Title: Azure traffic analytics schema
+ Title: Traffic analytics schema
+ description: Understand schema of Traffic Analytics to analyze Azure network security group flow logs. -+ Last updated 03/29/2022
-# Schema and data aggregation in Traffic Analytics
+# Schema and data aggregation in Azure Network Watcher traffic analytics
Traffic Analytics is a cloud-based solution that provides visibility into user and application activity in cloud networks. Traffic Analytics analyzes Network Watcher network security group (NSG) flow logs to provide insights into traffic flow in your Azure cloud. With traffic analytics, you can:
network-watcher Traffic Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/traffic-analytics.md
Title: Azure traffic analytics
-description: Learn what traffic analytics is, and how to use traffic analytics for viewing network activity, securing networks, and optimizing performance.
+ Title: Traffic analytics
+
+description: Learn what Azure Network Watcher traffic analytics is, and how to use it for viewing network activity, securing networks, and optimizing performance.
-# Traffic analytics
+# Azure Network Watcher traffic analytics
Traffic analytics is a cloud-based solution that provides visibility into user and application activity in your cloud networks. Specifically, traffic analytics analyzes Azure Network Watcher network security group (NSG) flow logs to provide insights into traffic flow in your Azure cloud. With traffic analytics, you can:
network-watcher Usage Scenarios Traffic Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/usage-scenarios-traffic-analytics.md
Title: Usage scenarios of Azure Traffic Analytics
-description: This article describes the usage scenarios of Traffic Analytics.
+ Title: Usage scenarios of traffic analytics
+
+description: This article describes the usage scenarios of Azure Network Watcher traffic analytics.
--++ Last updated 05/30/2022
-# Usage scenarios
+
+# Usage scenarios of Azure Network Watcher traffic analytics
Some of the insights you might want to gain after traffic analytics is fully configured are as follows:
network-watcher View Network Topology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/view-network-topology.md
Title: View Azure virtual network topology | Microsoft Docs
+ Title: View Azure virtual network topology
description: Learn how to view the resources in a virtual network, and the relationships between the resources. Last updated 11/11/2022
network-watcher View Relative Latencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/view-relative-latencies.md
Last updated 04/20/2022 - + # View relative latency to Azure regions from specific locations > [!WARNING]
networking Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/fundamentals/networking-overview.md
ExpressRoute enables you to extend your on-premises networks into the Microsoft
:::image type="content" source="./media/networking-overview/expressroute-connection-overview.png" alt-text="Azure ExpressRoute" border="false"::: ### <a name="vpngateway"></a>VPN Gateway
-VPN Gateway helps you create encrypted cross-premises connections to your virtual network from on-premises locations or create encrypted connections between VNets. There are different configurations available for VPN Gateway connections, such as site-to-site, point-to-site, and VNet-to-VNet.
-The following diagram illustrates multiple site-to-site VPN connections to the same virtual network.
+VPN Gateway helps you create encrypted cross-premises connections to your virtual network from on-premises locations, or create encrypted connections between VNets. There are different configurations available for VPN Gateway connections. Some of the main features include:
+* Site-to-site VPN connectivity
+* Point-to-site VPN connectivity
+* VNet-to-VNet VPN connectivity
-For more information about different types of VPN connections, see [What is VPN Gateway?](../../vpn-gateway/vpn-gateway-about-vpngateways.md).
+The following diagram illustrates multiple site-to-site VPN connections to the same virtual network. To view more connection diagrams, see [VPN Gateway - design](../../vpn-gateway/design.md). For more information about VPN Gateway, see [What is VPN Gateway?](../../vpn-gateway/vpn-gateway-about-vpngateways.md)
+ ### <a name="virtualwan"></a>Virtual WAN
-Azure Virtual WAN is a networking service that provides optimized and automated branch connectivity to, and through, Azure. Azure regions serve as hubs that you can choose to connect your branches to. You can leverage the Azure backbone to also connect branches for branch-to-VNet connectivity.
-Azure Virtual WAN brings together many Azure cloud connectivity services such as site-to-site VPN, ExpressRoute, and point-to-site user VPN into a single operational interface. Connectivity to Azure VNets is established by using virtual network connections. For more information, see [What is Azure Virtual WAN?](../../virtual-wan/virtual-wan-about.md).
+Azure Virtual WAN is a networking service that brings many networking, security, and routing functionalities together to provide a single operational interface. Connectivity to Azure VNets is established by using virtual network connections. Some of the main features include:
+
+* Branch connectivity (via connectivity automation from Virtual WAN Partner devices such as SD-WAN or VPN CPE)
+* Site-to-site VPN connectivity
+* Remote user VPN connectivity (point-to-site)
+* Private connectivity (ExpressRoute)
+* Intra-cloud connectivity (transitive connectivity for virtual networks)
+* VPN ExpressRoute inter-connectivity
+* Routing, Azure Firewall, and encryption for private connectivity
+
+For more information, see [What is Azure Virtual WAN?](../../virtual-wan/virtual-wan-about.md)
:::image type="content" source="../../virtual-wan/media/virtual-wan-about/virtual-wan-diagram.png" alt-text="Virtual WAN diagram." lightbox="../../virtual-wan/media/virtual-wan-about/virtual-wan-diagram.png":::
Azure Virtual WAN brings together many Azure cloud connectivity services such as
Azure DNS is a hosting service for DNS domains that provides name resolution by using Microsoft Azure infrastructure. By hosting your domains in Azure, you can manage your DNS records by using the same credentials, APIs, tools, and billing as your other Azure services. For more information, see [What is Azure DNS?](../../dns/dns-overview.md). ### <a name="bastion"></a>Azure Bastion
-The Azure Bastion service is a fully platform-managed PaaS service that you provision inside your virtual network. It provides secure and seamless RDP/SSH connectivity to your virtual machines directly in the Azure portal over TLS. When you connect via Azure Bastion, your virtual machines do not need a public IP address. For more information, see [What is Azure Bastion?](../../bastion/bastion-overview.md).
+Azure Bastion is a service you can deploy that lets you connect to a virtual machine using your browser and the Azure portal, or via the native SSH or RDP client already installed on your local computer. The Azure Bastion service is a fully platform-managed PaaS service that you provision inside your virtual network. It provides secure and seamless RDP/SSH connectivity to your virtual machines directly from the Azure portal over TLS. When you connect via Azure Bastion, your virtual machines don't need a public IP address, agent, or special client software. For more information, see [What is Azure Bastion?](../../bastion/bastion-overview.md)
+ ### <a name="nat"></a>Virtual network NAT Gateway Virtual Network NAT (network address translation) simplifies outbound-only Internet connectivity for virtual networks. When configured on a subnet, all outbound connectivity uses your specified static public IP addresses. Outbound connectivity is possible without load balancer or public IP addresses directly attached to virtual machines.
openshift Howto Restrict Egress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-restrict-egress.md
VMUSERNAME=aroadmin
az vm create --name ubuntu-jump \
    --resource-group $RESOURCEGROUP \
-    --ssh-key-values ~/.ssh/id_rsa.pub \
+    --generate-ssh-keys \
    --admin-username $VMUSERNAME \
    --image UbuntuLTS \
    --subnet $JUMPSUBNET \
postgresql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-monitoring.md
Introducing Enhanced Metrics for Azure Database for PostgreSQL Flexible Server t
^ **Max Connections** here represents the configured value for the _max_connections_ server parameter, and this metric is polled every 30 minutes.
+#### Considerations when using the enhanced metrics
+
+- There is a **50 database** limit on metrics with the `database name` dimension.
+  * On the **Burstable** SKU, this limit is 10 databases.
+- The `database name` dimension limit is applied on the OID column (in other words, the order of creation of the database).
+- The `database name` in the metrics dimension is **case insensitive**. Therefore, metrics for the same database name in varying case (for example, _foo_, _FoO_, _FOO_) are merged and may not show accurate data.
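Because the dimension is case insensitive, same-named databases in varying case collapse into one metric series. A minimal Python sketch of that merge behavior, using hypothetical sample values:

```python
from collections import defaultdict

# Hypothetical per-database metric samples; three names differ only by case.
samples = [("foo", 120), ("FoO", 80), ("FOO", 50), ("bar", 10)]

def merge_case_insensitive(samples):
    # A case-insensitive dimension folds 'foo', 'FoO', and 'FOO' into a
    # single series, so the per-case values are summed together.
    merged = defaultdict(int)
    for name, value in samples:
        merged[name.lower()] += value
    return dict(merged)

print(merge_case_insensitive(samples))  # {'foo': 250, 'bar': 10}
```

This is why the merged series "may not show accurate data": the per-database breakdown is lost once the names are folded together.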
+ ## Autovacuum metrics
Autovacuum metrics can be used to monitor and tune autovacuum performance for Azure Database for PostgreSQL flexible server. Each metric is emitted at a **30 minute** frequency, and has up to **93 days** of retention. Customers can configure alerts on the metrics and can also access the new metrics dimensions, to split and filter the metrics data on database name.
Autovaccum metrics can be used to monitor and tune autovaccum performance for Az
|**User Tables Vacuumed** (Preview) |tables_vacuumed_user_tables |Count|Number of user only tables that have been vacuumed in this database |DatabaseName|No | |**Vacuum Counter User Tables** (Preview) |vacuum_count_user_tables |Count|Number of times user only tables have been manually vacuumed in this database (not counting VACUUM FULL)|DatabaseName|No |
+#### Considerations when using the autovacuum metrics
+- There is a **30 database** limit on metrics with the `database name` dimension.
+  * On the **Burstable** SKU, this limit is 10 databases.
+- The `database name` dimension limit is applied on the OID column (in other words, the order of creation of the database).
-#### Applying filters and splitting on metrics with dimension
+## Applying filters and splitting on metrics with dimension
In the above list of metrics, some metrics have dimensions such as `database name` and `state`. [Filtering](../../azure-monitor/essentials/metrics-charts.md#filters) and [Splitting](../../azure-monitor/essentials/metrics-charts.md#apply-splitting) are allowed for the metrics that have dimensions. These features show how various metric segments ("dimension values") affect the overall value of the metric. You can use them to identify possible outliers.
Here in this example below, we have done **splitting** by `State` dimension and
For more details on setting-up charts with dimensional metrics, see [Metric chart examples](../../azure-monitor/essentials/metric-chart-samples.md)
-#### Considerations when using the enhanced metrics
-
-- There is **50 database** limit on metrics with `database name` dimension.
- * On **Burstable** SKU - this limit is 10 `database name` dimension
-- `database name` dimension limit is applied on OiD column (in other words Order-of-Creation of the database)
-- The `database name` in metrics dimension is **case insensitive**. Therefore the metrics for same database names in varying case (_ex. foo, FoO, FOO_) will be merged, and may not show accurate data.
-
## Server logs
In addition to the metrics, Azure Database for PostgreSQL also allows you to configure and access PostgreSQL standard logs. To learn more about logs, visit the [logging concepts doc](concepts-logging.md).
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
One advantage of running your workload in Azure is global reach. The flexible se
| East US 2 | :heavy_check_mark: | :x: $ | :heavy_check_mark: | :heavy_check_mark: | | France Central | :heavy_check_mark: | :x: $ | :heavy_check_mark: | :heavy_check_mark: | | France South | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: |
-| Germany West Central | :x: $$ | :x: $ | :heavy_check_mark: | :x: |
+| Germany West Central | :x: $$ | :x: $ | :x: $ | :x: |
| Japan East | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Japan West | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: | | Jio India West | :heavy_check_mark: (v3 only)| :x: | :heavy_check_mark: | :x: |
One advantage of running your workload in Azure is global reach. The flexible se
| West Central US | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: | | West Europe | :heavy_check_mark: | :x: $ | :heavy_check_mark: | :heavy_check_mark: | | West US | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: |
-| West US 2 | :x: $$ | :x: $ | :heavy_check_mark: | :heavy_check_mark:|
+| West US 2 | :x: $$ | :x: $ | :x: $ | :heavy_check_mark:|
| West US 3 | :heavy_check_mark: | :heavy_check_mark: ** | :heavy_check_mark: | :x: | $ New Zone-redundant high availability deployments are temporarily blocked in these regions. Already provisioned HA servers are fully supported.
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md
Last updated 11/05/2022
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-This page provides latest news and updates regarding feature additions, engine versions support, extensions, and any other announcements relevant for Flexible Server - PostgreSQL.
+This page provides the latest news and updates regarding feature additions, engine version support, extensions, and any other announcements relevant to Flexible Server - PostgreSQL.
+
+## Release: February 2023
+* Public preview of [Autovacuum Metrics](./concepts-monitoring.md#autovacuum-metrics) for Azure Database for PostgreSQL – Flexible Server.
## Release: December 2022
This page provides latest news and updates regarding feature additions, engine v
## Release: November 2022
-* Public preview of [Enhanced Metrics](./concepts-monitoring.md) for Azure Database for PostgreSQL – Flexible Server
+* Public preview of [Enhanced Metrics](./concepts-monitoring.md#enhanced-metrics) for Azure Database for PostgreSQL – Flexible Server
* Support for [minor versions](./concepts-supported-versions.md) 14.5, 13.8, 12.12, 11.17. <sup>$</sup> * General availability of Azure Database for PostgreSQL - Flexible Server in China North 3 & China East 3 Regions.
private-5g-core Collect Required Information For Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/collect-required-information-for-service.md
You can specify a QoS for this service, or inherit the parent SIM Policy's QoS.
| The maximum bit rate (MBR) for uplink traffic (traveling away from user equipment (UEs)) across all SDFs that match data flow policy rules configured on this service. The MBR must be given in the following form: `<Quantity>` `<Unit>` </br></br>`<Unit>` must be one of the following: </br></br>- *bps* </br>- *Kbps* </br>- *Mbps* </br>- *Gbps* </br>- *Tbps* </br></br>`<Quantity>` is the quantity of your chosen unit. </br></br>For example, `10 Mbps`. | **Maximum bit rate (MBR) - Uplink** | Yes| | The MBR for downlink traffic (traveling towards UEs) across all SDFs that match data flow policy rules configured on this service. The MBR must be given in the following form: `<Quantity>` `<Unit>` </br></br>`<Unit>` must be one of the following: </br></br>- *bps* </br>- *Kbps* </br>- *Mbps* </br>- *Gbps* </br>- *Tbps* </br></br>`<Quantity>` is the quantity of your chosen unit. </br></br>For example, `10 Mbps`. | **Maximum bit rate (MBR) - Downlink** | Yes| | The default Allocation and Retention Policy (ARP) priority level for this service. Flows with a higher ARP priority level preempt flows with a lower ARP priority level. The ARP priority level must be an integer between 1 (highest priority) and 15 (lowest priority). | **Allocation and Retention Priority level** |No. Defaults to 9.|
-| The default 5G QoS Indicator (5QI) or QoS class identifier (QCI) value for this service. The 5QI (for 5G networks) or QCI (for 4G networks) value identifies a set of QoS characteristics that control QoS forwarding treatment for QoS flows or EPS bearers. </br></br>We recommend you choose a 5QI or QCI value that corresponds to a non-GBR QoS flow or EPS bearer. These values are in the following ranges: 5-9; 69-70; 79-80. For more details, see 3GPP TS 23.501 for 5QI or 3GPP TS 23.203 for QCI.</br></br>You can also choose a non-standardized 5QI or QCI value.</p><p>Azure Private 5G Core doesn't support 5QI or QCI values corresponding to GBR or delay-critical GBR QoS flows or EPS bearers. Don't use a value in any of the following ranges: 1-4; 65-67; 71-76; 82-85. | **5QI/QCI** |No. Defaults to 9.|
+| The default 5G QoS Indicator (5QI) or QoS class identifier (QCI) value for this service. The 5QI (for 5G networks) or QCI (for 4G networks) value identifies a set of QoS characteristics that control QoS forwarding treatment for QoS flows or EPS bearers. </br></br>You can choose a standardized or a non-standardized 5QI or QCI value. For more details, see 3GPP TS 23.501 for 5QI or 3GPP TS 23.203 for QCI. | **5QI/QCI** |No. Defaults to 9.|
| The default preemption capability for QoS flows or EPS bearers for this service. The preemption capability of a QoS flow or EPS bearer controls whether it can preempt another QoS flow or EPS bearer with a lower priority level. You can choose from the following values: </br></br>- **May not preempt** </br>- **May preempt** | **Preemption capability** |No. Defaults to **May not preempt**.| | The default preemption vulnerability for QoS flows or EPS bearers for this service. The preemption vulnerability of a QoS flow or EPS bearer controls whether it can be preempted by another QoS flow or EPS bearer with a higher priority level. You can choose from the following values: </br></br>- **Preemptible** </br>- **Not Preemptible** | **Preemption vulnerability** |No. Defaults to **Preemptible**.|
private-5g-core Collect Required Information For Sim Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/collect-required-information-for-sim-policy.md
Collect each of the values in the table below for the network scope.
|The names of the services permitted on the data network. You must have already configured your chosen services. For more information on services, see [Policy control](policy-control.md). | **Service configuration** | No. The SIM policy will only use the service you configure using the same template. | |The maximum bitrate for traffic traveling away from UEs across all non-GBR QoS flows or EPS bearers of a given PDU session or PDN connection. The bitrate must be given in the following form: `<Quantity>` `<Unit>` </br></br>`<Unit>` must be one of the following: </br></br>- *bps* </br>- *Kbps* </br>- *Mbps* </br>- *Gbps* </br>- *Tbps* </br></br>`<Quantity>` is the quantity of your chosen unit. </br></br>For example, `10 Gbps`. | **Session aggregate maximum bit rate - Uplink** | Yes | |The maximum bitrate for traffic traveling towards UEs across all non-GBR QoS flows or EPS bearers of a given PDU session or PDN connection. The bitrate must be given in the following form: `<Quantity>` `<Unit>` </br></br>`<Unit>` must be one of the following: </br></br>- *bps* </br>- *Kbps* </br>- *Mbps* </br>- *Gbps* </br>- *Tbps* </br></br>`<Quantity>` is the quantity of your chosen unit. </br></br>For example, `10 Gbps`. | **Session aggregate maximum bit rate - Downlink** | Yes |
-|The default 5QI (for 5G) or QCI (for 4G) value for this data network. These values identify a set of QoS characteristics that control QoS forwarding treatment for QoS flows or EPS bearers. </br></br>We recommend you choose a 5QI or QCI value that corresponds to a non-GBR QoS flow or EPS bearer. These values are in the following ranges: 5-9; 69-70; 79-80. For more details, see 3GPP TS 23.501 for 5QI or 3GPP TS 23.203 for QCI. </br></br>You can also choose a non-standardized 5QI or QCI value. </br></br>Azure Private 5G Core doesn't support 5QI or QCI values corresponding to GBR or delay-critical GBR QoS flows or EPS bearers. Don't use a value in any of the following ranges: 1-4; 65-67; 71-76; 82-85. | **5QI/QCI** | No. Defaults to 9. |
+|The default 5QI (for 5G) or QCI (for 4G) value for this data network. These values identify a set of QoS characteristics that control QoS forwarding treatment for QoS flows or EPS bearers.</br></br>You can choose a standardized or a non-standardized 5QI or QCI value. For more details, see 3GPP TS 23.501 for 5QI or 3GPP TS 23.203 for QCI. | **5QI/QCI** | No. Defaults to 9. |
|The default Allocation and Retention Policy (ARP) priority level for this data network. Flows with a higher ARP priority level preempt flows with a lower ARP priority level. The ARP priority level must be an integer between 1 (highest priority) and 15 (lowest priority). | **Allocation and Retention Priority level** | No. Defaults to 1. | |The default preemption capability for QoS flows or EPS bearers on this data network. The preemption capability of a QoS flow or EPS bearer controls whether it can preempt another QoS flow or EPS bearer with a lower priority level. </br></br>You can choose from the following values: </br></br>- **May preempt** </br>- **May not preempt** | **Preemption capability** | No. Defaults to **May not preempt**.| |The default preemption vulnerability for QoS flows or EPS bearers on this data network. The preemption vulnerability of a QoS flow or EPS bearer controls whether it can be preempted by another QoS flow or EPS bearer with a higher priority level. </br></br>You can choose from the following values: </br></br>- **Preemptible** </br>- **Not Preemptible** | **Preemption vulnerability** | No. Defaults to **Preemptible**.|
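The bitrate fields above use the `<Quantity> <Unit>` form. A small illustrative Python helper that converts such strings to bits per second — the function name and error handling are assumptions for this sketch, not part of the service:

```python
# Multipliers for the documented units: bps, Kbps, Mbps, Gbps, Tbps.
UNIT_MULTIPLIERS = {
    "bps": 1,
    "Kbps": 10**3,
    "Mbps": 10**6,
    "Gbps": 10**9,
    "Tbps": 10**12,
}

def parse_bitrate(text: str) -> float:
    """Convert a '<Quantity> <Unit>' string such as '10 Gbps' to bits per second."""
    quantity, unit = text.split()
    if unit not in UNIT_MULTIPLIERS:
        raise ValueError(f"unknown unit: {unit}")
    return float(quantity) * UNIT_MULTIPLIERS[unit]

print(parse_bitrate("10 Gbps"))  # 10000000000.0
```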
private-5g-core Differentiated Services Codepoint 5Qi Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/differentiated-services-codepoint-5qi-mapping.md
+
+ Title: Azure Private 5G Core 5QI to DSCP mapping
+description: Learn about the mapping of 5QI to DSCP values that Azure Private 5G Core uses for transport level marking.
++++ Last updated : 01/27/2023++
+# 5QI to DSCP mapping
+
+This article details the mapping of 5G QoS identifier (5QI) to differentiated services codepoint (DSCP) values that Azure Private 5G Core uses for transport level marking.
+
+The 5QI value, determined when you configured your private mobile network's [policy control](policy-control.md), corresponds to a set of quality of service (QoS) characteristics that should be used for a QoS flow. These characteristics include guaranteed and maximum bitrates, priority levels, and limits on latency, jitter, and error rate.
+
+Azure Private 5G Core will attempt to configure DSCP markings on outbound packets based on the configured 5QI value. The DSCP markings allow the data network to determine how to prioritize a packet.
+
+Azure Private 5G Core performs 5QI to DSCP mapping on downlink packets (towards the UE). Uplink packets (away from the UE) are left unchanged.
+
+For standard DSCP values, see [RFC 2474: Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers](https://www.rfc-editor.org/rfc/rfc2474).
+
+## GBR 5QIs
+
+The following table contains the mapping of GBR 5QI to DSCP values.
+
+| 5QI value | DSCP value | DSCP value meaning |
+|--|--|--|
+| 1 | 46 | Expedited Forwarding |
+| 2 | 36 | Assured Forwarding 42 |
+| 3 | 10 | Assured Forwarding 11 |
+| 4 | 28 | Assured Forwarding 32 |
+| 65 | 46 | Expedited Forwarding |
+| 66 | 46 | Expedited Forwarding |
+| 67 | 34 | Assured Forwarding 41 |
+| 71 | 28 | Assured Forwarding 32 |
+| 72 | 28 | Assured Forwarding 32 |
+| 73 | 28 | Assured Forwarding 32 |
+| 74 | 28 | Assured Forwarding 32 |
+| 76 | 28 | Assured Forwarding 32 |
+
+## Delay-critical GBR 5QIs
+
+The following table contains the mapping of delay-critical GBR 5QI to DSCP values.
+
+| 5QI value | DSCP value | DSCP value meaning |
+|--|--|--|
+| 82 | 18 | Assured Forwarding 21 |
+| 83 | 18 | Assured Forwarding 21 |
+| 84 | 18 | Assured Forwarding 21 |
+| 85 | 18 | Assured Forwarding 21 |
+| 86 | 18 | Assured Forwarding 21 |
+
+## Non-GBR 5QIs
+
+The following table contains the mapping of non-GBR 5QI to DSCP values.
+
+| 5QI value | DSCP value | DSCP value meaning |
+|--|--|--|
+| 5 | 46 | Expedited Forwarding |
+| 6 | 10 | Assured Forwarding 11 |
+| 7 | 18 | Assured Forwarding 21 |
+| 8 | 18 | Assured Forwarding 21 |
+| 9 | 0 | Best Effort |
+| 69 | 34 | Assured Forwarding 41 |
+| 70 | 18 | Assured Forwarding 21 |
+| 79 | 18 | Assured Forwarding 21 |
+| 80 | 34 | Assured Forwarding 41 |
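The mappings above are simple lookup tables. A Python sketch using a subset of the non-GBR rows listed in this article — the fallback to Best Effort for values outside the table is an illustrative assumption, not documented behavior:

```python
# Non-GBR 5QI -> DSCP values, taken from rows of the table above.
NON_GBR_5QI_TO_DSCP = {
    5: 46,   # Expedited Forwarding
    6: 10,   # Assured Forwarding 11
    7: 18,   # Assured Forwarding 21
    8: 18,   # Assured Forwarding 21
    9: 0,    # Best Effort
    69: 34,  # Assured Forwarding 41
    70: 18,  # Assured Forwarding 21
    79: 18,  # Assured Forwarding 21
}

def dscp_for_5qi(five_qi: int) -> int:
    # Assumed fallback: treat unknown 5QI values as Best Effort (0).
    return NON_GBR_5QI_TO_DSCP.get(five_qi, 0)

print(dscp_for_5qi(5))  # 46
```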
private-5g-core Policy Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/policy-control.md
A *QoS profile* has two main components.
- A *5G QoS identifier (5QI)*. The 5QI value corresponds to a set of QoS characteristics that should be used for the QoS flow. These characteristics include guaranteed and maximum bitrates, priority levels, and limits on latency, jitter, and error rate. The 5QI is given as a scalar number.
+ To allow for packet prioritization on the underlying transport network, Azure Private 5G Core will attempt to configure differentiated services codepoint (DSCP) markings on outbound packets based on the configured 5QI value for standardized GBR and non-GBR values. For more information on the mapping of 5QI to DSCP values, see [5QI to DSCP mapping](differentiated-services-codepoint-5qi-mapping.md).
+ You can find more information on 5QI values and each of the QoS characteristics in 3GPP TS 23.501. You can also find definitions for standardized (or non-dynamic) 5QI values. The required parameters for each 5QI value are pre-configured in the Next Generation Node B (gNB).
purview How To Monitor With Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-monitor-with-azure-monitor.md
The following table contains the list of metrics available to explore in the Azu
## Sending Diagnostic Logs
-Raw telemetry events are emitted to Azure Monitor. Events can be sent to a Log Analytics Workspace, archived to a customer storage account of choice, streamed to an event hub or sent to a partner solution for further analysis. Exporting of logs is done via the Diagnostic settings for the Microsoft Purview account on the Azure portal.
+Raw telemetry events are sent to Azure Monitor. Events can be sent to a Log Analytics Workspace, archived to a customer storage account of choice, streamed to an event hub, or sent to a partner solution for further analysis. Exporting of logs is done via the Diagnostic settings for the Microsoft Purview account on the Azure portal.
-Follow the steps to create a Diagnostic setting for your Microsoft Purview account and send to your preferred destination.
+Follow these steps to create a diagnostic setting for your Microsoft Purview account and send logs to your preferred destination:
-Create a new diagnostic setting to collect platform logs and metrics by following this article: [Create diagnostic settings to send platform logs and metrics to different destinations](../azure-monitor/essentials/diagnostic-settings.md).
+1. Locate your Microsoft Purview account in the [Azure portal](https://portal.azure.com).
+1. In the menu under **Monitoring** select **Diagnostic settings**.
+1. Select **Add diagnostic setting** to create a new diagnostic setting to collect platform logs and metrics. For more information about these settings and logs, see [the Azure Monitor documentation](../azure-monitor/essentials/diagnostic-settings.md).
- :::image type="content" source="./media/how-to-monitor-with-azure-monitor/step-one-diagnostic-setting.png" alt-text="Screenshot showing creating diagnostic log." lightbox="./media/how-to-monitor-with-azure-monitor/step-one-diagnostic-setting.png":::
+ :::image type="content" source="./media/how-to-monitor-with-azure-monitor/create-diagnostic-setting.png" alt-text="Screenshot showing creating diagnostic log." lightbox="./media/how-to-monitor-with-azure-monitor/create-diagnostic-setting.png":::
+
+1. You can send your logs to:
-You can send your logs to:
- [A log analytics workspace](#destinationlog-analytics-workspace) - [A storage account](#destinationstorage-account)
-
-#### Destination - Log Analytics Workspace
-Select the destination to a log analytics workspace to send the event to. Create a name for the diagnostic setting, select the applicable log category group and select the right subscription and workspace, then click save. The workspace doesn't have to be in the same region as the resource being monitored. Follow this article to [Create a New Log Analytics Workspace](../azure-monitor/logs/quick-create-workspace.md).
-
- :::image type="content" source="./media/how-to-monitor-with-azure-monitor/step-two-diagnostic-setting.png" alt-text="Screenshot showing assigning log analytics workspace to send event to." lightbox="./media/how-to-monitor-with-azure-monitor/step-two-diagnostic-setting.png":::
-
- :::image type="content" source="./media/how-to-monitor-with-azure-monitor/step-two-one-diagnostic-setting.png" alt-text="Screenshot showing saved diagnostic log event to log analytics workspace." lightbox="./media/how-to-monitor-with-azure-monitor/step-two-one-diagnostic-setting.png":::
-
-Verify the changes in **Log Analytics Workspace** by perfoming some operations to populate data such as creating/updating/deleting policy. After which you can open the **Log Analytics Workspace**, navigate to **Logs**, enter query filter as **"purviewsecuritylogs"**, then click **"Run"** to execute the query.
-
- :::image type="content" source="./media/how-to-monitor-with-azure-monitor/step-two-two-diagnostic-setting.png" alt-text="Screenshot showing log results in the Log Analytics Workspace after a query was run." lightbox="./media/how-to-monitor-with-azure-monitor/step-two-two-diagnostic-setting.png":::
-
-#### Destination - Storage account
-To log the events to a storage account; create a diagnostic setting name, select the log category,. select the destination as archieve to a storage account, select the right subscription and storage account then click save. A dedicated storage account is recommended for archiving the diagnostic logs. Following this article to [Create a storage account](../storage/common/storage-account-create.md?tabs=azure-portal).
-
- :::image type="content" source="./media/how-to-monitor-with-azure-monitor/step-three-diagnostic-setting.png" alt-text="Screenshot showing assigning storage account for diagnostic log." lightbox="./media/how-to-monitor-with-azure-monitor/step-three-diagnostic-setting.png":::
-
- :::image type="content" source="./media/how-to-monitor-with-azure-monitor/step-three-one-diagnostic-setting.png" alt-text="Screenshot showing saved log events to storage account." lightbox="./media/how-to-monitor-with-azure-monitor/step-three-one-diagnostic-setting.png":::
-
-To see logs in the **Storage Account**, create/update/delete a policy, then open the **Storage Account**, navigate to **Containers**, and click on the container name
-
- :::image type="content" source="./media/how-to-monitor-with-azure-monitor/step-three-two-diagnostic-setting.png" alt-text="Screenshot showing container in storage account where the diagnostic logs have been sent to." lightbox="./media/how-to-monitor-with-azure-monitor/step-three-two-diagnostic-setting.png":::
-
-Navigate to the flie and download it to see the logs
-
- :::image type="content" source="./media/how-to-monitor-with-azure-monitor/step-three-three-diagnostic-setting.png" alt-text="Screenshot showing folders with details of logs." lightbox="./media/how-to-monitor-with-azure-monitor/step-three-three-diagnostic-setting.png":::
- :::image type="content" source="./media/how-to-monitor-with-azure-monitor/step-three-four-diagnostic-setting.png" alt-text="Screenshot showing details of logs." lightbox="./media/how-to-monitor-with-azure-monitor/step-three-four-diagnostic-setting.png":::
+### Destination - Log Analytics Workspace
+
+1. In the **Destination details**, select **Send to Log Analytics workspace**.
+1. Create a name for the diagnostic setting, select the applicable log category group, select the right subscription and workspace, and then select **Save**. The workspace doesn't have to be in the same region as the resource being monitored. If you need to create a new workspace, you can follow this article: [Create a New Log Analytics Workspace](../azure-monitor/logs/quick-create-workspace.md).
+
+ :::image type="content" source="./media/how-to-monitor-with-azure-monitor/log-analytics-diagnostic-setting.png" alt-text="Screenshot showing assigning log analytics workspace to send event to." lightbox="./media/how-to-monitor-with-azure-monitor/log-analytics-diagnostic-setting.png":::
+
+ :::image type="content" source="./media/how-to-monitor-with-azure-monitor/log-analytics-select-workspace-diagnostic-setting.png" alt-text="Screenshot showing saved diagnostic log event to log analytics workspace." lightbox="./media/how-to-monitor-with-azure-monitor/log-analytics-select-workspace-diagnostic-setting.png":::
+
+1. Verify the changes in your Log Analytics Workspace by performing some operations that populate data, such as creating, updating, or deleting a policy. Then open the **Log Analytics Workspace**, navigate to **Logs**, enter the query filter **"purviewsecuritylogs"**, and select **Run** to execute the query.
+
+ :::image type="content" source="./media/how-to-monitor-with-azure-monitor/log-analytics-view-logs-diagnostic-setting.png" alt-text="Screenshot showing log results in the Log Analytics Workspace after a query was run." lightbox="./media/how-to-monitor-with-azure-monitor/log-analytics-view-logs-diagnostic-setting.png":::
+
+### Destination - Storage account
+
+1. In the **Destination details**, select **Archive to a storage account**.
+1. Create a diagnostic setting name, select the log category, select the right subscription and storage account, and then select **Save**. A dedicated storage account is recommended for archiving the diagnostic logs. If you need a storage account, you can follow this article: [Create a storage account](../storage/common/storage-account-create.md?tabs=azure-portal).
+
+ :::image type="content" source="./media/how-to-monitor-with-azure-monitor/storage-diagnostic-setting.png" alt-text="Screenshot showing assigning storage account for diagnostic log." lightbox="./media/how-to-monitor-with-azure-monitor/storage-diagnostic-setting.png":::
+
+ :::image type="content" source="./media/how-to-monitor-with-azure-monitor/storage-select-diagnostic-setting.png" alt-text="Screenshot showing saved log events to storage account." lightbox="./media/how-to-monitor-with-azure-monitor/storage-select-diagnostic-setting.png":::
+
+1. To see logs in the **Storage Account**, perform a sample action (for example: create/update/delete a policy), then open the **Storage Account**, navigate to **Containers**, and select the container name.
+
+ :::image type="content" source="./media/how-to-monitor-with-azure-monitor/storage-two-diagnostic-setting.png" alt-text="Screenshot showing container in storage account where the diagnostic logs have been sent to." lightbox="./media/how-to-monitor-with-azure-monitor/storage-two-diagnostic-setting.png":::
+
+1. Navigate to the file and download it to see the logs.
+
+ :::image type="content" source="./media/how-to-monitor-with-azure-monitor/storage-navigate-diagnostic-setting.png" alt-text="Screenshot showing folders with details of logs." lightbox="./media/how-to-monitor-with-azure-monitor/storage-navigate-diagnostic-setting.png":::
+
+ :::image type="content" source="./media/how-to-monitor-with-azure-monitor/storage-select-logs-diagnostic-setting.png" alt-text="Screenshot showing details of logs." lightbox="./media/how-to-monitor-with-azure-monitor/storage-select-logs-diagnostic-setting.png":::
+
+## Sample Log
+
+Here's a sample log you'd receive from a diagnostic setting.
+
+The event tracks the scan life cycle. A scan operation progresses through a sequence of states, from Queued through Running to a terminal state of Succeeded, Failed, or Cancelled. An event is logged for each state transition, and the schema of the event has the following properties.
+
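The life cycle just described is a small state machine. As a minimal sketch (not the service implementation), it could be modeled like this; whether a Queued scan can be cancelled directly is an assumption here:

```python
# Sketch of the documented scan life cycle: Queued -> Running -> terminal.
# Allowing Queued -> Cancelled is an assumption, not stated in the docs.
TRANSITIONS = {
    "Queued": {"Running", "Cancelled"},
    "Running": {"Succeeded", "Failed", "Cancelled"},
}

def is_valid_transition(current: str, nxt: str) -> bool:
    """Check whether a state change would produce a valid log event."""
    # Terminal states have no outgoing transitions, so they map to nothing.
    return nxt in TRANSITIONS.get(current, set())

print(is_valid_transition("Queued", "Running"))     # True
print(is_valid_transition("Succeeded", "Running"))  # False: terminal state
```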
+```JSON
+{
+ "time": "<The UTC time when the event occurred>",
+ "properties": {
+ "dataSourceName": "<Registered data source friendly name>",
+ "dataSourceType": "<Registered data source type>",
+ "scanName": "<Scan instance friendly name>",
+ "assetsDiscovered": "<If the resultType is succeeded, count of assets discovered in scan run>",
+ "assetsClassified": "<If the resultType is succeeded, count of assets classified in scan run>",
+    "scanQueueTimeInSeconds": "<If the resultType is succeeded, total seconds the scan instance spent in the queue>",
+ "scanTotalRunTimeInSeconds": "<If the resultType is succeeded, total seconds the scan took to run>",
+ "runType": "<How the scan is triggered>",
+ "errorDetails": "<Scan failure error>",
+ "scanResultId": "<Unique GUID for the scan instance>"
+ },
+ "resourceId": "<The azure resource identifier>",
+ "category": "<The diagnostic log category>",
+  "operationName": "<The operation that caused the event. Possible values for ScanStatusLogEvent category are:
+ |AdhocScanRun
+ |TriggeredScanRun
+ |StatusChangeNotification>",
+  "resultType": "Queued – indicates a scan is queued.
+    Running – indicates a scan entered a running state.
+    Succeeded – indicates a scan completed successfully.
+    Failed – indicates a scan failure event.
+    Cancelled – indicates a scan was cancelled.",
+ "resultSignature": "<Not used for ScanStatusLogEvent category. >",
+ "resultDescription": "<This will have an error message if the resultType is Failed. >",
+ "durationMs": "<Not used for ScanStatusLogEvent category. >",
+ "level": "<The log severity level. Possible values are:
+ |Informational
+ |Error >",
+  "location": "<The location of the Microsoft Purview account>"
+}
+```
+
+Here's a sample log for an event instance:
+
+```JSON
+{
+ "time": "2020-11-24T20:25:13.022860553Z",
+ "properties": {
+ "dataSourceName": "AzureDataExplorer-swD",
+ "dataSourceType": "AzureDataExplorer",
+ "scanName": "Scan-Kzw-shoebox-test",
+ "assetsDiscovered": "0",
+ "assetsClassified": "0",
+ "scanQueueTimeInSeconds": "0",
+ "scanTotalRunTimeInSeconds": "0",
+ "runType": "Manual",
+ "errorDetails": "empty_value",
+ "scanResultId": "0dc51a72-4156-40e3-8539-b5728394561f"
+ },
+ "resourceId": "/SUBSCRIPTIONS/111111111111-111-4EB2/RESOURCEGROUPS/FOOBAR-TEST-RG/PROVIDERS/MICROSOFT.PURVIEW/ACCOUNTS/FOOBAR-HEY-TEST-NEW-MANIFEST-EUS",
+ "category": "ScanStatusLogEvent",
+ "operationName": "TriggeredScanRun",
+ "resultType": "Delayed",
+ "resultSignature": "empty_value",
+ "resultDescription": "empty_value",
+ "durationMs": 0,
+ "level": "Informational",
+  "location": "eastus"
+}
+```
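Once logs are exported, records with this schema can be filtered by the documented `category` and `resultType` fields. This is a hedged sketch, not an official client; it embeds a trimmed version of the sample record above:

```python
import json

# Minimal sketch of filtering a Purview ScanStatusLogEvent record exported
# by a diagnostic setting. Field names come from the documented schema;
# the record below is a trimmed copy of the sample log shown above.
raw_log = """
{
  "properties": {"scanName": "Scan-Kzw-shoebox-test", "runType": "Manual"},
  "category": "ScanStatusLogEvent",
  "operationName": "TriggeredScanRun",
  "resultType": "Succeeded",
  "level": "Informational"
}
"""

def is_failed_scan(record: dict) -> bool:
    """Return True when a scan event represents a failed scan run."""
    return (record.get("category") == "ScanStatusLogEvent"
            and record.get("resultType") == "Failed")

record = json.loads(raw_log)
print(is_failed_scan(record))  # False: this run succeeded
```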
## Next steps
purview Register Scan Snowflake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-snowflake.md
To create and run a new scan, follow these steps:
1. Select **Continue**. 1. Select a **scan rule set** for classification. You can choose between the system default, existing custom rule sets, or [create a new rule set](create-a-scan-rule-set.md) inline. Check the [Classification](apply-classifications.md) article to learn more.
+> [!NOTE]
+> If you're using a self-hosted integration runtime, you'll need to upgrade to version 5.26.404.1 or higher to use Snowflake classification. You can find the latest version of the Microsoft Integration Runtime [here](https://www.microsoft.com/download/details.aspx?id=39717).
1. Choose your **scan trigger**. You can set up a schedule or run the scan once.
Go to the asset -> lineage tab, you can see the asset relationship when applicab
## Troubleshooting tips -- Check your account identifer in the source registration step. Don't include `https://` part at the front.
+- Check your account identifier in the source registration step. Don't include the `https://` part at the front.
- Make sure the warehouse name and database name are in capital case on the scan setup page. - Check your key vault. Make sure there are no typos in the password. - Check the credential you set up in Microsoft Purview. The user you specify must have a default role with the necessary access rights to both the warehouse and the database you're trying to scan. See [Required permissions for scan](#required-permissions-for-scan). USE `DESCRIBE USER;` to verify the default role of the user you've specified for Microsoft Purview.
reliability Migrate App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-app-service.md
Availability zone support is a property of the App Service plan. The following a
- West US 3 - Availability zones can only be specified when creating a **new** App Service plan. A pre-existing App Service plan can't be converted to use availability zones. - Availability zones are only supported in the newer portion of the App Service footprint.
- - Currently, if you're running on Pv3, then it's possible that you're already on a footprint that supports availability zones. In this scenario, you can create a new App Service plan and specify zone redundancy.
- - If you aren't using Pv3 or a scale unit that supports availability zones, are in an unsupported region, or are unsure, see the [migration guidance](#migration-guidance-redeployment).
+ - Currently, if you're running on Pv2 or Pv3, then it's possible that you're already on a footprint that supports availability zones. In this scenario, you can create a new App Service plan and specify zone redundancy.
+ - If you aren't using Pv2/Pv3 or a scale unit that supports availability zones, are in an unsupported region, or are unsure, see the [migration guidance](#migration-guidance-redeployment).
## Downtime requirements
security Secure Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/secure-design.md
Title: Design secure applications on Microsoft Azure description: This article discusses best practices to consider during the requirement and design phases of your web application project. -+ Previously updated : 06/11/2019 Last updated : 02/06/2023
ms.assetid: 521180dc-2cc9-43f1-ae87-2701de7ca6b8
# Design secure applications on Azure
-In this article we present security activities and controls to consider when you design applications for the cloud. Training resources along with security questions and concepts to consider during the requirements and design phases of the Microsoft [Security Development Lifecycle
+In this article, we present security activities and controls to consider when you design applications for the cloud. Training resources along with security questions and concepts to consider during the requirements and design phases of the Microsoft [Security Development Lifecycle
(SDL)](/previous-versions/windows/desktop/cc307891(v=msdn.10)) are covered. The goal is to help you define activities and Azure services that you can use to design a more secure application. The following SDL phases are covered in this article: -- Training-- Requirements-- Design
+* Training
+* Requirements
+* Design
## Training
-Before you begin developing your cloud application, take time to
-understand security and privacy on Azure. By taking this step, you can
-reduce the number and severity of exploitable vulnerabilities in your
-application. You'll be more prepared to react appropriately to the
-ever-changing threat landscape.
-
-Use the following resources during the training stage to familiarize
-yourself with the Azure services that are available to developers and
-with security best practices on Azure:
-
- - [Developer's guide to
- Azure](https://azure.microsoft.com/campaigns/developer-guide/) shows
- you how to get started with Azure. The guide shows you which
- services you can use to run your applications, store your data,
- incorporate intelligence, build IoT apps, and deploy your solutions
- in a more efficient and secure way.
-
- - [Get started guide for Azure
- developers](../../guides/developer/azure-developer-guide.md)
- provides essential information for developers who are looking to get
- started using the Azure platform for their development needs.
-
- - [SDKs and
- tools](../../index.yml?pivot=sdkstools)
- describes the tools that are available on Azure.
-
- - [Azure DevOps
- Services](/azure/devops/)
- provides development collaboration tools. The tools include
- high-performance pipelines, free Git repositories, configurable
- Kanban boards, and extensive automated and cloud-based load testing.
- The [DevOps Resource
- Center](/azure/devops/learn/) combines our
- resources for learning DevOps practices, Git version control, agile
- methods, how we work with DevOps at Microsoft, and how you can
- assess your own DevOps progression.
-
- - [Top 5 security items to consider before pushing to
- production](/training/modules/top-5-security-items-to-consider/index?WT.mc_id=Learn-Blog-tajanca)
- shows you how to help secure your web applications on Azure and
- protect your apps against the most common and dangerous web
- application attacks.
-
- - [Secure DevOps Kit for
- Azure](https://github.com/azsk/AzTS-docs/#readme) is a collection of
- scripts, tools, extensions, and automations that caters to the
- comprehensive Azure subscription and resource security needs of
- DevOps teams that use extensive automation. The Secure DevOps Kit
- for Azure can show you how to smoothly integrate security into your
- native DevOps workflows. The kit addresses tools like security
- verification tests (SVTs), which can help developers write secure
- code and test the secure configuration of their cloud applications
- in the coding and early development stages.
-
- - [Security best practices for Azure
- solutions](https://azure.microsoft.com/resources/security-best-practices-for-azure-solutions)
- provides a collection of security best practices to use as you
- design, deploy, and manage your cloud solutions by using Azure.
+
+Before you begin developing your cloud application, take time to understand security and privacy on Azure. By taking this step, you can reduce the number and severity of exploitable vulnerabilities in your application. You'll be more prepared to react appropriately to the ever-changing threat landscape.
+
+Use the following resources during the training stage to familiarize yourself with the Azure services that are available to developers and with security best practices on Azure:
+
+* [Developer's guide to Azure](https://azure.microsoft.com/campaigns/developer-guide/) shows you how to get started with Azure. The guide shows you which services you can use to run your applications, store your data, incorporate intelligence, build IoT apps, and deploy your solutions in a more efficient and secure way.
+
+* [Get started guide for Azure developers](../../guides/developer/azure-developer-guide.md) provides essential information for developers who are looking to get started using the Azure platform for their development needs.
+
+* [SDKs and tools](/azure/?pivot=sdkstools&product=developer-tools) describes the tools that are available on Azure.
+
+* [Azure DevOps Services](/azure/devops/) provides development collaboration tools. The tools include high-performance pipelines, free Git repositories, configurable Kanban boards, and extensive automated and cloud-based load testing. The [DevOps Resource Center](/azure/devops/learn/) combines our resources for learning DevOps practices, Git version control, agile methods, how we work with DevOps at Microsoft, and how you can assess your own DevOps progression.
+
+* [Top five security items to consider before pushing to production](/training/modules/top-5-security-items-to-consider/index?WT.mc_id=Learn-Blog-tajanca) shows you how to help secure your web applications on Azure and protect your apps against the most common and dangerous web application attacks.
+
+* [Secure DevOps Kit for Azure](https://github.com/azsk/AzTS-docs/#readme) is a collection of scripts, tools, extensions, and automations that cater to the comprehensive Azure subscription and resource security needs of DevOps teams that use extensive automation. The Secure DevOps Kit for Azure can show you how to smoothly integrate security into your native DevOps workflows. The kit addresses tools like security verification tests (SVTs), which can help developers write secure code and test the secure configuration of their cloud applications in the coding and early development stages.
+
+* [Security best practices for Azure solutions](https://azure.microsoft.com/resources/security-best-practices-for-azure-solutions) provides a collection of security best practices to use as you design, deploy, and manage your cloud solutions by using Azure.
## Requirements
-The requirements definition phase is a crucial step in defining what your application is and what it will do when it's released. The requirements phase is also a time to think about the security controls that you will build into your application. During this phase, you also begin the steps that you will take throughout the SDL to ensure that you release and deploy a secure application.
+
+The requirements definition phase is a crucial step in defining what your application is and what it will do when it's released. The requirements phase is also a time to think about the security controls that you'll build into your application. During this phase, you also begin the steps that you'll take throughout the SDL to ensure that you release and deploy a secure application.
### Consider security and privacy issues
-This phase is the best time to consider foundational security and
-privacy issues. Defining acceptable levels of security and privacy at
-the start of a project helps a team:
-- Understand risks associated with security issues.-- Identify and fix security bugs during development.-- Apply established levels of security and privacy throughout the entire project.
+This phase is the best time to consider foundational security and privacy issues. Defining acceptable levels of security and privacy at the start of a project helps a team:
+
+* Understand risks associated with security issues.
+* Identify and fix security bugs during development.
+* Apply established levels of security and privacy throughout the entire project.
-When you write the requirements for your application, be sure to
-consider security controls that can help keep your application and data
-safe.
+When you write the requirements for your application, be sure to consider security controls that can help keep your application and data safe.
### Ask security questions+ Ask security questions like:
- - Does my application contain sensitive data?
-
- - Does my application collect or store data that requires me to adhere
- to industry standards and compliance programs like the [Federal Financial Institution Examination Council (FFIEC)](/previous-versions/azure/security/blueprints/ffiec-analytics-overview) or the [Payment Card Industry Data Security Standards (PCI DSS)](/previous-versions/azure/security/blueprints/pcidss-analytics-overview)?
-
- - Does my application collect or contain sensitive personal or
- customer data that can be used, either on its own or with other
- information, to identify, contact, or locate a single person?
-
- - Does my application collect or contain data that can be used to
- access an individual's medical, educational, financial, or
- employment information? Identifying the sensitivity of your data
- during the requirements phase helps you classify your data and
- identify the data protection method you will use for your
- application.
-
- - Where and how is my data stored? Consider how you will monitor the
- storage services that your application uses for any unexpected
- changes (such as slower response times). Will you be able to
- influence logging to collect more detailed data and analyze a
- problem in depth?
-
- - Will my application be available to the public (on the internet) or
- internally only? If your application is available to the public, how
- do you protect the data that might be collected from being used in
- the wrong way? If your application is available internally only,
- consider who in your organization should have access to the
- application and how long they should have access.
-
- - Do you understand your identity model before you begin designing
- your application? How will you determine that users are who they say
- they are and what a user is authorized to do?
-
- - Does my application perform sensitive or important tasks (such as
- transferring money, unlocking doors, or delivering medicine)?
- Consider how you will validate that the user performing a sensitive
- task is authorized to perform the task and how you will authenticate
- that the person is who they say they are. Authorization (AuthZ) is
- the act of granting an authenticated security principal permission
- to do something. Authentication (AuthN) is the act of challenging a
- party for legitimate credentials.
-
- - Does my application perform any risky software activities, like
- allowing users to upload or download files or other data? If your
- application does perform risky activities, consider how your
- application will protect users from handling malicious files or
- data.
+* Does my application contain sensitive data?
-### Review OWASP top 10
-Consider reviewing the [<span class="underline">OWASP Top 10 Application Security Risks</span>](https://owasp.org/www-project-top-ten/).
-The OWASP Top 10 addresses critical security risks to web applications.
-Awareness of these security risks can help you make requirement and
-design decisions that minimize these risks in your application.
+* Does my application collect or store data that requires me to adhere to industry standards and compliance programs like the [Federal Financial Institution Examination Council (FFIEC)](/previous-versions/azure/security/blueprints/ffiec-analytics-overview) or the [Payment Card Industry Data Security Standards (PCI DSS)](/previous-versions/azure/security/blueprints/pcidss-analytics-overview)?
+
+* Does my application collect or contain sensitive personal or customer data that can be used, either on its own or with other information, to identify, contact, or locate a single person?
+
+* Does my application collect or contain data that can be used to access an individual's medical, educational, financial, or employment information? Identifying the sensitivity of your data during the requirements phase helps you classify your data and identify the data protection method you'll use for your application.
+
+* Where and how is my data stored? Consider how you'll monitor the storage services that your application uses for any unexpected changes (such as slower response times). Will you be able to influence logging to collect more detailed data and analyze a problem in depth?
+
+* Will my application be available to the public (on the internet) or internally only? If your application is available to the public, how do you protect the data that might be collected from being used in the wrong way? If your application is available internally only, consider who in your organization should have access to the application and how long they should have access.
+
+* Do you understand your identity model before you begin designing your application? How will you determine that users are who they say they are and what a user is authorized to do?
-Thinking about security controls to prevent breaches is important.
-However, you also want to [assume a breach](/devops/operate/security-in-devops)
-will occur. Assuming a breach helps answer some important questions
-about security in advance, so they don't have to be answered in an
-emergency:
+* Does my application perform sensitive or important tasks (such as transferring money, unlocking doors, or delivering medicine)? Consider how you'll validate that the user performing a sensitive task is authorized to perform the task and how you'll authenticate that the person is who they say they are. Authorization (AuthZ) is the act of granting an authenticated security principal permission to do something. Authentication (AuthN) is the act of challenging a party for legitimate credentials.
- - How will I detect an attack?
+* Does my application perform any risky software activities, like allowing users to upload or download files or other data? If your application does perform risky activities, consider how your application will protect users from handling malicious files or data.
- - What will I do if there is an attack or breach?
+### Review OWASP top 10
+
+Consider reviewing the [OWASP Top 10 Application Security Risks](https://owasp.org/www-project-top-ten/). The OWASP Top 10 addresses critical security risks to web applications. Awareness of these security risks can help you make requirement and design decisions that minimize these risks in your application.
+
+Thinking about security controls to prevent breaches is important. However, you also want to [assume a breach](/devops/operate/security-in-devops) will occur. Assuming a breach helps answer some important questions about security in advance, so they don't have to be answered in an emergency:
- - How am I going to recover from the attack like data leaking or
- tampering?
+* How will I detect an attack?
+* What will I do if there's an attack or breach?
+* How am I going to recover from the attack like data leaking or tampering?
## Design
-The design phase is critical for establishing best practices for design
-and functional specifications. It also is critical for performing risk
-analysis that helps mitigate security and privacy issues throughout a
-project.
-
-When you have security requirements in place and use secure design
-concepts, you can avoid or minimize opportunities for a security flaw. A
-security flaw is an oversight in the design of the application that
-might allow a user to perform malicious or unexpected actions after your
-application is released.
-
-During the design phase, also think about how you can apply security in
-layers; one level of defense isn't necessarily enough. What happens if
-an attacker gets past your web application firewall (WAF)? You want
-another security control in place to defend against that attack.
-
-With this in mind, we discuss the following secure design concepts and
-the security controls you should address when you design secure
-applications:
--- Use a secure coding library and a software framework.-- Scan for vulnerable components.-- Use threat modeling during application design.-- Reduce your attack surface.-- Adopt a policy of identity as the primary security perimeter.-- Require re-authentication for important transactions.-- Use a key management solution to secure keys, credentials, and other secrets.-- Protect sensitive data.-- Implement fail-safe measures.-- Take advantage of error and exception handling.-- Use logging and alerting.
+The design phase is critical for establishing best practices for design and functional specifications. It also is critical for performing risk analysis that helps mitigate security and privacy issues throughout a project.
+
+When you have security requirements in place and use secure design concepts, you can avoid or minimize opportunities for a security flaw. A security flaw is an oversight in the design of the application that might allow a user to perform malicious or unexpected actions after your application is released.
+
+During the design phase, also think about how you can apply security in layers; one level of defense isn't necessarily enough. What happens if an attacker gets past your web application firewall (WAF)? You want another security control in place to defend against that attack.
+
+With this in mind, we discuss the following secure design concepts and the security controls you should address when you design secure applications:
+
+* Use a secure coding library and a software framework.
+* Scan for vulnerable components.
+* Use threat modeling during application design.
+* Reduce your attack surface.
+* Adopt a policy of identity as the primary security perimeter.
+* Require reauthentication for important transactions.
+* Use a key management solution to secure keys, credentials, and other secrets.
+* Protect sensitive data.
+* Implement fail-safe measures.
+* Take advantage of error and exception handling.
+* Use logging and alerting.
### Use a secure coding library and a software framework
strings, and anything else that would be considered a security control)
instead of developing security controls from scratch. This helps guard against security-related design and implementation flaws.
-Be sure that you're using the latest version of your framework and all
-the security features that are available in the framework. Microsoft
-offers a comprehensive [set of development
-tools](https://azure.microsoft.com/product-categories/developer-tools/)
-for all developers, working on any platform or language, to deliver
-cloud applications. You can code with the language of your choice by
-choosing from various [SDKs](https://azure.microsoft.com/downloads/).
-You can take advantage of full-featured integrated development
-environments (IDEs) and editors that have advanced debugging
-capabilities and built-in Azure support.
-
-Microsoft offers a variety of [languages, frameworks, and
-tools](../../index.yml?panel=sdkstools-all&pivot=sdkstools)
-that you can use to develop applications on Azure. An example is [Azure
-for .NET and .NET Core
-developers](/dotnet/azure/). For each language
-and framework that we offer, you'll find quickstarts, tutorials, and API
-references to help you get started fast.
-
-Azure offers a variety of services you can use to host websites and web
-applications. These services let you develop in your favorite language,
-whether that's .NET, .NET Core, Java, Ruby, Node.js, PHP, or Python.
-[Azure App Service Web
-Apps](../../app-service/overview.md)
-(Web Apps) is one of these services.
+Be sure that you're using the latest version of your framework and all the security features that are available in the framework. Microsoft offers a comprehensive [set of development tools](https://azure.microsoft.com/product-categories/developer-tools/) for all developers, working on any platform or language, to deliver cloud applications. You can code with the language of your choice by choosing from various [SDKs](https://azure.microsoft.com/downloads/). You can take advantage of full-featured integrated development environments (IDEs) and editors that have advanced debugging capabilities and built-in Azure support.
+
+Microsoft offers various [languages, frameworks, and tools](/azure/?panel=sdkstools-all&pivot=sdkstools&product=popular#languages-and-tools) that you can use to develop applications on Azure. An example is [Azure for .NET and .NET Core developers](/dotnet/azure/). For each language and framework that we offer, you'll find quickstarts, tutorials, and API references to help you get started fast.
+
+Azure offers various services you can use to host websites and web applications. These services let you develop in your favorite language, whether that's .NET, .NET Core, Java, Ruby, Node.js, PHP, or Python. [Azure App Service Web Apps](../../app-service/overview.md) (Web Apps) is one of these services.
Web Apps adds the power of Microsoft Azure to your application. It includes security, load balancing, autoscaling, and automated
Apps, like package management, staging environments, custom domains,
SSL/TLS certificates, and continuous deployment from Azure DevOps, GitHub, Docker Hub, and other sources.
-Azure offers other services that you can use to host websites and web
-applications. For most scenarios, Web Apps is the best choice. For a
-micro service architecture, consider [Azure Service
-Fabric](../../service-fabric/index.yml).
-If you need more control over the VMs that your code runs on, consider
-[Azure Virtual
-Machines](../../virtual-machines/index.yml).
-For more information about how to choose between these Azure services,
-see a [comparison of Azure App Service, Virtual Machines, Service
-Fabric, and Cloud
-Services](/azure/architecture/guide/technology-choices/compute-decision-tree).
+Azure offers other services that you can use to host websites and web applications. For most scenarios, Web Apps is the best choice. For a microservices architecture, consider [Azure Service Fabric](../../service-fabric/index.yml). If you need more control over the VMs that your code runs on, consider [Azure Virtual Machines](../../virtual-machines/index.yml). For more information about how to choose between these Azure services, see a [comparison of Azure App Service, Virtual Machines, Service Fabric, and Cloud Services](/azure/architecture/guide/technology-choices/compute-decision-tree).
### Apply updates to components
updated software versions are released continuously. Ensure that you
have an ongoing plan to monitor, triage, and apply updates or configuration changes to the libraries and components you use.
-See the [Open Web Application Security Project
-(OWASP)](https://www.owasp.org/) page on [using
-components with known
-vulnerabilities](https://owasp.org/www-project-top-ten/2017/A9_2017-Using_Components_with_Known_Vulnerabilities)
-for tool suggestions. You can also subscribe to email alerts for
-security vulnerabilities that are related to components you use.
+See the [Open Web Application Security Project](https://www.owasp.org/) (OWASP) page on [using components with known vulnerabilities](https://owasp.org/www-project-top-ten/2017/A9_2017-Using_Components_with_Known_Vulnerabilities) for tool suggestions. You can also subscribe to email alerts for security vulnerabilities that are related to components you use.
### Use threat modeling during application design
threat modeling during the design phase, when resolving potential issues
is relatively easy and cost-effective. Using threat modeling in the design phase can greatly reduce your total cost of development.
-To help facilitate the threat modeling process, we designed the [SDL
-Threat Modeling
-Tool](threat-modeling-tool.md)
-with non-security experts in mind. This tool makes threat modeling
-easier for all developers by providing clear guidance about how to
-create and analyze threat models.
-
-Modeling the application design and enumerating
-[STRIDE](https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnxzZWN1cmVwcm9ncmFtbWluZ3xneDo0MTY1MmM0ZDI0ZjQ4ZDMy)
-threats (Spoofing, Tampering, Repudiation, Information Disclosure, Denial
-of Service, and Elevation of Privilege) across all trust boundaries has
-proven an effective way to catch design errors early on. The following
-table lists the STRIDE threats and gives some example mitigations that
-use features provided by Azure. These mitigations won't work in every
-situation.
+To help facilitate the threat modeling process, we designed the [SDL Threat Modeling Tool](threat-modeling-tool.md) with non-security experts in mind. This tool makes threat modeling easier for all developers by providing clear guidance about how to create and analyze threat models.
+
+Modeling the application design and enumerating [STRIDE](https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnxzZWN1cmVwcm9ncmFtbWluZ3xneDo0MTY1MmM0ZDI0ZjQ4ZDMy) threats (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege) across all trust boundaries has proven an effective way to catch design errors early on. The following table lists the STRIDE threats and gives some example mitigations that use features provided by Azure. These mitigations won't work in every situation.
| Threat | Security property | Potential Azure platform mitigation |
| - | - | - |
situation.
| Tampering | Integrity | Validate SSL/TLS certificates. Applications that use SSL/TLS must fully verify the X.509 certificates of the entities they connect to. Use Azure Key Vault certificates to [manage your x509 certificates](../../key-vault/general/about-keys-secrets-certificates.md). |
| Repudiation | Non-repudiation | Enable Azure [monitoring and diagnostics](/azure/architecture/best-practices/monitoring). |
| Information Disclosure | Confidentiality | Encrypt sensitive data [at rest](../fundamentals/encryption-atrest.md) and [in transit](../fundamentals/data-encryption-best-practices.md#protect-data-in-transit). |
-| Denial of Service | Availability | Monitor performance metrics for potential denial of service conditions. Implement connection filters. [Azure DDoS protection](../../ddos-protection/ddos-protection-overview.md#next-steps), combined with application design best practices, provides defense against DDoS attacks.|
+| Denial of Service | Availability | Monitor performance metrics for potential denial of service conditions. Implement connection filters. [Azure DDoS protection](../../ddos-protection/ddos-protection-overview.md), combined with application design best practices, provides defense against DDoS attacks.|
| Elevation of Privilege | Authorization | Use Azure Active Directory [Privileged Identity Management](../../active-directory/privileged-identity-management/pim-configure.md). |

### Reduce your attack surface
quick way to minimize your attack surface is to remove unused resources
and code from your application. The smaller your application, the smaller your attack surface. For example, remove:
-- Code for features you haven't released yet.
-- Debugging support code.
-- Network interfaces and protocols that aren't used or which have been deprecated.
-- Virtual machines and other resources that you aren't using.
+* Code for features you haven't released yet.
+* Debugging support code.
+* Network interfaces and protocols that aren't used or which have been deprecated.
+* Virtual machines and other resources that you aren't using.
Doing regular cleanup of your resources and ensuring that you remove unused code are great ways to ensure that there are fewer opportunities
changes, and what this means from a risk perspective.
An attack surface analysis helps you identify:
-- Functions and parts of the system you need to review and test for security vulnerabilities.
-- High-risk areas of code that require defense-in-depth protection (parts of the system that you need to defend).
-- When you alter the attack surface and need to refresh a threat assessment.
+* Functions and parts of the system you need to review and test for security vulnerabilities.
+* High-risk areas of code that require defense-in-depth protection (parts of the system that you need to defend).
+* When you alter the attack surface and need to refresh a threat assessment.
Reducing opportunities for attackers to exploit a potential weak spot or vulnerability requires you to thoroughly analyze your application's
overall attack surface. It also includes disabling or restricting access
to system services, applying the principle of least privilege, and employing layered defenses wherever possible.
-We discuss [conducting an attack surface
-review](secure-develop.md#conduct-attack-surface-review) during the verification phase of
-the SDL.
+We discuss [conducting an attack surface review](secure-develop.md#conduct-attack-surface-review) during the verification phase of the SDL.
> [!NOTE]
> **What's the difference between threat modeling and attack surface analysis?**
primary security perimeter.
Things you can do to develop an identity-centric approach to developing web applications:
-- Enforce multi-factor authentication for users.
-- Use strong authentication and authorization platforms.
-- Apply the principle of least privilege.
-- Implement just-in-time access.
+* Enforce multi-factor authentication for users.
+* Use strong authentication and authorization platforms.
+* Apply the principle of least privilege.
+* Implement just-in-time access.
#### Enforce multi-factor authentication for users
-Use two-factor authentication. Two-factor authentication is the current
-standard for authentication and authorization because it avoids the
-security weaknesses that are inherent in username and password types of
-authentication. Access to the Azure management interfaces (Azure
-portal/remote PowerShell) and to customer-facing services should be
-designed and configured to use [Azure Multi-Factor
-Authentication](../../active-directory/authentication/concept-mfa-howitworks.md).
+Use two-factor authentication. Two-factor authentication is the current standard for authentication and authorization because it avoids the security weaknesses that are inherent in username and password types of authentication. Access to the Azure management interfaces (Azure portal/remote PowerShell) and to customer-facing services should be designed and configured to use [Azure AD Multi-Factor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md).
#### Use strong authentication and authorization platforms
-Use platform-supplied authentication and authorization mechanisms
-instead of custom code. This is because developing custom authentication
-code can be prone to error. Commercial code (for example, from
-Microsoft) often is extensively reviewed for security. [Azure Active
-Directory (Azure
-AD)](../../active-directory/fundamentals/active-directory-whatis.md)
-is the Azure solution for identity and access management. These Azure AD
-tools and services help with secure development:
--- [Microsoft identity platform](../../active-directory/develop/index.yml)
-is a set of components that developers use to build apps that
-securely sign in users. The platform assists developers who are building
-single-tenant, line-of-business (LOB) apps and developers who are
-looking to develop multi-tenant apps. In addition to basic sign-in, apps
-built by using the Microsoft identity platform can call Microsoft APIs
-and custom APIs. The Microsoft identity platform supports industry-standard
-protocols like OAuth 2.0 and OpenID Connect.
--- [Azure Active Directory B2C (Azure AD
-B2C)](../../active-directory-b2c/index.yml) is an
-identity management service you can use to customize and control how
-customers sign up, sign in, and manage their profiles when they use
-your applications. This includes applications that are developed for
-iOS, Android, and .NET, among others. Azure AD B2C enables these
-actions while protecting customer identities.
+Use platform-supplied authentication and authorization mechanisms instead of custom code. This is because developing custom authentication code can be prone to error. Commercial code (for example, from Microsoft) often is extensively reviewed for security. [Azure Active Directory](../../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) is the Azure solution for identity and access management. These Azure AD tools and services help with secure development:
+
+* [Microsoft identity platform](../../active-directory/develop/index.yml) is a set of components that developers use to build apps that securely sign in users. The platform assists developers who are building single-tenant, line-of-business (LOB) apps and developers who are looking to develop multi-tenant apps. In addition to basic sign-in, apps built by using the Microsoft identity platform can call Microsoft APIs and custom APIs. The Microsoft identity platform supports industry-standard protocols like OAuth 2.0 and OpenID Connect.
+
+* [Azure Active Directory B2C](../../active-directory-b2c/index.yml) (Azure AD B2C) is an identity management service you can use to customize and control how customers sign up, sign in, and manage their profiles when they use your applications. This includes applications that are developed for iOS, Android, and .NET, among others. Azure AD B2C enables these actions while protecting customer identities.
#### Apply the principle of least privilege
-The concept of [least
-privilege](https://en.wikipedia.org/wiki/Principle_of_least_privilege)
-means giving users the precise level of access and control they need to
-do their jobs and nothing more.
-
-Would a software developer need domain admin rights? Would an
-administrative assistant need access to administrative controls on their
-personal computer? Evaluating access to software is no different. If you
-use [Azure role-based access control
-(Azure RBAC)](../../role-based-access-control/overview.md)
-to give users different abilities and authority in your application, you
-wouldn't give everyone access to everything. By limiting access to what
-is required for each role, you limit the risk of a security issue
-occurring.
-
-Ensure that your application enforces [least
-privilege](/windows-server/identity/ad-ds/plan/security-best-practices/implementing-least-privilege-administrative-models#in-applications)
-throughout its access patterns.
+The concept of [least privilege](https://en.wikipedia.org/wiki/Principle_of_least_privilege) means giving users the precise level of access and control they need to do their jobs and nothing more.
+
+Would a software developer need domain admin rights? Would an administrative assistant need access to administrative controls on their personal computer? Evaluating access to software is no different. If you use [Azure role-based access control](../../role-based-access-control/overview.md) (Azure RBAC) to give users different abilities and authority in your application, you wouldn't give everyone access to everything. By limiting access to what is required for each role, you limit the risk of a security issue occurring.
+
+Ensure that your application enforces [least privilege](/windows-server/identity/ad-ds/plan/security-best-practices/implementing-least-privilege-administrative-models#in-applications) throughout its access patterns.
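The deny-by-default shape of a least-privilege check can be sketched as follows. This is a hypothetical illustration, not Azure RBAC itself; the role and permission names are invented for the example.

```python
# Hypothetical sketch of least privilege: each role gets only the
# permissions it needs, and anything not explicitly granted is denied.
ROLE_PERMISSIONS = {
    "reader": {"read"},
    "contributor": {"read", "write"},
    "owner": {"read", "write", "delete", "assign_roles"},
}

def is_allowed(role: str, action: str) -> bool:
    # Unknown roles or actions fall through to "deny" (fail closed).
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("reader", "write"))   # False: readers can't write
print(is_allowed("owner", "delete"))   # True
```

The important design choice is the default: an unrecognized role or action is denied rather than allowed.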
> [!NOTE]
-> The rules of least privilege need to apply to the software and to the people creating the software. Software developers can be a huge risk to IT security if they are given too much access. The consequences can be severe if a developer has malicious intent or is given too much access. We recommend that the rules of least privilege be applied to developers throughout the development lifecycle.
+> The rules of least privilege need to apply to the software and to the people creating the software. Software developers can be a huge risk to IT security if they're given too much access. The consequences can be severe if a developer has malicious intent or is given too much access. We recommend that the rules of least privilege be applied to developers throughout the development lifecycle.
#### Implement just-in-time access
-Implement *just-in-time* (JIT) access to further lower the exposure time
-of privileges. Use [Azure AD Privileged Identity
-Management](../../active-directory/roles/security-planning.md#stage-3-take-control-of-administrator-activity)
+Implement *just-in-time* (JIT) access to further lower the exposure time of privileges. Use [Azure AD Privileged Identity Management](../../active-directory/roles/security-planning.md#stage-3-take-control-of-administrator-activity)
to:
-- Give users the permissions they need only JIT.
-- Assign roles for a shortened duration with confidence that the privileges are revoked automatically.
+* Give users the permissions they need only JIT.
+* Assign roles for a shortened duration with confidence that the privileges are revoked automatically.
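The expiry behavior behind JIT access can be sketched in a few lines. This is a simplified illustration of the concept, not how Privileged Identity Management is implemented; the function names are invented for the example.

```python
import time

# Hypothetical sketch of a just-in-time grant: the role assignment
# carries an expiry, and any check treats an expired grant as revoked,
# so privileges disappear automatically with no manual cleanup.
def grant_jit(role: str, duration_seconds: float) -> dict:
    return {"role": role, "expires_at": time.monotonic() + duration_seconds}

def is_active(grant: dict) -> bool:
    return time.monotonic() < grant["expires_at"]

grant = grant_jit("contributor", duration_seconds=0.05)
print(is_active(grant))   # True: active immediately after the grant
time.sleep(0.1)
print(is_active(grant))   # False: automatically "revoked" after expiry
```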
-### Require re-authentication for important transactions
+### Require reauthentication for important transactions
-[Cross-site request
-forgery](/aspnet/core/security/anti-request-forgery)
-(also known as *XSRF* or *CSRF*) is an attack against web-hosted apps in
-which a malicious web app influences the interaction between a client
-browser and a web app that trusts that browser. Cross-site request
-forgery attacks are possible because web browsers send some types of
-authentication tokens automatically with every request to a website.
-This form of exploitation is also known as a *one-click attack* or
-*session riding* because the attack takes advantage of the user's
-previously authenticated session.
+[Cross-site request forgery](/aspnet/core/security/anti-request-forgery) (also known as *XSRF* or *CSRF*) is an attack against web-hosted apps in which a malicious web app influences the interaction between a client browser and a web app that trusts that browser. Cross-site request forgery attacks are possible because web browsers send some types of authentication tokens automatically with every request to a website. This form of exploitation is also known as a *one-click attack* or *session riding* because the attack takes advantage of the user's previously authenticated session.
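One widely used mitigation is the synchronizer (anti-forgery) token pattern: the server issues a token tied to the user's session and rejects any state-changing request that doesn't echo it back, which a cross-site attacker cannot do. A minimal sketch, with a simplified session model:

```python
import hmac
import hashlib
import secrets

# Hypothetical sketch of the synchronizer-token CSRF defense.
SERVER_KEY = secrets.token_bytes(32)  # kept server-side, never sent raw

def issue_token(session_id: str) -> str:
    # Bind the anti-forgery token to the authenticated session.
    return hmac.new(SERVER_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def verify_token(session_id: str, token: str) -> bool:
    expected = issue_token(session_id)
    return hmac.compare_digest(expected, token)  # constant-time comparison

token = issue_token("session-123")
print(verify_token("session-123", token))      # True: legitimate form post
print(verify_token("session-123", "forged"))   # False: cross-site forgery
```

Frameworks such as ASP.NET Core provide this mechanism built in; prefer the framework's implementation over rolling your own.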
The best way to defend against this kind of attack is to ask the user for something that only the user can provide before every important
manual techniques to find keys and secrets that are stored in code
repositories like GitHub. Don't put keys and secrets in these public code repositories or on any other server.
-Always put your keys, certificates, secrets, and connection strings in a
-key management solution. You can use a centralized solution in which
-keys and secrets are stored in hardware security modules (HSMs). Azure
-provides you with an HSM in the cloud with [Azure Key
-Vault](../../key-vault/general/overview.md).
+Always put your keys, certificates, secrets, and connection strings in a key management solution. You can use a centralized solution in which keys and secrets are stored in hardware security modules (HSMs). Azure provides you with an HSM in the cloud with [Azure Key Vault](../../key-vault/general/overview.md).
Key Vault is a *secret store*: it's a centralized cloud service for storing application secrets. Key Vault keeps your confidential data safe
Label all applicable data as sensitive when you design your data
formats. Ensure that the application treats the applicable data as sensitive. These practices can help you protect your sensitive data:
-- Use encryption.
-- Avoid hard-coding secrets like keys and passwords.
-- Ensure that access controls and auditing are in place.
+* Use encryption.
+* Avoid hard-coding secrets like keys and passwords.
+* Ensure that access controls and auditing are in place.
#### Use encryption
-Protecting data should be an essential part of your security strategy.
-If your data is stored in a database or if it moves back and forth
-between locations, use encryption of [data at
-rest](../fundamentals/encryption-atrest.md)
-(while in the database) and encryption of [data in
-transit](../fundamentals/data-encryption-best-practices.md#protect-data-in-transit)
-(on its way to and from the user, the database, an API, or service
-endpoint). We recommend that you always use SSL/TLS protocols to
-exchange data. Ensure that you use the latest version of TLS for
-encryption (currently, this is version 1.2).
+Protecting data should be an essential part of your security strategy. If your data is stored in a database or if it moves back and forth between locations, use encryption of [data at rest](../fundamentals/encryption-atrest.md) (while in the database) and encryption of [data in transit](../fundamentals/data-encryption-best-practices.md#protect-data-in-transit) (on its way to and from the user, the database, an API, or service endpoint). We recommend that you always use SSL/TLS protocols to exchange data. Ensure that you use the latest version of TLS for encryption (currently, this is version 1.2).
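When you construct TLS clients in your own code, you can pin the minimum protocol version so weaker versions are never negotiated. A minimal sketch using Python's standard `ssl` module:

```python
import ssl

# Refuse anything older than TLS 1.2 and keep certificate
# verification on (the default for create_default_context).
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

print(context.minimum_version)  # TLSVersion.TLSv1_2
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
```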
#### Avoid hard-coding
-Some things should never be hard-coded in your software. Some examples
-are hostnames or IP addresses, URLs, email addresses, usernames,
-passwords, storage account keys, and other cryptographic keys. Consider
-implementing requirements around what can or can't be hard-coded in your
-code, including in the comment sections of your code.
-
-When you put comments in your code, ensure that you don't save any
-sensitive information. This includes your email address, passwords,
-connection strings, information about your application that would only
-be known by someone in your organization, and anything else that might
-give an attacker an advantage in attacking your application or
-organization.
-
-Basically, assume that everything in your development project will be
-public knowledge when it is deployed. Avoid including sensitive data of
-any kind in the project.
-
-Earlier, we discussed [Azure Key
-Vault](../../key-vault/general/overview.md). You
-can use Key Vault to store secrets like keys and passwords instead of
-hard-coding them. When you use Key Vault in combination with managed
-identities for Azure resources, your Azure web app can access secret
-configuration values easily and securely without storing any secrets in
-your source control or configuration. To learn more, see [Manage secrets
-in your server apps with Azure Key
-Vault](/training/modules/manage-secrets-with-azure-key-vault/).
+Some things should never be hard-coded in your software. Some examples are hostnames or IP addresses, URLs, email addresses, usernames, passwords, storage account keys, and other cryptographic keys. Consider implementing requirements around what can or can't be hard-coded in your code, including in the comment sections of your code.
+
+When you put comments in your code, ensure that you don't save any sensitive information. This includes your email address, passwords, connection strings, information about your application that would only be known by someone in your organization, and anything else that might give an attacker an advantage in attacking your application or organization.
+
+Basically, assume that everything in your development project will be public knowledge when it's deployed. Avoid including sensitive data of any kind in the project.
+
+Earlier, we discussed [Azure Key Vault](../../key-vault/general/overview.md). You can use Key Vault to store secrets like keys and passwords instead of hard-coding them. When you use Key Vault in combination with managed identities for Azure resources, your Azure web app can access secret configuration values easily and securely without storing any secrets in your source control or configuration. To learn more, see [Manage secrets in your server apps with Azure Key Vault](/training/modules/manage-secrets-with-azure-key-vault/).
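The pattern of keeping secrets out of source can be sketched as reading them from the environment (populated at deployment time by a secret store such as Key Vault) and failing loudly when one is missing. The secret name below is invented for the example.

```python
import os

# Hypothetical sketch: configuration comes from the environment,
# never from a hard-coded literal in source control.
def get_required_secret(name: str) -> str:
    value = os.environ.get(name)
    if not value:
        # Failing loudly beats silently falling back to a default.
        raise RuntimeError(f"missing required secret: {name}")
    return value

os.environ["DEMO_DB_PASSWORD"] = "example-only"   # simulated injection
print(get_required_secret("DEMO_DB_PASSWORD"))    # example-only
```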
### Implement fail-safe measures
-Your application must be able to handle
-[errors](/dotnet/standard/exceptions/) that
-occur during execution in a consistent manner. The application should
-catch all errors and either fail safe or closed.
+Your application must be able to handle [errors](/dotnet/standard/exceptions/) that occur during execution in a consistent manner. The application should catch all errors and either fail safe or closed.
-You should also ensure that errors are logged with sufficient user
-context to identify suspicious or malicious activity. Logs should be
-retained for a sufficient time to allow delayed forensic analysis. Logs
-should be in a format that can be easily consumed by a log management
-solution. Ensure that alerts for errors that are related to security are
-triggered. Insufficient logging and monitoring allows attackers to
-further attack systems and maintain persistence.
+You should also ensure that errors are logged with sufficient user context to identify suspicious or malicious activity. Logs should be retained for a sufficient time to allow delayed forensic analysis. Logs should be in a format that can be easily consumed by a log management solution. Ensure that alerts for errors that are related to security are triggered. Insufficient logging and monitoring allow attackers to further attack systems and maintain persistence.
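The fail-closed idea can be sketched as follows: if a security check itself errors, deny access and log enough context for later investigation. This is a hypothetical illustration; the function names are invented, and the `ConnectionError` simulates a dependency failure.

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("authz")

def check_permission(user: str, action: str) -> bool:
    raise ConnectionError("policy store unreachable")  # simulated failure

def is_authorized(user: str, action: str) -> bool:
    try:
        return check_permission(user, action)
    except Exception:
        # Log with user context so the event can be investigated later.
        log.exception("authorization check failed: user=%s action=%s",
                      user, action)
        return False  # fail closed: deny on error

print(is_authorized("alice", "delete"))  # False
```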
### Take advantage of error and exception handling
-Implementing correct error and [exception
-handling](/dotnet/standard/exceptions/best-practices-for-exceptions)
-is an important part of defensive coding. Error and exception handling
-are critical to making a system reliable and secure. Mistakes in error
-handling can lead to different kinds of security vulnerabilities, such
-as leaking information to attackers and helping attackers understand
-more about your platform and design.
+Implementing correct error and [exception handling](/dotnet/standard/exceptions/best-practices-for-exceptions) is an important part of defensive coding. Error and exception handling are critical to making a system reliable and secure. Mistakes in error handling can lead to different kinds of security vulnerabilities, such as leaking information to attackers and helping attackers understand more about your platform and design.
Ensure that:
-- You handle exceptions in a centralized manner to avoid duplicated
-[try/catch
-blocks](/dotnet/standard/exceptions/how-to-use-the-try-catch-block-to-catch-exceptions)
-in the code.
+* You handle exceptions in a centralized manner to avoid duplicated [try/catch blocks](/dotnet/standard/exceptions/how-to-use-the-try-catch-block-to-catch-exceptions) in the code.
-- All unexpected behaviors are handled inside the application.
+* All unexpected behaviors are handled inside the application.
-- Messages that are displayed to users don't leak critical data but do provide enough information to explain the issue.
+* Messages that are displayed to users don't leak critical data but do provide enough information to explain the issue.
-- Exceptions are logged and that they provide enough information for forensics or incident response teams to investigate.
+* Exceptions are logged and that they provide enough information for forensics or incident response teams to investigate.
-[Azure Logic
-Apps](../../logic-apps/logic-apps-overview.md)
-provides a first-class experience for [handling errors and
-exceptions](../../logic-apps/logic-apps-exception-handling.md)
-that are caused by dependent systems. You can use Logic Apps to create
-workflows to automate tasks and processes that integrate apps, data,
-systems, and services across enterprises and
-organizations.
+[Azure Logic Apps](../../logic-apps/logic-apps-overview.md) provides a first-class experience for [handling errors and exceptions](../../logic-apps/logic-apps-exception-handling.md) that are caused by dependent systems. You can use Logic Apps to create workflows to automate tasks and processes that integrate apps, data, systems, and services across enterprises and organizations.
### Use logging and alerting
-[Log](/aspnet/core/fundamentals/logging/)
-your security issues for security investigations and trigger alerts
-about issues to ensure that people know about problems in a timely
-manner. Enable auditing and logging on all components. Audit logs should
-capture user context and identify all important events.
+[Log](/aspnet/core/fundamentals/logging/) your security issues for security investigations and trigger alerts about issues to ensure that people know about problems in a timely manner. Enable auditing and logging on all components. Audit logs should capture user context and identify all important events.
Check that you don't log any sensitive data that a user submits to your site. Examples of sensitive data include:
-- User credentials
-- Social Security numbers or other identifying information
-- Credit card numbers or other financial information
-- Health information
-- Private keys or other data that can be used to decrypt encrypted information
-- System or application information that can be used to more effectively attack the application
+* User credentials
+* Social Security numbers or other identifying information
+* Credit card numbers or other financial information
+* Health information
+* Private keys or other data that can be used to decrypt encrypted information
+* System or application information that can be used to more effectively attack the application
Ensure that the application monitors user management events such as successful and failed user logins, password resets, password changes,
you detect and react to potentially suspicious behavior. It also allows
you to gather operations data, like who is accessing the application.

## Next steps
+
In the following articles, we recommend security controls and activities that can help you develop and deploy secure applications.
-- [Develop secure applications](secure-develop.md)
-- [Deploy secure applications](secure-deploy.md)
+* [Develop secure applications](secure-develop.md)
+* [Deploy secure applications](secure-deploy.md)
security Secure Dev Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/secure-dev-overview.md
Title: Secure development best practices on Microsoft Azure
description: Best practices to help you develop more secure code and deploy a more secure application in the cloud.
- Previously updated : 06/11/2019
+ Last updated : 02/06/2023
# Secure development best practices on Azure
-This series of articles presents security activities and controls to consider when you develop applications for the cloud. The phases of the Microsoft Security Development Lifecycle (SDL) and security
-questions and concepts to consider during each phase of the lifecycle are covered. The goal is to help you define activities and Azure services that you can use in each phase of the lifecycle to design, develop, and deploy a more secure application.
+
+This series of articles presents security activities and controls to consider when you develop applications for the cloud. The phases of the Microsoft Security Development Lifecycle (SDL) and security questions and concepts to consider during each phase of the lifecycle are covered. The goal is to help you define activities and Azure services that you can use in each phase of the lifecycle to design, develop, and deploy a more secure application.
The recommendations in the articles come from our experience with Azure security and from the experiences of our customers. You can use these articles as a reference for what you should consider during a specific phase of your development project, but we suggest that you also read through all of the articles from beginning to end at least once. Reading all articles introduces you to concepts that you might have missed in earlier phases of your project. Implementing these concepts before you release your product can help you build secure software, address security compliance requirements, and reduce development costs.
These articles are intended to be a resource for software designers, developers,
## Overview
-Security is one of the most important aspects of any application, and
-it's not a simple thing to get right. Fortunately, Azure provides many
-services that can help you secure your application in the cloud. These articles address activities and Azure services you can implement at each
-stage of your software development lifecycle to help you develop more secure code and deploy a more secure application in the cloud.
+Security is one of the most important aspects of any application, and it's not a simple thing to get right. Fortunately, Azure provides many services that can help you secure your application in the cloud. These articles address activities and Azure services you can implement at each stage of your software development lifecycle to help you develop more secure code and deploy a more secure application in the cloud.
## Security development lifecycle
-Following best practices for secure software development requires
-integrating security into each phase of the software development
-lifecycle, from requirement analysis to maintenance, regardless of the
-project methodology
-([waterfall](https://en.wikipedia.org/wiki/Waterfall_model),
-[agile](https://en.wikipedia.org/wiki/Agile_software_development), or
-[DevOps](https://en.wikipedia.org/wiki/DevOps)). In the wake of
-high-profile data breaches and the exploitation of operational security
-flaws, more developers are understanding that security needs to be
-addressed throughout the development process.
-
-The later you fix a problem in your development lifecycle, the more that
-fix will cost you. Security issues are no exception. If you disregard
-security issues in the early phases of your software development, each
-phase that follows might inherit the vulnerabilities of the preceding
-phase. Your final product will have accumulated multiple security issues
-and the possibility of a breach. Building security into each phase of
-the development lifecycle helps you catch issues early, and it helps you
-reduce your development costs.
-
-We follow the phases of the Microsoft [Security Development Lifecycle
-(SDL)](/previous-versions/windows/desktop/cc307891(v=msdn.10))
-to introduce activities and Azure services that you can use to fulfill
-secure software development practices in each phase of the lifecycle.
+Following best practices for secure software development requires integrating security into each phase of the software development lifecycle, from requirement analysis to maintenance, regardless of the project methodology ([waterfall](https://en.wikipedia.org/wiki/Waterfall_model), [agile](https://en.wikipedia.org/wiki/Agile_software_development), or [DevOps](https://en.wikipedia.org/wiki/DevOps)). In the wake of high-profile data breaches and the exploitation of operational security flaws, more developers are understanding that security needs to be addressed throughout the development process.
-The SDL phases are:
+The later you fix a problem in your development lifecycle, the more that fix will cost you. Security issues are no exception. If you disregard security issues in the early phases of your software development, each phase that follows might inherit the vulnerabilities of the preceding phase. Your final product will have accumulated multiple security issues and the possibility of a breach. Building security into each phase of the development lifecycle helps you catch issues early, and it helps you reduce your development costs.
- - Training
- - Requirements
- - Design
- - Implementation
- - Verification
- - Release
- - Response
+We follow the phases of the [Microsoft Security Development Lifecycle](https://www.microsoft.com/securityengineering/sdl/) (SDL) to introduce activities and Azure services that you can use to fulfill secure software development practices in each phase of the lifecycle.
+
+The SDL phases are:
![Security Development Lifecycle](./media/secure-dev-overview/01-sdl-phase.png)

In these articles we group the SDL phases into design, develop, and deploy.
-## Engage your organization's security team
+## Engage your organization's security team
-Your organization might have a formal application security program that
-assists you with security activities from start to finish during the
-development lifecycle. If your organization has security and compliance
-teams, be sure to engage them before you begin developing your
-application. Ask them at each phase of the SDL whether there are any
-tasks you missed.
+Your organization might have a formal application security program that assists you with security activities from start to finish during the development lifecycle. If your organization has security and compliance teams, be sure to engage them before you begin developing your application. Ask them at each phase of the SDL whether there are any tasks you missed.
We understand that many readers might not have a security or compliance team to engage. These articles can help guide you in the security questions and decisions you need to consider at each phase of the SDL.
team to engage. These articles can help guide you in the security questions and
Use the following resources to learn more about developing secure applications and to help secure your applications on Azure:
-[Microsoft Security Development Lifecycle
-(SDL)](/previous-versions/windows/desktop/cc307891(v=msdn.10))
-– The SDL is a software development process from Microsoft that helps
-developers build more secure software. It helps you address security
-compliance requirements while reducing development costs.
-
-[Open Web Application Security Project
-(OWASP)](https://www.owasp.org/) – OWASP is an online
-community that produces freely available articles, methodologies,
-documentation, tools, and technologies in the field of web application
-security.
-
-[Pushing Left, Like a
-Boss](https://wehackpurple.com/pushing-left-like-a-boss-part-1/)
-– A series of online articles that outlines
-different types of application security activities that developers should complete to create more secure code.
-
-[Microsoft identity
-platform](../../active-directory/develop/index.yml) –
-The Microsoft identity platform is an evolution of the Azure AD identity
-service and developer platform. It's a full-featured platform that
-consists of an authentication service, open-source libraries,
-application registration and configuration, full developer
-documentation, code samples, and other developer content. The Microsoft
-identity platform supports industry-standard protocols like OAuth 2.0
-and OpenID Connect.
-
-[Security best practices for Azure
-solutions](https://azure.microsoft.com/resources/security-best-practices-for-azure-solutions/)
-– A collection of security best practices to use when you design,
-deploy, and manage cloud solutions by using Azure. This paper is
-intended to be a resource for IT pros. This might include designers,
-architects, developers, and testers who build and deploy secure Azure
-solutions.
-
-[Security and Compliance Blueprints on
-Azure](../../governance/blueprints/samples/azure-security-benchmark-foundation/index.md) –
-Azure Security and Compliance Blueprints are resources that can help you
-build and launch cloud-powered applications that comply with stringent
-regulations and standards.
+[Microsoft Security Development Lifecycle](https://www.microsoft.com/securityengineering/sdl/) (SDL) - The SDL is a software development process from Microsoft that helps developers build more secure software. It helps you address security compliance requirements while reducing development costs.
+[Open Web Application Security Project](https://www.owasp.org/) (OWASP) - OWASP is an online community that produces freely available articles, methodologies, documentation, tools, and technologies in the field of web application security.
+[Pushing Left, Like a Boss](https://wehackpurple.com/pushing-left-like-a-boss-part-1/) - A series of online articles that outline different types of application security activities that developers should complete to create more secure code.
+
+[Microsoft identity platform](../../active-directory/develop/index.yml) - The Microsoft identity platform is an evolution of the Azure AD identity service and developer platform. It's a full-featured platform that consists of an authentication service, open-source libraries, application registration and configuration, full developer documentation, code samples, and other developer content. The Microsoft identity platform supports industry-standard protocols like OAuth 2.0 and OpenID Connect.
+
+[Security best practices for Azure solutions](https://azure.microsoft.com/resources/security-best-practices-for-azure-solutions/) - A collection of security best practices to use when you design, deploy, and manage cloud solutions by using Azure. This paper is intended to be a resource for IT pros. This might include designers, architects, developers, and testers who build and deploy secure Azure solutions.
+
+[Security and Compliance Blueprints on Azure](../../governance/blueprints/samples/azure-security-benchmark-foundation/index.md) - Azure Security and Compliance Blueprints are resources that can help you build and launch cloud-powered applications that comply with stringent regulations and standards.
## Next steps
+
In the following articles, we recommend security controls and activities that can help you design, develop, and deploy secure applications.
-- [Design secure applications](secure-design.md)
-- [Develop secure applications](secure-develop.md)
-- [Deploy secure applications](secure-deploy.md)
+* [Design secure applications](secure-design.md)
+* [Develop secure applications](secure-develop.md)
+* [Deploy secure applications](secure-deploy.md)
sentinel Ama Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/ama-migrate.md
The following tables show gap analyses for the log types that currently rely on
|**Windows Firewall Logs** | - | [Windows Firewall data connector](data-connectors-reference.md#windows-firewall) |
|**Performance counters** | Collection only | Collection only |
|**Windows Event Logs** | Collection only | Collection only |
-|**Custom logs** | Collection only | Collection only |
+|**Custom logs (text)** | Collection only | Collection only |
|**IIS logs** | Collection only | Collection only |
|**Multi-homing** | Collection only | Collection only |
|**Application and service logs** | - | Collection only |
The following tables show gap analyses for the log types that currently rely on
|**Syslog** | Collection only | [Syslog data connector](connect-syslog.md) |
|**Common Event Format (CEF)** | [CEF via AMA data connector](connect-cef-ama.md) | [CEF data connector](connect-common-event-format.md) |
|**Sysmon** | Collection only | Collection only |
-|**Custom logs** | - | Collection only |
+|**Custom logs (text)** | Collection only | Collection only |
|**Multi-homing** | Collection only | - |

## Recommended migration plan
sentinel Audit Table Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/audit-table-reference.md
+
+ Title: Microsoft Sentinel audit tables reference
+description: Learn about the fields in the SentinelAudit tables, used for audit monitoring and analysis.
+++ Last updated : 01/17/2023+++
+# Microsoft Sentinel audit tables reference
+
+This article describes the fields in the SentinelAudit tables, which are used for auditing user activity in Microsoft Sentinel resources. With the Microsoft Sentinel audit feature, you can keep tabs on the actions taken in your SIEM and get information on any changes made to your environment and the users that made those changes.
+
+Learn how to [query and use the audit table](monitor-analytics-rule-integrity.md) for deeper monitoring and visibility of actions in your environment.
+
+> [!IMPORTANT]
+>
+> The *SentinelAudit* data table is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+
+Microsoft Sentinel's audit feature currently covers only the analytics rule resource type, though other types may be added later. Many of the data fields in the following tables will apply across resource types, but some have specific applications for each type. The descriptions below will indicate one way or the other.
+
+## SentinelAudit table columns schema
+
+The following table describes the columns and data generated in the SentinelAudit data table:
+
+| ColumnName | ColumnType | Description |
+| | -- | -- |
+| **TenantId** | String | The tenant ID for your Microsoft Sentinel workspace. |
+| **TimeGenerated** | Datetime | The time (UTC) at which the audit event occurred. |
+| <a name="operationname_audit"></a>**OperationName** | String | The Azure operation being recorded. For example:<br>- `Microsoft.SecurityInsights/alertRules/Write`<br>- `Microsoft.SecurityInsights/alertRules/Delete` |
+| <a name="sentinelresourceid_audit"></a>**SentinelResourceId** | String | The unique identifier of the Microsoft Sentinel workspace and the associated resource on which the audit event occurred. |
+| **SentinelResourceName** | String | The resource name. For analytics rules, this is the rule name. |
+| <a name="status_audit"></a>**Status** | String | Indicates `Success` or `Failure` for the [OperationName](#operationname_audit). |
+| **Description** | String | Describes the operation, including extended data as needed. For example, for failures, this column might indicate the failure reason. |
+| **WorkspaceId** | String | The workspace GUID on which the audit issue occurred. The full Azure Resource Identifier is available in the [SentinelResourceID](#sentinelresourceid_audit) column. |
+| **SentinelResourceType** | String | The Microsoft Sentinel resource type being monitored. |
+| **SentinelResourceKind** | String | The specific type of resource being monitored. For example, for analytics rules: `NRT`. |
+| **CorrelationId** | String | The event correlation ID in GUID format. |
+| **ExtendedProperties** | Dynamic (json) | A JSON bag that varies by the [OperationName](#operationname_audit) value and the [Status](#status_audit) of the event.<br>See [Extended properties](#extended-properties) for details. |
+| **Type** | String | `SentinelAudit` |
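To make the schema above concrete, here's an illustrative sketch of a query that surfaces recent failed operations. The seven-day window and the use of the `_SentinelAudit()` helper function are assumptions to adjust for your environment:

```kusto
// Sketch: count recent failed audit operations per rule.
// Column names come from the schema table above; the time window is arbitrary.
_SentinelAudit()
| where TimeGenerated > ago(7d)
| where Status == "Failure"
| summarize FailureCount = count() by OperationName, SentinelResourceName
| order by FailureCount desc
```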
+
+## Operation names for different resource types
+
+| Resource types | Operation names | Statuses |
+| -- | | -- |
+| **[Analytics rules](monitor-analytics-rule-integrity.md)** | - `Microsoft.SecurityInsights/alertRules/Write`<br>- `Microsoft.SecurityInsights/alertRules/Delete` | Success<br>Failure |
+
+## Extended properties
+
+### Analytics rules
+
+Extended properties for analytics rules reflect certain [rule settings](detect-threats-custom.md).
+
+| ColumnName | ColumnType | Description |
+| | -- | |
+| **CallerIpAddress** | String | The IP address from which the action was initiated. |
+| **CallerName** | String | The user or application that initiated the action. |
+| **OriginalResourceState** | Dynamic (json) | A JSON bag that describes the rule before the change. |
+| **Reason** | String | The reason why the operation failed. For example: `No permissions`. |
+| **ResourceDiffMemberNames** | Array\[String\] | An array of the properties that changed on the relevant resource. For example: `['custom_details','look_back']`. |
+| **ResourceDisplayName** | String | Name of the analytics rule on which the audit issue occurred. |
+| **ResourceGroupName** | String | Resource group of the workspace on which the audit issue occurred. |
+| **ResourceId** | String | The resource ID of the analytics rule on which the audit issue occurred. |
+| **SubscriptionId** | String | The subscription ID of the workspace on which the audit issue occurred. |
+| **UpdatedResourceState** | Dynamic (json) | A JSON bag that describes the rule after the change. |
+| **Uri** | String | The full-path resource ID of the analytics rule. |
+| **WorkspaceId** | String | The resource ID of the workspace on which the audit issue occurred. |
+| **WorkspaceName** | String | The name of the workspace on which the audit issue occurred. |
++
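As a hedged example of working with the extended properties listed above, the following sketch extracts the caller and the changed properties for rule modifications. The `ExtendedProperties` keys are taken from the table above; verify the exact shapes against your own data:

```kusto
// Sketch: who changed an analytics rule, from where, and which properties changed.
// ExtendedProperties keys follow the table above.
_SentinelAudit()
| where OperationName == "Microsoft.SecurityInsights/alertRules/Write"
| extend Caller = tostring(ExtendedProperties.CallerName),
         CallerIp = tostring(ExtendedProperties.CallerIpAddress),
         ChangedProperties = ExtendedProperties.ResourceDiffMemberNames
| project TimeGenerated, SentinelResourceName, Caller, CallerIp, ChangedProperties
```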
+## Next steps
+
+- Learn about [auditing and health monitoring in Microsoft Sentinel](health-audit.md).
+- [Turn on auditing and health monitoring](enable-monitoring.md) in Microsoft Sentinel.
+- [Monitor the health of your automation rules and playbooks](monitor-automation-health.md).
+- [Monitor the health of your data connectors](monitor-data-connector-health.md).
+- [Monitor the health and integrity of your analytics rules](monitor-analytics-rule-integrity.md).
+- [SentinelHealth tables reference](health-table-reference.md)
sentinel Enable Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/enable-monitoring.md
Title: Turn on health monitoring in Microsoft Sentinel
+ Title: Turn on auditing and health monitoring in Microsoft Sentinel
description: Monitor supported data connectors by using the SentinelHealth data table.
- Previously updated : 11/07/2022
+ Last updated : 01/19/2023
-# Turn on health monitoring for Microsoft Sentinel (preview)
+# Turn on auditing and health monitoring for Microsoft Sentinel (preview)
+
+Monitor the health and audit the integrity of supported Microsoft Sentinel resources by turning on the auditing and health monitoring feature in Microsoft Sentinel's **Settings** page. Get insights on health drifts, such as the latest failure events or changes from success to failure states, and on unauthorized actions, and use this information to create notifications and other automated actions.
-Monitor the health of supported Microsoft Sentinel resources by turning on the health monitoring feature in Microsoft Sentinel's **Settings** page. Get insights on health drifts, such as the latest failure events or changes from success to failure states, and use this information to create notifications and other automated actions.
+To get health data from the *SentinelHealth* data table, or to get auditing information from the *SentinelAudit* data table, you must first turn on the Microsoft Sentinel auditing and health monitoring feature for your workspace.
-To get health data from the *SentinelHealth* data table, you must first turn on the Microsoft Sentinel health feature for your workspace.
+This article instructs you how to turn on these features.
-When the health feature is turned on, the *SentinelHealth* data table is created at the first success or failure event generated for supported resource types.
+When the feature is turned on, the *SentinelHealth* and *SentinelAudit* data tables are created at the first event generated for the selected resources.
-The following resource types are currently supported:
+The following resource types are currently supported for health monitoring:
+- Analytics rules (New!)
- Data connectors
- Automation rules
- Playbooks (Azure Logic Apps workflows)

> [!NOTE]
> When monitoring playbook health, you'll also need to collect Azure Logic Apps diagnostic events from your playbooks in order to get the full picture of your playbook activity. See [**Monitor the health of your automation rules and playbooks**](monitor-automation-health.md) for more information.
-To configure the retention time for your health events, see [Configure data retention and archive policies in Azure Monitor Logs](../azure-monitor/logs/data-retention-archive.md).
+Only the analytics rule resource type is currently supported for auditing.
++
+To configure the retention time for your audit and health events, see [Configure data retention and archive policies in Azure Monitor Logs](../azure-monitor/logs/data-retention-archive.md).
> [!IMPORTANT]
>
-> The *SentinelHealth* data table is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> The *SentinelHealth* and *SentinelAudit* data tables are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
>
-## Turn on health monitoring for your workspace
+## Turn on auditing and health monitoring for your workspace
1. In Microsoft Sentinel, under the **Configuration** menu on the left, select **Settings**.
1. Select **Settings** from the banner.
-1. Scroll down to the **Health monitoring** section that appears below, and select it to expand.
+1. Scroll down to the **Auditing and health monitoring** section that appears below, and select it to expand.
-1. Select **Configure Diagnostic Settings**.
+1. Select **Enable** to enable auditing and health monitoring across all resource types and to send the auditing and monitoring data to your Microsoft Sentinel workspace (and nowhere else).
+
+ Or, select the **Configure diagnostic settings** link to enable health monitoring only for the data collector and/or automation resources, or to configure advanced options, like additional places to send the data.
:::image type="content" source="media/enable-monitoring/enable-health-monitoring.png" alt-text="Screenshot shows how to get to the health monitoring settings.":::
-1. In the **Diagnostic settings** screen, select **+ Add diagnostic setting**.
+ If you selected **Enable**, then the button will gray out and change to read **Enabling...** and then **Enabled**. At that point, auditing and health monitoring is enabled, and you're done! The appropriate diagnostic settings were added behind the scenes, and you can view and edit them by selecting the **Configure diagnostic settings** link.
+
+1. If you selected **Configure diagnostic settings**, then in the **Diagnostic settings** screen, select **+ Add diagnostic setting**.
+
+ (If you're editing an existing setting, select it from the list of diagnostic settings.)
- In the **Diagnostic setting name** field, enter a meaningful name for your setting.
- - In the **Logs** column, select the appropriate **Categories** for the resource types you want to monitor, for example **Data Collection - Connectors**.
+ - In the **Logs** column, select the appropriate **Categories** for the resource types you want to monitor, for example **Data Collection - Connectors**. Select **allLogs** if you want to monitor analytics rules.
- Under **Destination details**, select **Send to Log Analytics workspace**, and select your **Subscription** and **Log Analytics workspace** from the dropdown menus.
+ :::image type="content" source="media/enable-monitoring/diagnostic-settings.png" alt-text="Screenshot of diagnostic settings screen for enabling auditing and health monitoring.":::
+
+ If you require, you may select other destinations to which to send your data, in addition to the Log Analytics workspace.
+ 1. Select **Save** on the top banner to save your new setting.
-The *SentinelHealth* data table is created at the first success or failure event generated for the selected resources.
+The *SentinelHealth* and *SentinelAudit* data tables are created at the first event generated for the selected resources.
-## Access the *SentinelHealth* table
+## Verify that the tables are receiving data
In the Microsoft Sentinel **Logs** page, run a query on the *SentinelHealth* table. For example:

```kusto
-SentinelHealth
+_SentinelHealth()
| take 20
```

## Next steps

-- Learn what [health monitoring in Microsoft Sentinel](health-audit.md) can do for you.
-- [Monitor the health of your Microsoft Sentinel data connectors](monitor-data-connector-health.md).
-- [Monitor the health of your Microsoft Sentinel automation rules](monitor-automation-health.md).
-- See more information about the [*SentinelHealth* table schema](health-table-reference.md).
+- Learn about [auditing and health monitoring in Microsoft Sentinel](health-audit.md).
+- [Monitor the health of your automation rules and playbooks](monitor-automation-health.md).
+- [Monitor the health of your data connectors](monitor-data-connector-health.md).
+- [Monitor the health and integrity of your analytics rules](monitor-analytics-rule-integrity.md).
+- See more information about the [*SentinelHealth*](health-table-reference.md) and [*SentinelAudit*](audit-table-reference.md) table schemas.
sentinel Health Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/health-audit.md
Title: Health monitoring in Microsoft Sentinel
+ Title: Auditing and health monitoring in Microsoft Sentinel
description: Learn about the Microsoft Sentinel health and audit feature, which monitors service health drifts and user actions.
- Previously updated : 08/19/2022
+ Last updated : 01/17/2023
-# Health monitoring in Microsoft Sentinel
+# Auditing and health monitoring in Microsoft Sentinel
-Microsoft Sentinel is a critical service for monitoring and ensuring your organization's information security, so you'll want to rest assured that it's always running smoothly. You'll want to be able to make sure that the service's many moving parts are always functioning as intended. You also might like to configure notifications of health drifts for relevant stakeholders who can take action. For example, you can configure email or Microsoft Teams messages to be sent to operations teams, managers, or officers, launch new tickets in your ticketing system, and so on.
+Microsoft Sentinel is a critical service for advancing and protecting the security of your organization's technological and information assets, so you'll want to rest assured that it's always running smoothly and free of interference. You'll want to be able to make sure that the service's many moving parts are always functioning as intended and that the service isn't being manipulated by unauthorized actions, whether by internal users or otherwise. You also might like to configure notifications of health drifts or unauthorized actions to be sent to relevant stakeholders who can respond or approve a response. For example, you can set conditions to trigger the sending of emails or Microsoft Teams messages to operations teams, managers, or officers, launch new tickets in your ticketing system, and so on.
-This article describes how Microsoft Sentinel's health monitoring feature lets you monitor the activity of some of the service's key resources.
+This article describes how Microsoft Sentinel's health monitoring and auditing features let you monitor the activity of some of the service's key resources and inspect logs of user actions within the service.
## Description
-This section describes the function and use cases of the health monitoring components.
+This section describes the function and use cases of the health monitoring and auditing components.
### Data storage
-Health data is collected in the *SentinelHealth* table in your Log Analytics workspace. The prevalent way you'll use this data is by querying the table.
+Health and audit data are collected in two tables in your Log Analytics workspace:
+
+- Health data is collected in the *SentinelHealth* table.
+- Audit data is collected in the *SentinelAudit* table.
+
+The prevalent way you'll use this data is by querying these tables.
+
+For best results, you should build your queries on the **pre-built functions** on these tables, ***_SentinelHealth()*** and ***_SentinelAudit()***, instead of querying the tables directly. These functions keep your queries backward compatible if the schemas of the tables themselves change.
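The guidance above amounts to sketches like the following, which go through the functions rather than the raw tables. The one-day window and the summarization are illustrative assumptions:

```kusto
// Sketch: latest health status per resource, queried via the pre-built function
// so that future schema changes don't break the query.
_SentinelHealth()
| where TimeGenerated > ago(1d)
| summarize arg_max(TimeGenerated, Status) by SentinelResourceName
```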
> [!IMPORTANT] >
-> - The *SentinelHealth* data table is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> - The *SentinelHealth* and *SentinelAudit* data tables are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
>
> - When monitoring the health of **playbooks**, you'll also need to capture Azure Logic Apps diagnostic events from your playbooks, in addition to the *SentinelHealth* data, in order to get the full picture of your playbook activity. Azure Logic Apps diagnostic data is collected in the *AzureDiagnostics* table in your workspace.

### Use cases
+#### Health
+ **Is the data connector running correctly?** [Is the data connector receiving data](./monitor-data-connector-health.md)? For example, if you've instructed Microsoft Sentinel to run a query every 5 minutes, you want to check whether that query is being performed, how it's performing, and whether there are any risks or vulnerabilities related to the query.
-**Are my SAP systems running correctly?**
+**Did an automation rule run as expected?**
-[Are the SAP systems managed by your organization running correctly](monitor-sap-system-health.md)? Are the systems up and running, or are they unreachable? Does Microsoft Sentinel identify these systems as production systems?
+[Did your automation rule run when it was supposed to](./monitor-automation-health.md)&mdash;that is, when its conditions were met? Did all the actions in the automation rule run successfully?
-**Did an automation rule run as expected?**
+**Did an analytics rule run as expected?**
-[Did my automation rule run when it was supposed to](./monitor-automation-health.md) - that is, when its conditions were met? Did all the actions in the automation rule run successfully?
+[Did your analytics rule run when it was supposed to, and did it generate results](monitor-analytics-rule-integrity.md)? If you're expecting to see particular incidents in your queue but you don't, you want to know whether the rule ran but didn't find anything (or enough things), or didn't run at all.
-## How Microsoft Sentinel presents health data
+#### Audit
-To dive into the health data that Microsoft Sentinel generates, you can:
+**Were unauthorized changes made to an analytics rule?**
-- Run queries on the *SentinelHealth* data table from the Microsoft Sentinel **Logs** blade.
+[Was something changed in the rule](monitor-analytics-rule-integrity.md)? You didn't get the results you expected from your analytics rule, and it didn't have any health issues. You want to see if any unplanned changes were made to the rule, and if so, what changes were made, by whom, from where, and when.
++
+## How Microsoft Sentinel presents health and audit data
+
+To start collecting health and audit data, you need to [enable health and audit monitoring](enable-monitoring.md) in the Microsoft Sentinel settings. Then you can dive into the health and audit data that Microsoft Sentinel collects:
+
+- Run queries on the *SentinelHealth* and *SentinelAudit* data tables from the Microsoft Sentinel **Logs** blade.
  - [Data connectors](monitor-data-connector-health.md#run-queries-to-detect-health-drifts)
  - [Automation rules and playbooks](monitor-automation-health.md#get-the-complete-automation-picture) (join query with Azure Logic Apps diagnostics)
+ - [Analytics rules](monitor-analytics-rule-integrity.md)
- Use the health monitoring workbooks provided in Microsoft Sentinel.
  - [Data connectors](monitor-data-connector-health.md#use-the-health-monitoring-workbook)
To dive into the health data that Microsoft Sentinel generates, you can:
- Export the data into various destinations, like your Log Analytics workspace, archiving to a storage account, and more. Learn about the [supported destinations](../azure-monitor/essentials/diagnostic-settings.md) for your logs.

## Next steps
-- [Turn on health monitoring](enable-monitoring.md) in Microsoft Sentinel.
-- Monitor the health of your [automation rules and playbooks](monitor-automation-health.md).
-- Monitor the health of your [data connectors](monitor-data-connector-health.md).
-- See more information about the [*SentinelHealth* table schema](health-table-reference.md).
+
+- [Turn on auditing and health monitoring](enable-monitoring.md) in Microsoft Sentinel.
+- [Monitor the health of your automation rules and playbooks](monitor-automation-health.md).
+- [Monitor the health of your data connectors](monitor-data-connector-health.md).
+- [Monitor the health and integrity of your analytics rules](monitor-analytics-rule-integrity.md).
+- See more information about the [*SentinelHealth*](health-table-reference.md) and [*SentinelAudit*](audit-table-reference.md) table schemas.
+
+See also:
+- Using the [Microsoft Sentinel Solution for SAP](sap/solution-overview.md)? [Monitor its health](monitor-sap-system-health.md) too.
sentinel Health Table Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/health-table-reference.md
Title: SentinelHealth tables reference
+ Title: Microsoft Sentinel health tables reference
description: Learn about the fields in the SentinelHealth tables, used for health monitoring and analysis. Previously updated : 11/08/2022 Last updated : 01/17/2023
-# SentinelHealth tables reference
+# Microsoft Sentinel health tables reference
This article describes the fields in the *SentinelHealth* table used for monitoring the health of Microsoft Sentinel resources. With the Microsoft Sentinel [health monitoring feature](health-audit.md), you can keep tabs on the proper functioning of your SIEM and get information on any health drifts in your environment.

Learn how to query and use the health table for deeper monitoring and visibility of actions in your environment:

- For [data connectors](monitor-data-connector-health.md)
- For [automation rules and playbooks](monitor-automation-health.md)
+- For [analytics rules](monitor-analytics-rule-integrity.md)
> [!IMPORTANT]
>
> The *SentinelHealth* data table is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
>
-Microsoft Sentinel's health monitoring feature covers different kinds of resources, such as [data connectors](monitor-data-connector-health.md) and [automation rules](monitor-automation-health.md). Many of the data fields in the following tables apply across resource types, but some have specific applications for each type. The descriptions below will indicate one way or the other.
+Microsoft Sentinel's health monitoring feature covers different kinds of resources (see the resource types in the **SentinelResourceType** field in the first table below). Many of the data fields in the following tables apply across resource types, but some have specific applications for each type. The descriptions below will indicate one way or the other.
## SentinelHealth table columns schema
The following table describes the columns and data generated in the SentinelHeal
| ColumnName | ColumnType | Description |
| - | -- | - |
| **TenantId** | String | The tenant ID for your Microsoft Sentinel workspace. |
-| **TimeGenerated** | Datetime | The time at which the health event occurred. |
-| <a name="operationname_health"></a>**OperationName** | String | The health operation. Possible values depend on the resource type.<br>See [Operation names for different resource types](#operation-names-for-different-resource-types) for details. |
-| <a name="sentinelresourceid_health"></a>**SentinelResourceId** | String | The unique identifier of the resource on which the health event occurred, and its associated Microsoft Sentinel workspace. |
-| **SentinelResourceName** | String | The resource name. |
-| <a name="status_health"></a>**Status** | String | Indicates the overall result of the operation. Possible values depend on the operation name.<br>See [Operation names for different resource types](#operation-names-for-different-resource-types) for details. |
-| **Description** | String | Describes the operation, including extended data as needed. For failures, this can include details of the failure reason. |
+| **TimeGenerated** | Datetime | The time (UTC) at which the health event occurred. |
+| <a name="operationname_health"></a>**OperationName** | String | The health operation. Possible values depend on the resource type.<br>See [Operation names for different resource types](#operation-names-for-different-resource-types) for details. |
+| <a name="sentinelresourceid_health"></a>**SentinelResourceId** | String | The unique identifier of the resource on which the health event occurred, and its associated Microsoft Sentinel workspace. |
+| **SentinelResourceName** | String | The name of the resource (connector, rule, or playbook). |
+| <a name="status_health"></a>**Status** | String | Indicates the overall result of the operation. Possible values depend on the operation name.<br>See [Operation names for different resource types](#operation-names-for-different-resource-types) for details. |
+| **Description** | String | Describes the operation, including extended data as needed. For failures, this can include details of the failure reason. |
| **Reason** | Enum | Shows a basic reason or error code for the failure of the resource. Possible values depend on the resource type. More detailed reasons can be found in the **Description** field. |
| **WorkspaceId** | String | The workspace GUID on which the health issue occurred. The full Azure Resource Identifier is available in the [SentinelResourceID](#sentinelresourceid_health) column. |
-| **SentinelResourceType** | String | The Microsoft Sentinel resource type being monitored.<br>Possible values: `Data connector`, `Automation rule`, `Playbook` |
-| **SentinelResourceKind** | String | A resource classification within the resource type.<br>- For data connectors, this is the type of connected data source. |
+| **SentinelResourceType** | String | The Microsoft Sentinel resource type being monitored.<br>Possible values: `Data connector`, `Automation rule`, `Playbook`, `Analytics rule` |
+| **SentinelResourceKind** | String | A resource classification within the resource type.<br>- For data connectors, this is the type of connected data source.<br>- For analytics rules, this is the type of rule. |
| **RecordId** | String | A unique identifier for the record that can be shared with the support team for better correlation as needed. |
| **ExtendedProperties** | Dynamic (json) | A JSON bag that varies by the [OperationName](#operationname_health) value and the [Status](#status_health) of the event.<br>See [Extended properties](#extended-properties) for details. |
| **Type** | String | `SentinelHealth` |
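To make the schema concrete, here's a small hypothetical Python sketch that treats *SentinelHealth* rows as dictionaries keyed by the column names above and tallies failures per resource type. The sample records are invented for illustration:

```python
# Sketch: summarize SentinelHealth-style records by resource type.
# Field names follow the schema table above; the sample data is made up.
from collections import Counter

def failure_counts(records):
    """Count Failure events per SentinelResourceType."""
    return Counter(
        r["SentinelResourceType"]
        for r in records
        if r.get("Status") == "Failure"
    )

sample = [
    {"SentinelResourceType": "Data connector", "Status": "Failure"},
    {"SentinelResourceType": "Analytics rule", "Status": "Success"},
    {"SentinelResourceType": "Data connector", "Status": "Failure"},
]
counts = failure_counts(sample)
```

In practice you would let KQL do this aggregation server-side; the sketch just shows how the **Status** and **SentinelResourceType** columns relate.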
The following table describes the columns and data generated in the SentinelHeal
| **[Data connectors](monitor-data-connector-health.md)** | Data fetch status change<br><br>__________________<br>Data fetch failure summary | Success<br>Failure<br>_____________<br>Informational |
| **[Automation rules](monitor-automation-health.md)** | Automation rule run | Success<br>Partial success<br>Failure |
| **[Playbooks](monitor-automation-health.md)** | Playbook was triggered | Success<br>Failure |
+| **[Analytics rules](monitor-analytics-rule-integrity.md)** | Scheduled analytics rule run<br>NRT analytics rule run | Success<br>Failure |
## Extended properties
For `Data fetch status change` events with a success indicator, the bag contains
| **TriggeredByName** | Dynamic (json) | Information on the identity (user or application) that triggered the playbook. |
| **TriggeredOn** | String | `Incident`. The object on which the playbook was triggered.<br>(Playbooks using the alert trigger are logged only if they're called by automation rules, so those playbook runs will appear in the **TriggeredPlaybooks** extended property under automation rule events.) |
+### Analytics rules
+
+Extended properties for analytics rules reflect certain [rule settings](detect-threats-custom.md).
+
+| ColumnName | ColumnType | Description |
+| - | -- | - |
+| **AggregationKind** | String | The event grouping setting. `AlertPerResult` or `SingleAlert`. |
+| **AlertsGeneratedAmount** | Integer | The number of alerts generated by this running of the rule. |
+| **CorrelationId** | String | The event correlation ID in GUID format. |
+| **EntitiesDroppedDueToMappingIssuesAmount** | Integer | The number of entities dropped due to mapping issues. |
+| **EntitiesGeneratedAmount** | Integer | The number of entities generated by this running of the rule. |
+| **Issues** | String | |
+| **QueryEndTimeUTC** | Datetime | The UTC time the query completed its run. |
+| **QueryFrequency** | Datetime | Value of the "Run query every" setting (HH:MM:SS). |
+| **QueryPerformanceIndicators** | String | |
+| **QueryPeriod** | Datetime | Value of the "Lookup data from the last" setting (HH:MM:SS). |
+| **QueryResultAmount** | Integer | The number of results captured by the query.<br>The rule will generate an alert if this number exceeds the threshold as defined below. |
+| **QueryStartTimeUTC** | Datetime | The UTC time the query began to run. |
+| **RuleId** | String | The rule ID for this analytics rule. |
+| **SuppressionDuration** | Time | The rule suppression duration (HH:MM:SS). |
+| **SuppressionEnabled** | String | Whether rule suppression is enabled. `True`/`False`. |
+| **TriggerOperator** | String | The operator portion of the threshold of results required to generate an alert. |
+| **TriggerThreshold** | Integer | The number portion of the threshold of results required to generate an alert. |
+| **TriggerType** | String | The type of rule being triggered. `Scheduled` or `NrtRun`. |
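As the **TriggerOperator** and **TriggerThreshold** rows suggest, the two properties together encode the alerting condition. The sketch below models that check in Python; the operator names (`GreaterThan`, `LessThan`, `Equal`, `NotEqual`) are an assumption modeled on the analytics rule API, not taken from this article:

```python
# Sketch of how TriggerOperator/TriggerThreshold decide whether a rule fires.
# Operator names are assumptions (GreaterThan, LessThan, Equal, NotEqual),
# not taken from this article.

def rule_fires(result_count: int, operator: str, threshold: int) -> bool:
    """Apply the rule's trigger condition to the query's result count."""
    ops = {
        "GreaterThan": result_count > threshold,
        "LessThan": result_count < threshold,
        "Equal": result_count == threshold,
        "NotEqual": result_count != threshold,
    }
    return ops[operator]
```

For example, with the common default of `GreaterThan` 0, any non-empty query result fires the rule.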
## Next steps

-- Learn about [health monitoring in Microsoft Sentinel](health-audit.md).
-- [Turn on health monitoring](enable-monitoring.md) in Microsoft Sentinel.
-- Monitor the [health of your data connectors](monitor-data-connector-health.md).
-- Monitor the [health of your automation rules and playbooks](monitor-automation-health.md).
+- Learn about [auditing and health monitoring in Microsoft Sentinel](health-audit.md).
+- [Turn on auditing and health monitoring](enable-monitoring.md) in Microsoft Sentinel.
+- [Monitor the health of your automation rules and playbooks](monitor-automation-health.md).
+- [Monitor the health of your data connectors](monitor-data-connector-health.md).
+- [Monitor the health and integrity of your analytics rules](monitor-analytics-rule-integrity.md).
+- [SentinelAudit tables reference](audit-table-reference.md)
sentinel Monitor Analytics Rule Integrity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/monitor-analytics-rule-integrity.md
+
+ Title: Monitor the health and audit the integrity of your Microsoft Sentinel analytics rules
+description: Use the SentinelHealth data table to keep track of your analytics rules' execution and performance.
+++ Last updated : 01/17/2023++
+# Monitor the health and audit the integrity of your analytics rules
+
+To ensure uninterrupted and tampering-free threat detection in your Microsoft Sentinel service, keep track of your analytics rules' health and integrity by monitoring their execution and audit logs.
+
+Set up notifications of health and audit events for relevant stakeholders, who can then take action. For example, define and send email or Microsoft Teams messages, create new tickets in your ticketing system, and so on.
+
+This article describes how to use Microsoft Sentinel's [auditing and health monitoring features](health-audit.md) to keep track of your analytics rules' health and integrity from within Microsoft Sentinel.
+
+> [!IMPORTANT]
+>
+> The *SentinelHealth* and *SentinelAudit* data tables are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Summary
+
+- **Microsoft Sentinel analytics rule health logs:**
+
+  - This log captures events that record the running of analytics rules and their end results: whether the runs succeeded or failed, and if they failed, why.
+  - The log also records how many events were captured by the query, and whether that number passed the threshold and caused an alert to be fired.
+ - These logs are collected in the *SentinelHealth* table in Log Analytics.
+
+- **Microsoft Sentinel analytics rule audit logs:**
+
+ - This log captures events that record changes made to any analytics rule, including which rule was changed, what the change was, the state of the rule settings before and after the change, the user or identity that made the change, the source IP and date/time of the change, and more.
+ - These logs are collected in the *SentinelAudit* table in Log Analytics.
+
+## Use the SentinelHealth and SentinelAudit data tables (Preview)
+
+To get audit and health data from the tables described above, you must first turn on the Microsoft Sentinel health feature for your workspace. For more information, see [Turn on auditing and health monitoring for Microsoft Sentinel](enable-monitoring.md).
+
+Once the health feature is turned on, the *SentinelHealth* data table is created at the first success or failure event generated for your analytics rules.
+
+### Understanding SentinelHealth and SentinelAudit table events
+
+The following types of analytics rule health events are logged in the *SentinelHealth* table:
+
+- **Scheduled analytics rule run**.
+
+- **NRT analytics rule run**.
+
+ For more information, see [SentinelHealth table columns schema](health-table-reference.md#sentinelhealth-table-columns-schema).
+
+The following types of analytics rule audit events are logged in the *SentinelAudit* table:
+
+- **Create or update analytics rule**.
+
+- **Analytics rule deleted**.
+
+ For more information, see [SentinelAudit table columns schema](audit-table-reference.md#sentinelaudit-table-columns-schema).
+
+### Statuses, errors and suggested steps
+
+For either **Scheduled analytics rule run** or **NRT analytics rule run**, you may see any of the following statuses and descriptions:
+- **Success**: Rule executed successfully, generating `<n>` alert(s).
+
+- **Success**: Rule executed successfully, but did not reach the threshold (`<n>`) required to generate an alert.
+
+- **Failure**: These are the possible descriptions for rule failure, and what you can do about them.
+
+ | Description | Remediation |
+ | | |
+ | An internal server error occurred while running the query. | |
+ | The query execution timed out. | |
+ | A table referenced in the query was not found. | Verify that the relevant data source is connected. |
+ | A semantic error occurred while running the query. | Try resetting the alert rule by editing and saving it (without changing any settings). |
+ | A function called by the query is named with a reserved word. | Remove or rename the function. |
+ | A syntax error occurred while running the query. | Try resetting the alert rule by editing and saving it (without changing any settings). |
+ | The workspace does not exist. | |
+ | This query was found to use too many system resources and was prevented from running. | |
+ | A function called by the query was not found. | Verify the existence in your workspace of all functions called by the query. |
+ | The workspace used in the query was not found. | Verify that all workspaces in the query exist. |
+ | You don't have permissions to run this query. | Try resetting the alert rule by editing and saving it (without changing any settings). |
+ | You don't have access permissions to one or more of the resources in the query. | |
+ | The query referred to a storage path that was not found. | |
+ | The query was denied access to a storage path. | |
+ | Multiple functions with the same name are defined in this workspace. | Remove or rename the redundant function and reset the rule by editing and saving it. |
+ | This query did not return any result. | |
+ | Multiple result sets in this query are not allowed. | |
+ | Query results contain inconsistent number of fields per row. | |
+ | The rule's running was delayed due to long data ingestion times. | |
+ | The rule's running was delayed due to temporary issues. | |
+ | The alert was not enriched due to temporary issues. | |
+ | The alert was not enriched due to entity mapping issues. | |
+ | \<*number*> entities were dropped in alert \<*name*> due to the 32 KB alert size limit. | |
+ | \<*number*> entities were dropped in alert \<*name*> due to entity mapping issues. | |
+ | The query resulted in \<*number*> events, which exceeds the maximum of \<*limit*> results allowed for \<*rule type*> rules with alert-per-row event-grouping configuration. Alert-per-row was generated for first \<*limit*-1> events and an additional aggregated alert was generated to account for all events.<br>- \<*number*> = number of events returned by the query<br>- \<*limit*> = currently 150 alerts for scheduled rules, 30 for NRT rules<br>- \<*rule type*> = Scheduled or NRT
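The last description above implies a simple cap calculation. Here's a hypothetical Python sketch of it, using the limits quoted in the message text (150 alerts for scheduled rules, 30 for NRT):

```python
# Sketch of the alert-per-row cap described above: if a query returns more
# events than the per-rule limit (150 for scheduled rules, 30 for NRT, per
# the description text), an alert is generated for the first limit-1 events
# plus one aggregated alert covering all events.

ALERT_LIMITS = {"Scheduled": 150, "NRT": 30}

def alerts_generated(event_count, rule_type):
    """Return (per_row_alerts, aggregated_alerts) under alert-per-row grouping."""
    limit = ALERT_LIMITS[rule_type]
    if event_count > limit:
        return limit - 1, 1
    return event_count, 0
```

So a scheduled rule whose query returns 200 events produces 149 per-row alerts plus one aggregated alert, matching the message above.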
++
+## Next steps
+
+- Learn about [auditing and health monitoring in Microsoft Sentinel](health-audit.md).
+- [Turn on auditing and health monitoring](enable-monitoring.md) in Microsoft Sentinel.
+- [Monitor the health of your automation rules and playbooks](monitor-automation-health.md).
+- [Monitor the health of your data connectors](monitor-data-connector-health.md).
+- See more information about the [*SentinelHealth*](health-table-reference.md) and [*SentinelAudit*](audit-table-reference.md) table schemas.
sentinel Monitor Automation Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/monitor-automation-health.md
To ensure proper functioning and performance of your security orchestration, aut
Set up notifications of health events for relevant stakeholders, who can then take action. For example, define and send email or Microsoft Teams messages, create new tickets in your ticketing system, and so on.
-This article describes how to use Microsoft Sentinel's health monitoring features to keep track of your automation rules and playbooks' health from within Microsoft Sentinel.
+This article describes how to use Microsoft Sentinel's [health monitoring features](health-audit.md) to keep track of your automation rules and playbooks' health from within Microsoft Sentinel.
## Summary
-Automation health monitoring in Microsoft Sentinel has two parts:
-| Feature | Table | Coverage | Enable from |
-| - | - | - | - |
-| **Microsoft Sentinel automation health logs** | *SentinelHealth* | - Automation rules run<br>- Playbooks triggered | Microsoft Sentinel settings > Health monitoring |
-| **Azure Logic Apps diagnostics logs** | *AzureDiagnostics* | - Playbook run started/ended<br>- Playbook actions/triggers started/ended | Logic Apps resource > [Diagnostics settings](../azure-monitor/essentials/diagnostic-settings.md?tabs=portal#create-diagnostic-settings) |
- **Microsoft Sentinel automation health logs:**
Select a particular run to see the results of the actions in the playbook.
## Next steps

-- Learn what [health monitoring in Microsoft Sentinel](health-audit.md) can do for you.
-- [Turn on health monitoring](enable-monitoring.md) in Microsoft Sentinel.
-- Monitor the health of your [data connectors](monitor-data-connector-health.md).
-- See more information about the [*SentinelHealth* table schema](health-table-reference.md).
+- Learn about [auditing and health monitoring in Microsoft Sentinel](health-audit.md).
+- [Turn on auditing and health monitoring](enable-monitoring.md) in Microsoft Sentinel.
+- [Monitor the health of your data connectors](monitor-data-connector-health.md).
+- [Monitor the health and integrity of your analytics rules](monitor-analytics-rule-integrity.md).
+- See more information about the [*SentinelHealth*](health-table-reference.md) and [*SentinelAudit*](audit-table-reference.md) table schemas.
sentinel Monitor Data Connector Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/monitor-data-connector-health.md
For more information, see [Azure Monitor alerts overview](../azure-monitor/alert
## Next steps

-- Learn what [health monitoring in Microsoft Sentinel](health-audit.md) can do for you.
-- [Turn on health monitoring](enable-monitoring.md) in Microsoft Sentinel.
-- Monitor the health of your [automation rules and playbooks](monitor-automation-health.md).
-- See more information about the [*SentinelHealth* table schema](health-table-reference.md).
+- Learn about [auditing and health monitoring in Microsoft Sentinel](health-audit.md).
+- [Turn on auditing and health monitoring](enable-monitoring.md) in Microsoft Sentinel.
+- [Monitor the health of your automation rules and playbooks](monitor-automation-health.md).
+- [Monitor the health and integrity of your analytics rules](monitor-analytics-rule-integrity.md).
+- See more information about the [*SentinelHealth*](health-table-reference.md) and [*SentinelAudit*](audit-table-reference.md) table schemas.
sentinel Monitor Sap System Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/monitor-sap-system-health.md
This screenshot shows an example of an alert generated by the *SAP - Data collec
## Next steps

-- Learn what [health monitoring in Microsoft Sentinel](health-audit.md) can do for you.
-- [Turn on health monitoring](enable-monitoring.md) in Microsoft Sentinel.
-- Monitor the health of your [automation rules and playbooks](monitor-automation-health.md).
-- See more information about the [*SentinelHealth* table schema](health-table-reference.md).
-
-
-
+- Learn about the [Microsoft Sentinel Solution for SAP](sap/solution-overview.md).
+- Learn how to [deploy the Microsoft Sentinel Solution for SAP](sap/deployment-overview.md).
+- Learn about [auditing and health monitoring](health-audit.md) in other areas of Microsoft Sentinel.
sentinel Solution Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/solution-overview.md
Troubleshooting:
- [Troubleshoot your Microsoft Sentinel Solution for SAP deployment](sap-deploy-troubleshoot.md) - [Configure SAP Transport Management System](configure-transport.md)
+- [Monitor the health and role of your SAP systems](../monitor-sap-system-health.md)
Reference files:
site-recovery Vmware Azure Manage Configuration Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-manage-configuration-server.md
You can optionally delete the configuration server by using PowerShell.
```
$vault = Get-AzRecoveryServicesVault -Name <name of your vault>
- Set-AzRecoveryServicesVaultContext -ARSVault $vault
+ Set-AzRecoveryServicesAsrVaultContext -Vault $vault
```

4. Retrieve the configuration server.
site-recovery Vmware Physical Mobility Service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-mobility-service-overview.md
Locate the installer files for the server's operating system using the followi
4. After successfully installing, register the source machine with the above appliance using the following command:

```cmd
- "C:\Program Files (x86)\Microsoft Azure Site Recovery\agent\UnifiedAgentConfigurator.exe" /SourceConfigFilePath "config.json" /CSType CSPrime
+ "C:\Program Files (x86)\Microsoft Azure Site Recovery\agent\UnifiedAgentConfigurator.exe" /SourceConfigFilePath "config.json" /CSType CSPrime /CredentialLessDiscovery true
```

#### Installation settings
Syntax | `.\UnifiedAgentInstaller.exe /Platform vmware /Role MS /CSType CSPrime
Setting | Details |
-Syntax | `"<InstallLocation>\UnifiedAgentConfigurator.exe" /SourceConfigFilePath "config.json" /CSType CSPrime >`
+Syntax | `"<InstallLocation>\UnifiedAgentConfigurator.exe" /SourceConfigFilePath "config.json" /CSType CSPrime /CredentialLessDiscovery true`
`/SourceConfigFilePath` | Mandatory. Full file path of the Mobility Service configuration file. Use any valid folder. `/CSType` | Mandatory. Used to define modernized or legacy architecture. (CSPrime or CSLegacy).
+`/CredentialLessDiscovery` | Optional. Specifies whether credential-less discovery is performed.
### Linux machine
storage Archive Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-blob.md
N/A
When moving a large number of blobs to the Archive tier, use a batch operation for optimal performance. A batch operation sends multiple API calls to the service with a single request. The suboperations supported by the [Blob Batch](/rest/api/storageservices/blob-batch) operation include [Delete Blob](/rest/api/storageservices/delete-blob) and [Set Blob Tier](/rest/api/storageservices/set-blob-tier).
+> [!NOTE]
+> The [Set Blob Tier](/rest/api/storageservices/set-blob-tier) suboperation of the [Blob Batch](/rest/api/storageservices/blob-batch) operation is not yet supported in accounts that have a hierarchical namespace.
+ To archive blobs with a batch operation, use one of the Azure Storage client libraries. The following code example shows how to perform a basic batch operation with the .NET client library: :::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/AccessTiers.cs" id="Snippet_BulkArchiveContainerContents":::
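Because a single Blob Batch request is limited to 256 suboperations (per the Blob Batch REST reference), archiving a very large set of blobs means splitting the names into batch-sized groups first. A minimal Python sketch of that chunking (the blob names are illustrative):

```python
# A Blob Batch request accepts at most 256 suboperations (per the Blob Batch
# REST reference), so large archive jobs must be split into chunks, with one
# batch request issued per chunk. Blob names below are illustrative.

BATCH_LIMIT = 256

def chunk(blob_names, size=BATCH_LIMIT):
    """Yield lists of at most `size` blob names, one list per batch request."""
    for i in range(0, len(blob_names), size):
        yield blob_names[i:i + size]

batches = list(chunk([f"logs/blob-{i}.json" for i in range(600)]))
```

Each resulting group would then be passed to the client library's batch set-tier call (for example, the .NET snippet referenced above).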
storage Archive Rehydrate To Online Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-rehydrate-to-online-tier.md
N/A
-To rehydrate a large number of blobs at one time, call the [Blob Batch](/rest/api/storageservices/blob-batch) operation to call [Set Blob Tier](/rest/api/storageservices/set-blob-tier) as a bulk operation. For a code example that shows how to perform the batch operation, see [AzBulkSetBlobTier](/samples/azure/azbulksetblobtier/azbulksetblobtier/).
+To rehydrate a large number of blobs at one time, call the [Blob Batch](/rest/api/storageservices/blob-batch) operation to call [Set Blob Tier](/rest/api/storageservices/set-blob-tier) as a bulk operation.
+
+> [!NOTE]
+> Rehydrating blobs by calling the [Blob Batch](/rest/api/storageservices/blob-batch) operation is not yet supported in accounts that have a hierarchical namespace.
+
+For a code example that shows how to perform the batch operation, see [AzBulkSetBlobTier](/samples/azure/azbulksetblobtier/azbulksetblobtier/).
+ ## Check the status of a rehydration operation
storage Lifecycle Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/lifecycle-management-overview.md
Some data stays idle in the cloud and is rarely, if ever, accessed. The followin
``` > [!NOTE]
-> Microsoft recommends that you upload your blobs directly the archive tier for greater efficiency. You can specify the archive tier in the *x-ms-access-tier* header on the [Put Blob](/rest/api/storageservices/put-blob) or [Put Block List](/rest/api/storageservices/put-block-list) operation. The *x-ms-access-tier* header is supported with REST version 2018-11-09 and newer or the latest blob storage client libraries.
+> Microsoft recommends that you upload your blobs directly to the archive tier for greater efficiency. You can specify the archive tier in the *x-ms-access-tier* header on the [Put Blob](/rest/api/storageservices/put-blob) or [Put Block List](/rest/api/storageservices/put-block-list) operation. The *x-ms-access-tier* header is supported with REST version 2018-11-09 and newer or the latest blob storage client libraries.
### Expire data based on age
storage Soft Delete Blob Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/soft-delete-blob-overview.md
Previously updated : 06/22/2022 Last updated : 02/06/2023
Soft-deleted objects are invisible unless they're explicitly displayed or listed
> [!IMPORTANT]
> This section doesn't apply to accounts that have a hierarchical namespace.
-Calling an operation such as [Put Blob](/rest/api/storageservices/put-blob), [Put Block List](/rest/api/storageservices/put-block-list), or [Copy Blob](/rest/api/storageservices/copy-blob) overwrites the data in a blob. When blob soft delete is enabled, overwriting a blob automatically creates a soft-deleted snapshot of the blob's state prior to the write operation. When the retention period expires, the soft-deleted snapshot is permanently deleted.
+Calling an operation such as [Put Blob](/rest/api/storageservices/put-blob), [Put Block List](/rest/api/storageservices/put-block-list), or [Copy Blob](/rest/api/storageservices/copy-blob) overwrites the data in a blob. When blob soft delete is enabled, overwriting a blob automatically creates a soft-deleted snapshot of the blob's state prior to the write operation. When the retention period expires, the soft-deleted snapshot is permanently deleted. The operation performed by the system to create the snapshot doesn't appear in Azure Monitor resource logs or Storage Analytics logs.
Soft-deleted snapshots are invisible unless soft-deleted objects are explicitly displayed or listed. For more information about how to list soft-deleted objects, see [Manage and restore soft-deleted blobs](soft-delete-blob-manage.md).
storage Storage Feature Support In Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-feature-support-in-storage-accounts.md
Previously updated : 12/08/2022 Last updated : 02/06/2023
The following table describes whether a feature is supported in a standard gener
| Storage feature | Default | HNS | NFS | SFTP |
|--|--|--|--|--|
-| [Access tier - archive](access-tiers-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Access tier - cool](access-tiers-overview.md) | &#x2705; | &#x2705; | &#x2705;| &#x2705; |
-| [Access tier - hot](access-tiers-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Access tier - archive](access-tiers-overview.md) | &#x2705; | &#x2705;<sup>3</sup> | &#x2705;<sup>3</sup> | &#x2705;<sup>3</sup> |
+| [Access tier - cool](access-tiers-overview.md) | &#x2705; | &#x2705;<sup>3</sup> | &#x2705;<sup>3</sup>| &#x2705;<sup>3</sup> |
+| [Access tier - hot](access-tiers-overview.md) | &#x2705; | &#x2705;<sup>3</sup> | &#x2705;<sup>3</sup> | &#x2705;<sup>3</sup> |
| [Azure Active Directory security](authorize-access-azure-active-directory.md) | &#x2705; | &#x2705; | &#x2705;<sup>1</sup> | &#x2705;<sup>1</sup> |
| [Azure DNS Zone endpoints (preview)](../common/storage-account-overview.md?toc=/azure/storage/blobs/toc.json#storage-account-endpoints) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Blob inventory](blob-inventory.md) | &#x2705; | &#x1F7E6; | &#x1F7E6; | &#x1F7E6; |
The following table describes whether a feature is supported in a standard gener
<sup>2</sup> Only locally redundant storage (LRS) and zone-redundant storage (ZRS) are supported.
+<sup>3</sup> Setting the tier of a blob by using the [Blob Batch](/rest/api/storageservices/blob-batch) operation is not yet supported in accounts that have a hierarchical namespace.
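Where Blob Batch is supported, large tier-change jobs must be split, because each batch call carries a limited number of subrequests (256 per the Blob Batch documentation). A minimal sketch under that assumption; `batch_set_tier_requests` is a hypothetical helper, not part of any SDK:

```python
def batch_set_tier_requests(blob_names, tier="Archive", max_per_batch=256):
    """Group blob names into Blob Batch payloads. Each batch may carry at
    most 256 Set Blob Tier subrequests, so larger sets are split across
    multiple batch calls."""
    for i in range(0, len(blob_names), max_per_batch):
        yield {"tier": tier, "blobs": blob_names[i:i + max_per_batch]}
```

Each yielded group would then be submitted as one batch request against the container endpoint.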
+ ## Premium block blob accounts The following table describes whether a feature is supported in a premium block blob account when you enable a hierarchical namespace (HNS), NFS 3.0 protocol, or SFTP.
storage Storage Files Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-faq.md
If the subscription under which the file share is deployed is associated with the same Azure AD tenant as the Azure AD DS deployment to which the VM is domain-joined, you can then access Azure file shares using the same Azure AD credentials. The limitation is imposed not on the subscription but on the associated Azure AD tenant. * <a id="ad-support-subscription"></a>
-**Can I enable either Azure AD DS or on-premises AD DS authentication for Azure file shares using an Azure AD tenant that is different from the Azure file share's primary tenant?**
+**Can I enable either Azure AD DS or on-premises AD DS authentication for Azure file shares using an Azure AD tenant that's different from the Azure file share's primary tenant?**
- No, Azure Files only supports Azure AD DS or on-premises AD DS integration with an Azure AD tenant that resides in the same subscription as the file share. Only one subscription can be associated with an Azure AD tenant. This limitation applies to both Azure AD DS and on-premises AD DS authentication methods. When using on-premises AD DS for authentication, [the AD DS credential must be synced to the Azure AD](../../active-directory/hybrid/how-to-connect-install-roadmap.md) that the storage account is associated with.
+ No. Azure Files only supports Azure AD DS or on-premises AD DS integration with an Azure AD tenant that resides in the same subscription as the file share. A subscription can only be associated with one Azure AD tenant. When using on-premises AD DS for authentication, [the AD DS credential must be synced to the Azure AD](../../active-directory/hybrid/how-to-connect-install-roadmap.md) that the storage account is associated with.
* <a id="ad-multiple-forest"></a> **Does on-premises AD DS authentication for Azure file shares support integration with an AD DS environment using multiple forests?**
synapse-analytics Apache Spark 24 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-24-runtime.md
Title: Azure Synapse Runtime for Apache Spark 2.4
+ Title: Azure Synapse Runtime for Apache Spark 2.4 (EOLA)
description: Supported versions of Spark, Scala, Python, and .NET for Apache Spark 2.4.-+ Last updated 04/18/2022 -+
-# Azure Synapse Runtime for Apache Spark 2.4
+# Azure Synapse Runtime for Apache Spark 2.4 (EOLA)
Azure Synapse Analytics supports multiple runtimes for Apache Spark. This document will cover the runtime components and versions for the Azure Synapse Runtime for Apache Spark 2.4.
+> [!IMPORTANT]
+> * End of life announced (EOLA) for Azure Synapse Runtime for Apache Spark 2.4 was announced on July 29, 2022.
+> * In accordance with the Synapse runtime for Apache Spark lifecycle policy, Azure Synapse runtime for Apache Spark 2.4 will be retired as of September 29, 2023. A runtime at the end of life announced (EOLA) stage will not receive bug or feature fixes, but security fixes will be backported based on risk assessment.
+> * We recommend that you upgrade your Apache Spark 2.4 workloads to version 3.2 or 3.3 at your earliest convenience.
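As an illustration only, the retirement dates quoted in these announcements can be checked programmatically; `RETIREMENT` and `is_retired` below are hypothetical names, not part of any Synapse API, and the dates are the ones stated in these notices:

```python
from datetime import date

# Retirement dates as announced for the Synapse Spark runtimes in this doc.
RETIREMENT = {
    "2.4": date(2023, 9, 29),
    "3.1": date(2024, 1, 26),
}

def is_retired(runtime: str, today: date) -> bool:
    """Return True if the given Synapse Spark runtime version is at or
    past its announced retirement date; unknown versions return False."""
    end = RETIREMENT.get(runtime)
    return end is not None and today >= end
```

A workload pinned to a retired runtime should be upgraded to 3.2 or 3.3 as the notices recommend.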
+ ## Component versions

| Component | Version |
| -- | -- |
synapse-analytics Apache Spark 3 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-3-runtime.md
Title: Azure Synapse Runtime for Apache Spark 3.1
+ Title: Azure Synapse Runtime for Apache Spark 3.1 (EOLA)
description: Supported versions of Spark, Scala, Python, and .NET for Apache Spark 3.1.
-# Azure Synapse Runtime for Apache Spark 3.1
+# Azure Synapse Runtime for Apache Spark 3.1 (EOLA)
Azure Synapse Analytics supports multiple runtimes for Apache Spark. This document will cover the runtime components and versions for the Azure Synapse Runtime for Apache Spark 3.1.
+> [!IMPORTANT]
+> * End of life announced (EOLA) for Azure Synapse Runtime for Apache Spark 3.1 was announced on January 26, 2023.
+> * In accordance with the Synapse runtime for Apache Spark lifecycle policy, Azure Synapse runtime for Apache Spark 3.1 will be retired as of January 26, 2024. A runtime at the end of life announced (EOLA) stage will not receive bug or feature fixes, but security fixes will be backported based on risk assessment.
+> * We recommend that you upgrade your Apache Spark 3.1 workloads to version 3.2 or 3.3 at your earliest convenience.
+ ## Component versions

| Component | Version |
synapse-analytics Apache Spark 32 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-32-runtime.md
widgetsnbextension==3.5.2
## R libraries (Preview)
-abind 1.4-5
-
-anomalize 0.2.2
-
-anytime 0.3.9
-
-arrow 7.0.0
-
-askpass 1.1
-
-assertthat 0.2.1
-
-backports 1.4.1
-
-base64enc 0.1-3
-
-BH 1.78.0-0
-
-bit 4.0.4
-
-bit64 4.0.5
-
-blob 1.2.3
-
-brew 1.0-7
-
-brio 1.1.3
-
-broom 0.8.0
-
-bslib 0.3.1
-
-cachem 1.0.6
-
-callr 3.7.0
-
-car 3.0-13
-
-carData 3.0-5
-
-caret 6.0-86
-
-cellranger 1.1.0
-
-checkmate 2.1.0
-
-chron 2.3-56
-
-cli 3.3.0
-
-clipr 0.8.0
-
-colorspace 2.0-3
-
-commonmark 1.8.0
-
-config 0.3.1
-
-corrplot 0.92
-
-covr 3.5.1
-
-cpp11 0.4.2
-
-crayon 1.5.1
-
-credentials 1.3.2
-
-crosstalk 1.2.0
-
-curl 4.3.2
-
-data.table 1.14.2
-
-DBI 1.1.2
-
-dbplyr 2.1.1
-
-desc 1.4.1
-
-devtools 2.3.2
-
-diffobj 0.3.5
-
-digest 0.6.29
-
-dplyr 1.0.9
-
-DT 0.22
-
-dtplyr 1.2.1
-
-dygraphs 1.1.1.6
-
-ellipsis 0.3.2
-
-evaluate 0.15
-
-extraDistr 1.9.1
-
-fansi 1.0.3
-
-farver 2.1.0
-
-fastmap 1.1.0
-
-forcats 0.5.1
-
-foreach 1.5.2
-
-forecast 8.13
-
-forge 0.2.0
-
-formatR 1.12
-
-fracdiff 1.5-1
-
-fs 1.5.2
-
-furrr 0.3.0
-
-futile.logger 1.4.3
-
-futile.options 1.0.1
-
-future 1.25.0
-
-future.apply 1.9.0
-
-gargle 1.2.0
-
-generics 0.1.2
-
-gert 1.6.0
-
-ggplot2 3.3.6
-
-gh 1.3.0
-
-gitcreds 0.1.1
-
-glmnet 4.1-4
-
-globals 0.14.0
-
-glue 1.6.2
-
-gower 1.0.0
-
-gridExtra 2.3
-
-gsubfn 0.7
-
-gtable 0.3.0
-
-gtools 3.8.2
-
-hardhat 0.2.0
-
-haven 2.5.0
-
-highr 0.9
-
-hms 1.1.1
-
-htmltools 0.5.2
-
-htmlwidgets 1.5.4
-
-httr 1.4.3
-
-hwriter 1.3.2.1
-
-ids 1.0.1
-
-ini 0.3.1
-
-inline 0.3.19
-
-ipred 0.9-12
-
-isoband 0.2.5
-
-iterators 1.0.14
-
-jquerylib 0.1.4
-
-jsonlite 1.7.2
-
-knitr 1.39
-
-labeling 0.4.2
-
-lambda.r 1.2.4
-
-later 1.3.0
-
-lava 1.6.10
-
-lazyeval 0.2.2
-
-lifecycle 1.0.1
-
-listenv 0.8.0
-
-lme4 1.1-29
-
-lmtest 0.9-40
-
-loo 2.5.1
-
-lubridate 1.8.0
-
-magrittr 2.0.3
-
-maptools 1.1-4
-
-markdown 1.1
-
-MatrixModels 0.5-0
-
-matrixStats 0.62.0
-
-memoise 2.0.1
-
-mime 0.12
-
-minqa 1.2.4
-
-ModelMetrics 1.2.2.2
-
-modelr 0.1.8
-
-munsell 0.5.0
-
-nloptr 2.0.1
-
-notebookutils 3.1.2-20220721.3
-
-numDeriv 2016.8-1.1
-
-openssl 2.0.0
-
-padr 0.6.0
-
-parallelly 1.31.1
-
-pbkrtest 0.5.1
-
-pillar 1.7.0
-
-pkgbuild 1.3.1
-
-pkgconfig 2.0.3
-
-pkgload 1.2.4
-
-plogr 0.2.0
-
-plotly 4.10.0
-
-plotrix 3.8-1
-
-plyr 1.8.7
-
-praise 1.0.0
-
-prettyunits 1.1.1
-
-pROC 1.18.0
-
-processx 3.5.3
-
-prodlim 2019.11.13
-
-progress 1.2.2
-
-progressr 0.10.0
-
-promises 1.2.0.1
-
-prophet 0.6.1
-
-proto 1.0.0
-
-ps 1.7.0
-
-purrr 0.3.4
-
-quadprog 1.5-8
-
-quantmod 0.4.20
-
-quantreg 5.93
-
-R.methodsS3 1.8.1
-
-R.oo 1.24.0
-
-R.utils 2.12.0
-
-r2d3 0.2.6
-
-R6 2.5.1
-
-randomForest 4.7-1
-
-rappdirs 0.3.3
-
-rcmdcheck 1.4.0
-
-RColorBrewer 1.1-3
-
-Rcpp 1.0.8.3
-
-RcppArmadillo 0.11.0.0.0
-
-RcppEigen 0.3.3.9.2
-
-RcppParallel 5.1.5
-
-RcppRoll 0.3.0
-
-readr 2.1.2
-
-readxl 1.4.0
-
-recipes 0.2.0
-
-rematch 1.0.1
-
-rematch2 2.1.2
-
-remotes 2.4.2
-
-reprex 2.0.1
-
-reshape2 1.4.3
-
-reticulate 1.18
-
-rex 1.2.1
-
-rlang 1.0.2
-
-rmarkdown 2.14
-
-RODBC 1.3-19
-
-roxygen2 7.1.2
-
-rprojroot 2.0.3
-
-rsample 0.1.1
-
-RSQLite 2.2.13
-
-rstan 2.21.5
-
-rstantools 2.2.0
-
-rstatix 0.7.0
-
-rstudioapi 0.13
-
-rversions 2.1.1
-
-rvest 1.0.2
-
-sass 0.4.1
-
-scales 1.2.0
-
-selectr 0.4-2
-
-sessioninfo 1.2.2
-
-shape 1.4.6
-
-slider 0.2.2
-
-sourcetools 0.1.7
-
-sp 1.4-7
-
-sparklyr 1.5.2
-
-SparseM 1.81
-
-sqldf 0.4-11
-
-SQUAREM 2021.1
-
-StanHeaders 2.21.0-7
-
-stringi 1.7.6
-
-stringr 1.4.0
-
-sweep 0.2.3
-
-sys 3.4
-
-testthat 3.1.4
-
-tibble 3.1.7
-
-tibbletime 0.1.6
-
-tidyr 1.2.0
-
-tidyselect 1.1.2
-
-tidyverse 1.3.1
-
-timeDate 3043.102
-
-timetk 2.8.0
-
-tinytex 0.38
-
-tseries 0.10-51
-
-tsfeatures 1.0.2
-
-TTR 0.24.3
-
-tzdb 0.3.0
-
-urca 1.3-0
-
-usethis 2.1.5
-
-utf8 1.2.2
-
-uuid 1.1-0
-
-vctrs 0.4.1
-
-viridisLite 0.4.0
-
-vroom 1.5.7
-
-waldo 0.4.0
-
-warp 0.2.0
-
-whisker 0.4
-
-withr 2.5.0
-
-xfun 0.30
-
-xml2 1.3.3
-
-xopen 1.0.0
-
-xtable 1.8-4
-
-xts 0.12.1
-
-yaml 2.3.5
-
-zip 2.2.0
-
-zoo 1.8-10
+| **Library** | **Version** | **Library** | **Version** | **Library** | **Version** |
+|:-:|:--:|:-:|:--:|:-:|:--:|
+| askpass | 1.1 | highcharter | 0.9.4 | readr | 2.1.3 |
+| assertthat | 0.2.1 | highr | 0.9 | readxl | 1.4.1 |
+| backports | 1.4.1 | hms | 1.1.2 | recipes | 1.0.3 |
+| base64enc | 0.1-3 | htmltools | 0.5.3 | rematch | 1.0.1 |
+| bit | 4.0.5 | htmlwidgets | 1.5.4 | rematch2 | 2.1.2 |
+| bit64 | 4.0.5 | httpcode | 0.3.0 | remotes | 2.4.2 |
+| blob | 1.2.3 | httpuv | 1.6.6 | reprex | 2.0.2 |
+| brew | 1.0-8 | httr | 1.4.4 | reshape2 | 1.4.4 |
+| brio | 1.1.3 | ids | 1.0.1 | rjson | 0.2.21 |
+| broom | 1.0.1 | igraph | 1.3.5 | rlang | 1.0.6 |
+| bslib | 0.4.1 | infer | 1.0.3 | rlist | 0.4.6.2 |
+| cachem | 1.0.6 | ini | 0.3.1 | rmarkdown | 2.18 |
+| callr | 3.7.3 | ipred | 0.9-13 | RODBC | 1.3-19 |
+| caret | 6.0-93 | isoband | 0.2.6 | roxygen2 | 7.2.2 |
+| cellranger | 1.1.0 | iterators | 1.0.14 | rprojroot | 2.0.3 |
+| cli | 3.4.1 | jquerylib | 0.1.4 | rsample | 1.1.0 |
+| clipr | 0.8.0 | jsonlite | 1.8.3 | rstudioapi | 0.14 |
+| clock | 0.6.1 | knitr | 1.41 | rversions | 2.1.2 |
+| colorspace | 2.0-3 | labeling | 0.4.2 | rvest | 1.0.3 |
+| commonmark | 1.8.1 | later | 1.3.0 | sass | 0.4.4 |
+| config | 0.3.1 | lava | 1.7.0 | scales | 1.2.1 |
+| conflicted | 1.1.0 | lazyeval | 0.2.2 | selectr | 0.4-2 |
+| coro | 1.0.3 | lhs | 1.1.5 | sessioninfo | 1.2.2 |
+| cpp11 | 0.4.3 | lifecycle | 1.0.3 | shiny | 1.7.3 |
+| crayon | 1.5.2 | lightgbm | 3.3.3 | slider | 0.3.0 |
+| credentials | 1.3.2 | listenv | 0.8.0 | sourcetools | 0.1.7 |
+| crosstalk | 1.2.0 | lobstr | 1.1.2 | sparklyr | 1.7.8 |
+| crul | 1.3 | lubridate | 1.9.0 | SQUAREM | 2021.1 |
+| curl | 4.3.3 | magrittr | 2.0.3 | stringi | 1.7.8 |
+| data.table | 1.14.6 | maps | 3.4.1 | stringr | 1.4.1 |
+| DBI | 1.1.3 | memoise | 2.0.1 | sys | 3.4.1 |
+| dbplyr | 2.2.1 | mime | 0.12 | systemfonts | 1.0.4 |
+| desc | 1.4.2 | miniUI | 0.1.1.1 | testthat | 3.1.5 |
+| devtools | 2.4.5 | modeldata | 1.0.1 | textshaping | 0.3.6 |
+| dials | 1.1.0 | modelenv | 0.1.0 | tibble | 3.1.8 |
+| DiceDesign | 1.9 | ModelMetrics | 1.2.2.2 | tidymodels | 1.0.0 |
+| diffobj | 0.3.5 | modelr | 0.1.10 | tidyr | 1.2.1 |
+| digest | 0.6.30 | munsell | 0.5.0 | tidyselect | 1.2.0 |
+| downlit | 0.4.2 | numDeriv | 2016.8-1.1 | tidyverse | 1.3.2 |
+| dplyr | 1.0.10 | openssl | 2.0.4 | timechange | 0.1.1 |
+| dtplyr | 1.2.2 | parallelly | 1.32.1 | timeDate | 4021.106 |
+| e1071 | 1.7-12 | parsnip | 1.0.3 | tinytex | 0.42 |
+| ellipsis | 0.3.2 | patchwork | 1.1.2 | torch | 0.9.0 |
+| evaluate | 0.18 | pillar | 1.8.1 | triebeard | 0.3.0 |
+| fansi | 1.0.3 | pkgbuild | 1.4.0 | TTR | 0.24.3 |
+| farver | 2.1.1 | pkgconfig | 2.0.3 | tune | 1.0.1 |
+| fastmap | 1.1.0 | pkgdown | 2.0.6 | tzdb | 0.3.0 |
+| fontawesome | 0.4.0 | pkgload | 1.3.2 | urlchecker | 1.0.1 |
+| forcats | 0.5.2 | plotly | 4.10.1 | urltools | 1.7.3 |
+| foreach | 1.5.2 | plyr | 1.8.8 | usethis | 2.1.6 |
+| forge | 0.2.0 | praise | 1.0.0 | utf8 | 1.2.2 |
+| fs | 1.5.2 | prettyunits | 1.1.1 | uuid | 1.1-0 |
+| furrr | 0.3.1 | pROC | 1.18.0 | vctrs | 0.5.1 |
+| future | 1.29.0 | processx | 3.8.0 | viridisLite | 0.4.1 |
+| future.apply | 1.10.0 | prodlim | 2019.11.13 | vroom | 1.6.0 |
+| gargle | 1.2.1 | profvis | 0.3.7 | waldo | 0.4.0 |
+| generics | 0.1.3 | progress | 1.2.2 | warp | 0.2.0 |
+| gert | 1.9.1 | progressr | 0.11.0 | whisker | 0.4 |
+| ggplot2 | 3.4.0 | promises | 1.2.0.1 | withr | 2.5.0 |
+| gh | 1.3.1 | proxy | 0.4-27 | workflows | 1.1.2 |
+| gistr | 0.9.0 | pryr | 0.1.5 | workflowsets | 1.0.0 |
+| gitcreds | 0.1.2 | ps | 1.7.2 | xfun | 0.35 |
+| globals | 0.16.2 | purrr | 0.3.5 | xgboost | 1.6.0.1 |
+| glue | 1.6.2 | quantmod | 0.4.20 | XML | 3.99-0.12 |
+| googledrive | 2.0.0 | r2d3 | 0.2.6 | xml2 | 1.3.3 |
+| googlesheets4 | 1.0.1 | R6 | 2.5.1 | xopen | 1.0.0 |
+| gower | 1.0.0 | ragg | 1.2.4 | xtable | 1.8-4 |
+| GPfit | 1.0-8 | rappdirs | 0.3.3 | xts | 0.12.2 |
+| gtable | 0.3.1 | rbokeh | 0.5.2 | yaml | 2.3.6 |
+| hardhat | 1.2.0 | rcmdcheck | 1.4.0 | yardstick | 1.1.0 |
+| haven | 2.5.1 | RColorBrewer | 1.1-3 | zip | 2.2.2 |
+| hexbin | 1.28.2 | Rcpp | 1.0.9 | zoo | 1.8-11 |
## Next steps
synapse-analytics Apache Spark 33 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-33-runtime.md
# Azure Synapse Runtime for Apache Spark 3.3 (Preview)
-Azure Synapse Analytics supports multiple runtimes for Apache Spark. This document will cover the runtime components and versions for the Azure Synapse Runtime for Apache Spark 3.3.
-
-> [!NOTE]
-> Azure Synapse Runtime for Apache Spark 3.3 is currently in Public Preview. Please expect some major components version changes as well.
+Azure Synapse Analytics supports multiple runtimes for Apache Spark. This document will cover the runtime components and versions for the Azure Synapse Runtime for Apache Spark 3.3.
+> [!IMPORTANT]
+> * Azure Synapse Runtime for Apache Spark 3.3 is currently in Public Preview.
+> * We are actively rolling out the final changes to all production regions with the goal of ensuring a seamless implementation. As we monitor the stability of these updates, we tentatively anticipate a general availability date of February 23rd. Please note that this is subject to change and we will provide updates as they become available.
+> * The .NET Core 3.1 library has reached end of support and has therefore been removed from the Azure Synapse Runtime for Apache Spark 3.3. Users will no longer be able to access Spark APIs through C# and F# or execute C# code in notebooks and jobs. For more information about Azure Synapse Runtime for Apache Spark 3.3 and its components, please refer to the Release Notes. For the guidelines on the availability of support throughout the life of a product as they apply to Azure Synapse Analytics, please refer to the Lifecycle Policy.
+
## Component versions | Component | Version |
Azure Synapse Analytics supports multiple runtimes for Apache Spark. This docume
| Java | 1.8.0_282 | | Scala | 2.12.15 | | Hadoop | 3.3.3 |
-| .NET Core | 3.1 |
-| .NET | 2.0.0 |
-| Delta Lake | 2.1.0 |
-| Python | 3.8 |
-| R (Preview) | 4.2.2 |
+| Delta Lake | 2.2.0 |
+| Python | 3.10 |
+| R (Preview) | 4.2.2 |
## Libraries

The following sections present the libraries included in Azure Synapse Runtime for Apache Spark 3.3.

### Scala and Java libraries
-| **Library** | **Version** | **Library** | **Version** | **Library** | **Version** |
-|:-:|:--:|:--:|:-:|:-:|:-:|
-| activation | 1.1.1 | httpclient5 | 5.1.3 | opentracing-api | 0.33.0 |
-| adal4j | 1.6.3 | httpcore | 4.4.14 | opentracing-noop | 0.33.0 |
-| aircompressor | 0.21 | httpmime | 4.5.13 | opentracing-util | 0.33.0 |
-| algebra | 2.12-2.0.1 | hyperspace-core-spark3.2_2.12 | 0.5.1-synapse | orc-core | 1.7.6 |
-| aliyun-java-sdk-core | 4.5.10 | impulse-core_spark3.2_2.12 | 1.0.4 | orc-mapreduce | 1.7.6 |
-| aliyun-java-sdk-kms | 2.11.0 | impulse-telemetry-mds_spark3.2_2.12 | 1.0.4 | orc-shims | 1.7.6 |
-| aliyun-java-sdk-ram | 3.1.0 | ini4j | 0.5.4 | oro | 2.0.8 |
-| aliyun-sdk-oss | 3.13.0 | isolation-forest_3.2.0_2.12 | 2.0.8 | osgi-resource-locator | 1.0.3 |
-| annotations | 17.0.0 | istack-commons-runtime | 3.0.8 | paranamer | 2.8 |
-| antlr-runtime | 3.5.2 | ivy | 2.5.1 | parquet-column | 1.12.3 |
-| antlr4-runtime | 4.8 | jackson-annotations | 2.13.4 | parquet-common | 1.12.3 |
-| aopalliance-repackaged | 2.6.1 | jackson-core | 2.13.4 | parquet-encoding | 1.12.3 |
-| arpack_combined_all | 0.1 | jackson-core-asl | 1.9.13 | parquet-format-structures | 1.12.3 |
-| arpack | 2.2.1 | jackson-databind | 2.13.4.1 | parquet-hadoop | 1.12.3 |
-| arrow-format | 7.0.0 | jackson-dataformat-cbor | 2.13.4 | parquet-jackson | 1.12.3 |
-| arrow-memory-core | 7.0.0 | jackson-mapper-asl | 1.9.13 | peregrine-spark | 0.10.2 |
-| arrow-memory-netty | 7.0.0 | jackson-module-scala_2.12 | 2.13.4 | pickle | 1.2 |
-| arrow-vector | 7.0.0 | jakarta.annotation-api | 1.3.5 | postgresql | 42.2.9 |
-| audience-annotations | 0.5.0 | jakarta.inject | 2.6.1 | protobuf-java | 2.5.0 |
-| avro | 1.11.0 | jakarta.servlet-api | 4.0.3 | proton-j | 0.33.8 |
-| avro-ipc | 1.11.0 | jakarta.validation-api | 2.0.2 | py4j | 0.10.9.5 |
-| avro-mapred | 1.11.0 | jakarta.ws.rs-api | 2.1.6 | qpid-proton-j-extensions | 1.2.4 |
-| aws-java-sdk-bundle | 1.11.1026 | jakarta.xml.bind-api | 2.3.2 | RoaringBitmap | 0.9.25 |
-| azure-data-lake-store-sdk | 2.3.9 | janino | 3.0.16 | rocksdbjni | 6.20.3 |
-| azure-eventhubs | 3.3.0 | javassist | 3.25.0-GA | scala-collection-compat_2.12 | 2.1.1 |
-| azure-eventhubs-spark_2.12 | 2.3.22 | javatuples | 1.2 | scala-compiler | 2.12.15 |
-| azure-keyvault-core | 1.0.0 | javax.jdo | 3.2.0-m3 | scala-java8 | compat_2.12-0.9.0 |
-| azure-storage | 7.0.1 | javolution | 5.5.1 | scala-library | 2.12.15 |
-| azure-synapse-ml-pandas | 2.12-0.1.1 | jaxb-api | 2.2.11 | scala-parser-combinators_2.12 | 1.1.2 |
-| azure-synapse-ml-predict | 2.12-1.0 | jaxb-runtime | 2.3.2 | scala-reflect | 2.12.15 |
-| blas | 2.2.1 | jcl-over-slf4j | 1.7.32 | scala-xml_2.12 | 1.2.0 |
-| bonecp | 0.8.0.RELEASE | jdo-api | 3.0.1 | scalactic_2.12 | 3.0.5 |
-| breeze | 2.12-1.2 | jdom2 | 2.0.6 | shapeless_2.12 | 2.3.7 |
-| breeze-macros | 2.12-1.2 | jersey-client | 2.36 | shims | 0.9.25 |
-| cats-kernel | 2.12-2.1.1 | jersey-common | 2.36 | slf4j-api | 1.7.32 |
-| chill | 2.12-0.10.0 | jersey-container-servlet | 2.36 | snappy-java | 1.1.8.4 |
-| chill-java | 0.10.0 | jersey-container-servlet-core | 2.36 | spark_diagnostic_cli | 2.0.0_spark-3.3.0 |
-| client-jar-sdk | 1.14.0 | jersey-hk2 | 2.36 | spark-3.3-advisor-core_2.12 | 1.0.14 |
-| cntk | 2.4 | jersey-server | 2.36 | spark-3.3-rpc-history-server-app-listener_2.12 | 1.0.0 |
-| commons-cli | 1.5.0 | jettison | 1.1 | spark-3.3-rpc-history-server-core_2.12 | 1.0.0 |
-| commons-codec | 1.15 | jetty-util | 9.4.48.v20220622 | spark-avro_2.12 | 3.3.1.5.2-76471634 |
-| commons-collections | 3.2.2 | jetty-util-ajax | 9.4.48.v20220622 | spark-catalyst_2.12 | 3.3.1.5.2-76471634 |
-| commons-collections4 | 4.4 | JLargeArrays | 1.5 | spark-cdm-connector-assembly | 1.19.2 |
-| commons-compiler | 3.0.16 | jline | 2.14.6 | spark-core_2.12 | 3.3.1.5.2-76471634 |
-| commons-compress | 1.21 | joda-time | 2.10.13 | spark-enhancement_2.12 | 3.3.1.5.2-76471634 |
-| commons-crypto | 1.1.0 | jodd-core | 3.5.2 | spark-enhancementui_2.12 | 3.0.0 |
-| commons-dbcp | 1.4 | jpam | 1.1 | spark-graphx_2.12 | 3.3.1.5.2-76471634 |
-| commons-io | 2.11.0 | jsch | 0.1.54 | spark-hadoop-cloud_2.12 | 3.3.1.5.2-76471634 |
-| commons-lang | 2.6 | json | 1.8 | spark-hive_2.12 | 3.3.1.5.2-76471634 |
-| commons-lang3 | 3.12.0 | json | 20090211 | spark-hive-thriftserver_2.12 | 3.3.1.5.2-76471634 |
-| commons-logging | 1.1.3 | json-simple | 1.1 | spark-kusto-synapse-connector_3.1_2.12 | 1.1.1 |
-| commons-math3 | 3.6.1 | json4s-ast_2.12 | 3.7.0-M11 | spark-kvstore_2.12 | 3.3.1.5.2-76471634 |
-| commons-pool | 1.5.4 | json4s-core_2.12 | 3.7.0-M11 | spark-launcher_2.12 | 3.3.1.5.2-76471634 |
-| commons-pool2 | 2.11.1 | json4s-jackson_2.12 | 3.7.0-M11 | spark-lighter-contract_2.12 | 2.0.0_spark-3.3.0 |
-| commons-text | 1.10.0 | json4s-scalap_2.12 | 3.7.0-M11 | spark-lighter-core_2.12 | 2.0.0_spark-3.3.0 |
-| compress-lzf | 1.1 | jsr305 | 3.0.0 | spark-microsoft-tools_2.12 | 3.3.1.5.2-76471634 |
-| config | 1.3.4 | jta | 1.1 | spark-mllib_2.12 | 3.3.1.5.2-76471634 |
-| core | 1.1.2 | JTransforms | 3.1 | spark-mllib-local_2.12 | 3.3.1.5.2-76471634 |
-| cos_api-bundle | 5.6.19 | jul-to-slf4j | 1.7.32 | spark-mssql-connector | 1.2.0 |
-| cosmos-analytics-spark-3.2.1-connector | 1.6.3 | kafka-clients | 2.8.1 | spark-network-common_2.12 | 3.3.1.5.2-76471634 |
-| curator-client | 2.13.0 | kryo-shaded | 4.0.2 | spark-network-shuffle_2.12 | 3.3.1.5.2-76471634 |
-| curator-framework | 2.13.0 | kusto-data | 2.8.2 | spark-repl_2.12 | 3.3.1.5.2-76471634 |
-| curator-recipes | 2.13.0 | kusto-ingest | 2.8.2 | spark-sketch_2.12 | 3.3.1.5.2-76471634 |
-| datanucleus-api-jdo | 4.2.4 | kusto-spark_synapse_3.0_2.12 | 2.9.3 | spark-sql_2.12 | 3.3.1.5.2-76471634 |
-| datanucleus-core | 4.1.17 | lapack | 2.2.1 | spark-sql-kafka-0-10_2.12 | 3.3.1.5.2-76471634 |
-| datanucleus-rdbms | 4.1.19 | leveldbjni-all | 1.8 | spark-streaming_2.12 | 3.3.1.5.2-76471634 |
-| delta-core_2.12 | 2.1.0.2 | libfb303 | 0.9.3 | spark-streaming-kafka-0-10_2.12 | 3.3.1.5.2-76471634 |
-| delta-storage | 2.1.0.2 | libshufflejni.so | | spark-streaming-kafka-0-10-assembly_2.12 | 3.3.1.5.2-76471634 |
-| derby | 10.14.2.0 | libthrift | 0.12.0 | spark-tags_2.12 | 3.3.1.5.2-76471634 |
-| dropwizard-metrics-hadoop-metrics2-reporter | 0.1.2 | libvegasjni.so | | spark-token-provider-kafka-0-10_2.12 | 3.3.1.5.2-76471634 |
-| flatbuffers-java | 1.12.0 | lightgbmlib | 3.2.110 | spark-unsafe_2.12 | 3.3.1.5.2-76471634 |
-| fluent-logger-jar-with-dependencies | jdk8 | log4j-1.2-api | 2.17.2 | spark-yarn_2.12 | 3.3.1.5.2-76471634 |
-| genesis-client_2.12 | 0.19.0-jar-with-dependencies | log4j-api | 2.17.2 | SparkCustomEvents | 3.2.0-1.0.5 |
-| gson | 2.8.6 | log4j-core | 2.17.2 | sparknativeparquetwriter_2.12 | 0.6.0-spark-3.3 |
-| guava | 14.0.1 | log4j-slf4j-impl | 2.17.2 | spire_2.12 | 0.17.0 |
-| hadoop-aliyun | 3.3.3.5.2-76471634 | lz4-java | 1.8.0 | spire-macros_2.12 | 0.17.0 |
-| hadoop-annotations | 3.3.3.5.2-76471634 | mdsdclientdynamic | 2.0 | spire-platform_2.12 | 0.17.0 |
-| hadoop-aws | 3.3.3.5.2-76471634 | metrics-core | 4.2.7 | spire-util_2.12 | 0.17.0 |
-| hadoop-azure | 3.3.3.5.2-76471634 | metrics-graphite | 4.2.7 | spray-json_2.12 | 1.3.5 |
-| hadoop-azure-datalake | 3.3.3.5.2-76471634 | metrics-jmx | 4.2.7 | sqlanalyticsconnector | 3.3.0-2.0.8 |
-| hadoop-client-api | 3.3.3.5.2-76471634 | metrics-json | 4.2.7 | ST4 | 4.0.4 |
-| hadoop-client-runtime | 3.3.3.5.2-76471634 | metrics-jvm | 4.2.7 | stax-api | 1.0.1 |
-| hadoop-cloud-storage | 3.3.3.5.2-76471634 | microsoft-catalog-metastore-client | 1.0.83 | stream | 2.9.6 |
-| hadoop-cos | 3.3.3.5.2-76471634 | microsoft-log4j-etwappender | 1.0 | structuredstreamforspark_2.12 | 3.2.0-2.3.0 |
-| hadoop-openstack | 3.3.3.5.2-76471634 | microsoft-spark | | super-csv | 2.2.0 |
-| hadoop-shaded-guava | 1.1.1 | minlog | 1.3.0 | synapseml_2.12 | 0.10.1-22-95f451ab-SNAPSHOT |
-| hadoop-yarn-server-web-proxy | 3.3.3.5.2-76471634 | mssql-jdbc | 8.4.1.jre8 | synapseml-cognitive_2.12 | 0.10.1-22-95f451ab-SNAPSHOT |
-| hdinsight-spark-metrics | 3.2.0-1.0.5 | mysql-connector-java | 8.0.18 | synapseml-core_2.12 | 0.10.1-22-95f451ab-SNAPSHOT |
-| HikariCP | 2.5.1 | netty-all | 4.1.74.Final | synapseml-deep-learning_2.12 | 0.10.1-22-95f451ab-SNAPSHOT |
-| hive-beeline | 2.3.9 | netty-buffer | 4.1.74.Final | synapseml-internal_2.12 | 0.0.0-99-bda3814c-SNAPSHOT |
-| hive-cli | 2.3.9 | netty-codec | 4.1.74.Final | synapseml-lightgbm_2.12 | 0.10.1-22-95f451ab-SNAPSHOT |
-| hive-common | 2.3.9 | netty-common | 4.1.74.Final | synapseml-opencv_2.12 | 0.10.1-22-95f451ab-SNAPSHOT |
-| hive-exec | 2.3.9-core | netty-handler | 4.1.74.Final | synapseml-vw_2.12 | 0.10.1-22-95f451ab-SNAPSHOT |
-| hive-jdbc | 2.3.9 | netty-resolver | 4.1.74.Final | synfs | 3.3.0-20221106.6 |
-| hive-llap-common | 2.3.9 | netty-tcnative-classes | 2.0.48.Final | threeten-extra | 1.5.0 |
-| hive-metastore | 2.3.9 | netty-transport | 4.1.74.Final | tink | 1.6.1 |
-| hive-serde | 2.3.9 | netty-transport-classes-epoll | 4.1.74.Final | TokenLibrary-assembly | 3.3.2 |
-| hive-service-rpc | 3.1.2 | netty-transport-classes-kqueue | 4.1.74.Final | transaction-api | 1.1 |
-| hive-shims-0.23 | 2.3.9 | netty-transport-native-epoll | 4.1.74.Final-linux-aarch_64 | tridenttokenlibrary-assembly | 1.0.8 |
-| hive-shims | 2.3.9 | netty-transport-native-epoll | 4.1.74.Final-linux-x86_64 | univocity-parsers | 2.9.1 |
-| hive-shims-common | 2.3.9 | netty-transport-native-kqueue | 4.1.74.Final-osx-aarch_64 | VegasConnector | 1.2.02_2.12_3.2 |
-| hive-shims-scheduler | 2.3.9 | netty-transport-native-kqueue | 4.1.74.Final-osx-x86_64 | velocity | 1.5 |
-| hive-storage-api | 2.7.2 | netty-transport-native-unix-common | 4.1.74.Final | vw-jni | 8.9.1 |
-| hive-vector-code-gen | 2.3.9 | notebook-utils | 3.3.0-20221106.6 | wildfly-openssl | 1.0.7.Final |
-| hk2-api | 2.6.1 | objenesis | 3.2 | xbean-asm9-shaded | 4.20 |
-| hk2-locator | 2.6.1 | onnxruntime_gpu | 1.8.1 | xz | 1.8 |
-| hk2-utils | 2.6.1 | opencsv | 2.3 | zookeeper | 3.6.2.5.2-76471634 |
-| httpclient | 4.5.13 | opencv | 3.2.0-1 | zookeeper-jute | 3.6.2.5.2-76471634 |
-| | | | | zstd-jni | 1.5.2-1 |
---
+| Library | Version | Library | Version | Library | Version |
+|-|-|-|-|-|-|
+| activation | 1.1.1 | httpclient5 | 5.1.3 | opencsv | 2.3 |
+| adal4j | 1.6.3 | httpcore | 4.4.14 | opencv | 3.2.0-1 |
+| aircompressor | 0.21 | httpmime | 4.5.13 | opentest4j | 1.2.0 |
+| algebra_2.12 | 2.0.1 | impulse-core_spark3.3_2.12 | 1.0.6 | opentracing-api | 0.33.0 |
+| aliyun-java-sdk-core | 4.5.10 | impulse-telemetry-mds_spark3.3_2.12 | 1.0.6 | opentracing-noop | 0.33.0 |
+| aliyun-java-sdk-kms | 2.11.0 | ini4j | 0.5.4 | opentracing-util | 0.33.0 |
+| aliyun-java-sdk-ram | 3.1.0 | isolation-forest_3.2.0_2.12 | 2.0.8 | orc-core | 1.7.6 |
+| aliyun-sdk-oss | 3.13.0 | istack-commons-runtime | 3.0.8 | orc-mapreduce | 1.7.6 |
+| annotations | 17.0.0 | ivy | 2.5.1 | orc-shims | 1.7.6 |
+| antlr-runtime | 3.5.2 | jackson-annotations | 2.13.4 | oro | 2.0.8 |
+| antlr4-runtime | 4.8 | jackson-core | 2.13.4 | osgi-resource-locator | 1.0.3 |
+| aopalliance-repackaged | 2.6.1 | jackson-core-asl | 1.9.13 | paranamer | 2.8 |
+| apiguardian-api | 1.1.0 | jackson-databind | 2.13.4.1 | parquet-column | 1.12.3 |
+| arpack | 2.2.1 | jackson-dataformat-cbor | 2.13.4 | parquet-common | 1.12.3 |
+| arpack_combined_all | 0.1 | jackson-mapper-asl | 1.9.13 | parquet-encoding | 1.12.3 |
+| arrow-format | 7.0.0 | jackson-module-scala_2.12 | 2.13.4 | parquet-format-structures | 1.12.3 |
+| arrow-memory-core | 7.0.0 | jakarta.annotation-api | 1.3.5 | parquet-hadoop | 1.12.3 |
+| arrow-memory-netty | 7.0.0 | jakarta.inject | 2.6.1 | parquet-jackson | 1.12.3 |
+| arrow-vector | 7.0.0 | jakarta.servlet-api | 4.0.3 | peregrine-spark_3.3.0 | 0.10.3 |
+| audience-annotations | 0.5.0 | jakarta.validation-api | 2.0.2 | pickle | 1.2 |
+| autotune-client_2.12 | 1.0.0-3.3 | jakarta.ws.rs-api | 2.1.6 | postgresql | 42.2.9 |
+| autotune-common_2.12 | 1.0.0-3.3 | jakarta.xml.bind-api | 2.3.2 | protobuf-java | 2.5.0 |
+| avro | 1.11.0 | janino | 3.0.16 | proton-j | 0.33.8 |
+| avro-ipc | 1.11.0 | javassist | 3.25.0-GA | py4j | 0.10.9.5 |
+| avro-mapred | 1.11.0 | javatuples | 1.2 | qpid-proton-j-extensions | 1.2.4 |
+| aws-java-sdk-bundle | 1.11.1026 | javax.jdo | 3.2.0-m3 | RoaringBitmap | 0.9.25 |
+| azure-data-lake-store-sdk | 2.3.9 | javolution | 5.5.1 | rocksdbjni | 6.20.3 |
+| azure-eventhubs | 3.3.0 | jaxb-api | 2.2.11 | scala-collection-compat_2.12 | 2.1.1 |
+| azure-eventhubs-spark_2.12 | 2.3.22 | jaxb-runtime | 2.3.2 | scala-compiler | 2.12.15 |
+| azure-keyvault-core | 1.0.0 | jcl-over-slf4j | 1.7.32 | scala-java8-compat_2.12 | 0.9.0 |
+| azure-storage | 7.0.1 | jdo-api | 3.0.1 | scala-library | 2.12.15 |
+| azure-synapse-ml-pandas_2.12 | 0.1.1 | jdom2 | 2.0.6 | scala-parser-combinators_2.12 | 1.1.2 |
+| azure-synapse-ml-predict_2.12 | 1.0 | jersey-client | 2.36 | scala-reflect | 2.12.15 |
+| blas | 2.2.1 | jersey-common | 2.36 | scala-xml_2.12 | 1.2.0 |
+| bonecp | 0.8.0.RELEASE | jersey-container-servlet | 2.36 | scalactic_2.12 | 3.2.14 |
+| breeze_2.12 | 1.2 | jersey-container-servlet-core | 2.36 | shapeless_2.12 | 2.3.7 |
+| breeze-macros_2.12 | 1.2 | jersey-hk2 | 2.36 | shims | 0.9.25 |
+| cats-kernel_2.12 | 2.1.1 | jersey-server | 2.36 | slf4j-api | 1.7.32 |
+| chill_2.12 | 0.10.0 | jettison | 1.1 | snappy-java | 1.1.8.4 |
+| chill-java | 0.10.0 | jetty-util | 9.4.48.v20220622 | spark_diagnostic_cli | 2.0.1_spark-3.3.0 |
+| client-jar-sdk | 1.14.0 | jetty-util-ajax | 9.4.48.v20220622 | spark-3.3-advisor-core_2.12 | 1.0.14 |
+| cntk | 2.4 | JLargeArrays | 1.5 | spark-3.3-rpc-history-server-app-listener_2.12 | 1.0.0 |
+| commons-compiler | 3.0.16 | jline | 2.14.6 | spark-3.3-rpc-history-server-core_2.12 | 1.0.0 |
+| commons-cli | 1.5.0 | joda-time | 2.10.13 | spark-avro_2.12 | 3.3.1.5.2-82353445 |
+| commons-codec | 1.15 | jodd-core | 3.5.2 | spark-catalyst_2.12 | 3.3.1.5.2-82353445 |
+| commons-collections | 3.2.2 | jpam | 1.1 | spark-cdm-connector-assembly-spark3.3 | 1.19.4 |
+| commons-collections4 | 4.4 | jsch | 0.1.54 | spark-core_2.12 | 3.3.1.5.2-82353445 |
+| commons-compress | 1.21 | json | 1.8 | spark-enhancement_2.12 | 3.3.1.5.2-82353445 |
+| commons-crypto | 1.1.0 | json | 20090211 | spark-enhancementui_2.12 | 3.0.0 |
+| commons-dbcp | 1.4 | json | 20210307 | spark-graphx_2.12 | 3.3.1.5.2-82353445 |
+| commons-io | 2.11.0 | json-simple | 1.1.1 | spark-hadoop-cloud_2.12 | 3.3.1.5.2-82353445 |
+| commons-lang | 2.6 | json-simple | 1.1 | spark-hive_2.12 | 3.3.1.5.2-82353445 |
+| commons-lang3 | 3.12.0 | json4s-ast_2.12 | 3.7.0-M11 | spark-hive-thriftserver_2.12 | 3.3.1.5.2-82353445 |
+| commons-logging | 1.1.3 | json4s-core_2.12 | 3.7.0-M11 | spark-kusto-synapse-connector_3.1_2.12 | 1.1.1 |
+| commons-math3 | 3.6.1 | json4s-jackson_2.12 | 3.7.0-M11 | spark-kvstore_2.12 | 3.3.1.5.2-82353445 |
+| commons-pool | 1.5.4 | json4s-scalap_2.12 | 3.7.0-M11 | spark-launcher_2.12 | 3.3.1.5.2-82353445 |
+| commons-pool2 | 2.11.1 | jsr305 | 3.0.0 | spark-lighter-contract_2.12 | 2.0.0_spark-3.3.0 |
+| commons-text | 1.10.0 | jta | 1.1 | spark-lighter-core_2.12 | 2.0.0_spark-3.3.0 |
+| compress-lzf | 1.1 | JTransforms | 3.1 | spark-microsoft-tools_2.12 | 3.3.1.5.2-82353445 |
+| config | 1.3.4 | jul-to-slf4j | 1.7.32 | spark-mllib_2.12 | 3.3.1.5.2-82353445 |
+| core | 1.1.2 | junit-jupiter | 5.5.2 | spark-mllib-local_2.12 | 3.3.1.5.2-82353445 |
+| cos_api-bundle | 5.6.19 | junit-jupiter-api | 5.5.2 | spark-mssql-connector | 1.2.0 |
+| cosmos-analytics-spark-3.3.0-connector | 1.6.4 | junit-jupiter-engine | 5.5.2 | spark-network-common_2.12 | 3.3.1.5.2-82353445 |
+| curator-client | 2.13.0 | junit-jupiter-params | 5.5.2 | spark-network-shuffle_2.12 | 3.3.1.5.2-82353445 |
+| curator-framework | 2.13.0 | junit-platform-commons | 1.5.2 | spark-repl_2.12 | 3.3.1.5.2-82353445 |
+| curator-recipes | 2.13.0 | junit-platform-engine | 1.5.2 | spark-sketch_2.12 | 3.3.1.5.2-82353445 |
+| datanucleus-api-jdo | 4.2.4 | kafka-clients | 2.8.1 | spark-sql_2.12 | 3.3.1.5.2-82353445 |
+| datanucleus-core | 4.1.17 | kryo-shaded | 4.0.2 | spark-sql-kafka-0-10_2.12 | 3.3.1.5.2-82353445 |
+| datanucleus-rdbms | 4.1.19 | kusto-data | 2.8.2 | spark-streaming_2.12 | 3.3.1.5.2-82353445 |
+| delta-core_2.12 | 2.2.0.1 | kusto-data | 2.8.2 | spark-streaming-kafka-0-10-assembly_2.12 | 3.3.1.5.2-82353445 |
+| delta-iceberg_2.12 | 2.2.0.1 | kusto-spark_synapse_3.0_2.12 | 2.9.3 | spark-streaming-kafka-0-10_2.12 | 3.3.1.5.2-82353445 |
+| delta-storage | 2.2.0.1 | lapack | 2.2.1 | spark-tags_2.12 | 3.3.1.5.2-82353445 |
+| derby | 10.14.2.0 | leveldbjni-all | 1.8 | spark-token-provider-kafka-0-10_2.12 | 3.3.1.5.2-82353445 |
+| dropwizard-metrics-hadoop-metrics2-reporter | 0.1.2 | libfb303 | 0.9.3 | spark-unsafe_2.12 | 3.3.1.5.2-82353445 |
+| flatbuffers-java | 1.12.0 | libshufflejni.so | N/A | spark-yarn_2.12 | 3.3.1.5.2-82353445 |
+| fluent-logger-jar-with-dependencies | jdk8 | libthrift | 0.12.0 | SparkCustomEvents | 3.2.0-1.0.5 |
+| genesis-client_2.12 | 0.21.0-jar-with-dependencies | libvegasjni.so | | sparknativeparquetwriter_2.12 | 0.6.0-spark-3.3 |
+| gson | 2.8.6 | lightgbmlib | 3.2.110 | spire_2.12 | 0.17.0 |
+| guava | 14.0.1 | log4j-1.2-api | 2.17.2 | spire-macros_2.12 | 0.17.0 |
+| hadoop-aliyun | 3.3.3.5.2-82353445 | log4j-api | 2.17.2 | spire-platform_2.12 | 0.17.0 |
+| hadoop-annotations | 3.3.3.5.2-82353445 | log4j-core | 2.17.2 | spire-util_2.12 | 0.17.0 |
+| hadoop-aws | 3.3.3.5.2-82353445 | log4j-slf4j-impl | 2.17.2 | spray-json_2.12 | 1.3.5 |
+| hadoop-azure | 3.3.3.5.2-82353445 | lz4-java | 1.8.0 | sqlanalyticsconnector | 3.3.0-2.0.8 |
+| hadoop-azure-datalake | 3.3.3.5.2-82353445 | mdsdclientdynamic | 2.0 | ST4 | 4.0.4 |
+| hadoop-azure-trident | 1.0.8 | metrics-core | 4.2.7 | stax-api | 1.0.1 |
+| hadoop-client-api | 3.3.3.5.2-82353445 | metrics-graphite | 4.2.7 | stream | 2.9.6 |
+| hadoop-client-runtime | 3.3.3.5.2-82353445 | metrics-jmx | 4.2.7 | structuredstreamforspark_2.12 | 3.2.0-2.3.0 |
+| hadoop-cloud-storage | 3.3.3.5.2-82353445 | metrics-json | 4.2.7 | super-csv | 2.2.0 |
+| hadoop-cos | 3.3.3.5.2-82353445 | metrics-jvm | 4.2.7 | synapseml_2.12 | 0.10.1-69-84f5b579-SNAPSHOT |
+| hadoop-openstack | 3.3.3.5.2-82353445 | microsoft-catalog-metastore-client | 1.1.2 | synapseml-cognitive_2.12 | 0.10.1-69-84f5b579-SNAPSHOT |
+| hadoop-shaded-guava | 1.1.1 | microsoft-log4j-etwappender | 1.0 | synapseml-core_2.12 | 0.10.1-69-84f5b579-SNAPSHOT |
+| hadoop-yarn-server-web-proxy | 3.3.3.5.2-82353445 | minlog | 1.3.0 | synapseml-deep-learning_2.12 | 0.10.1-69-84f5b579-SNAPSHOT |
+| hdinsight-spark-metrics | 3.2.0-1.0.5 | mssql-jdbc | 8.4.1.jre8 | synapseml-internal_2.12 | 0.0.0-105-28644623-SNAPSHOT |
+| HikariCP | 2.5.1 | mysql-connector-java | 8.0.18 | synapseml-lightgbm_2.12 | 0.10.1-69-84f5b579-SNAPSHOT |
+| hive-beeline | 2.3.9 | netty-all | 4.1.74.Final | synapseml-opencv_2.12 | 0.10.1-69-84f5b579-SNAPSHOT |
+| hive-cli | 2.3.9 | netty-buffer | 4.1.74.Final | synapseml-vw_2.12 | 0.10.1-69-84f5b579-SNAPSHOT |
+| hive-common | 2.3.9 | netty-codec | 4.1.74.Final | synfs | 3.3.0-20230110.6 |
+| hive-exec | 2.3.9-core | netty-common | 4.1.74.Final | threeten-extra | 1.5.0 |
+| hive-jdbc | 2.3.9 | netty-handler | 4.1.74.Final | tink | 1.6.1 |
+| hive-llap-common | 2.3.9 | netty-resolver | 4.1.74.Final | TokenLibrary | assembly-3.4.1 |
+| hive-metastore | 2.3.9 | netty-tcnative-classes | 2.0.48.Final | transaction-api | 1.1 |
+| hive-serde | 2.3.9 | netty-transport | 4.1.74.Final | trident-core | 1.1.16 |
+| hive-service-rpc | 3.1.2 | netty-transport-classes-epoll | 4.1.74.Final | tridenttokenlibrary-assembly | 1.2.3 |
+| hive-shims | 0.23-2.3.9 | netty-transport-classes-kqueue | 4.1.74.Final | univocity-parsers | 2.9.1 |
+| hive-shims | 2.3.9 | netty-transport-native-epoll | 4.1.74.Final-linux-aarch_64 | VegasConnector | 1.2.02_2.12_3.2 |
+| hive-shims-common | 2.3.9 | netty-transport-native-epoll | 4.1.74.Final-linux-x86_64 | velocity | 1.5 |
+| hive-shims-scheduler | 2.3.9 | netty-transport-native-kqueue | 4.1.74.Final-osx-aarch_64 | vw-jni | 8.9.1 |
+| hive-storage-api | 2.7.2 | netty-transport-native-kqueue | 4.1.74.Final-osx-x86_64 | wildfly-openssl | 1.0.7.Final |
+| hive-vector-code-gen | 2.3.9 | netty-transport-native-unix-common | 4.1.74.Final | xbean-asm9-shaded | 4.20 |
+| hk2-api | 2.6.1 | notebook-utils | 3.3.0-20230110.6 | xz | 1.8 |
+| hk2-locator | 2.6.1 | objenesis | 3.2 | zookeeper | 3.6.2.5.2-82353445 |
+| hk2-utils | 2.6.1 | onnx-protobuf_2.12 | 0.9.1-assembly | zookeeper-jute | 3.6.2.5.2-82353445 |
+| httpclient | 4.5.13 | onnxruntime_gpu | 1.8.1 | zstd-jni | 1.5.2-1 |
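Several rows in the jar table above split an artifact name at a hyphen, so the name and version columns can run together. As a small illustrative sketch (not part of the runtime itself), a jar filename can be split back into artifact and version by taking the version to start at the first hyphen that is followed by a digit:

```python
import re

def parse_jar_name(filename: str):
    """Split a jar filename like 'avro-1.11.0.jar' into (artifact, version).

    The version is assumed to start at the first hyphen followed by a digit;
    jars without a version component return (name, None).
    """
    m = re.match(r"(.+?)-(\d.*)\.jar$", filename)
    if m:
        return m.group(1), m.group(2)
    return filename.removesuffix(".jar"), None

# Example coordinates taken from the table above.
print(parse_jar_name("spark-core_2.12-3.3.1.5.2-82353445.jar"))
print(parse_jar_name("avro-1.11.0.jar"))
```

This convention matches the rows above, where Scala-suffixed artifacts such as `spark-core_2.12` keep their suffix in the name column and the build-qualified version sits in the version column.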
### Python libraries (Normal VMs)
-| **Library** | **Version** | **Library** | **Version** | **Library** | **Version** |
-|:--:|::|:--:|::|:-:|:-:|
-| _libgcc_mutex | 0.1 | keras-applications | 1.0.8 | pyparsing | 2.4.7 |
-| _openmp_mutex | 4.5 | keras-preprocessing | 1.1.2 | pyqt | 5.12.3 |
-| _py-xgboost-mutex | 2.0 | keras2onnx | 1.6.5 | pyqt-impl | 5.12.3 |
-| abseil-cpp | 20210324.0 | kiwisolver | 1.3.1 | pyqt5-sip | 4.19.18 |
-| absl-py | 0.13.0 | koalas | 1.8.0 | pyqtchart | 5.12 |
-| adal | 1.2.7 | krb5 | 1.19.1 | pyqtwebengine | 5.12.1 |
-| adlfs | 0.7.7 | lcms2 | 2.12 | pysocks | 1.7.1 |
-| aiohttp | 3.7.4.post0 | ld_impl_linux-64 | 2.36.1 | python | 3.8.10 |
-| alsa-lib | 1.2.3 | lerc | 2.2.1 | python-dateutil | 2.8.1 |
-| appdirs | 1.4.4 | liac-arff | 2.5.0 | python-flatbuffers | 1.12 |
-| arrow-cpp | 3.0.0 | libaec | 1.0.5 | python_abi | 3.8 |
-| astor | 0.8.1 | libblas | 3.9.0 | pytorch | 1.8.1 |
-| astunparse | 1.6.3 | libbrotlicommon | 1.0.9 | pytz | 2021.1 |
-| async-timeout | 3.0.1 | libbrotlidec | 1.0.9 | pyu2f | 0.1.5 |
-| attrs | 21.2.0 | libbrotlienc | 1.0.9 | pywavelets | 1.1.1 |
-| aws-c-cal | 0.5.11 | libcblas | 3.9.0 | pyyaml | 5.4.1 |
-| aws-c-common | 0.6.2 | libclang | 11.1.0 | pyzmq | 22.1.0 |
-| aws-c-event-stream | 0.2.7 | libcurl | 7.77.0 | qt | 5.12.9 |
-| aws-c-io | 0.10.5 | libdeflate | 1.7 | re2 | 2021.04.01 |
-| aws-checksums | 0.1.11 | libedit | 3.1.20210216 | readline | 8.1 |
-| aws-sdk-cpp | 1.8.186 | libev | 4.33 | regex | 2021.7.6 |
-| azure-datalake-store | 0.0.51 | libevent | 2.1.10 | requests | 2.25.1 |
-| azure-identity | 2021.03.15b1 | libffi | 3.3 | requests-oauthlib | 1.3.0 |
-| azure-storage-blob | 12.8.1 | libgcc-ng | 9.3.0 | retrying | 1.3.3 |
-| backcall | 0.2.0 | libgfortran-ng | 9.3.0 | rsa | 4.7.2 |
-| backports | 1.0 | libgfortran5 | 9.3.0 | ruamel_yaml | 0.15.100 |
-| backports.functools_lru_cache | 1.6.4 | libglib | 2.68.3 | s2n | 1.0.10 |
-| beautifulsoup4 | 4.9.3 | libiconv | 1.16 | salib | 1.3.11 |
-| blas | 2.109 | liblapack | 3.9.0 | scikit-image | 0.18.1 |
-| blas-devel | 3.9.0 | liblapacke | 3.9.0 | scikit-learn | 0.23.2 |
-| blinker | 1.4 | libllvm10 | 10.0.1 | scipy | 1.5.3 |
-| blosc | 1.21.0 | libllvm11 | 11.1.0 | seaborn | 0.11.1 |
-| bokeh | 2.3.2 | libnghttp2 | 1.43.0 | seaborn-base | 0.11.1 |
-| brotli | 1.0.9 | libogg | 1.3.5 | setuptools | 49.6.0 |
-| brotli-bin | 1.0.9 | libopus | 1.3.1 | shap | 0.39.0 |
-| brotli-python | 1.0.9 | libpng | 1.6.37 | six | 1.16.0 |
-| brotlipy | 0.7.0 | libpq | 13.3 | skl2onnx | 1.8.0.1 |
-| brunsli | 0.1 | libprotobuf | 3.15.8 | sklearn-pandas | 2.2.0 |
-| bzip2 | 1.0.8 | libsodium | 1.0.18 | slicer | 0.0.7 |
-| c-ares | 1.17.1 | libssh2 | 1.9.0 | smart_open | 5.1.0 |
-| ca-certificates | 2021.7.5 | libstdcxx-ng | 9.3.0 | smmap | 3.0.5 |
-| cachetools | 4.2.2 | libthrift | 0.14.1 | snappy | 1.1.8 |
-| cairo | 1.16.0 | libtiff | 4.2.0 | soupsieve | 2.2.1 |
-| certifi | 2021.5.30 | libutf8proc | 2.6.1 | sqlite | 3.36.0 |
-| cffi | 1.14.5 | libuuid | 2.32.1000 | statsmodels | 0.12.2 |
-| chardet | 4.0.0 | libuv | 1.41.1 | tabulate | 0.8.9 |
-| charls | 2.2.0 | libvorbis | 1.3.7 | tenacity | 7.0.0 |
-| click | 8.0.1 | libwebp-base | 1.2.0 | tensorboard | 2.4.1 |
-| cloudpickle | 1.6.0 | libxcb | 1.14 | tensorboard-plugin-wit | 1.8.0 |
-| conda | 4.9.2 | libxgboost | 1.4.0 | tensorflow | 2.4.1 |
-| conda-package-handling | 1.7.3 | libxkbcommon | 1.0.3 | tensorflow-base | 2.4.1 |
-| configparser | 5.0.2 | libxml2 | 2.9.12 | tensorflow-estimator | 2.4.0 |
-| cryptography | 3.4.7 | libzopfli | 1.0.3 | termcolor | 1.1.0 |
-| cudatoolkit | 11.1.1 | lightgbm | 3.2.1 | textblob | 0.15.3 |
-| cycler | 0.10.0 | lime | 0.2.0.1 | threadpoolctl | 2.1.0 |
-| cython | 0.29.23 | llvm-openmp | 11.1.0 | tifffile | 2021.4.8 |
-| cytoolz | 0.11.0 | llvmlite | 0.36.0 | tk | 8.6.10 |
-| dash | 1.20.0 | locket | 0.2.1 | toolz | 0.11.1 |
-| dash-core-components | 1.16.0 | lz4-c | 1.9.3 | tornado | 6.1 |
-| dash-html-components | 1.1.3 | markdown | 3.3.4 | tqdm | 4.61.2 |
-| dash-renderer | 1.9.1 | markupsafe | 2.0.1 | traitlets | 5.0.5 |
-| dash-table | 4.11.3 | matplotlib | 3.4.2 | typing-extensions | 3.10.0.0 |
-| dash_cytoscape | 0.2.0 | matplotlib-base | 3.4.2 | typing_extensions | 3.10.0.0 |
-| dask-core | 2021.6.2 | matplotlib-inline | 0.1.2 | unixodbc | 2.3.9 |
-| databricks-cli | 0.12.1 | mkl | 2021.2.0 | urllib3 | 1.26.4 |
-| dataclasses | 0.8 | mkl-devel | 2021.2.0 | wcwidth | 0.2.5 |
-| dbus | 1.13.18 | mkl-include | 2021.2.0 | webencodings | 0.5.1 |
-| debugpy | 1.3.0 | mleap | 0.17.0 | werkzeug | 2.0.1 |
-| decorator | 4.4.2 | mlflow-skinny | 1.18.0 | wheel | 0.36.2 |
-| dill | 0.3.4 | msal | 2021.06.08 | wrapt | 1.12.1 |
-| entrypoints | 0.3 | msal-extensions | 2021.06.08 | xgboost | 1.4.0 |
-| et_xmlfile | 1.1.0 | msrest | 2021.06.01 | xorg-kbproto | 1.0.7002 |
-| expat | 2.4.1 | multidict | 5.1.0 | xorg-libice | 1.0.10 |
-| fire | 0.4.0 | mysql-common | 8.0.25 | xorg-libsm | 1.2.3 |
-| flask | 2.0.1 | mysql-libs | 8.0.25 | xorg-libx11 | 1.7.2 |
-| flask-compress | 1.10.1 | ncurses | 6.2 | xorg-libxext | 1.3.4 |
-| fontconfig | 2.13.1 | networkx | 2.5.1 | xorg-libxrender | 0.9.10003 |
-| freetype | 2.10.4 | ninja | 1.10.2 | xorg-renderproto | 0.11.1002 |
-| fsspec | 2021.6.1 | nltk | 3.6.2 | xorg-xextproto | 7.3.0002 |
-| future | 0.18.2 | nspr | 4.30 | xorg-xproto | 7.0.31007 |
-| gast | 0.3.3 | nss | 3.67 | xz | 5.2.5 |
-| gensim | 3.8.3 | numba | 0.53.1 | yaml | 0.2.5 |
-| geographiclib | 1.52 | numpy | 1.19.4 | yarl | 1.6.3 |
-| geopy | 2.1.0 | oauthlib | 3.1.1 | zeromq | 4.3.4 |
-| gettext | 0.21.0 | olefile | 0.46 | zfp | 0.5.5 |
-| gevent | 21.1.2 | onnx | 1.9.0 | zipp | 3.5.0 |
-| gflags | 2.2.2 | onnxconverter-common | 1.7.0 | zlib | 1.2.11010 |
-| giflib | 5.2.1 | onnxmltools | 1.7.0 | zope.event | 4.5.0 |
-| gitdb | 4.0.7 | onnxruntime | 1.7.2 | zope.interface | 5.4.0 |
-| gitpython | 3.1.18 | openjpeg | 2.4.0 | zstd | 1.4.9 |
-| glib | 2.68.3 | openpyxl | 3.0.7 | azure-common | 1.1.27 |
-| glib-tools | 2.68.3 | openssl | 1.1.1k | azure-core | 1.16.0 |
-| glog | 0.5.0 | opt_einsum | 3.3.0 | azure-graphrbac | 0.61.1 |
-| gobject-introspection | 1.68.0 | orc | 1.6.7 | azure-mgmt-authorization | 0.61.0 |
-| google-auth | 1.32.1 | packaging | 21.0 | azure-mgmt-containerregistry | 8.0.0 |
-| google-auth-oauthlib | 0.4.1 | pandas | 1.2.3 | azure-mgmt-core | 1.3.0 |
-| google-pasta | 0.2.0 | parquet-cpp | 1.5.1 | azure-mgmt-keyvault | 2.2.0 |
-| greenlet | 1.1.0 | parso | 0.8.2 | azure-mgmt-resource | 13.0.0 |
-| grpc-cpp | 1.37.1 | partd | 1.2.0 | azure-mgmt-storage | 11.2.0 |
-| grpcio | 1.37.1 | patsy | 0.5.1 | azureml-core | 1.34.0 |
-| gst-plugins-base | 1.18.4 | pcre | 8.45 | azureml-mlflow | 1.34.0 |
-| gstreamer | 1.18.4 | pexpect | 4.8.0 | azureml-opendatasets | 1.34.0 |
-| h5py | 2.10.0 | pickleshare | 0.7.5 | backports-tempfile | 1.0 |
-| hdf5 | 1.10.6 | pillow | 8.2.0 | backports-weakref | 1.0.post1 |
-| html5lib | 1.1 | pip | 21.1.1 | contextlib2 | 0.6.0.post1 |
-| hummingbird-ml | 0.4.0 | pixman | 0.40.0 | docker | 4.4.4 |
-| icu | 68.1 | plotly | 4.14.3 | ipywidgets | 7.6.3 |
-| idna | 2.10 | pmdarima | 1.8.2 | jeepney | 0.6.0 |
-| imagecodecs | 2021.3.31 | pooch | 1.4.0 | jmespath | 0.10.0 |
-| imageio | 2.9.0 | portalocker | 1.7.1 | jsonpickle | 2.0.0 |
-| importlib-metadata | 4.6.1 | prompt-toolkit | 3.0.19 | kqlmagiccustom | 0.1.114.post8 |
-| intel-openmp | 2021.2.0 | protobuf | 3.15.8 | lxml | 4.6.5 |
-| interpret | 0.2.4 | psutil | 5.8.0 | msrestazure | 0.6.4 |
-| interpret-core | 0.2.4 | ptyprocess | 0.7.0 | mypy | 0.780 |
-| ipykernel | 6.0.1 | py-xgboost | 1.4.0 | mypy-extensions | 0.4.3 |
-| ipython | 7.23.1 | py4j | 0.10.9 | ndg-httpsclient | 0.5.1 |
-| ipython_genutils | 0.2.0 | pyarrow | 3.0.0 | pandasql | 0.7.3 |
-| isodate | 0.6.0 | pyasn1 | 0.4.8 | pathspec | 0.8.1 |
-| itsdangerous | 2.0.1 | pyasn1-modules | 0.2.8 | prettytable | 2.4.0 |
-| jdcal | 1.4.1 | pycairo | 1.20.1 | pyperclip | 1.8.2 |
-| jedi | 0.18.0 | pycosat | 0.6.3 | ruamel-yaml | 0.17.4 |
-| jinja2 | 3.0.1 | pycparser | 2.20 | ruamel-yaml-clib | 0.2.6 |
-| joblib | 1.0.1 | pygments | 2.9.0 | secretstorage | 3.3.1 |
-| jpeg | 9d | pygobject | 3.40.1 | sqlalchemy | 1.4.20 |
-| jupyter_client | 6.1.12 | pyjwt | 2.1.0 | typed-ast | 1.4.3 |
-| jupyter_core | 4.7.1 | pyodbc | 4.0.30 | torchvision | 0.9.1 |
-| jxrlib | 1.1 | pyopenssl | 20.0.1 | websocket-client | 1.1.0 |
-
+| Library | Version | Library | Version | Library | Version |
+|--|--|--|--|--|--|
+| _libgcc_mutex | 0.1=conda_forge | hdf5 | 1.12.2=nompi_h2386368_100 | parquet-cpp | g1.5.1=2 |
+| _openmp_mutex | 4.5=2_kmp_llvm | html5lib | 1.1=pyh9f0ad1d_0 | parso | g0.8.3=pyhd8ed1ab_0 |
+| _py-xgboost-mutex | 2.0=cpu_0 | humanfriendly | 10.0=py310hff52083_4 | partd | g1.3.0=pyhd8ed1ab_0 |
+| _tflow_select | 2.3.0=mkl | hummingbird-ml | 0.4.0=pyhd8ed1ab_0 | pathos | g0.3.0=pyhd8ed1ab_0 |
+| absl-py | 1.3.0=pyhd8ed1ab_0 | icu | 58.2=hf484d3e_1000 | pathspec | 0.10.1 |
+| adal | 1.2.7=pyhd8ed1ab_0 | idna | 3.4=pyhd8ed1ab_0 | patsy | g0.5.3=pyhd8ed1ab_0 |
+| adlfs | 0.7.7=pyhd8ed1ab_0 | imagecodecs | 2022.9.26=py310h90cd304_3 | pcre2 | g10.40=hc3806b6_0 |
+| aiohttp | 3.8.3=py310h5764c6d_1 | imageio | 2.9.0=py_0 | pexpect | g4.8.0=pyh1a96a4e_2 |
+| aiosignal | 1.3.1=pyhd8ed1ab_0 | importlib-metadata | 5.0.0=pyha770c72_1 | pickleshare | g0.7.5=py_1003 |
+| anyio | 3.6.2 | interpret | 0.2.4=py37_0 | pillow | g9.2.0=py310h454ad03_3 |
+| aom | 3.5.0=h27087fc_0 | interpret-core | 0.2.4=py37h21ff451_0 | pip | g22.3.1=pyhd8ed1ab_0 |
+| applicationinsights | 0.11.10 | ipykernel | 6.17.0=pyh210e3f2_0 | pkginfo | 1.8.3 |
+| argcomplete | 2.0.0 | ipython | 8.6.0=pyh41d4057_1 | platformdirs | 2.5.3 |
+| argon2-cffi | 21.3.0 | ipython-genutils | 0.2.0 | plotly | g4.14.3=pyh44b312d_0 |
+| argon2-cffi-bindings | 21.2.0 | ipywidgets | 7.7.0 | pmdarima | g2.0.1=py310h5764c6d_0 |
+| arrow-cpp | 9.0.0=py310he7aa4d3_2_cpu | isodate | 0.6.0=py_1 | portalocker | g2.6.0=py310hff52083_1 |
+| asttokens | 2.1.0=pyhd8ed1ab_0 | itsdangerous | 2.1.2=pyhd8ed1ab_0 | pox | g0.3.2=pyhd8ed1ab_0 |
+| astunparse | 1.6.3=pyhd8ed1ab_0 | jdcal | 1.4.1=py_0 | ppft | g1.7.6.6=pyhd8ed1ab_0 |
+| async-timeout | 4.0.2=pyhd8ed1ab_0 | jedi | 0.18.1=pyhd8ed1ab_2 | prettytable | 3.2.0 |
+| attrs | 22.1.0=pyh71513ae_1 | jeepney | 0.8.0 | prometheus-client | 0.15.0 |
+| aws-c-cal | 0.5.11=h95a6274_0 | jinja2 | 3.1.2=pyhd8ed1ab_1 | prompt-toolkit | g3.0.32=pyha770c72_0 |
+| aws-c-common | 0.6.2=h7f98852_0 | jmespath | 1.0.1 | protobuf | g3.20.1=py310hd8f1fbe_0 |
+| aws-c-event-stream | 0.2.7=h3541f99_13 | joblib | 1.2.0=pyhd8ed1ab_0 | psutil | g5.9.4=py310h5764c6d_0 |
+| aws-c-io | 0.10.5=hfb6a706_0 | jpeg | 9e=h166bdaf_2 | pthread-stubs | g0.4=h36c2ea0_1001 |
+| aws-checksums | 0.1.11=ha31a3da_7 | jsonpickle | 2.2.0 | ptyprocess | g0.7.0=pyhd3deb0d_0 |
+| aws-sdk-cpp | 1.8.186=hecaee15_4 | jsonschema | 4.17.0 | pure_eval | g0.2.2=pyhd8ed1ab_0 |
+| azure-common | 1.1.28 | jupyter_client | 7.4.4=pyhd8ed1ab_0 | py-xgboost | g1.7.1=cpu_py310hd1aba9c_0 |
+| azure-core | 1.26.1=pyhd8ed1ab_0 | jupyter_core | 4.11.2=py310hff52083_0 | py4j | g0.10.9.5=pyhd8ed1ab_0 |
+| azure-datalake-store | 0.0.51=pyh9f0ad1d_0 | jupyter-server | 1.23.0 | pyarrow | g9.0.0=py310h9be7b57_2_cpu |
+| azure-graphrbac | 0.61.1 | jupyterlab-pygments | 0.2.2 | pyasn1 | g0.4.8=py_0 |
+| azure-identity | 1.7.0 | jupyterlab-widgets | 3.0.3 | pyasn1-modules | g0.2.7=py_0 |
+| azure-mgmt-authorization | 2.0.0 | jxrlib | 1.1=h7f98852_2 | pycosat | g0.6.4=py310h5764c6d_1 |
+| azure-mgmt-containerregistry | 10.0.0 | keras | 2.8.0 | pycparser | g2.21=pyhd8ed1ab_0 |
+| azure-mgmt-core | 1.3.2 | keras-applications | 1.0.8 | pygments | g2.13.0=pyhd8ed1ab_0 |
+| azure-mgmt-keyvault | 10.1.0 | keras-preprocessing | 1.1.2 | pyjwt | g2.6.0=pyhd8ed1ab_0 |
+| azure-mgmt-resource | 21.2.1 | keras2onnx | 1.6.5=pyhd8ed1ab_0 | pynacl | 1.5.0 |
+| azure-mgmt-storage | 20.1.0 | keyutils | 1.6.1=h166bdaf_0 | pyodbc | g4.0.34=py310hd8f1fbe_1 |
+| azure-storage-blob | 12.13.0 | kiwisolver | 1.4.4=py310hbf28c38_1 | pyopenssl | g22.1.0=pyhd8ed1ab_0 |
+| azureml-core | 1.47.0 | knack | 0.10.0 | pyparsing | g3.0.9=pyhd8ed1ab_0 |
+| azureml-dataprep | 4.5.7 | kqlmagiccustom | 0.1.114.post16 | pyperclip | 1.8.2 |
+| azureml-dataprep-native | 38.0.0 | krb5 | 1.19.3=h3790be6_0 | pyqt | g5.9.2=py310h295c915_6 |
+| azureml-dataprep-rslex | 2.11.4 | lcms2 | 2.14=h6ed2654_0 | pyrsistent | 0.19.2 |
+| azureml-dataset-runtime | 1.47.0 | ld_impl_linux-64 | 2.39=hc81fddc_0 | pysocks | g1.7.1=pyha2e5f31_6 |
+| azureml-mlflow | 1.47.0 | lerc | 4.0.0=h27087fc_0 | pyspark | g3.3.1=pyhd8ed1ab_0 |
+| azureml-opendatasets | 1.47.0 | liac-arff | 2.5.0=pyhd8ed1ab_1 | python | g3.10.6=h582c2e5_0_cpython |
+| azureml-telemetry | 1.47.0 | libabseil | 20220623.0=cxx17_h48a1fff_5 | python_abi | g3.10=2_cp310 |
+| backcall | 0.2.0=pyh9f0ad1d_0 | libaec | 1.0.6=h9c3ff4c_0 | python-dateutil | g2.8.2=pyhd8ed1ab_0 |
+| backports | 1.0=py_2 | libavif | 0.11.1=h5cdd6b5_0 | python-flatbuffers | g2.0=pyhd8ed1ab_0 |
+| backports-tempfile | 1.0 | libblas | 3.9.0=16_linux64_mkl | pytorch | g1.13.0=py3.10_cpu_0 |
+| backports-weakref | 1.0.post1 | libbrotlicommon | 1.0.9=h166bdaf_8 | pytorch-mutex | g1.0=cpu |
+| backports.functools_lru_cache | 1.6.4=pyhd8ed1ab_0 | libbrotlidec | 1.0.9=h166bdaf_8 | pytz | g2022.6=pyhd8ed1ab_0 |
+| bcrypt | 4.0.1 | libbrotlienc | 1.0.9=h166bdaf_8 | pyu2f | g0.1.5=pyhd8ed1ab_0 |
+| beautifulsoup4 | 4.9.3=pyhb0f4dca_0 | libcblas | 3.9.0=16_linux64_mkl | pywavelets | g1.3.0=py310hde88566_2 |
+| blas | 2.116=mkl | libclang | 14.0.6 | pyyaml | g6.0=py310h5764c6d_5 |
+| blas-devel | 3.9.0=16_linux64_mkl | libcrc32c | 1.1.2=h9c3ff4c_0 | pyzmq | g24.0.1=py310h330234f_1 |
+| bleach | 5.0.1 | libcurl | 7.86.0=h7bff187_1 | qt | g5.9.7=h5867ecd_1 |
+| blinker | 1.5=pyhd8ed1ab_0 | libdeflate | 1.14=h166bdaf_0 | re2 | g2022.06.01=h27087fc_0 |
+| blosc | 1.21.1=h83bc5f7_3 | libedit | 3.1.20191231=he28a2e2_2 | readline | g8.1.2=h0f457ee_0 |
+| bokeh | 3.0.1=pyhd8ed1ab_0 | libev | 4.33=h516909a_1 | regex | g2022.10.31=py310h5764c6d_0 |
+| brotli | 1.0.9=h166bdaf_8 | libevent | 2.1.10=h9b69904_4 | requests | g2.28.1=pyhd8ed1ab_1 |
+| brotli-bin | 1.0.9=h166bdaf_8 | libffi | 3.4.2=h7f98852_5 | requests-oauthlib | g1.3.1=pyhd8ed1ab_0 |
+| brotli-python | 1.0.9=py310hd8f1fbe_8 | libgcc-ng | 12.2.0=h65d4601_19 | retrying | g1.3.3=py_2 |
+| brotlipy | 0.7.0=py310h5764c6d_1005 | libgfortran-ng | 12.2.0=h69a702a_19 | rsa | g4.9=pyhd8ed1ab_0 |
+| brunsli | 0.1=h9c3ff4c_0 | libgfortran5 | 12.2.0=h337968e_19 | ruamel_yaml | g0.15.80=py310h5764c6d_1008 |
+| bzip2 | 1.0.8=h7f98852_4 | libglib | 2.74.1=h606061b_1 | ruamel-yaml | 0.17.4 |
+| c-ares | 1.18.1=h7f98852_0 | libgoogle-cloud | 2.1.0=hf2e47f9_1 | ruamel-yaml-clib | 0.2.6 |
+| c-blosc2 | 2.4.3=h7a311fb_0 | libiconv | 1.17=h166bdaf_0 | s2n | g1.0.10=h9b69904_0 |
+| ca-certificates | 2022.9.24=ha878542_0 | liblapack | 3.9.0=16_linux64_mkl | salib | g1.4.6.1=pyhd8ed1ab_0 |
+| cached_property | 1.5.2=pyha770c72_1 | liblapacke | 3.9.0=16_linux64_mkl | scikit-image | g0.19.3=py310h769672d_2 |
+| cached-property | 1.5.2=hd8ed1ab_1 | libllvm11 | 11.1.0=he0ac6c6_5 | scikit-learn | g1.1.3=py310h0c3af53_1 |
+| cachetools | 5.2.0=pyhd8ed1ab_0 | libnghttp2 | 1.47.0=hdcd2b5c_1 | scipy | g1.9.3=py310hdfbd76f_2 |
+| certifi | 2022.9.24=pyhd8ed1ab_0 | libnsl | 2.0.0=h7f98852_0 | seaborn | g0.11.1=hd8ed1ab_1 |
+| cffi | 1.15.1=py310h255011f_2 | libpng | 1.6.38=h753d276_0 | seaborn-base | g0.11.1=pyhd8ed1ab_1 |
+| cfitsio | 4.1.0=hd9d235c_0 | libprotobuf | 3.20.1=h6239696_4 | secretstorage | 3.3.3 |
+| charls | 2.3.4=h9c3ff4c_0 | libsodium | 1.0.18=h36c2ea0_1 | send2trash | 1.8.0 |
+| charset-normalizer | 2.1.1=pyhd8ed1ab_0 | libsqlite | 3.39.4=h753d276_0 | setuptools | g65.5.1=pyhd8ed1ab_0 |
+| click | 8.1.3=unix_pyhd8ed1ab_2 | libssh2 | 1.10.0=haa6b8db_3 | shap | g0.39.0=py310hb5077e9_1 |
+| cloudpickle | 2.2.0=pyhd8ed1ab_0 | libstdcxx-ng | 12.2.0=h46fd767_19 | sip | g4.19.13=py310h295c915_0 |
+| colorama | 0.4.6=pyhd8ed1ab_0 | libthrift | 0.16.0=h491838f_2 | six | g1.16.0=pyh6c4a22f_0 |
+| coloredlogs | 15.0.1=pyhd8ed1ab_3 | libtiff | 4.4.0=h55922b4_4 | skl2onnx | g1.8.0.1=pyhd8ed1ab_1 |
+| conda-package-handling | 1.9.0=py310h5764c6d_1 | libutf8proc | 2.8.0=h166bdaf_0 | sklearn-pandas | g2.2.0=pyhd8ed1ab_0 |
+| configparser | 5.3.0=pyhd8ed1ab_0 | libuuid | 2.32.1=h7f98852_1000 | slicer | g0.0.7=pyhd8ed1ab_0 |
+| contextlib2 | 21.6.0 | libuv | 1.44.2=h166bdaf_0 | smart_open | g6.2.0=pyha770c72_0 |
+| contourpy | 1.0.6=py310hbf28c38_0 | libwebp-base | 1.2.4=h166bdaf_0 | smmap | g3.0.5=pyh44b312d_0 |
+| cryptography | 38.0.3=py310h597c629_0 | libxcb | 1.13=h7f98852_1004 | snappy | g1.1.9=hbd366e4_2 |
+| cycler | 0.11.0=pyhd8ed1ab_0 | libxgboost | 1.7.1=cpu_ha3b9936_0 | sniffio | 1.3.0 |
+| cython | 0.29.32=py310hd8f1fbe_1 | libxml2 | 2.9.9=h13577e0_2 | soupsieve | g2.3.2.post1=pyhd8ed1ab_0 |
+| cytoolz | 0.12.0=py310h5764c6d_1 | libzlib | 1.2.13=h166bdaf_4 | sqlalchemy | 1.4.43 |
+| dash | 1.21.0=pyhd8ed1ab_0 | libzopfli | 1.0.3=h9c3ff4c_0 | sqlite | g3.39.4=h4ff8645_0 |
+| dash_cytoscape | 0.2.0=pyhd8ed1ab_1 | lightgbm | 3.2.1=py310h295c915_0 | sqlparse | g0.4.3=pyhd8ed1ab_0 |
+| dash-core-components | 1.17.1=pyhd8ed1ab_0 | lime | 0.2.0.1=pyhd8ed1ab_1 | stack_data | g0.6.0=pyhd8ed1ab_0 |
+| dash-html-components | 1.1.4=pyhd8ed1ab_0 | llvm-openmp | 15.0.4=he0ac6c6_0 | statsmodels | g0.13.5=py310hde88566_2 |
+| dash-renderer | 1.9.1=pyhd8ed1ab_0 | llvmlite | 0.39.1=py310h58363a5_1 | sympy | g1.11.1=py310hff52083_2 |
+| dash-table | 4.12.0=pyhd8ed1ab_0 | locket | 1.0.0=pyhd8ed1ab_0 | tabulate | g0.9.0=pyhd8ed1ab_1 |
+| dask-core | 2022.10.2=pyhd8ed1ab_0 | lxml | 4.8.0 | tbb | g2021.6.0=h924138e_1 |
+| databricks-cli | 0.17.3=pyhd8ed1ab_0 | lz4-c | 1.9.3=h9c3ff4c_1 | tensorboard | 2.8.0 |
+| dav1d | 1.0.0=h166bdaf_1 | markdown | g3.3.4=pyhd8ed1ab_0 | tensorboard-data-server | g0.6.0=py310h597c629_3 |
+| dbus | 1.13.6=h5008d03_3 | markupsafe | g2.1.1=py310h5764c6d_2 | tensorboard-plugin-wit | g1.8.1=pyhd8ed1ab_0 |
+| debugpy | 1.6.3=py310hd8f1fbe_1 | matplotlib | g3.6.2=py310hff52083_0 | tensorflow | 2.8.0 |
+| decorator | 5.1.1=pyhd8ed1ab_0 | matplotlib-base | g3.6.2=py310h8d5ebf3_0 | tensorflow-base | g2.10.0=mkl_py310hb9daa73_0 |
+| defusedxml | 0.7.1 | matplotlib-inline | g0.1.6=pyhd8ed1ab_0 | tensorflow-estimator | 2.8.0 |
+| dill | 0.3.6=pyhd8ed1ab_1 | mistune | 2.0.4 | tensorflow-io-gcs-filesystem | 0.27.0 |
+| distlib | 0.3.6 | mkl | g2022.1.0=h84fe81f_915 | termcolor | g2.1.0=pyhd8ed1ab_0 |
+| distro | 1.8.0 | mkl-devel | g2022.1.0=ha770c72_916 | terminado | 0.17.0 |
+| docker | 6.0.1 | mkl-include | g2022.1.0=h84fe81f_915 | textblob | g0.15.3=py_0 |
+| dotnetcore2 | 3.1.23 | mleap | g0.17.0=pyhd8ed1ab_0 | tf-estimator-nightly | 2.8.0.dev2021122109 |
+| entrypoints | 0.4=pyhd8ed1ab_0 | mlflow-skinny | g1.30.0=py310h1d0e22c_0 | threadpoolctl | g3.1.0=pyh8a188c0_0 |
+| et_xmlfile | 1.0.1=py_1001 | mpc | g1.2.1=h9f54685_0 | tifffile | g2022.10.10=pyhd8ed1ab_0 |
+| executing | 1.2.0=pyhd8ed1ab_0 | mpfr | g4.1.0=h9202a9a_1 | tinycss2 | 1.2.1 |
+| expat | 2.5.0=h27087fc_0 | mpmath | g1.2.1=pyhd8ed1ab_0 | tk | g8.6.12=h27826a3_0 |
+| fastjsonschema | 2.16.2 | msal | g2022.09.01=py_0 | toolz | g0.12.0=pyhd8ed1ab_0 |
+| filelock | 3.8.0 | msal-extensions | 0.3.1 | torchvision | 0.14.0 |
+| fire | 0.4.0=pyh44b312d_0 | msrest | 0.7.1 | tornado | g6.2=py310h5764c6d_1 |
+| flask | 2.2.2=pyhd8ed1ab_0 | msrestazure | 0.6.4 | tqdm | g4.64.1=pyhd8ed1ab_0 |
+| flask-compress | 1.13=pyhd8ed1ab_0 | multidict | g6.0.2=py310h5764c6d_2 | traitlets | g5.5.0=pyhd8ed1ab_0 |
+| flatbuffers | 2.0.7=h27087fc_0 | multiprocess | g0.70.14=py310h5764c6d_3 | typed-ast | 1.4.3 |
+| fontconfig | 2.14.1=hc2a2eb6_0 | munkres | g1.1.4=pyh9f0ad1d_0 | typing_extensions | g4.4.0=pyha770c72_0 |
+| fonttools | 4.38.0=py310h5764c6d_1 | mypy | 0.780 | typing-extensions | g4.4.0=hd8ed1ab_0 |
+| freetype | 2.12.1=hca18f0e_0 | mypy-extensions | 0.4.3 | tzdata | g2022f=h191b570_0 |
+| frozenlist | 1.3.3=py310h5764c6d_0 | nbclassic | 0.4.8 | unicodedata2 | g15.0.0=py310h5764c6d_0 |
+| fsspec | 2022.10.0=pyhd8ed1ab_0 | nbclient | 0.7.0 | unixodbc | g2.3.10=h583eb01_0 |
+| fusepy | 3.0.1 | nbconvert | 7.2.3 | urllib3 | g1.26.4=pyhd8ed1ab_0 |
+| future | 0.18.2=pyhd8ed1ab_6 | nbformat | 5.7.0 | virtualenv | 20.14.0 |
+| gast | 0.4.0=pyh9f0ad1d_0 | ncurses | g6.3=h27087fc_1 | wcwidth | g0.2.5=pyh9f0ad1d_2 |
+| gensim | 4.2.0=py310h769672d_0 | ndg-httpsclient | 0.5.1 | webencodings | g0.5.1=py_1 |
+| geographiclib | 1.52=pyhd8ed1ab_0 | nest-asyncio | g1.5.6=pyhd8ed1ab_0 | websocket-client | 1.4.2 |
+| geopy | 2.1.0=pyhd3deb0d_0 | networkx | g2.8.8=pyhd8ed1ab_0 | werkzeug | g2.2.2=pyhd8ed1ab_0 |
+| gettext | 0.21.1=h27087fc_0 | nltk | g3.6.2=pyhd8ed1ab_0 | wheel | g0.38.3=pyhd8ed1ab_0 |
+| gevent | 22.10.1=py310hab16fe0_1 | notebook | 6.5.2 | widgetsnbextension | 3.6.1 |
+| gflags | 2.2.2=he1b5a44_1004 | notebook-shim | 0.2.2 | wrapt | g1.14.1=py310h5764c6d_1 |
+| giflib | 5.2.1=h36c2ea0_2 | numba | g0.56.3=py310ha5257ce_0 | xgboost | g1.7.1=cpu_py310hd1aba9c_0 |
+| gitdb | 4.0.9=pyhd8ed1ab_0 | numpy | g1.23.4=py310h53a5b5f_1 | xorg-libxau | g1.0.9=h7f98852_0 |
+| gitpython | 3.1.29=pyhd8ed1ab_0 | oauthlib | g3.2.2=pyhd8ed1ab_0 | xorg-libxdmcp | g1.1.3=h7f98852_0 |
+| glib | 2.74.1=h6239696_1 | onnx | g1.12.0=py310h3d64581_0 | xyzservices | g2022.9.0=pyhd8ed1ab_0 |
+| glib-tools | 2.74.1=h6239696_1 | onnxconverter-common | g1.7.0=pyhd8ed1ab_0 | xz | g5.2.6=h166bdaf_0 |
+| glog | 0.6.0=h6f12383_0 | onnxmltools | g1.7.0=pyhd8ed1ab_0 | yaml | g0.2.5=h7f98852_2 |
+| gmp | 6.2.1=h58526e2_0 | onnxruntime | g1.13.1=py310h00a7d45_1 | yarl | g1.8.1=py310h5764c6d_0 |
+| gmpy2 | 2.1.2=py310h3ec546c_1 | openjpeg | g2.5.0=h7d73246_1 | zeromq | g4.3.4=h9c3ff4c_1 |
+| google-auth | 2.14.0=pyh1a96a4e_0 | openpyxl | g3.0.7=pyhd8ed1ab_0 | zfp | g1.0.0=h27087fc_3 |
+| google-auth-oauthlib | 0.4.6=pyhd8ed1ab_0 | openssl | g1.1.1s=h166bdaf_0 | zipp | g3.10.0=pyhd8ed1ab_0 |
+| google-pasta | 0.2.0=pyh8c360ce_0 | opt_einsum | g3.3.0=pyhd8ed1ab_1 | zlib | g1.2.13=h166bdaf_4 |
+| greenlet | 1.1.3.post0=py310hd8f1fbe_0 | orc | g1.7.6=h6c59b99_0 | zlib-ng | g2.0.6=h166bdaf_0 |
+| grpc-cpp | 1.46.4=hbad87ad_7 | packaging | g21.3=pyhd8ed1ab_0 | zope.event | g4.5.0=pyh9f0ad1d_0 |
+| grpcio | 1.46.4=py310h946def9_7 | pandas | g1.5.1=py310h769672d_1 | zope.interface | g5.5.1=py310h5764c6d_0 |
+| gst-plugins-base | 1.14.0=hbbd80ab_1 | pandasql | 0.7.3 | zstd | g1.5.2=h6239696_4 |
+| gstreamer | 1.14.0=h28cd5cc_2 | pandocfilters | 1.5.0 | | |
+| h5py | 3.7.0=nompi_py310h416281c_102 | paramiko | 2.12.0 | | |
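To confirm whether a package from the Python table above is actually present in your own session, and at which version, you can query installed distribution metadata. A minimal sketch; the package names in the loop are examples, and what is installed depends on your environment:

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(package: str):
    """Return the installed version string for `package`, or None if absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

# Example packages drawn from the table above.
for name in ["numpy", "pandas", "pyspark"]:
    print(name, installed_version(name) or "not installed")
```

Note that conda-style build strings in the table (e.g. `1.23.4=py310h53a5b5f_1`) include a build tag after `=`; `importlib.metadata` reports only the version portion before it.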
### R libraries (Preview)
-| **Library** | **Version** | **Library** | **Version** | **Library** | **Version** |
-|:--:|:--:|:-:|:-:|:-:|:--:|
-| abind | 1.4-5 | gtools | 3.8.2 | RColorBrewer | 1.1-3 |
-| anomalize | 0.2.2 | hardhat | 0.2.0 | Rcpp | 1.0.8.3 |
-| anytime | 0.3.9 | haven | 2.5.0 | RcppArmadillo | 0.11.0.0.0 |
-| arrow | 7.0.0 | highr | 0.9 | RcppEigen | 0.3.3.9.2 |
-| askpass | 1.1 | hms | 1.1.1 | RcppParallel | 5.1.5 |
-| assertthat | 0.2.1 | htmltools | 0.5.2 | RcppRoll | 0.3.0 |
-| backports | 1.4.1 | htmlwidgets | 1.5.4 | readr | 2.1.2 |
-| base64enc | 0.1-3 | httr | 1.4.3 | readxl | 1.4.0 |
-| BH | 1.78.0-0 | hwriter | 1.3.2.1 | recipes | 0.2.0 |
-| bit | 4.0.4 | ids | 1.0.1 | rematch | 1.0.1 |
-| bit64 | 4.0.5 | ini | 0.3.1 | rematch2 | 2.1.2 |
-| blob | 1.2.3 | inline | 0.3.19 | remotes | 2.4.2 |
-| brew | 1.0-7 | ipred | 0.9-12 | reprex | 2.0.1 |
-| brio | 1.1.3 | isoband | 0.2.5 | reshape2 | 1.4.3 |
-| broom | 0.8.0 | iterators | 1.0.14 | reticulate | 1.18 |
-| bslib | 0.3.1 | jquerylib | 0.1.4 | rex | 1.2.1 |
-| cachem | 1.0.6 | jsonlite | 1.7.2 | rlang | 1.0.2 |
-| callr | 3.7.0 | knitr | 1.39 | rmarkdown | 2.14 |
-| car | 3.0-13 | labeling | 0.4.2 | RODBC | 1.3-19 |
-| carData | 3.0-5 | lambda.r | 1.2.4 | roxygen2 | 7.1.2 |
-| caret | 6.0-86 | later | 1.3.0 | rprojroot | 2.0.3 |
-| cellranger | 1.1.0 | lava | 1.6.10 | rsample | 0.1.1 |
-| checkmate | 2.1.0 | lazyeval | 0.2.2 | RSQLite | 2.2.13 |
-| chron | 2.3-56 | lifecycle | 1.0.1 | rstan | 2.21.5 |
-| cli | 3.3.0 | listenv | 0.8.0 | rstantools | 2.2.0 |
-| clipr | 0.8.0 | lme4 | 1.1-29 | rstatix | 0.7.0 |
-| colorspace | 2.0-3 | lmtest | 0.9-40 | rstudioapi | 0.13 |
-| commonmark | 1.8.0 | loo | 2.5.1 | rversions | 2.1.1 |
-| config | 0.3.1 | lubridate | 1.8.0 | rvest | 1.0.2 |
-| corrplot | 0.92 | magrittr | 2.0.3 | sass | 0.4.1 |
-| covr | 3.5.1 | maptools | 1.1-4 | scales | 1.2.0 |
-| cpp11 | 0.4.2 | markdown | 1.1 | selectr | 0.4-2 |
-| crayon | 1.5.1 | MatrixModels | 0.5-0 | sessioninfo | 1.2.2 |
-| credentials | 1.3.2 | matrixStats | 0.62.0 | shape | 1.4.6 |
-| crosstalk | 1.2.0 | memoise | 2.0.1 | slider | 0.2.2 |
-| curl | 4.3.2 | mime | 0.12 | sourcetools | 0.1.7 |
-| data.table | 1.14.2 | minqa | 1.2.4 | sp | 1.4-7 |
-| DBI | 1.1.2 | ModelMetrics | 1.2.2.2 | sparklyr | 1.5.2 |
-| dbplyr | 2.1.1 | modelr | 0.1.8 | SparseM | 1.81 |
-| desc | 1.4.1 | munsell | 0.5.0 | sqldf | 0.4-11 |
-| devtools | 2.3.2 | nloptr | 2.0.1 | SQUAREM | 2021.1 |
-| diffobj | 0.3.5 | notebookutils | 3.1.2-20220721.3 | StanHeaders | 2.21.0-7 |
-| digest | 0.6.29 | numDeriv | 2016.8-1.1 | stringi | 1.7.6 |
-| dplyr | 1.0.9 | openssl | 2.0.0 | stringr | 1.4.0 |
-| DT | 0.22 | padr | 0.6.0 | sweep | 0.2.3 |
-| dtplyr | 1.2.1 | parallelly | 1.31.1 | sys | 3.4 |
-| dygraphs | 1.1.1.6 | pbkrtest | 0.5.1 | testthat | 3.1.4 |
-| ellipsis | 0.3.2 | pillar | 1.7.0 | tibble | 3.1.7 |
-| evaluate | 0.15 | pkgbuild | 1.3.1 | tibbletime | 0.1.6 |
-| extraDistr | 1.9.1 | pkgconfig | 2.0.3 | tidyr | 1.2.0 |
-| fansi | 1.0.3 | pkgload | 1.2.4 | tidyselect | 1.1.2 |
-| farver | 2.1.0 | plogr | 0.2.0 | tidyverse | 1.3.1 |
-| fastmap | 1.1.0 | plotly | 4.10.0 | timeDate | 3043.102 |
-| forcats | 0.5.1 | plotrix | 3.8-1 | timetk | 2.8.0 |
-| foreach | 1.5.2 | plyr | 1.8.7 | tinytex | 0.38 |
-| forecast | 8.13 | praise | 1.0.0 | tseries | 0.10-51 |
-| forge | 0.2.0 | prettyunits | 1.1.1 | tsfeatures | 1.0.2 |
-| formatR | 1.12 | pROC | 1.18.0 | TTR | 0.24.3 |
-| fracdiff | 1.5-1 | processx | 3.5.3 | tzdb | 0.3.0 |
-| fs | 1.5.2 | prodlim | 2019.11.13 | urca | 1.3-0 |
-| furrr | 0.3.0 | progress | 1.2.2 | usethis | 2.1.5 |
-| futile.logger | 1.4.3 | progressr | 0.10.0 | utf8 | 1.2.2 |
-| futile.options | 1.0.1 | promises | 1.2.0.1 | uuid | 1.1-0 |
-| future | 1.25.0 | prophet | 0.6.1 | vctrs | 0.4.1 |
-| future.apply | 1.9.0 | proto | 1.0.0 | viridisLite | 0.4.0 |
-| gargle | 1.2.0 | ps | 1.7.0 | vroom | 1.5.7 |
-| generics | 0.1.2 | purrr | 0.3.4 | waldo | 0.4.0 |
-| gert | 1.6.0 | quadprog | 1.5-8 | warp | 0.2.0 |
-| ggplot2 | 3.3.6 | quantmod | 0.4.20 | whisker | 0.4 |
-| gh | 1.3.0 | quantreg | 5.93 | withr | 2.5.0 |
-| gitcreds | 0.1.1 | R.methodsS3 | 1.8.1 | xfun | 0.30 |
-| glmnet | 4.1-4 | R.oo | 1.24.0 | xml2 | 1.3.3 |
-| globals | 0.14.0 | R.utils | 2.12.0 | xopen | 1.0.0 |
-| glue | 1.6.2 | r2d3 | 0.2.6 | xtable | 1.8-4 |
-| gower | 1.0.0 | R6 | 2.5.1 | xts | 0.12.1 |
-| gridExtra | 2.3 | randomForest | 4.7-1 | yaml | 2.3.5 |
-| gsubfn | 0.7 | rappdirs | 0.3.3 | zip | 2.2.0 |
-| gtable | 0.3.0 | rcmdcheck | 1.4.0 | zoo | 1.8-10 |
+| **Library** | **Version** | **Library** | **Version** | **Library** | **Version** |
+|:-:|:--:|:-:|:--:|:-:|:--:|
+| askpass | 1.1 | highcharter | 0.9.4 | readr | 2.1.3 |
+| assertthat | 0.2.1 | highr | 0.9 | readxl | 1.4.1 |
+| backports | 1.4.1 | hms | 1.1.2 | recipes | 1.0.3 |
+| base64enc | 0.1-3 | htmltools | 0.5.3 | rematch | 1.0.1 |
+| bit | 4.0.5 | htmlwidgets | 1.5.4 | rematch2 | 2.1.2 |
+| bit64 | 4.0.5 | httpcode | 0.3.0 | remotes | 2.4.2 |
+| blob | 1.2.3 | httpuv | 1.6.6 | reprex | 2.0.2 |
+| brew | 1.0-8 | httr | 1.4.4 | reshape2 | 1.4.4 |
+| brio | 1.1.3 | ids | 1.0.1 | rjson | 0.2.21 |
+| broom | 1.0.1 | igraph | 1.3.5 | rlang | 1.0.6 |
+| bslib | 0.4.1 | infer | 1.0.3 | rlist | 0.4.6.2 |
+| cachem | 1.0.6 | ini | 0.3.1 | rmarkdown | 2.18 |
+| callr | 3.7.3 | ipred | 0.9-13 | RODBC | 1.3-19 |
+| caret | 6.0-93 | isoband | 0.2.6 | roxygen2 | 7.2.2 |
+| cellranger | 1.1.0 | iterators | 1.0.14 | rprojroot | 2.0.3 |
+| cli | 3.4.1 | jquerylib | 0.1.4 | rsample | 1.1.0 |
+| clipr | 0.8.0 | jsonlite | 1.8.3 | rstudioapi | 0.14 |
+| clock | 0.6.1 | knitr | 1.41 | rversions | 2.1.2 |
+| colorspace | 2.0-3 | labeling | 0.4.2 | rvest | 1.0.3 |
+| commonmark | 1.8.1 | later | 1.3.0 | sass | 0.4.4 |
+| config | 0.3.1 | lava | 1.7.0 | scales | 1.2.1 |
+| conflicted | 1.1.0 | lazyeval | 0.2.2 | selectr | 0.4-2 |
+| coro | 1.0.3 | lhs | 1.1.5 | sessioninfo | 1.2.2 |
+| cpp11 | 0.4.3 | lifecycle | 1.0.3 | shiny | 1.7.3 |
+| crayon | 1.5.2 | lightgbm | 3.3.3 | slider | 0.3.0 |
+| credentials | 1.3.2 | listenv | 0.8.0 | sourcetools | 0.1.7 |
+| crosstalk | 1.2.0 | lobstr | 1.1.2 | sparklyr | 1.7.8 |
+| crul | 1.3 | lubridate | 1.9.0 | SQUAREM | 2021.1 |
+| curl | 4.3.3 | magrittr | 2.0.3 | stringi | 1.7.8 |
+| data.table | 1.14.6 | maps | 3.4.1 | stringr | 1.4.1 |
+| DBI | 1.1.3 | memoise | 2.0.1 | sys | 3.4.1 |
+| dbplyr | 2.2.1 | mime | 0.12 | systemfonts | 1.0.4 |
+| desc | 1.4.2 | miniUI | 0.1.1.1 | testthat | 3.1.5 |
+| devtools | 2.4.5 | modeldata | 1.0.1 | textshaping | 0.3.6 |
+| dials | 1.1.0 | modelenv | 0.1.0 | tibble | 3.1.8 |
+| DiceDesign | 1.9 | ModelMetrics | 1.2.2.2 | tidymodels | 1.0.0 |
+| diffobj | 0.3.5 | modelr | 0.1.10 | tidyr | 1.2.1 |
+| digest | 0.6.30 | munsell | 0.5.0 | tidyselect | 1.2.0 |
+| downlit | 0.4.2 | numDeriv | 2016.8-1.1 | tidyverse | 1.3.2 |
+| dplyr | 1.0.10 | openssl | 2.0.4 | timechange | 0.1.1 |
+| dtplyr | 1.2.2 | parallelly | 1.32.1 | timeDate | 4021.106 |
+| e1071 | 1.7-12 | parsnip | 1.0.3 | tinytex | 0.42 |
+| ellipsis | 0.3.2 | patchwork | 1.1.2 | torch | 0.9.0 |
+| evaluate | 0.18 | pillar | 1.8.1 | triebeard | 0.3.0 |
+| fansi | 1.0.3 | pkgbuild | 1.4.0 | TTR | 0.24.3 |
+| farver | 2.1.1 | pkgconfig | 2.0.3 | tune | 1.0.1 |
+| fastmap | 1.1.0 | pkgdown | 2.0.6 | tzdb | 0.3.0 |
+| fontawesome | 0.4.0 | pkgload | 1.3.2 | urlchecker | 1.0.1 |
+| forcats | 0.5.2 | plotly | 4.10.1 | urltools | 1.7.3 |
+| foreach | 1.5.2 | plyr | 1.8.8 | usethis | 2.1.6 |
+| forge | 0.2.0 | praise | 1.0.0 | utf8 | 1.2.2 |
+| fs | 1.5.2 | prettyunits | 1.1.1 | uuid | 1.1-0 |
+| furrr | 0.3.1 | pROC | 1.18.0 | vctrs | 0.5.1 |
+| future | 1.29.0 | processx | 3.8.0 | viridisLite | 0.4.1 |
+| future.apply | 1.10.0 | prodlim | 2019.11.13 | vroom | 1.6.0 |
+| gargle | 1.2.1 | profvis | 0.3.7 | waldo | 0.4.0 |
+| generics | 0.1.3 | progress | 1.2.2 | warp | 0.2.0 |
+| gert | 1.9.1 | progressr | 0.11.0 | whisker | 0.4 |
+| ggplot2 | 3.4.0 | promises | 1.2.0.1 | withr | 2.5.0 |
+| gh | 1.3.1 | proxy | 0.4-27 | workflows | 1.1.2 |
+| gistr | 0.9.0 | pryr | 0.1.5 | workflowsets | 1.0.0 |
+| gitcreds | 0.1.2 | ps | 1.7.2 | xfun | 0.35 |
+| globals | 0.16.2 | purrr | 0.3.5 | xgboost | 1.6.0.1 |
+| glue | 1.6.2 | quantmod | 0.4.20 | XML | 3.99-0.12 |
+| googledrive | 2.0.0 | r2d3 | 0.2.6 | xml2 | 1.3.3 |
+| googlesheets4 | 1.0.1 | R6 | 2.5.1 | xopen | 1.0.0 |
+| gower | 1.0.0 | ragg | 1.2.4 | xtable | 1.8-4 |
+| GPfit | 1.0-8 | rappdirs | 0.3.3 | xts | 0.12.2 |
+| gtable | 0.3.1 | rbokeh | 0.5.2 | yaml | 2.3.6 |
+| hardhat | 1.2.0 | rcmdcheck | 1.4.0 | yardstick | 1.1.0 |
+| haven | 2.5.1 | RColorBrewer | 1.1-3 | zip | 2.2.2 |
+| hexbin | 1.28.2 | Rcpp | 1.0.9 | zoo | 1.8-11 |
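Because the diff above replaces the whole library list, comparing the two runtime versions by hand is error-prone. A minimal Python sketch (assuming the table layout shown above, with three library/version pairs per row) that flattens such a table into a dict for comparison:

```python
# Parse a markdown library/version table that lays out three
# (library, version) pairs per row, as in the runtime table above.
def parse_library_table(lines):
    versions = {}
    for line in lines:
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        for lib, ver in zip(cells[0::2], cells[1::2]):
            # Skip header cells and the |:-:|:--:| alignment row.
            if lib and "Library" not in lib and not set(ver) <= set(":- "):
                versions[lib] = ver
    return versions

rows = [
    "| **Library** | **Version** | **Library** | **Version** | **Library** | **Version** |",
    "|:-:|:--:|:-:|:--:|:-:|:--:|",
    "| askpass | 1.1 | highcharter | 0.9.4 | readr | 2.1.3 |",
]
print(parse_library_table(rows)["readr"])  # prints 2.1.3
```

Parsing both the removed (`-`) and added (`+`) rows this way makes it easy to diff the two dicts and list upgraded, added, and dropped packages.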
## Next steps

- [Manage libraries for Apache Spark pools in Azure Synapse Analytics](apache-spark-manage-pool-packages.md)
synapse-analytics Apache Spark Version Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-version-support.md
The following table lists the runtime name, Apache Spark version, and release date for each supported Azure Synapse Runtime release.
| Runtime name | Release date | Release stage | End of life announced date | End of life effective date |
|-|-|-|-|-|
| [Azure Synapse Runtime for Apache Spark 3.3](./apache-spark-33-runtime.md) | Nov 17, 2022 | Public Preview | - | - |
| [Azure Synapse Runtime for Apache Spark 3.2](./apache-spark-32-runtime.md) | July 8, 2022 | GA | July 8, 2023 | July 8, 2024 |
-| [Azure Synapse Runtime for Apache Spark 3.1](./apache-spark-3-runtime.md) | May 26, 2021 | LTS | January 26, 2023 | January 26, 2024 |
-| [Azure Synapse Runtime for Apache Spark 2.4](./apache-spark-24-runtime.md) | December 15, 2020 | __End of Life Announced (EOLA)__ | __July 29, 2022__ | __July 28, 2023__ |
+| [Azure Synapse Runtime for Apache Spark 3.1](./apache-spark-3-runtime.md) | May 26, 2021 | __End of Life Announced (EOLA)__ | January 26, 2023 | January 26, 2024 |
+| [Azure Synapse Runtime for Apache Spark 2.4](./apache-spark-24-runtime.md) | December 15, 2020 | __End of Life Announced (EOLA)__ | __July 29, 2022__ | __September 29, 2023__ |
## Runtime release stages
virtual-desktop Deploy Azure Ad Joined Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/deploy-azure-ad-joined-vm.md
The default configuration supports connections from Windows 11 or Windows 10 using one of the following:
- The local PC is hybrid Azure AD-joined to the same Azure AD tenant as the session host
- The local PC is running Windows 11 or Windows 10, version 2004 or later, and is Azure AD registered to the same Azure AD tenant as the session host
-To enable access from Windows devices not joined to Azure AD, add **targetisaadjoined:i:1** as a [custom RDP property](customize-rdp-properties.md) to the host pool. These connections are restricted to entering user name and password credentials when signing in to the session host.
+If your local PC doesn't meet one of these conditions, add **targetisaadjoined:i:1** as a [custom RDP property](customize-rdp-properties.md) to the host pool. These connections are restricted to entering user name and password credentials when signing in to the session host.
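Custom RDP properties are applied to the host pool as a semicolon-separated string of `name:type:value` entries. With this change the host pool's property string would include, for example (the other entries shown are illustrative defaults, not required by this setting):

```
drivestoredirect:s:;audiomode:i:0;targetisaadjoined:i:1
```

Here `i` marks an integer-valued property, so `targetisaadjoined:i:1` turns the setting on.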