Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory | Msal Compare Msal Js And Adal Js | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-compare-msal-js-and-adal-js.md | authContext.acquireTokenRedirect("https://graph.microsoft.com", function (error, }); ``` -MSAL.js supports both **v1.0** and **v2.0** endpoints. The **v2.0** endpoint employs a *scope-centric* model to access resources. Thus, when you request an access token for a resource, you also need to specify the scope for that resource: +MSAL.js supports only the **v2.0** endpoint. The **v2.0** endpoint employs a *scope-centric* model to access resources. Thus, when you request an access token for a resource, you also need to specify the scope for that resource: ```javascript msalInstance.acquireTokenRedirect({ |
active-directory | Msal Node Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-node-migration.md | authenticationContext.acquireTokenWithAuthorizationCode( ); ``` -The v2.0 endpoint employs a *scope-centric* model to access resources. Thus, when you request an access token for a resource, you also need to specify the scope for that resource: +MSAL Node supports only the **v2.0** endpoint. The v2.0 endpoint employs a *scope-centric* model to access resources. Thus, when you request an access token for a resource, you also need to specify the scope for that resource: ```javascript const tokenRequest = { |
active-directory | Entitlement Management External Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-external-users.md | When using the [Azure AD B2B](../external-identities/what-is-b2b.md) invite expe With entitlement management, you can define a policy that allows users from organizations you specify to be able to self-request an access package. That policy includes whether approval is required, whether access reviews are required, and an expiration date for the access. If approval is required, you might consider inviting one or more users from the external organization to your directory, designating them as sponsors, and configuring that sponsors are approvers - since they're likely to know which external users from their organization need access. Once you've configured the access package, obtain the access package's request link so you can send that link to your contact person (sponsor) at the external organization. That contact can share with other users in their external organization, and they can use this link to request the access package. Users from that organization who have already been invited into your directory can also use that link. -Typically, when a request is approved, entitlement management will provision the user with the necessary access. If the user isn't already in your directory, entitlement management will first invite the user. When the user is invited, Azure AD will automatically create a B2B guest account for them but won't send the user an email. An administrator may have previously limited which organizations are allowed for collaboration, by setting a [B2B allow or blocklist](../external-identities/allow-deny-list.md) to allow or block invites to other organization's domains. If the user's domain isn't allowed by those lists, then they won't be invited and can't be assigned access until the lists are updated. +Typically, when a request is approved, entitlement management provisions the user with the necessary access. If the user isn't already in your directory, entitlement management will first invite the user. When the user is invited, Azure AD will automatically create a B2B guest account for them but won't send the user an email. An administrator may have previously limited which organizations are allowed for collaboration, by setting a [B2B allow or blocklist](../external-identities/allow-deny-list.md) to allow or block invites to other organization's domains. If the user's domain isn't allowed by those lists, then they won't be invited and can't be assigned access until the lists are updated. -Since you don't want the external user's access to last forever, you specify an expiration date in the policy, such as 180 days. After 180 days, if their access isn't extended, entitlement management will remove all access associated with that access package. By default, if the user who was invited through entitlement management has no other access package assignments, then when they lose their last assignment, their guest account will be blocked from signing in for 30 days, and later removed. This prevents the proliferation of unnecessary accounts. As described in the following sections, these settings are configurable. +Since you don't want the external user's access to last forever, you specify an expiration date in the policy, such as 180 days. After 180 days, if their access isn't extended, entitlement management will remove all access associated with that access package. 
By default, if the user who was invited through entitlement management has no other access package assignments, then when they lose their last assignment, their guest account is blocked from signing in for 30 days, and later removed. This prevents the proliferation of unnecessary accounts. As described in the following sections, these settings are configurable. ## How access works for external users The following diagram and steps provide an overview of how external users are gr 1. An external user (**Requestor A** in this example) uses the My Access portal link to [request access](entitlement-management-request-access.md) to the access package. How the user signs in depends on the authentication type of the directory or domain that's defined in the connected organization and in the external users settings. -1. An approver [approves the request](entitlement-management-request-approve.md) (or the request is auto-approved). +1. An approver [approves the request](entitlement-management-request-approve.md) (or the request is autoapproved). 1. The request goes into the [delivering state](entitlement-management-process.md). -1. Using the B2B invite process, a guest user account is created in your directory (**Requestor A (Guest)** in this example). If an [allowlist or a blocklist](../external-identities/allow-deny-list.md) is defined, the list setting will be applied. +1. Using the B2B invite process, a guest user account is created in your directory (**Requestor A (Guest)** in this example). If an [allowlist or a blocklist](../external-identities/allow-deny-list.md) is defined, the list setting is applied. 1. The guest user is assigned access to all of the resources in the access package. It can take some time for changes to be made in Azure AD and to other Microsoft Online Services or connected SaaS applications. For more information, see [When changes are applied](entitlement-management-access-package-resources.md#when-changes-are-applied). To ensure people outside of your organization can request access packages and ge - Allowing guests to invite other guests to your directory means that guest invites can occur outside of entitlement management. We recommend setting **Guests can invite** to **No** to only allow for properly governed invitations. - If you have been previously using the B2B allowlist, you must either remove that list, or make sure all the domains of all the organizations you want to partner with using entitlement management are added to the list. Alternatively, if you're using the B2B blocklist, you must make sure no domain of any organization you want to partner with is present on that list.-- If you create an entitlement management policy for **All users** (All connected organizations + any new external users), and a user doesn't belong to a connected organization in your directory, a connected organization will automatically be created for them when they request the package. However, any B2B [allow or blocklist](../external-identities/allow-deny-list.md) settings you have will take precedence. 
Therefore, you'll want to remove the allowlist, if you were using one, so that **All users** can request access, and exclude all authorized domains from your blocklist if you're using a blocklist.+- If you create an entitlement management policy for **All users** (All connected organizations + any new external users), and a user doesn't belong to a connected organization in your directory, a connected organization will automatically be created for them when they request the package. However, any B2B [allow or blocklist](../external-identities/allow-deny-list.md) settings you have will take precedence. Therefore, you want to remove the allowlist, if you were using one, so that **All users** can request access, and exclude all authorized domains from your blocklist if you're using a blocklist. - If you want to create an entitlement management policy that includes **All users** (All connected organizations + any new external users), you must first enable email one-time passcode authentication for your directory. For more information, see [Email one-time passcode authentication](../external-identities/one-time-passcode.md). - For more information about Azure AD B2B external collaboration settings, see [Configure external collaboration settings](../external-identities/external-collaboration-settings-configure.md). To ensure people outside of your organization can request access packages and ge > [!NOTE] > If you create a connected organization for an Azure AD tenant from a different Microsoft cloud, you also need to configure cross-tenant access settings appropriately. For more information on how to configure these settings, see [Configure cross-tenant access settings](../external-identities/cross-cloud-settings.md). -### Review your Conditional Access policies +### Review your Conditional Access policies (Preview) - Make sure to exclude guests from any Conditional Access policies that new guest users won't be able to meet as this will block them from being able to sign in to your directory. For example, guests likely don't have a registered device, aren't in a known location, and don't want to re-register for multi-factor authentication (MFA), so adding these requirements in a Conditional Access policy will block guests from using entitlement management. For more information, see [What are conditions in Azure Active Directory Conditional Access?](../conditional-access/concept-conditional-access-conditions.md). +- A common policy for entitlement management customers is to block all apps from guests except entitlement management for guests. This policy allows guests to enter MyAccess and request an access package. This package should contain a group (it's called Guests from MyAccess in the example below), which should be excluded from the block all apps policy. Once the package is approved, the guest is in the directory. Given that the end user has the access package assignment and is part of the group, the end user is able to access all other apps. Other common policies include excluding the entitlement management app from MFA and compliant device requirements. 
++ :::image type="content" source="media/entitlement-management-external-users/exclude-app-guests.png" alt-text="Screenshot of exclude app options."::: ++ :::image type="content" source="media/entitlement-management-external-users/exclude-cloud-apps.png" alt-text="Screenshot of selection to exclude cloud apps."::: ++ :::image type="content" source="media/entitlement-management-external-users/exclude-app-guests-selection.png" alt-text="Screenshot of the exclude guests app selection."::: ++> [!NOTE] +> The entitlement management app includes the entitlement management side of MyAccess, the entitlement management side of the Azure portal, and the entitlement management part of Microsoft Graph. The latter two require additional permissions for access, so guests can't access them unless explicit permission is provided. + ### Review your SharePoint Online external sharing settings - If you want to include SharePoint Online sites in your access packages for external users, make sure that your organization-level external sharing setting is set to **Anyone** (users don't require sign in), or **New and existing guests** (guests must sign in or provide a verification code). For more information, see [Turn external sharing on or off](/sharepoint/turn-external-sharing-on-or-off#change-the-organization-level-external-sharing-setting). |
active-directory | How To Connect Fed Saml Idp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fed-saml-idp.md | Within the SAML Response message, the Signature node contains information about 9. The SignatureMethod Algorithm must match the following sample: `<ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>` +>[!NOTE] +>To improve security, the SHA-1 algorithm is deprecated. Use a more secure algorithm such as SHA-256. For more information, see [SHA-1 signed content retirement](https://learn.microsoft.com/lifecycle/announcements/sha-1-signed-content-retired). + ## Supported bindings Bindings are the transport-related communications parameters that are required. The following requirements apply to the bindings |
active-directory | Cross Tenant Synchronization Configure Graph | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-configure-graph.md | |
active-directory | Cross Tenant Synchronization Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-configure.md | |
active-directory | Cross Tenant Synchronization Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-overview.md | |
active-directory | Cross Tenant Synchronization Topology | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-topology.md | |
active-directory | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/overview.md | |
aks | Azure Csi Files Storage Provision | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-files-storage-provision.md | This section provides guidance for cluster administrators who want to provision |server | Specify Azure storage account server address | Existing server address, for example `accountname.privatelink.file.core.windows.net`. | No | If empty, driver uses default `accountname.file.core.windows.net` or other sovereign cloud account address. | |disableDeleteRetentionPolicy | Specify whether disable DeleteRetentionPolicy for storage account created by driver. | `true` or `false` | No | `false` | |allowBlobPublicAccess | Allow or disallow public access to all blobs or containers for storage account created by driver. | `true` or `false` | No | `false` |+|networkEndpointType | Specify network endpoint type for the storage account created by driver. If `privateEndpoint` is specified, a private endpoint will be created for the storage account. For other cases, a service endpoint will be created by default. | "",`privateEndpoint`| No | "" | |requireInfraEncryption | Specify whether or not the service applies a secondary layer of encryption with platform managed keys for data at rest for storage account created by driver. | `true` or `false` | No | `false` | |storageEndpointSuffix | Specify Azure storage endpoint suffix. | `core.windows.net`, `core.chinacloudapi.cn`, etc. | No | If empty, driver uses default storage endpoint suffix according to cloud environment. For example, `core.windows.net`. | |tags | [Tags][tag-resources] are created in new storage account. | Tag format: 'foo=aaa,bar=bbb' | No | "" | |
analysis-services | Analysis Services Create Terraform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-create-terraform.md | + + Title: 'Quickstart: Create an Azure Analysis Services server using Terraform' +description: 'In this article, you create an Azure Analysis Services server using Terraform' ++ Last updated : 3/10/2023++++++# Quickstart: Create an Azure Analysis Services server using Terraform ++This article shows how to use [Terraform](/azure/terraform) to create an [Azure Analysis Services](/azure/analysis-services/analysis-services-overview) server. ++In this article, you learn how to: ++> [!div class="checklist"] +> * Create a random pet name for the Azure resource group name using [random_pet](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/pet) +> * Create an Azure resource group using [azurerm_resource_group](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/resource_group) +> * Create a random string for the Azure Analysis Services server name using [random_string](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/string) +> * Create an Azure Analysis Services server using [azurerm_analysis_services_server](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/analysis_services_server) +++## Prerequisites ++- [Install and configure Terraform](/azure/developer/terraform/quickstart-configure) ++## Implement the Terraform code ++> [!NOTE] +> The sample code for this article is located in the [Azure Terraform GitHub repo](https://github.com/Azure/terraform/tree/master/quickstart/101-analysis-services-create). You can view the log file containing the [test results from current and previous versions of Terraform](https://github.com/Azure/terraform/tree/master/quickstart/101-analysis-services-create/TestRecord.md). +> +> See more [articles and sample code showing how to use Terraform to manage Azure resources](/azure/terraform) ++1. Create a directory in which to test and run the sample Terraform code and make it the current directory. ++1. Create a file named `main.tf` and insert the following code: ++ [!code-terraform[master](~/terraform_samples/quickstart/101-analysis-services-create/main.tf)] ++1. Create a file named `outputs.tf` and insert the following code: ++ [!code-terraform[master](~/terraform_samples/quickstart/101-analysis-services-create/outputs.tf)] ++1. Create a file named `providers.tf` and insert the following code: ++ [!code-terraform[master](~/terraform_samples/quickstart/101-analysis-services-create/providers.tf)] ++1. Create a file named `variables.tf` and insert the following code: ++ [!code-terraform[master](~/terraform_samples/quickstart/101-analysis-services-create/variables.tf)] ++## Initialize Terraform +++## Create a Terraform execution plan +++## Apply a Terraform execution plan +++## Verify the results ++1. Open a PowerShell command prompt. ++1. Get the Azure resource group name. ++ ```console + $resource_group_name=$(terraform output -raw resource_group_name) + ``` ++1. Get the server name. ++ ```console + $analysis_services_server_name=$(terraform output -raw analysis_services_server_name) + ``` ++1. Run [Get-AzAnalysisServicesServer](/powershell/module/az.analysisservices/get-azanalysisservicesserver) to display information about the new server. 
++ ```azurepowershell + Get-AzAnalysisServicesServer -ResourceGroupName $resource_group_name ` + -Name $analysis_services_server_name + ``` ++## Clean up resources +++## Troubleshoot Terraform on Azure ++[Troubleshoot common problems when using Terraform on Azure](/azure/developer/terraform/troubleshoot) ++## Next steps ++> [!div class="nextstepaction"] +> [Quickstart: Configure server firewall - Portal](analysis-services-qs-firewall.md) |
app-service | Tutorial Auth Aad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-auth-aad.md | In the Cloud Shell, run the following commands on the frontend app to add the `s ```azurecli-interactive authSettings=$(az webapp auth show -g myAuthResourceGroup -n <front-end-app-name>)-authSettings=$(echo "$authSettings" | jq '.properties' | jq '.identityProviders.azureActiveDirectory.login += {"loginParameters":["scope=openid profile email offline_access api://<back-end-client-id>/user_impersonation"]}') +authSettings=$(echo "$authSettings" | jq '.properties' | jq '.identityProviders.azureActiveDirectory.login += {"loginParameters":["scope=openid offline_access api://<back-end-client-id>/user_impersonation"]}') az webapp auth set --resource-group myAuthResourceGroup --name <front-end-app-name> --body "$authSettings" ``` The commands effectively add a `loginParameters` property with additional custom scopes. Here's an explanation of the requested scopes: -- `openid`, `profile`, and `email` are requested by App Service by default already. For information, see [OpenID Connect Scopes](../active-directory/develop/v2-permissions-and-consent.md#openid-connect-scopes).-- `api://<back-end-client-id>/user_impersonation` is an exposed API in your backend app registration. It's the scope that gives you a JWT token that includes the backend app as a [token audience](https://wikipedia.org/wiki/JSON_Web_Token). +- `openid` is requested by App Service by default already. For information, see [OpenID Connect Scopes](../active-directory/develop/v2-permissions-and-consent.md#openid-connect-scopes). - [offline_access](../active-directory/develop/v2-permissions-and-consent.md#offline_access) is included here for convenience (in case you want to [refresh tokens](#what-happens-when-the-frontend-token-expires)).+- `api://<back-end-client-id>/user_impersonation` is an exposed API in your backend app registration. It's the scope that gives you a JWT token that includes the backend app as a [token audience](https://wikipedia.org/wiki/JSON_Web_Token). > [!TIP] > - To view the `api://<back-end-client-id>/user_impersonation` scope in the Azure portal, go to the **Authentication** page for the backend app, click the link under **Identity provider**, then click **Expose an API** in the left menu. if (bearerToken) { 1. Use the frontend web site in a browser. The URL is in the format of `https://<front-end-app-name>.azurewebsites.net/`. 1. The browser requests your authentication to the web app. Complete the authentication.++ :::image type="content" source="./media/tutorial-auth-aad/browser-screenshot-authentication-permission-requested-pop-up.png" alt-text="Screenshot of browser authentication pop-up requesting permissions."::: + 1. After authentication completes, the frontend application returns the home page of the app. :::image type="content" source="./media/tutorial-auth-aad/app-home-page.png" alt-text="Screenshot of web browser showing frontend application after successfully completing authentication."::: |
azure-arc | Tutorial Use Gitops Flux2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md | Title: "Tutorial: Deploy applications using GitOps with Flux v2" description: "This tutorial shows how to use GitOps with Flux v2 to manage configuration and application deployment in Azure Arc and AKS clusters." Previously updated : 02/08/2023 Last updated : 03/10/2023 This tutorial describes how to use GitOps in a Kubernetes cluster. Before you di In this tutorial, we use an example GitOps configuration with two kustomizations, so that you can see how one kustomization can have a dependency on another. You can add more kustomizations and dependencies as needed, depending on your scenario. +> [!TIP] +> You can also create Flux configurations by using Bicep, ARM templates, or Terraform AzAPI provider. For more information, see [Microsoft.KubernetesConfiguration fluxConfigurations](/azure/templates/microsoft.kubernetesconfiguration/fluxconfigurations). + > [!IMPORTANT] > The `microsoft.flux` extension released major version 1.0.0. This includes the [multi-tenancy feature](conceptual-gitops-flux2.md#multi-tenancy). If you have existing GitOps Flux v2 configurations that use a previous version of the `microsoft.flux` extension, you can upgrade to the latest extension manually using the Azure CLI: `az k8s-extension create -g <RESOURCE_GROUP> -c <CLUSTER_NAME> -n flux --extension-type microsoft.flux -t <CLUSTER_TYPE>` (use `-t connectedClusters` for Arc clusters and `-t managedClusters` for AKS clusters). |
azure-arc | Network Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/network-requirements.md | The subnet of the IP addresses for Arc resource bridge must lie in the IP addres DNS Server must have internal and external endpoint resolution. The appliance VM and control plane need to resolve the management machine and vice versa. All three must be able to reach the required URLs for deployment. -### Configuration file example --The example below highlights a couple key requirements for Arc resource bridge when creating the configuration files. The IPs for `k8snodeippoolstart` and `k8snodeippoolend` reside in the subnet range designated in `ipaddressprefix`. The `ipaddressprefix` is provided in the format of the subnet's IP address range for the virtual network and subnet mask (IP Mask) in CIDR notation. --``` -azurestackhciprovider: -virtualnetwork: -name: "mgmtvnet -vswitchname: "Default Switch -type: "Transparent" -macpoolname -vlanid: 0 -ipaddressprefix: 172.16.0.0/16 -gateway: 17.16.1.1 -dnsservers: 17.16.0.1 -vippoolstart: 172.16.250.0 -vippoolend: 172.16.250.254 -k8snodeippoolstart: 172.16.30.0 -k8snodeippoolend: 172.16.30.254 -``` ## General network requirements |
azure-cache-for-redis | Cache How To Import Export Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-import-export-data.md | description: Learn how to import and export data to and from blob storage with y Previously updated : 01/31/2023 Last updated : 03/10/2023 For information on which Azure Cache for Redis tiers support import and export, ## Import -Use import to bring Redis compatible RDB files from any Redis server running in any cloud or environment, including Redis running on Linux, Windows, or any cloud provider such as Amazon Web Services and others. Importing data is an easy way to create a cache with pre-populated data. During the import process, Azure Cache for Redis loads the RDB files from Azure storage into memory and then inserts the keys into the cache. +Use import to bring Redis compatible RDB files from any Redis server running in any cloud or environment, including Redis running on Linux, Windows, or any cloud provider such as Amazon Web Services and others. Importing data is an easy way to create a cache with prepopulated data. During the import process, Azure Cache for Redis loads the RDB files from Azure storage into memory and then inserts the keys into the cache. > [!NOTE] > Before beginning the import operation, ensure that your Redis Database (RDB) file or files are uploaded into page or block blobs in Azure storage, in the same region and subscription as your Azure Cache for Redis instance. For more information, see [Get started with Azure Blob storage](../storage/blobs/storage-quickstart-blobs-dotnet.md). If you exported your RDB file using the [Azure Cache for Redis Export](#export) feature, your RDB file is already stored in a page blob and is ready for importing. Export allows you to export the data stored in Azure Cache for Redis to Redis co > - Export works with page blobs that are supported by both classic and Resource Manager storage accounts. > - Azure Cache for Redis does not support exporting to ADLS Gen2 storage accounts. > - Export is not supported by Blob storage accounts at this time.+ > - If your cache data export to Firewall-enabled storage accounts fails, refer to [How to export if I have firewall enabled on my storage account?](#how-to-export-if-i-have-firewall-enabled-on-my-storage-account) > > For more information, see [Azure storage account overview](../storage/common/storage-account-overview.md). > This section contains frequently asked questions about the Import/Export feature - [Can I automate Import/Export using PowerShell, CLI, or other management clients?](#can-i-automate-importexport-using-powershell-cli-or-other-management-clients) - [I received a timeout error during my Import/Export operation. What does it mean?](#i-received-a-timeout-error-during-my-importexport-operation-what-does-it-mean) - [I got an error when exporting my data to Azure Blob Storage. What happened?](#i-got-an-error-when-exporting-my-data-to-azure-blob-storage-what-happened)+- [How to export if I have firewall enabled on my storage account?](#how-to-export-if-i-have-firewall-enabled-on-my-storage-account) ### What pricing tiers can use Import/Export? Azure Cache for Redis supports RDB import up through RDB version 7. ### Can I use Import/Export with Redis cluster? -Yes, and you can import/export between a clustered cache and a non-clustered cache. 
Since Redis cluster [only supports database 0](cache-how-to-premium-clustering.md#do-i-need-to-make-any-changes-to-my-client-application-to-use-clustering), any data in databases other than 0 isn't imported. When clustered cache data is imported, the keys are redistributed among the shards of the cluster. +Yes, and you can import/export between a clustered cache and a nonclustered cache. Since Redis cluster [only supports database 0](cache-how-to-premium-clustering.md#do-i-need-to-make-any-changes-to-my-client-application-to-use-clustering), any data in databases other than 0 isn't imported. When clustered cache data is imported, the keys are redistributed among the shards of the cluster. ### How does Import/Export work with a custom databases setting? To resolve this error, start the import or export operation before 15 minutes ha Export works only with RDB files stored as page blobs. Other blob types aren't currently supported, including Blob storage accounts with hot and cool tiers. For more information, see [Azure storage account overview](../storage/common/storage-account-overview.md). If you're using an access key to authenticate a storage account, having firewall exceptions on the storage account tends to cause the import/export process to fail. +### How to export if I have firewall enabled on my storage account? ++For firewall-enabled storage accounts, select **Allow Azure services on the trusted services list to access this storage account**, then use a managed identity (system-assigned or user-assigned) and assign the Storage Blob Data Contributor RBAC role to that identity's object ID. ++For more information, see [Managed identity for storage accounts - Azure Cache for Redis](cache-managed-identity.md). + ## Next steps Learn more about Azure Cache for Redis features. |
azure-fluid-relay | Azure Function Token Provider | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/azure-function-token-provider.md | Using [Azure Functions](../../azure-functions/functions-overview.md) is a fast w This example demonstrates how to create your own **HTTPTrigger Azure Function** that fetches the token by passing in your tenant key. +# [TypeScript](#tab/typescript) + ```typescript import { AzureFunction, Context, HttpRequest } from "@azure/functions"; import { ScopeType } from "@fluidframework/azure-client"; export default httpTrigger; The `generateToken` function, found in the `@fluidframework/azure-service-utils` package, generates a token for the given user that is signed using the tenant's secret key. This method enables the token to be returned to the client without exposing the secret. Instead, the token is generated server-side using the secret to provide scoped access to the given document. The example ITokenProvider below makes HTTP requests to this Azure Function to retrieve the tokens. +# [C#](#tab/csharp) ++```cs +using System; +using System.IO; +using System.Threading.Tasks; +using Microsoft.AspNetCore.Mvc; +using Microsoft.Azure.WebJobs; +using Microsoft.Azure.WebJobs.Extensions.Http; +using Microsoft.AspNetCore.Http; +using Microsoft.Extensions.Logging; +using Newtonsoft.Json; +using Newtonsoft.Json.Linq; +using System.Text; ++using Microsoft.IdentityModel.Tokens; +using System.IdentityModel.Tokens.Jwt; ++namespace dotnet_tokenprovider_functionsapp +{ + public static class AzureFunction + { + // NOTE: retrieve the key from a secure location. + private static readonly string key = "myTenantKey"; ++ [FunctionName("AzureFunction")] + public static async Task<IActionResult> Run( + [HttpTrigger(AuthorizationLevel.Function, "get", Route = null)] HttpRequest req, ILogger log) + { + string content = await new StreamReader(req.Body).ReadToEndAsync(); + JObject body = !string.IsNullOrEmpty(content) ? JObject.Parse(content) : null; ++ string tenantId = (req.Query["tenantId"].ToString() ?? body["tenantId"]?.ToString()) as string; + string documentId = (req.Query["documentId"].ToString() ?? body["documentId"]?.ToString() ?? null) as string; + string userId = (req.Query["userId"].ToString() ?? body["userId"]?.ToString()) as string; + string userName = (req.Query["userName"].ToString() ?? body["userName"]?.ToString()) as string; + string[] scopes = (req.Query["scopes"].ToString().Split(",") ?? body["scopes"]?.ToString().Split(",") ?? null) as string[]; ++ if (string.IsNullOrEmpty(tenantId)) + { + return new BadRequestObjectResult("No tenantId provided in query params"); + } ++ if (string.IsNullOrEmpty(key)) + { + return new NotFoundObjectResult($"No key found for the provided tenantId: ${tenantId}"); + } ++ // If a user is not specified, the token will not be associated with a user, and a randomly generated mock user will be used instead + var user = (string.IsNullOrEmpty(userName) || string.IsNullOrEmpty(userId)) ? + new { name = Guid.NewGuid().ToString(), id = Guid.NewGuid().ToString() } : + new { name = userName, id = userId }; ++ // Will generate the token and returned by an ITokenProvider implementation to use with the AzureClient. + string token = GenerateToken( + tenantId, + key, + scopes ?? new string[] { "doc:read", "doc:write", "summary:write" }, + documentId, + user + ); ++ return new OkObjectResult(token); + } ++ private static string GenerateToken(string tenantId, string key, string[] scopes, string? 
documentId, dynamic user, int lifetime = 3600, string ver = "1.0") + { + string docId = documentId ?? ""; + DateTime now = DateTime.Now; ++ SigningCredentials credentials = new SigningCredentials(new SymmetricSecurityKey(Encoding.UTF8.GetBytes(key)), SecurityAlgorithms.HmacSha256); ++ JwtHeader header = new JwtHeader(credentials); + JwtPayload payload = new JwtPayload + { + { "documentId", docId }, + { "scopes", scopes }, + { "tenantId", tenantId }, + { "user", user }, + { "iat", new DateTimeOffset(now).ToUnixTimeSeconds() }, + { "exp", new DateTimeOffset(now.AddSeconds(lifetime)).ToUnixTimeSeconds() }, + { "ver", ver }, + { "jti", Guid.NewGuid() } + }; ++ JwtSecurityToken token = new JwtSecurityToken(header, payload); ++ return new JwtSecurityTokenHandler().WriteToken(token); + } + } +} +``` ++The `GenerateToken` function is modeled on the `generateToken` function from the `@fluidframework/azure-service-utils` npm package, and it generates a token for the given user that is signed using the tenant's secret key. This function enables the token to be returned to the client without exposing the secret. Instead, the token is generated server-side using the secret to provide scoped access to the given document. The example ITokenProvider below makes HTTP requests to this Azure Function to retrieve the tokens. +++ ### Deploy the Azure Function Azure Functions can be deployed in several ways. For more information, see the **Deploy** section of the [Azure Functions documentation](../../azure-functions/functions-continuous-deployment.md). |
azure-monitor | Action Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md | You might have a limited number of SMS actions per action group. > [!NOTE] > > If you can't select your country/region code in the Azure portal, SMS isn't supported for your country/region. If your country/region code isn't available, you can vote to have your country/region added at [Share your ideas](https://feedback.azure.com/d365community/idea/e527eaa6-2025-ec11-b6e6-000d3a4f09d0). In the meantime, as a workaround, configure your action group to call a webhook to a third-party SMS provider that offers support in your country/region. -For information about pricing for supported countries/regions, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/). - #### Countries with SMS notification support | Country code | Country | You might have a limited number of voice actions per action group. > [!NOTE] > > If you can't select your country/region code in the Azure portal, voice calls aren't supported for your country/region. If your country/region code isn't available, you can vote to have your country/region added at [Share your ideas](https://feedback.azure.com/d365community/idea/e527eaa6-2025-ec11-b6e6-000d3a4f09d0). In the meantime, as a workaround, configure your action group to call a webhook to a third-party voice call provider that offers support in your country/region.-> -> The only country code that action groups currently support for voice notification is +1 for the United States. ++#### Countries with voice notification support +| Country code | Country | +|:|:| +| 61 | Australia | +| 43 | Austria | +| 32 | Belgium | +| 55 | Brazil | +| 1 | Canada | +| 56 | Chile | +| 420 | Czech Republic | +| 45 | Denmark | +| 358 | Finland | +| 353 | Ireland | +| 972 | Israel | +| 352 | Luxembourg | +| 60 | Malaysia | +| 52 | Mexico | +| 31 | Netherlands | +| 64 | New Zealand | +| 47 | Norway | +| 351 | Portugal | +| 65 | Singapore | +| 27 | South Africa | +| 46 | Sweden | +| 44 | United Kingdom | +| 1 | United States | For information about pricing for supported countries/regions, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/). |
azure-monitor | Azure Monitor Workspace Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/azure-monitor-workspace-manage.md | Last updated 01/19/2023 This article shows you how to create and delete an Azure Monitor workspace. When you configure Azure Monitor managed service for Prometheus, you can select an existing Azure Monitor workspace or create a new one. +> [!NOTE] +> When you create an Azure Monitor workspace, a data collection rule and a data collection endpoint named `<azure-workspace-name>` are automatically created by default, in a resource group named `MA_<azure-workspace-name>_<location>_managed`. + ## Create an Azure Monitor workspace ### [Azure portal](#tab/azure-portal) |
azure-monitor | Metrics Supported | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md | -Date list was last updated: 03/01/2023. +Date list was last updated: 03/12/2023. Azure Monitor provides several ways to interact with metrics, including charting them in the Azure portal, accessing them through the REST API, or querying them by using PowerShell or the Azure CLI (Command Line Interface). This latest update adds a new column and reorders the metrics to be alphabetical |PodNetworkIn |Yes |App Network In |Bytes |Average |Cumulative count of bytes received in the app |Deployment, AppName, Pod | |PodNetworkOut |Yes |App Network Out |Bytes |Average |Cumulative count of bytes sent from the app |Deployment, AppName, Pod | |process.cpu.usage |Yes |process.cpu.usage |Percent |Average |The recent CPU usage for the JVM process |Deployment, AppName, Pod |+|Requests |Yes |Requests |Count |Total |Requests processed |containerAppName, podName, statusCodeCategory, statusCode | |requests-per-second |Yes |requests-rate |Count |Average |Request rate |Deployment, AppName, Pod |+|RestartCount |Yes |Restart Count |Count |Maximum |Restart count of Spring App |containerAppName, podName | +|RxBytes |Yes |Network In Bytes |Bytes |Total |Network received bytes |containerAppName, podName | |system.cpu.usage |Yes |system.cpu.usage |Percent |Average |The recent CPU usage for the whole system |Deployment, AppName, Pod | |threadpool-completed-items-count |Yes |threadpool-completed-items-count |Count |Average |ThreadPool Completed Work Items Count |Deployment, AppName, Pod | |threadpool-queue-length |Yes |threadpool-queue-length |Count |Average |ThreadPool Work Items Queue Length |Deployment, AppName, Pod | This latest update adds a new column and reorders the metrics to be alphabetical |tomcat.threads.config.max |Yes |tomcat.threads.config.max |Count |Total |Tomcat Config Max Thread Count |Deployment, AppName, Pod | |tomcat.threads.current |Yes |tomcat.threads.current |Count |Total |Tomcat Current Thread Count |Deployment, AppName, Pod | |total-requests |Yes |total-requests |Count |Average |Total number of requests in the lifetime of the process |Deployment, AppName, Pod |+|TxBytes |Yes |Network Out Bytes |Bytes |Total |Network transmitted bytes |containerAppName, podName | +|UsageNanoCores |Yes |CPU Usage |NanoCores |Average |CPU consumed by Spring App, in nano cores. 1,000,000,000 nano cores = 1 core |containerAppName, podName | |working-set |Yes |working-set |Count |Average |Amount of working set used by the process (MB) |Deployment, AppName, Pod |+|WorkingSetBytes |Yes |Memory Working Set Bytes |Bytes |Average |Spring App working set memory used in bytes. |containerAppName, podName | ## Microsoft.Automation/automationAccounts This latest update adds a new column and reorders the metrics to be alphabetical |UsedLatest |Yes |Datastore Disk Used |Bytes |Average |The total amount of disk used in the datastore |dsname | +## microsoft.azuresphere/catalogs +<!-- Data source : naam--> ++|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| +|||||||| +|DeviceAttestationCount |Yes |Device Attestation Requests |Count |Count |Count of all the requests sent by an Azure Sphere device for authentication and attestation. |DeviceId, CatalogId, StatusCodeClass | +|DeviceErrorCount |Yes |Device Errors |Count |Count |Count of all the errors encountered by an Azure Sphere device. 
|DeviceId, CatalogId, ErrorCategory, ErrorClass, ErrorType | ++ ## Microsoft.Batch/batchaccounts <!-- Data source : naam--> This latest update adds a new column and reorders the metrics to be alphabetical |Available Memory Bytes |Yes |Available Memory Bytes (Preview) |Bytes |Average |Amount of physical memory, in bytes, immediately available for allocation to a process or for system use in the Virtual Machine |No Dimensions | |CPU Credits Consumed |Yes |CPU Credits Consumed |Count |Average |Total number of credits consumed by the Virtual Machine. Only available on B-series burstable VMs |No Dimensions | |CPU Credits Remaining |Yes |CPU Credits Remaining |Count |Average |Total number of credits available to burst. Only available on B-series burstable VMs |No Dimensions |-|Data Disk Bandwidth Consumed Percentage |Yes |Data Disk Bandwidth Consumed Percentage |Percent |Average |Percentage of data disk bandwidth consumed per minute |LUN | -|Data Disk IOPS Consumed Percentage |Yes |Data Disk IOPS Consumed Percentage |Percent |Average |Percentage of data disk I/Os consumed per minute |LUN | +|Data Disk Bandwidth Consumed Percentage |Yes |Data Disk Bandwidth Consumed Percentage |Percent |Average |Percentage of data disk bandwidth consumed per minute. Only available on VM series that support premium storage. |LUN | +|Data Disk IOPS Consumed Percentage |Yes |Data Disk IOPS Consumed Percentage |Percent |Average |Percentage of data disk I/Os consumed per minute. Only available on VM series that support premium storage. |LUN | |Data Disk Max Burst Bandwidth |Yes |Data Disk Max Burst Bandwidth |Count |Average |Maximum bytes per second throughput Data Disk can achieve with bursting |LUN | |Data Disk Max Burst IOPS |Yes |Data Disk Max Burst IOPS |Count |Average |Maximum IOPS Data Disk can achieve with bursting |LUN | |Data Disk Queue Depth |Yes |Data Disk Queue Depth |Count |Average |Data Disk Queue Depth(or Queue Length) |LUN | This latest update adds a new column and reorders the metrics to be alphabetical |Network In Total |Yes |Network In Total |Bytes |Total |The number of bytes received on all network interfaces by the Virtual Machine(s) (Incoming Traffic) |No Dimensions | |Network Out |Yes |Network Out Billable (Deprecated) |Bytes |Total |The number of billable bytes out on all network interfaces by the Virtual Machine(s) (Outgoing Traffic) (Deprecated) |No Dimensions | |Network Out Total |Yes |Network Out Total |Bytes |Total |The number of bytes out on all network interfaces by the Virtual Machine(s) (Outgoing Traffic) |No Dimensions |-|OS Disk Bandwidth Consumed Percentage |Yes |OS Disk Bandwidth Consumed Percentage |Percent |Average |Percentage of operating system disk bandwidth consumed per minute |LUN | -|OS Disk IOPS Consumed Percentage |Yes |OS Disk IOPS Consumed Percentage |Percent |Average |Percentage of operating system disk I/Os consumed per minute |LUN | +|OS Disk Bandwidth Consumed Percentage |Yes |OS Disk Bandwidth Consumed Percentage |Percent |Average |Percentage of operating system disk bandwidth consumed per minute. Only available on VM series that support premium storage. |LUN | +|OS Disk IOPS Consumed Percentage |Yes |OS Disk IOPS Consumed Percentage |Percent |Average |Percentage of operating system disk I/Os consumed per minute. Only available on VM series that support premium storage. 
|LUN | |OS Disk Max Burst Bandwidth |Yes |OS Disk Max Burst Bandwidth |Count |Average |Maximum bytes per second throughput OS Disk can achieve with bursting |LUN | |OS Disk Max Burst IOPS |Yes |OS Disk Max Burst IOPS |Count |Average |Maximum IOPS OS Disk can achieve with bursting |LUN | |OS Disk Queue Depth |Yes |OS Disk Queue Depth |Count |Average |OS Disk Queue Depth(or Queue Length) |No Dimensions | This latest update adds a new column and reorders the metrics to be alphabetical |Premium Data Disk Cache Read Miss |Yes |Premium Data Disk Cache Read Miss |Percent |Average |Premium Data Disk Cache Read Miss |LUN | |Premium OS Disk Cache Read Hit |Yes |Premium OS Disk Cache Read Hit |Percent |Average |Premium OS Disk Cache Read Hit |No Dimensions | |Premium OS Disk Cache Read Miss |Yes |Premium OS Disk Cache Read Miss |Percent |Average |Premium OS Disk Cache Read Miss |No Dimensions |-|VM Cached Bandwidth Consumed Percentage |Yes |VM Cached Bandwidth Consumed Percentage |Percent |Average |Percentage of cached disk bandwidth consumed by the VM |No Dimensions | -|VM Cached IOPS Consumed Percentage |Yes |VM Cached IOPS Consumed Percentage |Percent |Average |Percentage of cached disk IOPS consumed by the VM |No Dimensions | +|VM Cached Bandwidth Consumed Percentage |Yes |VM Cached Bandwidth Consumed Percentage |Percent |Average |Percentage of cached disk bandwidth consumed by the VM. Only available on VM series that support premium storage. |No Dimensions | +|VM Cached IOPS Consumed Percentage |Yes |VM Cached IOPS Consumed Percentage |Percent |Average |Percentage of cached disk IOPS consumed by the VM. Only available on VM series that support premium storage. |No Dimensions | |VM Local Used Burst BPS Credits Percentage |Yes |VM Cached Used Burst BPS Credits Percentage |Percent |Average |Percentage of Cached Burst BPS Credits used by the VM. |No Dimensions | |VM Local Used Burst IO Credits Percentage |Yes |VM Cached Used Burst IO Credits Percentage |Percent |Average |Percentage of Cached Burst IO Credits used by the VM. |No Dimensions | |VM Remote Used Burst BPS Credits Percentage |Yes |VM Uncached Used Burst BPS Credits Percentage |Percent |Average |Percentage of Uncached Burst BPS Credits used by the VM. |No Dimensions | |VM Remote Used Burst IO Credits Percentage |Yes |VM Uncached Used Burst IO Credits Percentage |Percent |Average |Percentage of Uncached Burst IO Credits used by the VM. |No Dimensions |-|VM Uncached Bandwidth Consumed Percentage |Yes |VM Uncached Bandwidth Consumed Percentage |Percent |Average |Percentage of uncached disk bandwidth consumed by the VM |No Dimensions | -|VM Uncached IOPS Consumed Percentage |Yes |VM Uncached IOPS Consumed Percentage |Percent |Average |Percentage of uncached disk IOPS consumed by the VM |No Dimensions | +|VM Uncached Bandwidth Consumed Percentage |Yes |VM Uncached Bandwidth Consumed Percentage |Percent |Average |Percentage of uncached disk bandwidth consumed by the VM. Only available on VM series that support premium storage. |No Dimensions | +|VM Uncached IOPS Consumed Percentage |Yes |VM Uncached IOPS Consumed Percentage |Percent |Average |Percentage of uncached disk IOPS consumed by the VM. Only available on VM series that support premium storage. |No Dimensions | |VmAvailabilityMetric |Yes |VM Availability Metric (Preview) |Count |Average |Measure of Availability of Virtual machines over time. 
|No Dimensions | This latest update adds a new column and reorders the metrics to be alphabetical |TableTableThroughputUpdate |No |AzureTable Table Throughput Updated |Count |Count |AzureTable Table Throughput Updated |ResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest | |TableTableUpdate |No |AzureTable Table Updated |Count |Count |AzureTable Table Updated |ResourceName, ApiKind, ApiKindResourceType, IsThroughputRequest, OperationType | |TotalRequests |Yes |Total Requests |Count |Count |Number of requests made |DatabaseName, CollectionName, Region, StatusCode, OperationType, Status, CapacityType |-|TotalRequestsPreview |No |Total Requests (Preview) |Count |Count |Number of requests |DatabaseName, CollectionName, Region, StatusCode, OperationType, Status, IsExternal | -|TotalRequestUnits |Yes |Total Request Units |Count |Total |Request Units consumed |DatabaseName, CollectionName, Region, StatusCode, OperationType, Status, CapacityType | +|TotalRequestsPreview |No |Total Requests (Preview) |Count |Count |Number of SQL requests |DatabaseName, CollectionName, Region, StatusCode, OperationType, Status, IsExternal | +|TotalRequestUnits |Yes |Total Request Units |Count |Total |SQL Request Units consumed |DatabaseName, CollectionName, Region, StatusCode, OperationType, Status, CapacityType | |TotalRequestUnitsPreview |No |Total Request Units (Preview) |Count |Total |Request Units consumed with CapacityType |DatabaseName, CollectionName, Region, StatusCode, OperationType, Status, CapacityType | |UpdateAccountKeys |Yes |Account Keys Updated |Count |Count |Account Keys Updated |KeyType | |UpdateAccountNetworkSettings |Yes |Account Network Settings Updated |Count |Count |Account Network Settings Updated |No Dimensions | This latest update adds a new column and reorders the metrics to be alphabetical |AnalyticsConnectorResourceLatency |Yes |Analytics Connector Process Latency |Milliseconds |Average |The response latency of the service. |No Dimensions | |AnalyticsConnectorSuccessfulDataSize |Yes |Analytics Connector Successful Data Size |Count |Sum |The size of data successfully processed by the analytics connector |No Dimensions | |AnalyticsConnectorSuccessfulResourceCount |Yes |Analytics Connector Successful Resource Count |Count |Sum |The amount of data successfully processed by the analytics connector |No Dimensions |-|AnalyticsConnectorTotalErrors |Yes |Analytics Connector Total Error Count |Count |Sum |The total number of errors logged by the analytics connector |ErrorType, Operation | +|AnalyticsConnectorTotalError |Yes |Analytics Connector Total Error Count |Count |Sum |The total number of errors logged by the analytics connector |ErrorType, Operation | ## Microsoft.HealthcareApis/workspaces/fhirservices This latest update adds a new column and reorders the metrics to be alphabetical |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||+|ApiCallReceived_Count |Yes |Call Received |Count |Count |Number of requests received via Log Ingestion API or from the agent |InputStreamId, ResponseCode | |RowsDropped_Count |Yes |Rows Dropped |Count |Count |Number of rows dropped while running transformation. |InputStreamId | |RowsReceived_Count |Yes |Rows Received |Count |Count |Total number of rows received for transformation. |InputStreamId |
|InputStreamId, ErrorType | -|TransformationErrors_Count |Yes |Transformation Errors |Count |Count |The number of rows, where the execution of KQL transformation led to an error like KQL transformation service limit exceeds. |InputStreamId, ErrorType | -|TransformationRuntime_DurationMs |Yes |Transformation Runtime Duration |Count |Count |Total time taken in miliseconds to transform given set of records. |InputStreamId | +|TransformationErrors_Count |Yes |Transformation Errors |Count |Count |The number of times when execution of KQL transformation resulted in an error, e.g. KQL syntax error or going over a service limit. |InputStreamId, ErrorType | +|TransformationRuntime_DurationMs |Yes |Transformation Runtime Duration |Count |Count |Total time taken to transform given set of records, measured in milliseconds. |InputStreamId | ## Microsoft.IoTCentral/IoTApps This latest update adds a new column and reorders the metrics to be alphabetical |Average_Virtual Shared Memory |Yes |Virtual Shared Memory |Count |Average |Average_Virtual Shared Memory. Supported for: Linux. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric). |Computer, ObjectName, InstanceName, CounterPath, SourceSystem | |Event |Yes |Event |Count |Average |Event. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric). |Source, EventLog, Computer, EventCategory, EventLevel, EventLevelName, EventID | |Heartbeat |Yes |Heartbeat |Count |Total |Heartbeat. Supported for: Linux, Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric). |Computer, OSType, Version, SourceComputerId |+|Query Count |No |Query Count |Count |Count |Total number of user queries for this workspace. |IsUserQuery | +|Query Failure Count |No |Query Failure Count |Count |Count |Total number of failed user queries for this workspace. |IsUserQuery | +|Query Success Rate |No |Query Success Rate |Percent |Average |User query success rate for this workspace. |IsUserQuery | |Update |Yes |Update |Count |Average |Update. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric). |Computer, Product, Classification, UpdateState, Optional, Approved | This latest update adds a new column and reorders the metrics to be alphabetical - [Export metrics to storage, Event Hub, or Log Analytics](../essentials/platform-logs-overview.md) -<!--Gen Date: Wed Mar 01 2023 10:07:05 GMT+0200 (Israel Standard Time)--> +<!--Gen Date: Sun Mar 12 2023 11:30:35 GMT+0200 (Israel Standard Time)--> |
azure-monitor | Prometheus Remote Write Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-remote-write-managed-identity.md | This step isn't required if you're using an AKS identity since it will already h ## https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write remoteWrite:- - url: 'http://localhost:8081/api/v1/write' + - url: 'http://localhost:8081/api/v1/write' ++ ## Azure Managed Prometheus currently exports some default mixins in Grafana. + ## These mixins are compatible with Azure Monitor agent on your Azure Kubernetes Service cluster. + ## However, these mixins aren't compatible with Prometheus metrics scraped by the Kube Prometheus stack. + ## In order to make these mixins compatible, uncomment remote write relabel configuration below: + ++ ## writeRelabelConfigs: + ## - sourceLabels: [metrics_path] + ## regex: /metrics/cadvisor + ## targetLabel: job + ## replacement: cadvisor + ## action: replace + ## - sourceLabels: [job] + ## regex: 'node-exporter' + ## targetLabel: job + ## replacement: node + ## action: replace containers:- - name: prom-remotewrite - image: <CONTAINER-IMAGE-VERSION> - imagePullPolicy: Always - ports: - - name: rw-port - containerPort: 8081 - livenessProbe: - httpGet: - path: /health - port: rw-port - initialDelaySeconds: 10 - timeoutSeconds: 10 - readinessProbe: - httpGet: - path: /ready - port: rw-port - initialDelaySeconds: 10 - timeoutSeconds: 10 - env: - - name: INGESTION_URL - value: <INGESTION_URL> - - name: LISTENING_PORT - value: '8081' - - name: IDENTITY_TYPE - value: userAssigned - - name: AZURE_CLIENT_ID - value: <MANAGED-IDENTITY-CLIENT-ID> - # Optional parameter - - name: CLUSTER - value: <CLUSTER-NAME> + - name: prom-remotewrite + image: <CONTAINER-IMAGE-VERSION> + imagePullPolicy: Always + ports: + - name: rw-port + containerPort: 8081 + livenessProbe: + httpGet: + path: /health + port: rw-port + initialDelaySeconds: 10 + timeoutSeconds: 10 + readinessProbe: + httpGet: + path: /ready + port: rw-port + initialDelaySeconds: 10 + timeoutSeconds: 10 + env: + - name: INGESTION_URL + value: <INGESTION_URL> + - name: LISTENING_PORT + value: '8081' + - name: IDENTITY_TYPE + value: userAssigned + - name: AZURE_CLIENT_ID + value: <MANAGED-IDENTITY-CLIENT-ID> + # Optional parameter + - name: CLUSTER + value: <CLUSTER-NAME> ``` |
azure-monitor | Resource Logs Categories | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs-categories.md | Title: Supported categories for Azure Monitor resource logs description: Understand the supported services and event schemas for Azure Monitor resource logs. Previously updated : 03/01/2023 Last updated : 03/12/2023 If you think something is missing, you can open a GitHub comment at the bottom o |vmwaresyslog |VMware Syslog |Yes | +## microsoft.azuresphere/catalogs +<!-- Data source : naam--> ++|Category|Category Display Name|Costs To Export| +|||| +|AuditLogs |Audit Logs |Yes | +|DeviceEvents |Device Events |Yes | ++ ## Microsoft.Batch/batchaccounts <!-- Data source : naam--> If you think something is missing, you can open a GitHub comment at the bottom o |ContainerRegistryRepositoryEvents |RepositoryEvent logs |No | +## Microsoft.ContainerService/fleets +<!-- Data source : naam--> ++|Category|Category Display Name|Costs To Export| +|||| +|cloud-controller-manager |Kubernetes Cloud Controller Manager |Yes | +|cluster-autoscaler |Kubernetes Cluster Autoscaler |Yes | +|csi-azuredisk-controller |csi-azuredisk-controller |Yes | +|csi-azurefile-controller |csi-azurefile-controller |Yes | +|csi-snapshot-controller |csi-snapshot-controller |Yes | +|guard |guard |Yes | +|kube-apiserver |Kubernetes API Server |Yes | +|kube-audit |Kubernetes Audit |Yes | +|kube-audit-admin |Kubernetes Audit Admin Logs |Yes | +|kube-controller-manager |Kubernetes Controller Manager |Yes | +|kube-scheduler |Kubernetes Scheduler |Yes | ++ ## Microsoft.ContainerService/managedClusters <!-- Data source : naam--> If you think something is missing, you can open a GitHub comment at the bottom o * [Analyze logs from Azure storage with Log Analytics](./resource-logs.md#send-to-log-analytics-workspace) -<!--Gen Date: Wed Mar 01 2023 10:07:05 GMT+0200 (Israel Standard Time)--> +<!--Gen Date: Sun Mar 12 2023 11:30:35 GMT+0200 (Israel Standard Time)--> |
azure-monitor | Basic Logs Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md | Configure a table for Basic logs if: | Firewalls | [AZFWFlowTrace](/azure/azure-monitor/reference/tables/AZFWFlowTrace) | | Health Data | [AHDSMedTechDiagnosticLogs](/azure/azure-monitor/reference/tables/AHDSMedTechDiagnosticLogs) | | Media Services | [AMSLiveEventOperations](/azure/azure-monitor/reference/tables/AMSLiveEventOperations)<br>[AMSKeyDeliveryRequests](/azure/azure-monitor/reference/tables/AMSKeyDeliveryRequests)<br>[AMSMediaAccountHealth](/azure/azure-monitor/reference/tables/AMSMediaAccountHealth)<br>[AMSStreamingEndpointRequests](/azure/azure-monitor/reference/tables/AMSStreamingEndpointRequests) |+ | Redis Cache Enterprise | [REDConnectionEvents](/azure/azure-monitor/reference/tables/REDConnectionEvents) | | Sphere | [ASCAuditLogs](/azure/azure-monitor/reference/tables/ASCAuditLogs)<br>[ASCDeviceEvents](/azure/azure-monitor/reference/tables/ASCDeviceEvents) | | Storage | [StorageBlobLogs](/azure/azure-monitor/reference/tables/StorageBlobLogs)<br>[StorageFileLogs](/azure/azure-monitor/reference/tables/StorageFileLogs)<br>[StorageQueueLogs](/azure/azure-monitor/reference/tables/StorageQueueLogs)<br>[StorageTableLogs](/azure/azure-monitor/reference/tables/StorageTableLogs) | | Storage Mover | [StorageMoverJobRunLogs](/azure/azure-monitor/reference/tables/StorageMoverJobRunLogs)<br>[StorageMoverCopyLogsFailed](/azure/azure-monitor/reference/tables/StorageMoverCopyLogsFailed)<br>[StorageMoverCopyLogsTransferred](/azure/azure-monitor/reference/tables/StorageMoverCopyLogsTransferred)<br> | |
azure-monitor | Tutorial Logs Ingestion Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-logs-ingestion-api.md | The [DCR](../essentials/data-collection-rule-overview.md) defines the schema of { "name": "AdditionalContext", "type": "string"+ }, + { + "name": "CounterName", + "type": "string" + }, + { + "name": "CounterValue", + "type": "real" } ] } The [DCR](../essentials/data-collection-rule-overview.md) defines the schema of "destinations": [ "clv2ws1" ],- "transformKql": "source | extend jsonContext = parse_json(AdditionalContext) | project TimeGenerated = Time, Computer, AdditionalContext = jsonContext, CounterName=tostring(jsonContext.CounterName), CounterValue=jsonContext.CounterValue", + "transformKql": "source | extend jsonContext = parse_json(AdditionalContext) | project TimeGenerated = Time, Computer, AdditionalContext = jsonContext, CounterName=tostring(jsonContext.CounterName), CounterValue=toreal(jsonContext.CounterValue)", "outputStream": "Custom-MyTable_CL" } ] |
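To make the `transformKql` change above concrete, here's a hedged sketch of a single input record (hypothetical values, not taken from the tutorial itself): the transform parses the `AdditionalContext` JSON string, then projects `CounterName` through `tostring()` and `CounterValue` through `toreal()`, so the value lands in the new `real`-typed column instead of remaining dynamic.

```json
[
  {
    "Time": "2023-03-12T15:04:31.2425298Z",
    "Computer": "Computer1",
    "AdditionalContext": "{\"CounterName\":\"AppMetric1\",\"CounterValue\":15.3}"
  }
]
```

Applied to this record, the transform would emit a row with `TimeGenerated` set from `Time`, `Computer = Computer1`, `CounterName = "AppMetric1"` (string), and `CounterValue = 15.3` (real).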
azure-monitor | Tutorial Logs Ingestion Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-logs-ingestion-portal.md | Title: 'Tutorial: Send data to Azure Monitor Logs by using a REST API (Azure por description: Tutorial on how to send data to a Log Analytics workspace in Azure Monitor by using a REST API (Azure portal version). Last updated 07/15/2022+++ # Tutorial: Send data to Azure Monitor Logs by using a REST API (Azure portal) Before you can send data to the workspace, you need to create the custom table w :::image type="content" source="media/tutorial-logs-ingestion-portal/custom-log-table-name.png" lightbox="media/tutorial-logs-ingestion-portal/custom-log-table-name.png" alt-text="Screenshot that shows the custom log table name."::: ## Parse and filter sample data-Instead of directly configuring the schema of the table, you can use the portal to upload sample data so that Azure Monitor can determine the schema. The sample is expected to be a JSON file that contains one or multiple log records structured in the same way they'll be sent in the body of an HTTP request of the logs ingestion API call. +Instead of directly configuring the schema of the table, you can upload a file with a sample JSON array of data through the portal, and Azure Monitor will set the schema automatically. The sample JSON file must contain one or more log records structured as an array, in the same way the data is sent in the body of an HTTP request of the logs ingestion API call (a hedged sample is sketched below). 1. Select **Browse for files** and locate the *data_sample.json* file that you previously created. |
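The diff above doesn't reproduce *data_sample.json* itself, so here's a minimal hypothetical sketch of a JSON array that satisfies the stated requirement. The field names mirror the Logs Ingestion API tutorial earlier in this digest and are assumptions, not the tutorial's exact sample:

```json
[
  {
    "Time": "2023-03-12T15:04:31.2425298Z",
    "Computer": "Computer1",
    "AdditionalContext": "{\"CounterName\":\"AppMetric1\",\"CounterValue\":15.3}"
  },
  {
    "Time": "2023-03-12T15:04:32.0000000Z",
    "Computer": "Computer2",
    "AdditionalContext": "{\"CounterName\":\"AppMetric2\",\"CounterValue\":23.5}"
  }
]
```

Azure Monitor infers the table schema from the top-level properties of records like these.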
cost-management-billing | Quick Acm Cost Analysis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/quick-acm-cost-analysis.md | Title: Quickstart - Explore Azure costs with cost analysis + Title: Quickstart - Start using Cost analysis description: This quickstart helps you use cost analysis to explore and analyze your Azure organizational costs. Previously updated : 09/09/2022 Last updated : 03/10/2023 -# Quickstart: Explore and analyze costs with cost analysis -Before you can properly control and optimize your Azure costs, you need to understand where costs originated within your organization. It's also useful to know how much money your services cost, and in support of which environments and systems. Visibility into the full spectrum of costs is critical to accurately understand organizational spending patterns. You can use spending patterns to enforce cost control mechanisms, like budgets. +# Quickstart: Start using Cost analysis -In this quickstart, you use cost analysis to explore and analyze your organizational costs. You can view aggregated costs or break them down to understand where costs occur over time and identify spending trends. You can view accumulated costs over time to estimate monthly, quarterly, or even yearly cost trends against a budget. You can use budgets to get notified as cost exceeds specific thresholds. +Before you can control and optimize your costs, you first need to understand where they originated: from the underlying resources used to support your cloud projects to the environments they're deployed in and the owners who manage them. Full visibility backed by a thorough tagging strategy is critical to accurately understand your spending patterns and enforce cost control mechanisms. -In this quickstart, you learn how to: --- Get started in cost analysis-- Select a cost view-- View costs+In this quickstart, you use Cost analysis to explore and get quick answers about your costs. You can see a summary of your cost over time to identify trends and break costs down to understand how you're being charged for the services you use. For advanced reporting, use Power BI or export raw cost details. ## Prerequisites -Cost analysis supports different kinds of Azure account types. To view the full list of supported account types, see [Understand Cost Management data](understand-cost-mgt-data.md). To view cost data, you need at least read access for your Azure account. +Cost Management isn't available for classic Cloud Solution Provider and sponsorship subscriptions. For more information about supported subscription types, see [Understand Cost Management data](understand-cost-mgt-data.md). ++You must have Read access to use Cost Management. You might need to wait 48 hours to view new subscriptions in Cost Management. ++## Get started ++Cost analysis is your tool for interactive analytics and insights. It should be your first stop when you need to explore or get quick answers about your costs. You explore and analyze costs using _views_. A view is a customizable report that summarizes and allows you to drill into your costs. Cost analysis comes with various built-in views that summarize: ++- Cost of your resources at various levels. +- Overarching services spanning all your resources. +- Amortized reservation usage. +- Cost trends over time. ++Depending on how you access Cost analysis, you may see two options. 
If available, we recommend starting with **Cost analysis (preview)** since you can access all views from one central page. ++The first time you open Cost analysis, you start with either a list of available cost views or a customizable area chart. This section walks through the list of views. If Cost analysis shows an area chart by default, see [Analyze costs with customizable views](#analyze-costs-with-customizable-views). ++Cost analysis has two types of views: **smart views** that offer intelligent insights and more details by default and **customizable views** you can edit, save, and share to meet your needs. Smart views open in tabs in Cost analysis. To open a second view, select the **+** symbol to the right of the list of tabs. You can open up to five tabs at one time. Customizable views open outside of the tabs in the custom view editor. ++As you explore the different views, notice that Cost analysis remembers which views you've used in the **Recent** section. Switch to the **All views** section to explore all of your saved views and the ones Microsoft provides out of the box. If there's a specific view that you want quick access to, select **Pin to recent** from the **All views** list. +++Views in the **Recommended** list may vary based on what users most commonly use across Azure. ++## Analyze costs with smart views ++If you're new to Cost analysis, we recommend starting with a smart view, like the Resources view. Smart views include: ++- Key performance indicators (KPIs) to summarize your cost +- Intelligent insights about your costs like anomaly detection +- Expandable details with the top contributors +- A breakdown of costs at the next logical level in the resource or product hierarchy ++When you first open a smart view, note the date range for the period. Most views show the current calendar month, but some use a different period that better aligns to the goals for the view. As an example, the Reservations view shows the last 30 days by default to give you a clearer picture of reservation utilization over time. To choose a different date range, use the arrows in the date pill to switch to the previous or next period, or select the text to open a menu with other options. ++Check the **Total** cost KPI at the top of the page to confirm it matches your expectations. Note the small percentage next to the total; it's the change compared to the previous period. Check the **Average** cost KPI to note whether costs are trending up or down unexpectedly. ++If showing three months or less, the Average cost KPI compares the cost from the start of the period (up to but not including today) to the same number of days in the previous period. If showing more than three months, the comparison looks at the cost up to but not including the current month. ++We recommend checking your cost weekly to ensure each KPI remains within the expected range. If you recently deployed or changed resources, we recommend checking daily for the first week or two to monitor the cost changes. ++> [!NOTE] +> If you want to monitor your forecasted cost, you can enable the [Forecast KPI preview feature](enable-preview-features-cost-management-labs.md#forecast-in-the-cost-analysis-preview) in Cost Management Labs, available from the **Try preview** command. ++If you don't have a budget, select the **create** link in the **Budget** KPI and specify the amount you expect to stay under each month. To create a quarterly or yearly budget, select the **Configure advanced settings** link. 
+ -If you have a new subscription, you can't immediately use Cost Management features. It might take up to 48 hours before you can use all Cost Management features. +Depending on the view and scope you're using, you may also see cost insights below the KPIs. Cost insights show important datapoints about your cost: from discovering top cost contributors to identifying anomalies based on usage patterns. Select the **See insights** link to review and provide feedback on all insights. Here's an insights example. -## Sign in to Azure -- Sign in to the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_CostManagement/Menu/costanalysis).+Lastly, use the table to find your top cost contributors and expand each row to understand how costs are broken down to the next level. Examples include resources with their product meters and services with a breakdown of products. -## Get started in Cost analysis -To review your costs in cost analysis, open the scope in the Azure portal and select **Cost analysis** in the menu. For example, go to **Subscriptions**, select a subscription from the list, and then select **Cost analysis** in the menu. +This view is where you spend most of your time in Cost analysis. To explore further: -Use the **Scope** pill to switch to a different scope in cost analysis. +1. Open other smart views to get different perspectives on your cost. +2. If you want to drill into data further, you might need to [Change scope](understand-work-scopes.md#switch-between-scopes-in-cost-management) to a lower level. For example, you can't view the Subscriptions smart view if your current scope is a subscription. +3. Open a custom view and apply other filters or group the data to explore. -The scope you select is used throughout Cost Management to provide data consolidation and control access to cost information. When you use scopes, you don't multi-select them. Instead, you select a larger scope, which others roll up to, and then filter down to the nested scopes you need. This approach is important to understand because some people may not have access to a single parent scope, which covers multiple nested scopes. +> [!NOTE] +> If you want to visualize and monitor daily trends within the period, enable the [chart preview feature](enable-preview-features-cost-management-labs.md#chartsfeature) in Cost Management Labs, available from the **Try preview** command. ->[!VIDEO https://www.youtube.com/embed/mfxysF-kTFA] +## Analyze costs with customizable views -The initial cost analysis view includes the following areas: +While smart views offer a highly curated experience for targeted scenarios, custom views allow you to drill in further and answer more specific questions. Like smart views, custom views include a specific date range, granularity, group by, and one or more filters. Five custom views are provided for you to show how costs change over time. They're separated by resource and product. All aspects of custom views can be changed to help answer simple questions. If you require more advanced reporting, like grouping by multiple attributes or fully customizable reports, use Power BI or export raw cost details. -**Currently selected view**: Represents the predefined cost analysis view configuration. Each view includes date range, granularity, group by, and filter settings. The default view shows a running total of your costs for the current billing period with the Accumulated costs view, but you can select other built-in views from the menu. 
The view menu is between the scope pill and the date selector. For details about saved views, see [Save and share customized views](save-share-views.md). +Here's an example of the Accumulated Costs customizable view. -**Filters**: Allow you to limit the results to a subset of your total charges. Filters apply to all summarized totals and charts. -**Cost**: Shows the total usage and purchase costs for the selected period, as they're accrued and will show on your bill. Costs are shown in your billing currency by default. If you have charges in multiple currencies, cost will automatically be converted to USD. +After you customize your view to meet your needs, you may want to save and share it with others. To share views with others: -**Forecast**: Shows the total forecasted costs the selected period. +1. Save the view on a subscription, resource group, management group, or billing account. +2. Share a URL with view configuration details, which they can use on any scope they have access to. +3. Pin the view to an Azure portal dashboard. Pinning requires access to the same scope. +4. Download an image of the chart or summarized cost details in an Excel or CSV file. +5. Subscribe to scheduled alerts on a daily, weekly, or monthly basis. -**Budget (if selected)**: Shows the current budget amount for the selected scope, if already defined. +All saved views are available from the **All views** list discussed previously. -**Granularity**: Indicates how to show data over time. Select **Daily** or **Monthly** to view costs broken down by day or month. Select **Accumulated** to view the running total for the period. Select **None** to view the total cost for the period, with no breakdown. +## Download cost details -**Pivot (donut) charts**: Provide dynamic pivots, breaking down the total cost by a common set of standard properties. They show the largest to smallest costs for the current month. +While all smart and custom views can be downloaded, there are a few differences between them. - +Customizable chart views are downloaded as an image; smart views aren't. To download an image of the chart, use customizable views. -### Understand forecast +When you download table data, smart views include an extra option to include nested details. There are a few extra columns available in smart views. We recommend starting with smart views when you download data. -Based on your recent usage, cost forecasts show a projection of your estimated costs for the selected time period. If a budget is set up in Cost analysis, you can view when forecasted spend is likely to exceed budget threshold. The forecast model can predict future costs for up to a year. Select filters to view the granular forecasted cost for your selected dimension. -## Select a cost view +Although Power BI is available for all Microsoft Customer Agreement billing profiles and Enterprise Agreement billing accounts, you only see the option from the smart view Download pane when using a supported scope. -Cost analysis has many built-in views, optimized for the most common goals: -View | Answer questions like - | -Accumulated cost | How much have I spent so far this month? Will I stay within my budget? -Daily cost | Have there been any increases in the costs per day for the last 30 days? -Cost by service | How has my monthly usage vary over the past three invoices? -Cost by resource | Which resources cost the most so far this month? -Invoice details | What charges did I have on my last invoice? 
-Resources (preview) | Which resources cost the most so far this month? Are there any subscription cost anomalies? -Resource groups (preview) | Which resource groups cost the most so far this month? -Subscriptions (preview) | Which subscriptions cost the most so far this month? -Services (preview) | Which services cost the most so far this month? -Reservations (preview) | How much are reservations being used? Which resources are utilizing reservations? +Regardless of whether you start on smart or customizable views, if you need more details, we recommend that you export raw details for full flexibility. Smart views include the option under the **Automate the download** section. -The Cost by resource and Resources views are only available for subscriptions and resource groups. - +## Understand your forecast -For more information about views, see: -- [Use built-in views in Cost analysis](cost-analysis-built-in-views.md)-- [Save and share customized views](save-share-views.md)-- [Customize views in cost analysis](customize-cost-analysis-views.md)+Forecast costs are available from both smart and custom views. In either case, the forecast is calculated the same way based on your historical usage patterns for up to a year in the future. -## View costs +Your forecast is a projection of your estimated costs for the selected period. Your forecast changes depending on what data is available for the period, how long of a period you select, and what filters you apply. If you notice an unexpected spike or drop in your forecast, expand the date range and use grouping to identify large increases or decreases in historical cost. You can filter them out to normalize the forecast. -Cost analysis shows **accumulated** costs by default. Accumulated costs include all costs for each day plus the previous days, for a constantly growing view of your daily aggregate costs. This view is optimized to show how you're trending against a budget for the selected time range. +When you select a budget in a custom view, you can also see if or when your forecast would exceed your budget. -Use the forecast chart view to identify potential budget breaches. When there's a potential budget breach, projected overspending is shown in red. An indicator symbol is also shown in the chart. Hovering over the symbol shows the estimated date of the budget breach. +## More information - +For more information about using features in cost analysis, see the following articles: -There's also the **daily** view that shows costs for each day. The daily view doesn't show a growth trend. The view is designed to show irregularities as cost spikes or dips from day to day. If you've selected a budget, the daily view also shows an estimate of your daily budget. +- For built-in views, see [Use built-in views in Cost analysis](cost-analysis-built-in-views.md). +- To learn more about customizing views, see [Customize views in cost analysis](customize-cost-analysis-views.md). +- Afterward, you can [Save and share customized views](save-share-views.md). -Here's a daily view of recent spending with spending forecast turned on. - +If you need advanced reporting outside of cost analysis, like grouping by multiple attributes or fully customizable reports, you can use: -When the forecast is disabled, you won't see projected spending for future dates. Also, when you look at costs for past time periods, cost forecast doesn't show costs. 
+- [Power BI Desktop](/power-bi/connect-data/desktop-connect-azure-cost-management) +- [Cost Management Power BI App](analyze-cost-data-azure-cost-management-power-bi-template-app.md) +- Usage data from exports or APIs + - See [Choose a cost details solution](../automate/usage-details-best-practices.md) to help you determine whether exports from the Azure portal or cost details from APIs are right for you. -Generally, you can expect to see data or notifications for consumed resources within 24 to 48 hours. +Be sure to [configure subscription anomaly alerts](../understand/analyze-unexpected-charges.md#create-an-anomaly-alert) and set up a [budget](tutorial-acm-create-budgets.md) to help drive accountability and cost control. ## Next steps |
cost-management-billing | Tutorial Acm Create Budgets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-acm-create-budgets.md | Budgets require at least one cost threshold (% of budget) and a corresponding em ## Configure forecasted budget alerts -Forecasted alerts provide advanced notification that your spending trends are likely to exceed your budget. The alerts use [forecasted cost predictions](quick-acm-cost-analysis.md#understand-forecast). Alerts are generated when the forecasted cost projection exceeds the set threshold. You can configure a forecasted threshold (% of budget). When a forecasted budget threshold is met, notifications are normally sent within an hour of the evaluation. +Forecasted alerts provide advance notification that your spending trends are likely to exceed your budget. The alerts use forecasted cost predictions. Alerts are generated when the forecasted cost projection exceeds the set threshold. You can configure a forecasted threshold (% of budget). When a forecasted budget threshold is met, notifications are normally sent within an hour of the evaluation. To toggle between configuring an Actual vs Forecasted cost alert, use the `Type` field when configuring the alert as shown in the following image. |
cost-management-billing | Troubleshoot Threshold Billing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/troubleshoot-threshold-billing.md | All Microsoft services count towards a customer's billing threshold. ## How do I check my current consumption level? -Azure customers can view their current usage levels in Cost Management. For more information about viewing your current Azure costs, see [View costs](../costs/quick-acm-cost-analysis.md#view-costs). +Azure customers can view their current usage levels in Cost Management. For more information about viewing your current Azure costs, see [Start using Cost analysis](../costs/quick-acm-cost-analysis.md). ## When there are multiple payment methods (with multiple billing profiles) linked to a single billing account, which one is authorized? |
data-factory | Concepts Integration Runtime Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-integration-runtime-performance.md | Data flows are priced at vcore-hrs meaning that both cluster size and execution- > There is a ceiling on how much the size of a cluster affects the performance of a data flow. Depending on the size of your data, there is a point where increasing the size of a cluster will stop improving performance. For example, if you have more nodes than partitions of data, adding additional nodes won't help. A best practice is to start small and scale up to meet your performance needs. +## Custom shuffle partition ++Data flows divide the data into partitions and transform it using different processes. If the data size in a partition exceeds what the process can hold in memory, the process fails with out-of-memory (OOM) errors. If your data flow contains large amounts of data with joins or aggregations, try changing the shuffle partitions incrementally. You can set the value from 50 up to 2000 to avoid OOM errors. **Compute custom properties** in the data flow runtime is a way to control your compute requirements. The property name is **Shuffle partitions**, and it's an integer type. Use this customization only in known scenarios; otherwise, it can cause unnecessary data flow failures. ++While increasing the shuffle partitions, make sure data is spread well across them. A rough guideline is to have approximately 1.5 GB of data per partition. If data is skewed, increasing the shuffle partitions won't help. For example, if you have 500 GB of data, a value between 400 and 500 should work. The default limit for shuffle partitions is 200, which works well for approximately 300 GB of data. ++Here's how to set the property on a custom integration runtime. You can't set it for the autoresolve integration runtime. ++1. In the ADF portal, under **Manage**, select a custom integration runtime to enter edit mode. +2. On the data flow runtime tab, go to the **Compute custom properties** section. +3. Select **Shuffle partitions** as the property name and enter a value of your choice, such as 250 or 500. ++You can do the same by editing the runtime's JSON file: add an array with the property name and value after the *cleanup* property (see the hedged JSON sketch below). + ## Time to live By default, every data flow activity spins up a new Spark cluster based upon the Azure IR configuration. Cold cluster start-up time takes a few minutes and data processing can't start until it is complete. If your pipelines contain multiple **sequential** data flows, you can enable a time to live (TTL) value. Specifying a time to live value keeps a cluster alive for a certain period of time after its execution completes. If a new job starts using the IR during the TTL time, it will reuse the existing cluster and startup time will be greatly reduced. After the second job completes, the cluster will again stay alive for the TTL time. |
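The shuffle-partitions section above says to add an array with the property name and value after the *cleanup* property, but doesn't show the JSON itself. The following is a sketch under assumptions: that the data flow runtime's `dataFlowProperties` object accepts a `customProperties` array of name/value pairs, and that the property name matches the portal's **Shuffle partitions** label; the runtime name is hypothetical. Compare against an exported copy of your own integration runtime JSON before relying on it.

```json
{
  "name": "CustomDataFlowRuntime",
  "properties": {
    "type": "Managed",
    "typeProperties": {
      "computeProperties": {
        "dataFlowProperties": {
          "computeType": "General",
          "coreCount": 16,
          "timeToLive": 10,
          "cleanup": true,
          "customProperties": [
            { "name": "Shuffle partitions", "value": "450" }
          ]
        }
      }
    }
  }
}
```

A value of 450 follows the rough 1.5 GB-per-partition guideline above for a workload of roughly 500 GB.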
defender-for-cloud | Alerts Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-overview.md | Security alerts are the notifications generated by Defender for Cloud and Defend - Security alerts are triggered by advanced detections in Defender for Cloud, and are available when you enable Defender for Cloud [Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads). - Each alert provides details of affected resources, issues, and remediation recommendations. - Defender for Cloud classifies alerts and prioritizes them by severity in the Defender for Cloud portal.-- Alerts data is retained for 90 days.+- Alerts are displayed for 90 days, even if the resource related to the alert was deleted during that time. This is because the alert might indicate a potential breach to your organization that needs to be further investigated. - Alerts can be exported to CSV format, or directly injected into Microsoft Sentinel. - Defender for Cloud leverages the [MITRE Attack Matrix](https://attack.mitre.org/matrices/enterprise/) to associate alerts with their perceived intent, helping formalize security domain knowledge. |
defender-for-cloud | Secure Score Security Controls | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/secure-score-security-controls.md | We recommend every organization carefully reviews their assigned Azure Policy in > [!TIP] > For details about reviewing and editing your initiatives, see [Working with security policies](tutorial-security-policy.md). -Even though Defender for Cloud's default security initiative is based on industry best practices and standards, there are scenarios in which the built-in recommendations listed below might not completely fit your organization. It's sometimes necessary to adjust the default initiative - without compromising security - to ensure it's aligned with your organization's own policies, industry standards, regulatory standards, and benchmarks.<br><br> +Even though Defender for Cloud's default security initiative, the Azure Security Benchmark, is based on industry best practices and standards, there are scenarios in which the built-in recommendations listed below might not completely fit your organization. It's sometimes necessary to adjust the default initiative - without compromising security - to ensure it's aligned with your organization's own policies, industry standards, regulatory standards, and benchmarks.<br><br> [!INCLUDE [security-center-controls-and-recommendations](../../includes/asc/security-control-recommendations.md)] |
defender-for-cloud | Upcoming Changes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md | If you're looking for the latest release notes, you'll find them in the [What's | [Alerts automatic export to Log Analytics workspace will be deprecated](#alerts-automatic-export-to-log-analytics-workspace-will-be-deprecated) | March 2023 | | [Deprecation and improvement of selected alerts for Windows and Linux Servers](#deprecation-and-improvement-of-selected-alerts-for-windows-and-linux-servers) | April 2023 | | [Deprecation of App Service language monitoring policies](#deprecation-of-app-service-language-monitoring-policies) | April 2023 |-| [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | August 2023 | +| [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | May 2023 | ### Changes in the recommendation "Machines should be configured securely" Defender for Cloud won't include these recommendations as built-in recommendatio ### Multiple changes to identity recommendations -**Estimated date for change: August 2023** +**Estimated date for change: May 2023** We announced previously the [availability of identity recommendations V2 (preview)](release-notes.md#extra-recommendations-added-to-identity), which included enhanced capabilities. |
defender-for-iot | Alert Engine Messages | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/alert-engine-messages.md | Title: Microsoft Defender for IoT alert reference -description: This article provides a reference of all alerts that are generated by Microsoft Defender for IoT network sensors, inclduing a list of all alert types and descriptions. +description: This article provides a reference of all alerts that are generated by Microsoft Defender for IoT network sensors, including a list of all alert types and descriptions. Last updated 11/23/2022 - # Microsoft Defender for IoT alert reference This article provides a reference of the [alerts](how-to-manage-cloud-alerts.md) > [!IMPORTANT] > The **Alerts** page in the Azure portal is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. + ## OT alerts turned off by default Several alerts are turned off by default, as indicated by asterisks (*) in the tables below. OT sensor **Admin** users can enable or disable alerts from the **Support** page on a specific OT network sensor. Malware engine alerts describe detected malicious network activity. | Title | Description| Severity | Category | MITRE ATT&CK <br> tactics and techniques | |--|--|--|--|--|-| **Connection Attempt to Known Malicious IP** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. <br><br>Triggered by both OT and Enterprise IoT network sensors. | Critical | Suspicion of Malicious Activity | **Tactics:** <br> - Initial Access <br> - Command And Control <br><br> **Techniques:** <br> - T0883: Internet Accessible Device <br> - T0884: Connection Proxy | +| **Connection Attempt to Known Malicious IP** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. <br><br>Triggered by both OT and Enterprise IoT network sensors. | Major | Suspicion of Malicious Activity | **Tactics:** <br> - Initial Access <br> - Command And Control <br><br> **Techniques:** <br> - T0883: Internet Accessible Device <br> - T0884: Connection Proxy | | **Invalid SMB Message (DoublePulsar Backdoor Implant)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - LateralMovement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services |-| **Malicious Domain Name Request** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. <br><br>Triggered by both OT and Enterprise IoT network sensors. | Critical | Suspicion of Malicious Activity | **Tactics:** <br> - Initial Access <br> - Command And Control <br><br> **Techniques:** <br> - T0883: Internet Accessible Device <br> - T0884: Connection Proxy | +| **Malicious Domain Name Request** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. <br><br>Triggered by both OT and Enterprise IoT network sensors. 
| Major | Suspicion of Malicious Activity | **Tactics:** <br> - Initial Access <br> - Command And Control <br><br> **Techniques:** <br> - T0883: Internet Accessible Device <br> - T0884: Connection Proxy | | **Malware Test File Detected - EICAR AV Success** | An EICAR AV test file was detected in traffic between two devices (over any transport - TCP or UDP). The file isn't malware. It's used to confirm that the antivirus software is installed correctly. Demonstrate what happens when a virus is found, and check internal procedures and reactions when a virus is found. Antivirus software should detect EICAR as if it were a real virus. | Critical | Suspicion of Malicious Activity | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing | | **Suspicion of Conficker Malware** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Impact <br><br> **Techniques:** <br> - T0826: Loss of Availability <br> - T0828: Loss of Productivity and Revenue <br> - T0847: Replication Through Removable Media | | **Suspicion of Denial Of Service Attack** | A source device attempted to initiate an excessive number of new connections to a destination device. This may indicate a Denial Of Service (DOS) attack against the destination device, and might interrupt device functionality, affect performance and service availability, or cause unrecoverable errors. <br><br> Threshold: 3000 attempts in 1 minute | Critical | Suspicion of Malicious Activity | **Tactics:** <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0814: Denial of Service |-| **Suspicion of Malicious Activity** | Suspicious network activity was detected. This activity may be associated with an attack that triggered known 'Indicators of Compromise' (IOCs). Alert metadata should be reviewed by the security team. | Critical | Suspicion of Malicious Activity | **Tactics:** <br> - Lateral Movement <br><br> **Techniques:** <br> - T0867: Lateral Tool Transfer | +| **Suspicion of Malicious Activity** | Suspicious network activity was detected. This activity may be associated with an attack that triggered known 'Indicators of Compromise' (IOCs). Alert metadata should be reviewed by the security team. | Major | Suspicion of Malicious Activity | **Tactics:** <br> - Lateral Movement <br><br> **Techniques:** <br> - T0867: Lateral Tool Transfer | | **Suspicion of Malicious Activity (BlackEnergy)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Command And Control <br><br> **Techniques:** <br> - T0869: Standard Application Layer Protocol | | **Suspicion of Malicious Activity (DarkComet)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Impact <br><br> **Techniques:** <br> - T0882: Theft of Operational Information | | **Suspicion of Malicious Activity (Duqu)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Impact <br><br> **Techniques:** <br> - T0882: Theft of Operational Information | Malware engine alerts describe detected malicious network activity. 
| **Suspicion of Malicious Activity (Havex)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Collection <br> - Discovery <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0861: Point & Tag Identification <br> - T0846: Remote System Discovery <br> - T0814: Denial of Service | | **Suspicion of Malicious Activity (Karagany)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Impact <br><br> **Techniques:** <br> - T0882: Theft of Operational Information | | **Suspicion of Malicious Activity (LightsOut)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Evasion <br><br> **Techniques:** <br> - T0849: Masquerading |-| **Suspicion of Malicious Activity (Name Queries)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. <br><br> Threshold: 25 name queries in 1 minute | Critical | Suspicion of Malicious Activity | **Tactics:** <br> - Command And Control <br><br> **Techniques:** <br> - T0884: Connection Proxy | +| **Suspicion of Malicious Activity (Name Queries)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. <br><br> Threshold: 25 name queries in 1 minute | Major | Suspicion of Malicious Activity | **Tactics:** <br> - Command And Control <br><br> **Techniques:** <br> - T0884: Connection Proxy | | **Suspicion of Malicious Activity (Poison Ivy)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services | | **Suspicion of Malicious Activity (Regin)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br> - Impact <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services <br> - T0882: Theft of Operational Information | | **Suspicion of Malicious Activity (Stuxnet)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br> - Impact <br><br> **Techniques:** <br> - T0818: Engineering Workstation Compromise <br> - T0866: Exploitation of Remote Services <br> - T0831: Manipulation of Control | | **Suspicion of Malicious Activity (WannaCry) [*](#ot-alerts-turned-off-by-default)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. 
| Major | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services <br> - T0867: Lateral Tool Transfer | | **Suspicion of NotPetya Malware - Illegal SMB Parameters Detected** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Initial Access <br> - Lateral Movement <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services | | **Suspicion of NotPetya Malware - Illegal SMB Transaction Detected** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware | **Tactics:** <br> - Lateral Movement <br><br> **Techniques:** <br> - T0867: Lateral Tool Transfer |-| **Suspicion of Remote Code Execution with PsExec** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malicious Activity | **Tactics:** <br> - Lateral Movement <br> - Initial Access <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services | -| **Suspicion of Remote Windows Service Management [*](#ot-alerts-turned-off-by-default)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malicious Activity | **Tactics:** <br> - Initial Access <br><br> **Techniques:** <br> - T0822: NetworkExternal Remote Services | -| **Suspicious Executable File Detected on Endpoint** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malicious Activity | **Tactics:** <br> - Evasion <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0851: Rootkit | +| **Suspicion of Remote Code Execution with PsExec** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity | **Tactics:** <br> - Lateral Movement <br> - Initial Access <br><br> **Techniques:** <br> - T0866: Exploitation of Remote Services | +| **Suspicion of Remote Windows Service Management [*](#ot-alerts-turned-off-by-default)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity | **Tactics:** <br> - Initial Access <br><br> **Techniques:** <br> - T0822: NetworkExternal Remote Services | +| **Suspicious Executable File Detected on Endpoint** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity | **Tactics:** <br> - Evasion <br> - Inhibit Response Function <br><br> **Techniques:** <br> - T0851: Rootkit | | **Suspicious Traffic Detected [*](#ot-alerts-turned-off-by-default)** | Suspicious network activity was detected. This activity may be associated with an attack that triggered known 'Indicators of Compromise' (IOCs). 
Alert metadata should be reviewed by the security team | Critical | Suspicion of Malicious Activity | **Tactics:** <br> - Discovery <br><br> **Techniques:** <br> - T0842: Network Sniffing | | **Backup Activity with Antivirus Signatures** | Traffic detected between the source device and the destination backup server triggered this alert. The traffic includes backup of antivirus software that might contain malware signatures. This is most likely legitimate backup activity. | Warning | Backup | **Tactics:** <br> - Impact <br><br> **Techniques:** <br> - T0882: Theft of Operational Information | |
defender-for-iot | Device Inventory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/device-inventory.md | Defender for IoT's device inventory supports device types across a variety of in | **Enterprise** | Smart devices, printers, communication devices, or audio/video devices | | **Retail** | Barcode scanners, humidity sensor, punch clocks | -A *transient* device type indicates a device that was detected for only a short time. We recommend investigating these devices carefully to understand their impact on your network. - *Unclassified* devices are devices that don't otherwise have an out-of-the-box category defined. |
defender-for-iot | How To Accelerate Alert Incident Response | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-accelerate-alert-incident-response.md | This article describes the following methods for reducing OT network alert fatig - To create alert comments or custom alert rules on an OT network sensor, you must have an OT network sensor installed and access to the sensor as an **Admin** user. -- To create a DNS allowlist on an OT sensor, you must have an OT network sensor installed and access to the sensor as a **Support** user. - To create alert exclusion rules on an on-premises management console, you must have an on-premises management console installed and access to the on-premises management console as an **Admin** user. For more information, see [Install OT agentless monitoring software](how-to-install-software.md) and [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md). Disable custom alert rules to prevent them from running without deleting them al In the **Custom alert rules** page, select one or more rules, and then select **Disable**, **Enable**, or **Delete** in the toolbar as needed. -## Allow internet connections on an OT network --Decrease the number of unauthorized internet alerts by creating an allowlist of domain names on your OT sensor. When a DNS allowlist is configured, the sensor checks each unauthorized internet connectivity attempt against the list before triggering an alert. If the domain's FQDN is included in the allowlist, the sensor doesn't trigger the alert and allows the traffic automatically. --All OT sensor users can view a currently configured list of domains in a [data mining report](how-to-create-data-mining-queries.md), including the FQDNs, resolved IP addresses, and the last resolution time. ---**To define a DNS allowlist:** --1. Sign into your OT sensor as the *support* user and select the **Support** page. --1. In the search box, search for **DNS** and then locate the engine with the **Internet Domain Allowlist** description. --1. Select **Edit** :::image type="icon" source="media/how-to-generate-reports/manage-icon.png" border="false"::: for the **Internet Domain Allowlist** row. For example: -- :::image type="content" source="media/how-to-accelerate-alert-incident-response/dns-edit-configuration.png" alt-text="Screenshot of how to edit configurations for DNS in the sensor console." lightbox="media/how-to-accelerate-alert-incident-response/dns-edit-configuration.png"::: --1. In the **Edit configuration** pane > **Fqdn allowlist** field, enter one or more domain names. Separate multiple domain names with commas. Your sensor won't generate alerts for unauthorized internet connectivity attempts on the configured domains. --1. Select **Submit** to save your changes. ---**To view the current allowlist in a data mining report:** --When selecting a category in your [custom data mining report](how-to-create-data-mining-queries.md#create-an-ot-sensor-custom-data-mining-report), make sure to select **Internet Domain Allowlist** under the **DNS** category. --For example: ---The generated data mining report shows a list of the allowed domains and each IP address that's being resolved for those domains. The report also includes the TTL, in seconds, during which those IP addresses won't trigger an internet connectivity alert. 
For example: -- ## Create alert exclusion rules on an on-premises management console Create alert exclusion rules to instruct your sensors to ignore specific traffic on your network that would otherwise trigger an alert. |
defender-for-iot | How To Activate And Set Up Your On Premises Management Console | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-activate-and-set-up-your-on-premises-management-console.md | After activating an on-premises management console, you'll need to apply new act |Location |Activation process | ||| |**On-premises management console** | Apply a new activation file on your on-premises management console if you've [modified the number of committed devices](how-to-manage-subscriptions.md#edit-a-plan-for-ot-networks) in your subscription. |-|**Cloud-connected and locally-managed sensors** | Cloud-connected and locally-managed sensors remain activated for as long as your Azure subscription with your Defender for IoT plan is active. <br><br>If you're [updating an OT sensor from a legacy version](update-ot-software.md#update-legacy-ot-sensor-software), you'll need to re-activate your updated sensor. | +|**Cloud-connected sensors** | Cloud-connected sensors remain activated for as long as your Azure subscription with your Defender for IoT plan is active. <br><br>If you're [updating an OT sensor from a legacy version](update-ot-software.md#update-legacy-ot-sensor-software), you'll need to re-activate your updated sensor. | +| **Locally managed** | Apply a new activation file to locally managed sensors every year. After a sensor's activation file has expired, the sensor will continue to monitor your network, but you'll see a warning message when signing in to the sensor. | For more information, see [Manage Defender for IoT subscriptions](how-to-manage-subscriptions.md). |
defender-for-iot | How To Activate And Set Up Your Sensor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-activate-and-set-up-your-sensor.md | For information about uploading a new certificate, supported certificate paramet ### Activation expirations -After activating a sensor, cloud-connected and locally-managed sensors remain activated for as long as your Azure subscription with your Defender for IoT plan is active. +After activating a sensor, you'll need to apply new activation files as follows: -If you're updating an OT sensor from a legacy version, you'll need to re-activate your updated sensor. For more information, see [Update legacy OT sensor software](update-ot-software.md#update-legacy-ot-sensor-software). +|Location |Activation process | +||| +|**Cloud-connected sensors** | Cloud-connected sensors remain activated for as long as your Azure subscription with your Defender for IoT plan is active. <br><br>If you're [updating an OT sensor from a legacy version](update-ot-software.md#update-legacy-ot-sensor-software), you'll need to re-activate your updated sensor. | +| **Locally managed** | Apply a new activation file to locally managed sensors every year. After a sensor's activation file has expired, the sensor will continue to monitor your network, but you'll see a warning message when signing in to the sensor. | For more information, see [Manage Defender for IoT subscriptions](how-to-manage-subscriptions.md) and [Manage the on-premises management console](how-to-manage-the-on-premises-management-console.md). |
defender-for-iot | How To Create Data Mining Queries | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-create-data-mining-queries.md | Create your own custom data mining report if you have reporting needs not covere ||| | **Name** / **Description** | Enter a meaningful name for your report and an optional description. | | **Send to CM** | Select to send your report to the on-premises management console. |- | **Choose category** | Select the categories to include in your report. <br><br> For example, select **Internet Domain Allowlist** under **DNS** to create a report of the allowed internet domains and their resolved IP addresses. | + | **Choose category** | Select the categories to include in your report. | | **Order by** | Select to sort your data by category or by activity. | | **Filter by** | Define a filter for your report using any of the following parameters: <br><br> - **Results within the last**: Enter a number and then select **Minutes**, **Hours**, or **Days** <br> - **IP address / MAC address / Port**: Enter one or more IP addresses, MAC addresses, and ports to filter into your report. Enter a value and then select + to add it to the list.<br> - **Device group**: Select one or mode device groups to filter into your report. | | **Add filter type** | Select to add any of the following filter types into your report. <br><br> - Transport (GENERIC) <br> - Protocol (GENERIC) <br> - TAG (GENERIC) <br> - Maximum value (GENERIC) <br> - State (GENERIC) <br> - Minimum value (GENERIC) <br><br> Enter a value in the relevant field and then select + to add it to the list. | |
defender-for-iot | How To Deploy Certificates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-deploy-certificates.md | Verify that your SSL/TLS certificate [meets the required parameters](#verify-cer | **Certificate (CRT file)** | Upload a Certificate (CRT file). | | **Certificate Chain (PEM file)** - *Optional* | Upload a Certificate Chain (PEM file). | - Select **Use CRL (Certificate Revocation List) to check certificate status** to validate the certificate against a [CRL server](#verify-crl-server-access). The certificate is checked once during the import process. - For example: - :::image type="content" source="media/how-to-deploy-certificates/recommended-ssl.png" alt-text="Screenshot of importing a trusted CA certificate." lightbox="media/how-to-deploy-certificates/recommended-ssl.png"::: - + :::image type="content" source="media/how-to-deploy-certificates/old-recommended-ssl.png" alt-text="Screenshot of importing a trusted CA certificate." lightbox="media/how-to-deploy-certificates/old-recommended-ssl.png"::: + # [Locally generated self-signed certificates](#tab/locally-generated-self-signed-certificate) > [!NOTE] Verify that your SSL/TLS certificate [meets the required parameters](#verify-cer -1. In the **Validation for on-premises management console certificates** area, select **Required** if SSL/TLS certificate validation is required. Otherwise, select **None**. +1. Toggle on **Enable certificate validation** to validate the certificate against a [CRL server](#verify-crl-server-access). The certificate is checked once during the import process. 1. Select **Save** to save your certificate settings. |
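Before importing, you can sanity-check a certificate and its CRL endpoint from any machine with OpenSSL. The following is only an illustrative sketch, not part of this article's procedure; the file name and CRL URL are placeholders:

```console
# Show the CRL distribution points embedded in the certificate
openssl x509 -in certificate.crt -noout -text | grep -A 4 "CRL Distribution Points"

# Download the CRL (hypothetical URL) and confirm it parses;
# a revoked certificate's serial number would be listed in this output
curl -sSf http://crl.example.com/ca.crl -o ca.crl
openssl crl -inform DER -in ca.crl -noout -text | head -n 20
```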
defender-for-iot | How To Investigate Sensor Detections In A Device Inventory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-investigate-sensor-detections-in-a-device-inventory.md | If you're working with a cloud-connected sensor, any edits you make in the senso **To edit device details**: -1. Select a device in the grid, and then select **Edit** in the toolbar at the top of the page. --1. In the **Edit** pane on the right, modify the device fields as needed, and then select **Save** when you're done. --You can also open the edit pane from the device details page: - 1. Select a device in the grid, and then select **View full details** in the pane on the right. 1. In the device details page, select **Edit Properties**. Editable fields include: - Authorized status - Device name+- Description +- OS platform - Device type-- OS - Purdue level-- Description - Scanner or programming device For more information, see [Device inventory column data](device-inventory.md#device-inventory-column-data). You may want to delete devices from your device inventory, such as if they've be Deleted devices are removed from the **Device map** and the device inventories on the Azure portal and on-premises management console, and aren't calculated when generating reports, such as Data Mining, Risk Assessment, or Attack Vector reports. -**To delete one or more devices**: +**To delete a single device**: You can delete a device when it's been inactive for more than 10 minutes. -1. In the **Device inventory** page, select the device or devices you want to delete, and then select **Delete** :::image type="icon" source="media/how-to-manage-device-inventory-on-the-cloud/delete-device.png" border="false"::: in the toolbar at the top of the page. --1. At the prompt, select **Confirm** to confirm that you want to delete the device from Defender for IoT. +1. In the **Device inventory** page, select the device you want to delete, and then select **Delete** :::image type="icon" source="media/how-to-manage-device-inventory-on-the-cloud/delete-device.png" border="false"::: in the toolbar at the top of the page. A confirmation message appears at the top right. |
defender-for-iot | How To Manage Individual Sensors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-individual-sensors.md | For more information, see [Update Defender for IoT OT monitoring software](updat Each OT sensor is onboarded as a cloud-connected or locally-managed OT sensor and activated using a unique activation file. For cloud-connected sensors, the activation file is used to ensure the connection between the sensor and Azure. -You'll need to upload a new activation file to your senor if you want to switch sensor management modes, such as moving from a locally-managed sensor to a cloud-connected sensor. Uploading a new activation file to your sensor includes deleting your sensor from the Azure portal and onboarding it again. +A unique activation file is uploaded to each sensor that you deploy. For more information about when and how to use a new file, see [Upload new activation files](#upload-new-activation-files). If you can't upload the file, see [Troubleshoot activation file upload](#troubleshoot-activation-file-upload). ++### About activation files for locally connected sensors ++Locally connected sensors are associated with an Azure subscription. The activation file for your locally connected sensors contains an expiration date. One month before this date, a warning message appears in the System Messages window in the top-right corner of the console. The warning remains until after you've updated the activation file. ++You can continue to work with Defender for IoT features even if the activation file has expired. ++### About activation files for cloud-connected sensors ++Sensors that are cloud connected aren't limited by time periods for their activation file. The activation file for cloud-connected sensors is used to ensure the connection to Defender for IoT. ++### Upload new activation files ++You might need to upload a new activation file for an onboarded sensor when: ++- An activation file expires on a locally connected sensor. ++- You want to work in a different sensor management mode, such as moving from a locally-managed sensor to a cloud-connected sensor. Uploading a new activation file to your sensor includes deleting your sensor from the Azure portal and onboarding it again. **To add a new activation file:** You'll need to upload a new activation file to your senor if you want to switch You'll receive an error message if the activation file couldn't be uploaded. The following events might have occurred: -- **The sensor can't connect to the internet:** Check the sensor's network configuration. If your sensor needs to connect through a web proxy to access the internet, verify that your proxy server is configured correctly on the **Sensor Network Configuration** screen. Verify that the required endpoints are allowed in the firewall and/or proxy.+- **For locally connected sensors**: The activation file isn't valid. If the file isn't valid, go to [Defender for IoT in the Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started). On the **Sensor Management** page, select the sensor with the invalid file, and download a new activation file. ++- **For cloud-connected sensors**: The sensor can't connect to the internet. Check the sensor's network configuration. If your sensor needs to connect through a web proxy to access the internet, verify that your proxy server is configured correctly on the **Sensor Network Configuration** screen. 
Verify that the required endpoints are allowed in the firewall and/or proxy. For OT sensors version 22.x, download the list of required endpoints from the **Sites and sensors** page on the Azure portal. Select an OT sensor with a supported software version, or a site with one or more supported sensors. And then select **More actions** > **Download endpoint details**. For sensors with earlier versions, see [Sensor access to Azure portal](how-to-set-up-your-network.md#sensor-access-to-azure-portal). -- **The activation file is valid but Defender for IoT rejected it:** If you can't resolve this problem, you can download another activation from the **Sites and Sensors** page in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started). If this doesn't work, contact Microsoft Support.+- **For cloud-connected sensors**: The activation file is valid but Defender for IoT rejected it. If you can't resolve this problem, you can download another activation from the **Sites and Sensors** page in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started). If this doesn't work, contact Microsoft Support. ## Manage certificates |
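To confirm that a required endpoint is reachable through the sensor's proxy, a quick check from a machine on the same network can help. This is a sketch only; the proxy address and endpoint are placeholders for values from your own environment and your downloaded endpoint list:

```console
# Prints the HTTP status code if the proxy allows the outbound connection
curl -x http://<proxy-address>:<proxy-port> -sS -o /dev/null -w "%{http_code}\n" https://<endpoint-from-downloaded-list>
```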
defender-for-iot | How To Work With The Sensor Device Map | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-the-sensor-device-map.md | Use one of the following options to import and export device data: |Name |Description | |||- |**Edit properties** | Opens the edit pane where you can edit device properties, such as authorization, name, description, OS platform, device type, Purdue level and if it is a scanner or programming device. | - |**View properties** | Opens the device's details page. | + |**View properties** | Opens the device's details page to view and edit device properties. | |**Authorize/Unauthorize** | Changes the device's [authorization status](device-inventory.md#unauthorized-devices). | |**Mark as Important / Non-Important** | Changes the device's [importance](device-inventory.md#important-ot-devices) status, highlighting business critical servers on the map with a star and elsewhere, including OT sensor reports and the Azure device inventory. | |**Show Alerts** / **Show Events** | Opens the **Alerts** or **Event Timeline** tab on the device's details page. | | **Activity Report** | Generates an activity report for the device for the selected timespan. | | **Simulate Attack Vectors** | Generates an [attack vector simulation](how-to-create-attack-vector-reports.md) for the selected device. | | **Add to custom group** | Creates a new [custom group](#create-a-custom-device-group) with the selected device. |- | **Delete** |Deletes the device from the inventory. | + | **Delete** | Deletes the device from the inventory. | ## Merge devices For example, you might receive a notification about an inactive device that need - Handle one notification at a time, selecting a specific mitigation action, or selecting **Dismiss** to close the notification with no activity. - Select **Select All** to show which notifications can be [handled together](#handling-multiple-notifications-together). Clear selections for specific notifications, and then select **Accept All** or **Dismiss All** to handle any remaining selected notifications together. -> [!NOTE] -> Selected notifications are automatically resolved if they aren't dismissed or otherwise handled within 14 days. For more information, see the action indicated in the **Auto-resolve** column in the table [below](#device-notification-responses). -> ### Handling multiple notifications together When you handle multiple notifications together, you may still have remaining no The following table lists available responses for each notification, and when we recommend using each one: -| Type | Description | Available responses | Auto-resolve| -|--|--|--|--| -| **New IP detected** | A new IP address is associated with the device. This may occur in the following scenarios: <br><br>- A new or additional IP address was associated with a device already detected, with an existing MAC address.<br><br> - A new IP address was detected for a device that's using a NetBIOS name. <br /><br /> - An IP address was detected as the management interface for a device associated with a MAC address. <br /><br /> - A new IP address was detected for a device that's using a virtual IP address. | - **Set Additional IP to Device**: Merge the devices <br />- **Replace Existing IP**: Replaces any existing IP address with the new address <br /> - **Dismiss**: Remove the notification. |**Dismiss** | -| **No subnets configured** | No subnets are currently configured in your network. 
<br /><br /> We recommend configuring subnets for the ability to differentiate between OT and IT devices on the map. | - **Open Subnets Configuration** and [configure subnets](how-to-control-what-traffic-is-monitored.md#configure-subnets). <br />- **Dismiss**: Remove the notification. |**Dismiss** | -| **Operating system changes** | One or more new operating systems have been associated with the device. | - Select the name of the new OS that you want to associate with the device.<br /> - **Dismiss**: Remove the notification. |No automatic handling| -| **New subnets** | New subnets were discovered. |- **Learn**: Automatically add the subnet.<br />- **Open Subnet Configuration**: Add all missing subnet information.<br />- **Dismiss**<br />Remove the notification. |**Dismiss** | -| **Device type changes** | A new device type has been associated with the device. | - **Set as {…}**: Associate the new type with the device.<br />- **Dismiss**: Remove the notification. |No automatic handling| +| Type | Description | Available responses | +|--|--|--| +| **New IP detected** | A new IP address is associated with the device. This may occur in the following scenarios: <br><br>- A new or additional IP address was associated with a device already detected, with an existing MAC address.<br><br> - A new IP address was detected for a device that's using a NetBIOS name. <br /><br /> - An IP address was detected as the management interface for a device associated with a MAC address. <br /><br /> - A new IP address was detected for a device that's using a virtual IP address. | - **Set Additional IP to Device**: Merge the devices. <br />- **Replace Existing IP**: Replaces any existing IP address with the new address. <br /> - **Dismiss**: Remove the notification. | +| **Inactive devices** | Traffic wasn't detected on a device for more than 60 days. | - **Delete**: If the device isn't part of your network, remove it from the device inventory. <br><br> - **Dismiss**: Remove the notification if the device is part of your network. If the device is inactive, for example, because it's incorrectly disconnected from the network, dismiss the notification and reconnect the device. | +| **New OT devices** | An OT device was detected on a subnet that's not defined as an ICS subnet. | - **Set as ICS Subnet** <br><br> - **Dismiss**: Remove the notification if the device is part of your subnet. | +| **No subnets configured** | No subnets are currently configured in your network. <br /><br /> We recommend configuring subnets for the ability to differentiate between OT and IT devices on the map. | - **Open Subnets Configuration** and [configure subnets](how-to-control-what-traffic-is-monitored.md#configure-subnets). <br />- **Dismiss**: Remove the notification. | +| **Operating system changes** | One or more new operating systems have been associated with the device. | - Select the name of the new OS that you want to associate with the device.<br /> - **Dismiss**: Remove the notification. | +| **New subnets** | New subnets were discovered. |- **Learn**: Automatically add the subnet.<br />- **Open Subnet Configuration**: Add all missing subnet information.<br />- **Dismiss**: Remove the notification. | +| **Device type changes** | A new device type has been associated with the device. | - **Set as {…}**: Associate the new type with the device.<br />- **Dismiss**: Remove the notification. | ## View a device map for a specific zone |
defender-for-iot | Iot Advanced Threat Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/iot-advanced-threat-monitoring.md | Title: Investigate and detect threats for IoT devices | Microsoft Docs description: This tutorial describes how to use the Microsoft Sentinel data connector and solution for Microsoft Defender for IoT to secure your entire environment. Detect and respond to threats, including multistage attacks that may cross IT and OT boundaries. Last updated 09/18/2022+ # Tutorial: Investigate and detect threats for IoT devices |
defender-for-iot | Iot Solution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/iot-solution.md | description: This tutorial describes how to integrate Microsoft Sentinel and Mic Last updated 06/20/2022 + # Tutorial: Connect Microsoft Defender for IoT with Microsoft Sentinel |
defender-for-iot | References Data Retention | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/references-data-retention.md | The following table lists how long device data is stored in each Defender for Io | Storage type | Details | ||| | **Azure portal** | 90 days from the date of the **Last activity** value. <br><br> For more information, see [Manage your device inventory from the Azure portal](how-to-manage-device-inventory-for-organizations.md). |-| **OT network sensor** | 90 days from the date of the **Last activity** value. <br><br> For more information, see [Manage your OT device inventory from a sensor console](how-to-investigate-sensor-detections-in-a-device-inventory.md). | -| **On-premises management console** | 90 days from the date of the **Last activity** value. <br><br> For more information, see [Manage your OT device inventory from an on-premises management console](how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md). | +| **OT network sensor** | The retention of device inventory data isn't limited by time. <br><br> For more information, see [Manage your OT device inventory from a sensor console](how-to-investigate-sensor-detections-in-a-device-inventory.md). | +| **On-premises management console** | The retention of device inventory data isn't limited by time. <br><br> For more information, see [Manage your OT device inventory from an on-premises management console](how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md). | ## Alert data retention |
defender-for-iot | Release Notes Sentinel | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes-sentinel.md | Title: Microsoft Defender for IoT solution versions in Microsoft Sentinel description: Learn about the updates available in each version of the Microsoft Defender for IoT solution, available from the Microsoft Sentinel content hub. Last updated 09/22/2022 + # Microsoft Defender for IoT solution versions in Microsoft Sentinel |
devtest-labs | Create Lab Windows Vm Terraform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/quickstarts/create-lab-windows-vm-terraform.md | In this article, you learn how to: 1. Get the Azure resource group name in which the lab was created. ```console- echo "$(terraform output resource_group_name)" + resource_group_name=$(terraform output -raw resource_group_name) ``` 1. Get the lab name. ```console- echo "$(terraform output lab_name)" + lab_name=$(terraform output -raw lab_name) ``` 1. Run [az lab vm list](/cli/azure/lab/vm#az-lab-vm-list) to list the virtual machines for the lab you created in this article. ```azurecli- az lab vm list --resource-group <resource_group_name> --lab-name <lab_name> + az lab vm list --resource-group $resource_group_name \ + --lab-name $lab_name ``` ## Clean up resources |
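A note on the `-raw` flag used above: without it, `terraform output` prints string values wrapped in quotes, which would corrupt the arguments passed to the Azure CLI. For example (output values are illustrative):

```console
terraform output resource_group_name       # prints "rg-demo" (quoted)
terraform output -raw resource_group_name  # prints rg-demo (safe to store in a shell variable)
```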
education-hub | Add Student Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/add-student-api.md | +
+ Title: Add a student to a lab in Azure Education Hub through REST APIs
+description: Learn how to add students to labs in Azure Education Hub through REST APIs
++++ Last updated : 03/11/2023++++# Add students to a lab in Education Hub using REST APIs ++This article walks through how to add students to a lab. ++## Prerequisites ++- Know your billing account ID, billing profile ID, and invoice section ID +- Have an Edu-approved Azure account +- Have already created a lab in Education Hub ++## Add students to the lab ++After a lab has been created, call the add students endpoint and make sure to replace the sections that are surrounded by <>. +The invoice section ID must match the invoice section ID of the lab you want to add this student to. +++```http +PUT https://management.azure.com/providers/Microsoft.Billing/billingAccounts/<BillingAccountID>/billingProfiles/<BillingProfileID>/invoiceSections/<InvoiceSectionID>/providers/Microsoft.Education/labs/default/students/<StudentID>?api-version=2021-12-01-preview +``` ++Call the API with a body similar to the following. Change the body to include details of the student you want to add to the lab. ++```json +{ + "properties": { + "firstName": "string", + "lastName": "string", + "email": "string", + "role": "Student", + "budget": { + "currency": "string", + "value": 0 + }, + "expirationDate": "2021-12-21T23:01:41.943Z", + "subscriptionAlias": "string", + "subscriptionInviteLastSentDate": "string" + } +} +``` ++The API response returns details of the newly added student. ++```json +{ + "id": "string", + "name": "string", + "type": "string", + "systemData": { + "createdBy": "string", + "createdByType": "User", + "createdAt": "2021-12-21T23:02:20.163Z", + "lastModifiedBy": "string", + "lastModifiedByType": "User", + "lastModifiedAt": "2021-12-21T23:02:20.163Z" + }, + "properties": { + "firstName": "string", + "lastName": "string", + "email": "string", + "role": "Student", + "budget": { + "currency": "string", + "value": 0 + }, + "subscriptionId": "string", + "expirationDate": "2021-12-21T23:02:20.163Z", + "status": "Active", + "effectiveDate": "2021-12-21T23:02:20.163Z", + "subscriptionAlias": "string", + "subscriptionInviteLastSentDate": "string" + } +} +``` ++## Check the details of the students in a lab ++Calling this API allows you to see all of the students that are in the specified lab. ++```http +GET https://management.azure.com/providers/Microsoft.Billing/billingAccounts/<BillingAccountID>/billingProfiles/<BillingProfileID>/invoiceSections/<InvoiceSectionID>/providers/Microsoft.Education/labs/default/students?includeDeleted=true&api-version=2021-12-01-preview +``` ++The API response includes information about the students in the lab. 
++```json +{ + "value": [ + { + "id": "string", + "name": "string", + "type": "string", + "systemData": { + "createdBy": "string", + "createdByType": "User", + "createdAt": "2021-12-21T23:15:45.430Z", + "lastModifiedBy": "string", + "lastModifiedByType": "User", + "lastModifiedAt": "2021-12-21T23:15:45.430Z" + }, + "properties": { + "firstName": "string", + "lastName": "string", + "email": "string", + "role": "Student", + "budget": { + "currency": "string", + "value": 0 + }, + "subscriptionId": "string", + "expirationDate": "2021-12-21T23:15:45.430Z", + "status": "Active", + "effectiveDate": "2021-12-21T23:15:45.430Z", + "subscriptionAlias": "string", + "subscriptionInviteLastSentDate": "string" + } + } + ], + "nextLink": "string" +} +``` ++## Next steps +- [Manage your Academic Grant using the Overview page](hub-overview-page.md) ++- [Support options](educator-service-desk.md) |
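For readers who want to exercise the add-student call end to end, the following is one possible sketch using curl, with a token obtained through the Azure CLI. The student details are illustrative, and the `<>` placeholders must be replaced with your own IDs as described above:

```console
token=$(az account get-access-token --resource https://management.azure.com/ --query accessToken --output tsv)

curl -X PUT \
  "https://management.azure.com/providers/Microsoft.Billing/billingAccounts/<BillingAccountID>/billingProfiles/<BillingProfileID>/invoiceSections/<InvoiceSectionID>/providers/Microsoft.Education/labs/default/students/<StudentID>?api-version=2021-12-01-preview" \
  -H "Authorization: Bearer $token" \
  -H "Content-Type: application/json" \
  -d '{"properties": {"firstName": "Ada", "lastName": "Lovelace", "email": "ada@contoso.edu", "role": "Student", "budget": {"currency": "USD", "value": 100}, "expirationDate": "2023-12-21T23:01:41.943Z"}}'
```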
education-hub | Create Lab Education Hub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/create-lab-education-hub.md | -This article will walk you through how to create a lab, add students to that lab and verify that the lab has been created. +This article walks you through how to create a lab and verify that the lab has been created. ## Prerequisites This article will walk you through how to create a lab, add students to that lab PUT https://management.azure.com/providers/Microsoft.Billing/billingAccounts/<BillingAccountID>/billingProfiles/<BillingProfileID>/invoiceSections/<InvoiceSectionID>/providers/Microsoft.Education/labs/default?api-version=2021-12-01-preview ``` -Call the above API with the body similar to the one below. Include your details for what the display name will be and how much budget you will allocate for this lab. +Call the create lab API with a body similar to the following. Include your details for the display name and how much budget to allocate for this lab. ```json { The API response returns details of the newly created lab. Congratulations, you } ``` -## Add students to the lab --Now that the lab has been successfully created, you can begin to add students to the lab. --Call the endpoint below and make sure to replace the sections that are surrounded by <>. --```json -PUT https://management.azure.com/providers/Microsoft.Billing/billingAccounts/<BillingAccountID>/billingProfiles/<BillingProfileID>/invoiceSections/<InvoiceSectionID>/providers/Microsoft.Education/labs/default/students/<StudentID>?api-version=2021-12-01-preview -``` --Call the above API with a body similar to the one below. Change the body to include details of the student you want to add to the lab. --```json -{ - "properties": { - "firstName": "string", - "lastName": "string", - "email": "string", - "role": "Student", - "budget": { - "currency": "string", - "value": 0 - }, - "expirationDate": "2021-12-21T23:01:41.943Z", - "subscriptionAlias": "string", - "subscriptionInviteLastSentDate": "string" - } -} -``` --The API response returns details of the newly added student. --```json -{ - "id": "string", - "name": "string", - "type": "string", - "systemData": { - "createdBy": "string", - "createdByType": "User", - "createdAt": "2021-12-21T23:02:20.163Z", - "lastModifiedBy": "string", - "lastModifiedByType": "User", - "lastModifiedAt": "2021-12-21T23:02:20.163Z" - }, - "properties": { - "firstName": "string", - "lastName": "string", - "email": "string", - "role": "Student", - "budget": { - "currency": "string", - "value": 0 - }, - "subscriptionId": "string", - "expirationDate": "2021-12-21T23:02:20.163Z", - "status": "Active", - "effectiveDate": "2021-12-21T23:02:20.163Z", - "subscriptionAlias": "string", - "subscriptionInviteLastSentDate": "string" - } -} -``` - ## Check the details of a lab -Now that the lab has been created and a student has been added to the lab, let's get the details for the lab. Getting the lab details will provide you with meta data like when the lab was created and how much budget it has. It will not include information about students in the lab. +Now that the lab has been created, let's get the details for the lab. Getting the lab details provides you with metadata like when the lab was created and how much budget it has. 
```http GET https://management.azure.com/providers/Microsoft.Billing/billingAccounts/<BillingAccountID>/billingProfiles/<BillingProfileID>/invoiceSections/<InvoiceSectionID>/providers/Microsoft.Education/labs/default?includeBudget=true&api-version=2021-12-01-preview ``` -The API response will include information about the lab and budget information (if the include budget flag is set to true) +The API response includes information about the lab and its budget. ```json { The API response will include information about the lab and budget information ( } ``` -## Check the details of the students in a lab --Calling this API will allow us to see all of the students that are in the specified lab. --```json -GET https://management.azure.com/providers/Microsoft.Billing/billingAccounts/<BillingAccountID/billingProfiles/<BillingProfileID>/invoiceSections/<InvoiceSectionID>/providers/Microsoft.Education/labs/default/students?includeDeleted=true&api-version=2021-12-01-preview -``` --The API response will include information about the students in the lab and will even show student that have been deleted from the lab (if the includeDeleted flag is set to true) --```json -{ - "value": [ - { - "id": "string", - "name": "string", - "type": "string", - "systemData": { - "createdBy": "string", - "createdByType": "User", - "createdAt": "2021-12-21T23:15:45.430Z", - "lastModifiedBy": "string", - "lastModifiedByType": "User", - "lastModifiedAt": "2021-12-21T23:15:45.430Z" - }, - "properties": { - "firstName": "string", - "lastName": "string", - "email": "string", - "role": "Student", - "budget": { - "currency": "string", - "value": 0 - }, - "subscriptionId": "string", - "expirationDate": "2021-12-21T23:15:45.430Z", - "status": "Active", - "effectiveDate": "2021-12-21T23:15:45.430Z", - "subscriptionAlias": "string", - "subscriptionInviteLastSentDate": "string" - } - } - ], - "nextLink": "string" -} -``` - ## Next steps - [Manage your Academic Grant using the Overview page](hub-overview-page.md) |
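If you have the Azure CLI installed, `az rest` is a convenient way to issue the lab-details GET request shown above, because it acquires and attaches the Azure Resource Manager token for you. A sketch, using the same placeholders:

```azurecli
az rest --method get \
  --url "https://management.azure.com/providers/Microsoft.Billing/billingAccounts/<BillingAccountID>/billingProfiles/<BillingProfileID>/invoiceSections/<InvoiceSectionID>/providers/Microsoft.Education/labs/default?includeBudget=true&api-version=2021-12-01-preview"
```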
event-grid | Custom Topics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/custom-topics.md | Title: Custom topics in Azure Event Grid description: Describes custom topics in Azure Event Grid. Previously updated : 02/23/2022 Last updated : 03/10/2023 # Custom topics in Azure Event Grid -An Event Grid topic provides an endpoint where the source sends events. The publisher creates the Event Grid topic, and decides whether an event source needs one topic or more than one topic. A topic is used for a collection of related events. To respond to certain types of events, subscribers decide which topics to subscribe to. +An Event Grid topic provides an endpoint where the source sends events. The publisher creates an Event Grid topic, and decides whether an event source needs one topic or more than one topic. A topic is used for a collection of related events. To respond to certain types of events, subscribers decide which topics to subscribe to. -**Custom topics** are application and third-party topics. When you create or are assigned access to a custom topic, you see that custom topic in your subscription. +**Custom topics** are application and third-party topics. When you create or are given access to a custom topic, you see that custom topic in your subscription. When designing your application, you have flexibility when deciding how many topics to create. For large solutions, create a **custom topic** for **each category of related events**. For example, consider an application that sends events related to modifying user accounts and processing orders. It's unlikely any event handler wants both categories of events. Create two custom topics and let event handlers subscribe to the one that interests them. For small solutions, you might prefer to send all events to a single topic. Event subscribers can filter for the event types they want. |
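To make the topic-per-category design concrete, here's a sketch of creating two custom topics with the Azure CLI; the topic names, resource group, and location are illustrative, not values from this article:

```azurecli
az eventgrid topic create --name user-account-events --resource-group myRG --location westus2
az eventgrid topic create --name order-events --resource-group myRG --location westus2
```

Event handlers then subscribe to whichever of the two topics carries the category they care about.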
frontdoor | Domain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/domain.md | For more information on how Azure Front Door works with TLS, see [End-to-end TLS Azure Front Door can automatically manage TLS certificates for subdomains and apex domains. When you use managed certificates, you don't need to create keys or certificate signing requests, and you don't need to upload, store, or install the certificates. Additionally, Azure Front Door can automatically rotate (renew) managed certificates without any human intervention. This process avoids downtime caused by a failure to renew your TLS certificates in time. -Azure Front Door's certificates are issued by our partner certification authority, DigiCert. - The process of generating, issuing, and installing a managed TLS certificate can take from several minutes to an hour to complete, and occasionally it can take longer. #### Domain types The following table summarizes the features available with managed TLS certifica When you use Azure Front Door-managed TLS certificates with apex domains, the automated certificate rotation might require you to revalidate your domain ownership. For more information, see [Apex domains in Azure Front Door](apex-domain.md#azure-front-door-managed-tls-certificate-rotation). +#### Managed certificate issuance ++Azure Front Door's certificates are issued by our partner certification authority, DigiCert. For some domains, you must explicitly allow DigiCert as a certificate issuer by creating a [CAA domain record](https://wikipedia.org/wiki/DNS_Certification_Authority_Authorization) with the value: `0 issue digicert.com`. ++Azure fully manages the certificates on your behalf, so any aspect of the managed certificate, including the root issuer, can change at any time. These changes are outside your control. Make sure to avoid hard dependencies on any aspect of a managed certificate, such as checking the certificate thumbprint, or pinning to the managed certificate or any part of the certificate hierarchy. If you need to pin certificates, you should use a customer-managed TLS certificate, as explained in the next section. + ### Customer-managed TLS certificates Sometimes, you might need to provide your own TLS certificates. Common scenarios for providing your own certificates include: |
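For zones hosted in Azure DNS, the CAA record described above can be created with the Azure CLI. This is only a sketch; the resource group and zone name are placeholders:

```azurecli
az network dns record-set caa add-record \
  --resource-group myRG \
  --zone-name contoso.com \
  --record-set-name @ \
  --flags 0 \
  --tag "issue" \
  --value "digicert.com"
```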
healthcare-apis | Deploy New Arm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-new-arm.md | -To implement infrastructure as code for your Azure solutions, use Azure Resource Manager templates (ARM templates). The template is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for your project. The template uses declarative syntax, which lets you state what you intend to deploy without having to write the sequence of programming commands to create it. In the template, you specify the resources to deploy and the properties for those resources. +To implement infrastructure as code for your Azure solutions, use Azure Resource Manager templates (ARM templates). The template is a [JavaScript Object Notation (JSON)](https://www.json.org/) file that defines the infrastructure and configuration for your project. The template uses declarative syntax, which lets you state what you intend to deploy without having to write the sequence of programming commands to create it. In the template, you specify the resources to deploy and the properties for those resources. In this quickstart, you'll learn how to: -> [!div class="checklist"] -> - Open an ARM template in the Azure portal. -> - Configure the ARM template for your deployment. -> - Deploy the ARM template. +- Open an ARM template in the Azure portal. +- Configure the ARM template for your deployment. +- Deploy the ARM template. > [!TIP] > To learn more about ARM templates, see [What are ARM templates?](./../../azure-resource-manager/templates/overview.md) To begin deployment in the Azure portal, select the **Deploy to Azure** button: - **Resource group** - An existing resource group, or you can create a new resource group. - - **Region** - The Azure region of the resource group that's used for the deployment. Region auto-fills by using the resource group region. + - **Region** - The Azure region of the resource group that's used for the deployment. Region autofills by using the resource group region. - **Basename** - A value that's appended to the name of the Azure resources and services that are deployed. |
healthcare-apis | Deploy New Bicep Powershell Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-new-bicep-powershell-cli.md | -Bicep is a domain-specific language (DSL) that uses declarative syntax to deploy Azure resources. In a Bicep file, you define the infrastructure you want to deploy to Azure, and then use that file throughout the development lifecycle to repeatedly deploy your infrastructure. Your resources are deployed in a consistent manner. +Bicep is a domain-specific language (DSL) that uses declarative syntax to deploy Azure resources. In a Bicep file, you define the infrastructure you want to deploy to Azure, and then use that file throughout the development lifecycle to repeatedly deploy your infrastructure. Your resources are deployed in a consistent manner. Bicep provides concise syntax, reliable type safety, and support for code reuse. Bicep offers a first-class authoring experience for your infrastructure-as-code solutions in Azure. -Bicep provides concise syntax, reliable type safety, and support for code reuse. Bicep offers a first-class authoring experience for your infrastructure-as-code solutions in Azure. --In this quickstart, you'll learn how to: --> [!div class="checklist"] -> - Use Azure PowerShell or the Azure CLI to deploy an instance of the MedTech service using a Bicep file. +In this quickstart, you'll learn how to use Azure PowerShell or the Azure CLI to deploy an instance of the MedTech service using a Bicep file. > [!TIP] > To learn more about Bicep, see [What is Bicep?](../../azure-resource-manager/bicep/overview.md?tabs=bicep) |
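As a preview of what the quickstart covers, a Bicep deployment from the Azure CLI generally takes the following shape. The resource group name, location, and file name are illustrative:

```azurecli
az group create --name myRG --location eastus
az deployment group create --resource-group myRG --template-file main.bicep
```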
healthcare-apis | Deploy New Choose | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-new-choose.md | The MedTech service provides multiple methods for deployment into Azure. Each de In this quickstart, you'll learn about these deployment methods: -> [!div class="checklist"] -> - Azure Resource Manager template (ARM template) including an Azure Iot Hub using the **Deploy to Azure** button. -> - ARM template using the **Deploy to Azure** button. -> - ARM template using Azure PowerShell or the Azure CLI. -> - Manually in the Azure portal. +- Azure Resource Manager template (ARM template) including an Azure IoT Hub using the **Deploy to Azure** button. +- ARM template using the **Deploy to Azure** button. +- ARM template using Azure PowerShell or the Azure CLI. +- Manually in the Azure portal. ## ARM template including an Azure IoT Hub using the Deploy to Azure button The following diagram outlines the basic steps of the MedTech service deployment In this quickstart, you learned about the different types of deployment methods for the MedTech service. -To learn more about the MedTech service, see +To learn about the MedTech service, see > [!div class="nextstepaction"] > [What is the MedTech service?](overview.md) |
healthcare-apis | Deploy New Powershell Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-new-powershell-cli.md | -In this quickstart, you'll learn how to: --> [!div class="checklist"] -> - Use Azure PowerShell or the Azure CLI to deploy an instance of the MedTech service using an Azure Resource Manager template (ARM template). +In this quickstart, you'll learn how to use Azure PowerShell or the Azure CLI to deploy an instance of the MedTech service using an Azure Resource Manager template (ARM template). > [!TIP] > To learn more about ARM templates, see [What are ARM templates?](./../../azure-resource-manager/templates/overview.md) |
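The ARM template equivalent has the same shape from the Azure CLI, shown here as a sketch; the template file name and parameter are illustrative, not the exact names used by the MedTech service template:

```azurecli
az deployment group create \
  --resource-group myRG \
  --template-file azuredeploy.json \
  --parameters basename=demo
```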
healthcare-apis | Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/get-started.md | - Title: Get started with the MedTech service in Azure Health Data Services -description: This article describes how to get started with the MedTech service in Azure Health Data Services. + Title: Get started with the MedTech service - Azure Health Data Services +description: This article describes how to get started with the MedTech service. Previously updated : 02/27/2023 Last updated : 03/10/2023 -# Get started with the MedTech service in the Azure Health Data Services +# Get started with the MedTech service > [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR®)](https://www.hl7.org/fhir/) is an open healthcare specification. -This article will show you how to get started with the Azure MedTech service in the [Azure Health Data Services](../healthcare-apis-overview.md). There are six steps you need to follow to be able to deploy and process MedTech service to ingest data from a device using Azure Event Hubs service, persist the data to Azure FHIR service as Observation resources, and link FHIR service Observations to user and device resources. This article provides an architecture overview to help you follow the six steps of the implementation process. +This article will show you how to get started with the MedTech service in the [Azure Health Data Services](../healthcare-apis-overview.md). There are six steps you need to follow to be able to deploy the MedTech service. -## Architecture overview of the MedTech service +The following diagram outlines the basic architectural path that enables the MedTech service to receive data from a device and send it to the FHIR service. This diagram shows how the six-step implementation process is divided into three key deployment stages: deployment, post-deployment, and data processing. -The following diagram outlines the basic architectural path that enables the MedTech service to receive data from a device and send it to the FHIR service. This diagram shows how the six-step implementation process is divided into three key development stages: deployment, post-deployment, and data processing. - Follow these six steps to set up and start using the MedTech service. The MedTech service must be configured to ingest data it will receive from an ev Once you've started using the portal and added the MedTech service to your workspace, you must then configure the MedTech service to ingest data from an event hub. For more information about configuring the MedTech service to ingest data, see [Configure the MedTech service to ingest data](deploy-new-config.md). -### Configuring device mappings +### Configuring the device mapping -You must configure the MedTech service to map it to the device you want to receive data from. Each device has unique settings that the MedTech service must use. For more information on how to use device mappings, see [How to use device mappings](how-to-configure-device-mappings.md). +You must configure the MedTech service to map it to the device you want to receive data from. Each device has unique settings that the MedTech service must use. For more information on how to use the device mapping, see [How to use device mappings](how-to-configure-device-mappings.md). - Azure Health Data Services provides an open source tool you can use called [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/main/tools/data-mapper). 
The IoMT Connector Data Mapper will help you map your device's data structure to a form that the MedTech service can use. For more information on device content mapping, see [Device Content Mapping](https://github.com/microsoft/iomt-fhir/blob/main/docs/Configuration.md#device-content-mapping). - When you're deploying the MedTech service, you must set specific device mapping properties. For more information on device mapping properties, see [Configure the device mapping properties](deploy-new-config.md). -### Configuring destination mappings +### Configuring the FHIR destination mapping -Once your device's data is properly mapped to your device's data format, you must then map it to an Observation in the FHIR service. For an overview of FHIR destination mappings, see [How to use the FHIR destination mappings](how-to-configure-fhir-mappings.md). +Once your device's data is properly mapped to your device's data format, you must then map it to an Observation in the FHIR service. For an overview of the FHIR destination mapping, see [How to use the FHIR destination mappings](how-to-configure-fhir-mappings.md). For step-by-step destination property mapping, see [Configure destination properties](deploy-new-config.md). |
iot-edge | Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/support.md | Azure IoT Edge runs on most operating systems that can run containers; however, * Microsoft has done informal testing on the platforms or knows of a partner successfully running Azure IoT Edge on the platform * Installation packages for other platforms may work on these platforms -The family of the host OS must always match the family of the guest OS used inside a module's container. --IoT Edge for Linux on Windows uses IoT Edge in a Linux virtual machine running on a Windows host. In this way, you can run Linux modules on a Windows device. - ### Tier 1 The systems listed in the following tables are supported by Microsoft, either generally available or in public preview, and are tested with each new release. |
postgresql | Concepts Business Continuity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-business-continuity.md | Last updated 11/30/2021 - Earthquake causes a power outage and temporarily disables a data center or an availability zone. - Database patching required to fix a bug or security issue. -Flexible server provides features that protect data and mitigates downtime for your mission critical databases in the event of planned and unplanned downtime events. Built on top of the Azure infrastructure that already offers robust resiliency and availability, flexible server has business continuity features that provide another fault-protection, address recovery time requirements, and reduce data loss exposure. As you architect your applications, you should consider the downtime tolerance - which is the recovery time objective (RTO) and data loss exposure - which is the recovery point objective (RPO). For example, your business-critical database requires stricter uptime requirements compared to a test database. +The flexible server provides features that protect data and mitigate downtime for your mission-critical databases during planned and unplanned downtime events. Built on top of the Azure infrastructure that offers robust resiliency and availability, the flexible server has business continuity features that provide additional fault protection, address recovery time requirements, and reduce data loss exposure. As you architect your applications, you should consider the downtime tolerance - the recovery time objective (RTO), and data loss exposure - the recovery point objective (RPO). For example, your business-critical database requires stricter uptime than a test database. The table below illustrates the features that Flexible server offers. Though we continuously strive to provide high availability, there are times when In the event of an Azure Database for PostgreSQL - Flexible Server service outage, you'll be able to see additional details related to the outage in the following places. - * **Azure Portal Banner** +* **Azure portal banner** If your subscription is identified as impacted, there will be an outage alert for a Service Issue in your Azure portal **Notifications**.- :::image type="content" source="./media/business-continuity/notification-service-issue-example.png" alt-text=" Screenshot showing notifications in Azure Portal."::: * **Help + support** or **Support + troubleshooting** When you create a support ticket from **Help + support** or **Support + troubleshooting**, there will be information about any issues impacting your resources. Select View outage details for more information and a summary of impact. There will also be an alert on the New support request page. * **Service Health** The **Service Health** page in the Azure portal contains information about Azure data center status globally. Search for "service health" in the search bar in the Azure portal, then view Service issues in the Active events category. You can also view the health of individual resources in the **Resource health** page of any resource under the Help menu. A sample screenshot of the Service Health page follows, with information about an active service issue in Southeast Asia. 
:::image type="content" source="./media/business-continuity/service-health-service-issues-example-map.png" alt-text=" Screenshot showing service outage in Service Health portal."::: The **Service Health** page in the Azure portal contains information about Azure Below are some unplanned failure scenarios and the recovery process. -| **Scenario** | **Recovery process** <br> [Servers configured without zone-redundant HA] | **Recovery process** <br> [Servers configured with Zone-redundant HA] | +| **Scenario** | **Recovery process** <br> [Servers configured without zone-redundant HA]| **Recovery process** <br> [Servers configured with Zone-redundant HA] | | - || - | | <B>Database server failure | If the database server is down, Azure will attempt to restart the database server. If that fails, the database server will be restarted on another physical node. <br /> <br /> The recovery time (RTO) is dependent on various factors including the activity at the time of fault such as large transaction and the volume of recovery to be performed during the database server startup process. <br /> <br /> Applications using the PostgreSQL databases need to be built in a way that they detect and retry dropped connections and failed transactions. | If the database server failure is detected, the server is failed over to the standby server, thus reducing downtime. For more information, see [HA concepts page](./concepts-high-availability.md). RTO is expected to be 60-120s, with zero data loss. | | <B>Storage failure | Applications don't see any impact for any storage-related issues such as a disk failure or a physical block corruption. As the data is stored in three copies, the copy of the data is served by the surviving storage. The corrupted data block is automatically repaired and a new copy of the data is automatically created. | For any rare and non-recoverable errors such as the entire storage is inaccessible, the flexible server is failed over to the standby replica to reduce the downtime. For more information, see [HA concepts page](./concepts-high-availability.md). | |
postgresql | Concepts Read Replicas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-read-replicas.md | psql -h myreplica.postgres.database.azure.com -U myadmin postgres At the prompt, enter the password for the user account. ## Monitor replication+The read replica feature in Azure Database for PostgreSQL - Flexible Server relies on the replication slot mechanism. The main advantage of replication slots is the ability to automatically adjust the number of transaction logs (WAL segments) needed by all replica servers and therefore avoid situations where one or more replicas go out of sync because WAL segments that haven't yet been sent to the replicas are removed on the primary. The disadvantage of this approach is the risk of running out of space on the primary if a replication slot remains inactive for a long period of time. In such situations, the primary accumulates WAL files, causing incremental growth of storage usage. When the storage usage reaches 95% or if the available capacity is less than 5 GiB, the server is automatically switched to read-only mode to avoid errors associated with disk-full situations. +Therefore, monitoring the replication lag and replication slot status is crucial for read replicas. ++We recommend setting alert rules for storage used or storage percentage, and for replication lag, when they exceed certain thresholds so that you can act proactively: increase the storage size or delete lagging read replicas. For example, you can set an alert if the storage percentage exceeds 80% usage, and another if the replica lag is higher than one hour. The [Transaction Log Storage Used](concepts-monitoring.md#list-of-metrics) metric shows you whether WAL file accumulation is the main reason for excessive storage usage. Azure Database for PostgreSQL - Flexible Server provides [two metrics](concepts-monitoring.md#replication) for monitoring replication. The two metrics are **Max Physical Replication Lag** and **Read Replica Lag**. To learn how to view these metrics, see the **Monitor a replica** section of the [read replica how-to article](how-to-read-replicas-portal.md#monitor-a-replica). -The **Max Physical Replication Lag** metric shows the lag in bytes between the primary and the most-lagging replica. This metric is applicable and available on the primary server only, and will be available only if at least one of the read replicas is connected to the primary. The lag information is present also when the replica is in the process of catching up with the primary, during replica creation, or when replication becomes inactive. The lag information will not be available in case replication switches from using streaming replication to the archive recovery mode using archived files from primary. +The **Max Physical Replication Lag** metric shows the lag in bytes between the primary and the most-lagging replica. This metric is applicable and available on the primary server only, and will be available only if at least one of the read replicas is connected to the primary. The lag information is present also when the replica is in the process of catching up with the primary, during replica creation, or when replication becomes inactive. The **Read Replica Lag** metric shows the time since the last replayed transaction. For instance, if there are no transactions occurring on your primary server, and the last transaction was replayed 5 seconds ago, then the Read Replica Lag will show a 5-second delay. 
This metric is applicable and available on replicas only. A read replica is created as a new Azure Database for PostgreSQL server. An exis During creation of read replicas, firewall rules and the data encryption method can be changed. Server parameters and authentication method are inherited from the primary server and cannot be changed during creation. After a replica is created, several settings can be changed including storage, compute, backup retention period, server parameters, authentication method, firewall rules etc. +### Server parameters ++You are free to change server parameters on your read replica server and set different values than on the primary server. The only exceptions are parameters that might affect recovery of the replica, mentioned also in the "Scaling" section below: max_connections, max_prepared_transactions, max_locks_per_transaction, max_wal_senders, max_worker_processes. Please ensure these parameters are always [greater than or equal to the setting on the primary](https://www.postgresql.org/docs/current/hot-standby.html#HOT-STANDBY-ADMIN) to ensure that the standby does not run out of shared memory during recovery. + ### Scaling Scaling vCores or between General Purpose and Memory Optimized: |
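To observe the replication slot behavior described in the monitoring section above, you can query the primary directly. A sketch, with placeholder server and user names; an inactive slot with a large amount of retained WAL is the storage-growth risk to watch for:

```console
psql "host=mydemoserver.postgres.database.azure.com user=myadmin dbname=postgres sslmode=require" -c \
  "SELECT slot_name, active,
          pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
   FROM pg_replication_slots;"
```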
postgresql | Concepts Connectivity Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-connectivity-architecture.md | description: Describes the connectivity architecture of your Azure Database for -+ Previously updated : 06/24/2022 Last updated : 03/09/2023 # Connectivity architecture in Azure Database for PostgreSQL [!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)] -This article explains the Azure Database for PostgreSQL connectivity architecture as well as how the traffic is directed to your Azure Database for PostgreSQL database instance from clients both within and outside Azure. +This article explains the Azure Database for PostgreSQL connectivity architecture and how the traffic is directed to your Azure Database for PostgreSQL database instance from clients both within and outside Azure. ## Connectivity architecture Connection to your Azure Database for PostgreSQL is established through a gatewa :::image type="content" source="./media/concepts-connectivity-architecture/connectivity-architecture-overview-proxy.png" alt-text="Overview of the connectivity architecture"::: -As client connects to the database, the connection string to the server resolves to the gateway IP address. The gateway listens on the IP address on port 5432. Inside the database cluster, traffic is forwarded to appropriate Azure Database for PostgreSQL. Therefore, in order to connect to your server, such as from corporate networks, it is necessary to open up the **client-side firewall to allow outbound traffic to be able to reach our gateways**. Below you can find a complete list of the IP addresses used by our gateways per region. +As the client connects to the database, the connection string to the server resolves to the gateway IP address. The gateway listens on the IP address on port 5432. Inside the database cluster, traffic is forwarded to the appropriate Azure Database for PostgreSQL server. Therefore, in order to connect to your server, such as from corporate networks, it's necessary to open up the **client-side firewall to allow outbound traffic to be able to reach our gateways**. Below you can find a complete list of the IP addresses used by our gateways per region. ## Azure Database for PostgreSQL gateway IP addresses -The gateway service is hosted on group of stateless compute nodes sitting behind an IP address, which your client would reach first when trying to connect to an Azure Database for PostgreSQL server. +The gateway service is hosted on a group of stateless compute nodes located behind an IP address. This is the address that your client reaches first when trying to connect to an Azure Database for PostgreSQL server. -As part of ongoing service maintenance, we'll periodically refresh compute hardware hosting the gateways to ensure we provide the most secure and performant connectivity experience. When the gateway hardware is refreshed, a new ring of the compute nodes is built out first. This new ring serves the traffic for all the newly created Azure Database for PostgreSQL servers and it will have a different IP address from older gateway rings in the same region to differentiate the traffic. The older gateway hardware continues serving existing servers but are planned for decommissioning in future. 
Before decommissioning a gateway hardware, customers running their servers and connecting to older gateway rings will be notified via email and in the Azure portal, three months in advance before decommissioning. The decommissioning of gateways can impact the connectivity to your servers if +**As part of ongoing service maintenance, we'll periodically refresh compute hardware hosting the gateways to ensure we provide the most secure and performant connectivity experience.** When the gateway hardware is refreshed, a new ring of the compute nodes is built out first. This new ring serves the traffic for all the newly created Azure Database for PostgreSQL servers and it has a different IP address from older gateway rings in the same region to differentiate the traffic. The older gateway hardware continues serving existing servers but is planned for decommissioning in the future. Before gateway hardware is decommissioned, customers running their servers and connecting to older gateway rings are notified via email and in the Azure portal three months in advance. The decommissioning of gateways can impact the connectivity to your servers if * You hard code the gateway IP addresses in the connection string of your application. This is **not recommended**. You should use the fully qualified domain name (FQDN) of your server, in the format `<servername>.postgres.database.azure.com`, in the connection string for your application. -* You do not update the newer gateway IP addresses in the client-side firewall to allow outbound traffic to be able to reach our new gateway rings. +* You don't update the newer gateway IP addresses in the client-side firewall to allow outbound traffic to be able to reach our new gateway rings. ++> [!IMPORTANT] +> We strongly encourage customers to use the Gateway IP address **subnets** so that you aren't impacted by this activity in a region. The following table lists the gateway IP addresses of the Azure Database for PostgreSQL gateway for all data regions. The most up-to-date information about the gateway IP addresses for each region is maintained in the table below. In the table, the columns represent the following: -* **Gateway IP addresses:** This column lists the current IP addresses of the gateways hosted on the latest generation of hardware. If you are provisioning a new server, we recommend that you open the client-side firewall to allow outbound traffic for the IP addresses listed in this column. -* **Gateway IP addresses (decommissioning):** This column lists the IP addresses of the gateways hosted on an older generation of hardware that is being decommissioned right now. If you are provisioning a new server, you can ignore these IP addresses. If you have an existing server, continue to retain the outbound rule for the firewall for these IP addresses as we have not decommissioned it yet. If you drop the firewall rules for these IP addresses, you may get connectivity errors. Instead, you are expected to proactively add the new IP addresses listed in Gateway IP addresses column to the outbound firewall rule as soon as you receive the notification for decommissioning. This will ensure when your server is migrated to latest gateway hardware, there is no interruptions in connectivity to your server. -* **Gateway IP addresses (decommissioned):** This column lists the IP addresses of the gateway rings, which are decommissioned and are no longer in operations. You can safely remove these IP addresses from your outbound firewall rule. 
--| **Region name** | **Gateway IP addresses** |**Gateway IP addresses (decommissioning)** | **Gateway IP addresses (decommissioned)** | -|:-|:-|:-|:| -| Australia Central| 20.36.105.0 | | | -| Australia Central2 | 20.36.113.0 | | | -| Australia East | 13.75.149.87, 40.79.161.1 | | | -| Australia South East |13.77.48.10, 13.77.49.32, 13.73.109.251 | | | -| Brazil South |191.233.201.8, 191.233.200.16 | | 104.41.11.5| -| Canada Central |40.85.224.249, 52.228.35.221 | | | -| Canada East | 40.86.226.166, 52.242.30.154 | | | -| Central US | 23.99.160.139, 52.182.136.37, 52.182.136.38 | 13.67.215.62 | | -| China East | 139.219.130.35 | | | -| China East 2 | 40.73.82.1, 52.130.120.89 | -| China East 3 | 52.131.155.192 | -| China North | 139.219.15.17 | | | -| China North 2 | 40.73.50.0 | | | -| China North 3 | 52.131.27.192 | | | -| East Asia | 13.75.33.20, 52.175.33.150, 13.75.33.20, 13.75.33.21 | | | -| East US |40.71.8.203, 40.71.83.113 |40.121.158.30|191.238.6.43 | -| East US 2 | 40.70.144.38, 52.167.105.38 | 52.177.185.181 | | -| France Central | 40.79.137.0, 40.79.129.1 | | | -| France South | 40.79.177.0 | | | -| Germany Central | 51.4.144.100 | | | -| Germany North | 51.116.56.0 | | -| Germany North East | 51.5.144.179 | | | -| Germany West Central | 51.116.152.0 | | -| India Central | 104.211.96.159 | | | -| India South | 104.211.224.146 | | | -| India West | 104.211.160.80 | | | -| Japan East | 40.79.192.23, 40.79.184.8 | 13.78.61.196 | | -| Japan West | 104.214.148.156, 40.74.96.6, 40.74.96.7 | 104.214.148.156 | | -| Korea Central | 52.231.17.13 | 52.231.32.42 | | -| Korea South | 52.231.145.3 | 52.231.151.97 | | -| North Central US | 52.162.104.35, 52.162.104.36 | 23.96.178.199 | | -| North Europe | 52.138.224.6, 52.138.224.7 | 40.113.93.91 |191.235.193.75 | -| South Africa North | 102.133.152.0 | | | -| South Africa West | 102.133.24.0 | | | -| South Central US |104.214.16.39, 20.45.120.0 |13.66.62.124 |23.98.162.75 | -| South East Asia | 40.78.233.2, 23.98.80.12 | 104.43.15.0 | | -| Switzerland North | 51.107.56.0 || -| Switzerland West | 51.107.152.0| || -| UAE Central | 20.37.72.64 | | | -| UAE North | 65.52.248.0 | | | -| UK South | 51.140.184.11, 51.140.144.32, 51.105.64.0 | | | -| UK West | 51.141.8.11 | | | -| West Central US | 13.78.145.25, 52.161.100.158 | | | -| West Europe |13.69.105.208, 104.40.169.187 | 40.68.37.158 | 191.237.232.75 | -| West US |13.86.216.212, 13.86.217.212 |104.42.238.205 | 23.99.34.75| -| West US 2 | 13.66.226.202, 13.66.136.192,13.66.136.195 | | | -| West US 3 | 20.150.184.2 | | | +* **Gateway IP addresses:** This column lists the current IP addresses of the gateways. As hardware is refreshed, we'll remove these addresses; we recommend that you open the client-side firewall to allow outbound traffic for the IP address subnets listed in the next column. +* **Gateway IP address subnets:** This column lists the IP address subnets of the gateway rings located in the particular region. As we retire older gateway hardware, we recommend that you open the client-side firewall to allow outbound traffic for the IP address subnets in the region where you operate. 
++| **Region name** | **Gateway IP addresses** | **Gateway IP address subnets** | +|:-|:-|:| +| Australia Central| 20.36.105.0 | 20.36.105.32/29 | +| Australia Central2 | 20.36.113.0 | 20.36.113.32/29 | +| Australia East | 13.75.149.87, 40.79.161.1 | 13.70.112.32/29, 40.79.160.32/29, 40.79.168.32/29 | +| Australia South East |13.77.48.10, 13.77.49.32, 13.73.109.251 |13.77.49.32/29 | +| Brazil South |191.233.201.8, 191.233.200.16 | 191.233.200.32/29, 191.234.144.32/29| +| Canada Central |40.85.224.249, 52.228.35.221 | 13.71.168.32/29, 20.38.144.32/29, 52.246.152.32/29| +| Canada East | 40.86.226.166, 52.242.30.154 | 40.69.105.32/29 | +| Central US | 23.99.160.139, 52.182.136.37, 52.182.136.38 | 104.208.21.192/29, 13.89.168.192/29, 52.182.136.192/29| +| China East | 139.219.130.35 | 52.130.112.136/29 | +| China East 2 | 40.73.82.1, 52.130.120.89 | 52.130.120.88/29| +| China East 3 | 52.131.155.192 | 52.130.128.88/29| +| China North | 139.219.15.17 | 52.130.128.88/29 | +| China North 2 | 40.73.50.0 | 52.130.40.64/29| +| China North 3 | 52.131.27.192 | 13.75.32.192/29, 13.75.33.192/29 | +| East Asia | 13.75.33.20, 52.175.33.150, 13.75.33.20, 13.75.33.21 | 13.75.32.192/29, 13.75.33.192/29| +| East US |40.71.8.203, 40.71.83.113 |20.42.65.64/29, 20.42.73.0/29, 52.168.116.64/29| +| East US 2 | 40.70.144.38, 52.167.105.38 |104.208.150.192/29, 40.70.144.192/29, 52.167.104.192/29| +| France Central | 40.79.137.0, 40.79.129.1 | 40.79.136.32/29, 40.79.144.32/29 | +| France South | 40.79.177.0 | 40.79.176.40/29, 40.79.177.32/29| +| Germany West Central | 51.116.152.0 | 51.116.152.32/29, 51.116.240.32/29, 51.116.248.32/29| +| India Central | 104.211.96.159 | 104.211.86.32/29, 20.192.96.32/29| +| India South | 104.211.224.146 | 40.78.192.32/29, 40.78.193.32/29| +| India West | 104.211.160.80 | 104.211.144.32/29, 104.211.145.32/29 | +| Japan East | 40.79.192.23, 40.79.184.8 | 13.78.104.32/29, 40.79.184.32/29, 40.79.192.32/29 | +| Japan West | 104.214.148.156, 40.74.96.6, 40.74.96.7 | 40.74.96.32/29 | +| Korea Central | 52.231.17.13 | 20.194.64.32/29,20.44.24.32/29, 52.231.16.32/29 | +| Korea South | 52.231.145.3 | | +| North Central US | 52.162.104.35, 52.162.104.36 | 52.162.105.192/29| +| North Europe | 52.138.224.6, 52.138.224.7 |13.69.233.136/29, 13.74.105.192/29, 52.138.229.72/29 | +| South Africa North | 102.133.152.0 | 102.133.120.32/29, 102.133.152.32/29, 102.133.248.32/29 | +| South Africa West | 102.133.24.0 | 102.133.25.32/29| +| South Central US |104.214.16.39, 20.45.120.0 |20.45.121.32/29, 20.49.88.32/29, 20.49.89.32/29, 40.124.64.136/29| +| South East Asia | 40.78.233.2, 23.98.80.12 | 13.67.16.192/29, 23.98.80.192/29, 40.78.232.192/29 | +| Switzerland North | 51.107.56.0 |51.107.56.32/29, 51.103.203.192/29, 20.208.19.192/29, 51.107.242.32/27| +| Switzerland West | 51.107.152.0| 51.107.153.32/29| +| UAE Central | 20.37.72.64 | 20.37.72.96/29, 20.37.73.96/29 | +| UAE North | 65.52.248.0 | 40.120.72.32/29, 65.52.248.32/29 | +| UK South | 51.140.184.11, 51.140.144.32, 51.105.64.0 |51.105.64.32/29, 51.105.72.32/29, 51.140.144.32/29 | +| UK West | 51.141.8.11 | 51.140.208.96/29, 51.140.209.32/29 | +| West Central US | 13.78.145.25, 52.161.100.158 | 13.71.193.32/29 | +| West Europe |13.69.105.208, 104.40.169.187 | 104.40.169.32/29, 13.69.112.168/29, 52.236.184.32/29| +| West US |13.86.216.212, 13.86.217.212 |13.86.217.224/29| +| West US 2 | 13.66.226.202, 13.66.136.192,13.66.136.195 | 13.66.136.192/29, 40.78.240.192/29, 40.78.248.192/29| +| West US 3 | 20.150.184.2 | 20.150.168.32/29, 
20.150.176.32/29, 20.150.184.32/29 | ## Frequently asked questions ### What do you need to know about this planned maintenance? -This is a DNS change only which makes it transparent to clients. While the IP address for FQDN is changed in the DNS server, the local DNS cache will be refreshed within 5 minutes, and it is automatically done by the operating systems. After the local DNS refresh, all the new connections will connect to the new IP address, all existing connections will remain connected to the old IP address with no interruption until the old IP addresses are fully decommissioned. The old IP address will roughly take three to four weeks before getting decommissioned; therefore, it should have no effect on the client applications. +This is a DNS change only, which makes it transparent to clients. While the IP address for the FQDN is changed in the DNS server, the local DNS cache is refreshed within 5 minutes; this is done automatically by the operating system. After the local DNS refresh, all new connections will connect to the new IP address, and all existing connections will remain connected to the old IP address with no interruption until the old IP addresses are fully decommissioned. The old IP address takes roughly three to four weeks to be decommissioned; therefore, this change should have no effect on client applications. ### What are we decommissioning? -Only Gateway nodes will be decommissioned. When users connect to their servers, the first stop of the connection is to gateway node, before connection is forwarded to server. We are decommissioning old gateway rings (not tenant rings where the server is running) refer to the [connectivity architecture](#connectivity-architecture) for more clarification. +Only Gateway nodes are decommissioned. When users connect to their servers, the first stop of the connection is the gateway node, before the connection is forwarded to the server. We're decommissioning old gateway rings (not tenant rings where the server is running); refer to the [connectivity architecture](#connectivity-architecture) for more clarification. ### How can you validate if your connections are going to old gateway nodes or new gateway nodes? -Ping your server's FQDN, for example ``ping xxx.postgres.database.azure.com``. If the returned IP address is one of the IPs listed under Gateway IP addresses (decommissioning) in the document above, it means your connection is going through the old gateway. Contrarily, if the returned Ip address is one of the IPs listed under Gateway IP addresses, it means your connection is going through the new gateway. +Ping your server's FQDN, for example ``ping xxx.postgres.database.azure.com``. If the returned IP address is one of the IPs listed under Gateway IP addresses (decommissioning) in the document above, it means your connection is going through the old gateway. Conversely, if the returned IP address is one of the IPs listed under Gateway IP addresses, your connection is going through the new gateway. A scripted version of this check follows this entry. You may also test by [PSPing](/sysinternals/downloads/psping) or TCPPing the database server from your client application with port 5432 and ensure that the returned IP address isn't one of the decommissioning IP addresses. ### How do I know when the maintenance is over and will I get another notification when old IP addresses are decommissioned? -You will receive an email to inform you when we'll start the maintenance work. 
The maintenance can take up to one month depending on the number of servers we need to migrate in al regions. Please prepare your client to connect to the database server using the FQDN or using the new IP address from the table above. +You'll receive an email to inform you when we start the maintenance work. The maintenance can take up to one month, depending on the number of servers we need to migrate in all regions. Please prepare your client to connect to the database server using the FQDN or using the new IP address from the table above. ### What do I do if my client applications are still connecting to the old gateway server? This indicates that your applications connect to the server using a static IP address ### Is there any impact for my application connections? -This maintenance is just a DNS change, so it is transparent to the client. Once the DNS cache is refreshed in the client (automatically done by operation system), all the new connections will connect to the new IP address and all the existing connections will still be working fine until the old IP address gets fully decommissioned, which is usually several weeks later. And the retry logic is not required for this case, but it is good to see the application have retry logic configured. Please either use FQDN to connect to the database server or enable list the new 'Gateway IP addresses' in your application connection string. +This maintenance is just a DNS change, so it is transparent to the client. Once the DNS cache is refreshed in the client (done automatically by the operating system), all new connections connect to the new IP address, and all existing connections keep working until the old IP address is fully decommissioned, which happens several weeks later. Retry logic isn't required for this case, but it's good practice for the application to have retry logic configured. Use the FQDN to connect to the database server in your application connection string. This maintenance operation will not drop existing connections. It only makes new connection requests go to the new gateway ring. ### Can I request a specific time window for the maintenance? |
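A minimal sketch of the FAQ's validation step, using only the Python standard library; the FQDN is a placeholder and the subnet list is a sample for East US taken from the table above:

```python
import ipaddress
import socket

# Sample: gateway IP address subnets for East US from the table above.
GATEWAY_SUBNETS = ["20.42.65.64/29", "20.42.73.0/29", "52.168.116.64/29"]

server_fqdn = "xxx.postgres.database.azure.com"  # placeholder - your server
resolved_ip = ipaddress.ip_address(socket.gethostbyname(server_fqdn))

if any(resolved_ip in ipaddress.ip_network(subnet) for subnet in GATEWAY_SUBNETS):
    print(f"{resolved_ip} is inside a listed gateway subnet - current gateway ring")
else:
    print(f"{resolved_ip} is not in the listed subnets - it may be an older ring")
```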
purview | Concept Policies Devops | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-policies-devops.md | Bob and Alice are involved with the DevOps process at their company. Given their | |Supports the Principle of Least Privilege via data resource scopes and the role definitions.| ||| -## Mapping of popular DMVs/DMFs +## Mapping of popular DMVs and DMFs SQL dynamic metadata includes a list of more than 700 DMVs/DMFs. As an illustration, we list here some of the most popular ones, mapped to their role definition in Microsoft Purview DevOps policies and linked to their documentation, along with a description. | **Accessible by DevOps role** | **Popular DMV / DMF** | **Description**| For more on these DMVs/DMFs, you can check these docs ## Next steps To get started with DevOps policies, consult the following blogs, videos and guides:-* Blog: [Microsoft Purview DevOps policies enter General Availability](https://techcommunity.microsoft.com/t5/security-compliance-and-identity/microsoft-purview-devops-policies-enter-ga-simplify-access/ba-p/3674057) -* Blog: [Inexpensive solution for managing access to SQL health, performance and security information](https://techcommunity.microsoft.com/t5/security-compliance-and-identity/inexpensive-solution-for-managing-access-to-sql-health/ba-p/3750512) -* Blog: [Enable IT personnel to monitor SQL health and performance while reducing the insider risk](https://techcommunity.microsoft.com/t5/security-compliance-and-identity/enable-it-personnel-to-monitor-sql-health-and-performance-while/ba-p/3740363) -* Video: [DevOps policies quick overview](https://aka.ms/Microsoft-Purview-DevOps-Policies-Video) -* Video: [DevOps policies deep dive](https://youtu.be/UvClpdIb-6g) -* Doc: [Microsoft Purview DevOps policies on Azure Arc-enabled SQL Server](./how-to-policies-devops-arc-sql-server.md) -* Doc: [Microsoft Purview DevOps policies on Azure SQL DB](./how-to-policies-devops-azure-sql-db.md) -* Doc: [Microsoft Purview DevOps policies on resource groups and subscriptions](./how-to-policies-devops-resource-group.md) -* Blog: [New granular permissions for SQL Server 2022 and Azure SQL to help PoLP](https://techcommunity.microsoft.com/t5/sql-server-blog/new-granular-permissions-for-sql-server-2022-and-azure-sql-to/ba-p/3607507) +* Try DevOps policies for Azure SQL Database: [Quick start guide](https://aka.ms/quickstart-DevOps-policies) +See [other videos, blogs, and documents](./how-to-policies-devops-authoring-generic.md#next-steps). |
purview | How To Policies Devops Arc Sql Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-devops-arc-sql-server.md | |
purview | How To Policies Devops Authoring Generic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-devops-authoring-generic.md | To delete a DevOps policy, ensure first that you have the Microsoft Purview Poli  ## Test the DevOps policy-After creating the policy, any of the Azure AD users in the Subject should now be able to connect to the data sources in the scope of the policy. To test, use SSMS or any SQL client and try to query some DMVs/DMFs. We list here some examples. For more, you can consult the [Microsoft Purview DevOps policies concept guide](/azure/purview/concept-policies-devops.md#mapping-of-popular-dmvsdmfs) +After creating the policy, any of the Azure AD users in the Subject should now be able to connect to the data sources in the scope of the policy. To test, use SSMS or any SQL client and try to query some DMVs/DMFs. We list here a few examples; a scripted example also follows this entry. For more, you can consult the mapping of popular DMVs/DMFs in the [Microsoft Purview DevOps policies concept guide](./concept-policies-devops.md#mapping-of-popular-dmvs-and-dmfs). ### Testing SQL Performance Monitor access If you granted the Subject(s) of the policy the SQL Performance Monitor role, you can issue the following commands Check the blogs, videos, and related documents: * Doc: [Microsoft Purview DevOps policies concept guide](./concept-policies-devops.md) * Doc: [Microsoft Purview DevOps policies on Azure Arc-enabled SQL Server](./how-to-policies-devops-arc-sql-server.md) * Doc: [Microsoft Purview DevOps policies on Azure SQL Database](./how-to-policies-devops-azure-sql-db.md)-* Doc: [Microsoft Purview DevOps policies on entire resource groups or subscriptions](./how-to-policies-devops-resource-group.md) +* Doc: [Microsoft Purview DevOps policies on entire resource groups or subscriptions](./how-to-policies-devops-resource-group.md) +* Doc: [Troubleshoot Microsoft Purview policies for SQL data sources](./troubleshoot-policy-sql.md) |
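As one way to run that test outside SSMS, here's a minimal sketch using the `pyodbc` driver with Azure AD interactive authentication; the server, database, and choice of DMV are placeholders, and whether a given DMV is visible depends on the role mapping in the concept guide:

```python
import pyodbc

# Placeholder server and database; requires the Microsoft ODBC Driver 18
# for SQL Server. Azure AD interactive auth prompts the policy's Subject
# user to sign in.
conn_str = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=yourserver.database.windows.net;"
    "Database=yourdb;"
    "Authentication=ActiveDirectoryInteractive;"
)

with pyodbc.connect(conn_str) as conn:
    cursor = conn.cursor()
    # Query a performance-related DMV to confirm the access granted
    # by the DevOps policy.
    cursor.execute("SELECT session_id, login_name, status FROM sys.dm_exec_sessions")
    for row in cursor.fetchmany(5):
        print(row)
```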
purview | How To Policies Devops Azure Sql Db | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-devops-azure-sql-db.md | |
purview | How To Policies Devops Resource Group | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-devops-resource-group.md | |
service-fabric | Service Fabric Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-get-started.md | You can download the latest runtime and SDK from the links below: | Package |Version| | | |-|[Install Service Fabric Runtime for Windows](https://download.microsoft.com/download/b/8/a/b8a2fb98-0ec1-41e5-be98-9d8b5abf7856/MicrosoftServiceFabric.9.1.1436.9590.exe) | 9.1.1436 | -|[Install Service Fabric SDK](https://download.microsoft.com/download/b/8/a/b8a2fb98-0ec1-41e5-be98-9d8b5abf7856/MicrosoftServiceFabricSDK.6.1.1436.msi) | 6.1.1436 | +|[Install Service Fabric Runtime for Windows](https://download.microsoft.com/download/b/8/a/b8a2fb98-0ec1-41e5-be98-9d8b5abf7856/MicrosoftServiceFabric.9.1.1583.9590.exe) | 9.1.1583 | +|[Install Service Fabric SDK](https://download.microsoft.com/download/b/8/a/b8a2fb98-0ec1-41e5-be98-9d8b5abf7856/MicrosoftServiceFabricSDK.6.1.1583.msi) | 6.1.1583 | You can find direct links to the installers for previous releases on [Service Fabric Releases](https://github.com/microsoft/service-fabric/tree/master/release_notes). |
service-fabric | Service Fabric Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-versions.md | If you want to find a list of all the available Service Fabric runtime versions ### Current versions | Service Fabric runtime |Can upgrade directly from|Can downgrade to*|Compatible SDK or NuGet package version|Supported .NET runtimes** |OS Version |End of support | | | | | | | | |+| 9.1 CU2<br>9.1.1583.9590 | 8.2 CU6<br>8.2.1686.9590 | 8.2 | Less than or equal to version 6.0 | .NET 7, .NET 6, All, <br> >= .NET Framework 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | Current version | | 9.1 CU1<br>9.1.1436.9590 | 8.2 CU6<br>8.2.1686.9590 | 8.2 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | Current version | | 9.1 RTO<br>9.1.1390.9590 | 8.2 CU6<br>8.2.1686.9590 | 8.2 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | Current version |+| 9.0 CU7<br>9.0.1309.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6, All, <br> >= .NET Framework 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 1, 2023 | +| 9.0 CU6<br>9.0.1254.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 1, 2023 | | 9.0 CU5<br>9.0.1155.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 1, 2023 | | 9.0 CU4<br>9.0.1121.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 1, 2023 | | 9.0 CU3<br>9.0.1107.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 1, 2023 | | 9.0 CU2<br>9.0.1048.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 1, 2023 | | 9.0 CU1<br>9.0.1028.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 1, 2023 | | 9.0 RTO<br>9.0.1017.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 1, 2023 |+| 8.2 CU9<br>8.2.1748.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6, All, <br> >= .NET Framework 4.6.2 | [See supported OS version](#supported-windows-versions-and-support-end-date) | March 31, 2023 | +| 8.2 CU8<br>8.2.1723.9590 | 8.0 CU3<br>8.0.536.9590 
| 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | March 31, 2023 | | 8.2 CU7<br>8.2.1692.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | March 31, 2023 | | 8.2 CU6<br>8.2.1686.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | March 31, 2023 | | 8.2 CU4<br>8.2.1659.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 5.2 | .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | March 31, 2023 | Support for Service Fabric on a specific OS ends when support for the OS version ### Current versions | Service Fabric runtime | Can upgrade directly from |Can downgrade to*|Compatible SDK or NuGet package version | Supported .NET runtimes** | OS version | End of support | | | | | | | | |+| 9.1 CU2<br>9.1.1388.1 | 8.2 CU6<br>8.2.1485.1 | 8.2 | .NET 7, .NET 6, All | N/A | [See supported OS version](#supported-linux-versions-and-support-end-date) | Current version | | 9.1 CU1<br>9.1.1230.1 | 8.2 CU6<br>8.2.1485.1 | 8.2 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | Current version | | 9.1 RTO<br>9.1.1206.1 | 8.2 CU6<br>8.2.1485.1 | 8.2 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | Current version |+| 9.0 CU7<br>9.0.1260.1 | 8.0 CU3<br>8.0.527.1 | 8.2 CU 5.1<br>8.2.1483.1 | .NET 6 | N/A | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 1, 2023 | | 9.0 CU5<br>9.0.1148.1 | 8.0 CU3<br>8.0.527.1 | 8.2 CU 5.1<br>8.2.1483.1 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 1, 2023 | | 9.0 CU4<br>9.0.1114.1 | 8.0 CU3<br>8.0.527.1 | 8.2 CU 5.1<br>8.2.1483.1 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 1, 2023 | | 9.0 CU3<br>9.0.1103.1 | 8.0 CU3<br>8.0.527.1 | 8.2 CU 5.1<br>8.2.1483.1 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 1, 2023 | Support for Service Fabric on a specific OS ends when support for the OS version | 9.0 CU2<br>9.0.1056.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | August 19, 2022 | | 9.0 CU1<br>9.0.1035.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | August 19, 2022 | | 9.0 RTO<br>9.0.1018.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | August 19, 2022 |+| 8.2 CU8<br>8.2.1723.1 | 8.0 CU3<br>8.0.527.1 | N/A | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS 
version](#supported-linux-versions-and-support-end-date) | November 30, 2022 | | 8.2 CU6<br>8.2.1485.1 | 8.0 CU3<br>8.0.527.1 | N/A | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 30, 2022 | | 8.2 CU5.1<br>8.2.1483.1 | 8.0 CU3<br>8.0.527.1 | N/A | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 30, 2022 | | 8.2 CU4<br>8.2.1458.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 30, 2022 | The following table lists the version names of Service Fabric and their correspo | Version name | Windows version number | Linux version number | | | | |+| 9.1 CU2 | 9.1.1583.9590| 9.1.1388.1 | | 9.1 CU1 | 9.1.1436.9590| 9.1.1230.1 | | 9.1 RTO | 9.1.1390.9590| 9.1.1206.1 |+| 9.0 CU7 | 9.0.1309.9590 | 9.0.1260.1 | +| 9.0 CU6 | 9.0.1254.9590 | Not applicable | | 9.0 CU5 | 9.0.1155.9590 | 9.0.1148.1 | | 9.0 CU4 | 9.0.1121.9590 | 9.0.1114.1 | | 9.0 CU3 | 9.0.1107.9590 | 9.0.1103.1 | | 9.0 CU2.1 | Not applicable | 9.0.1086.1 |+| 8.2 CU9 | 8.2.1748.9590 | Not applicable | +| 8.2 CU8 | 8.2.1723.9590 | 8.2.1521.1 | | 8.2 CU7 | 8.2.1692.9590 | Not applicable | | 8.2 CU6 | 8.2.1686.9590 | 8.2.1485.1 | | 8.2 CU5.1 | Not applicable | 8.2.1483.1 | |
service-health | Impacted Resources Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/impacted-resources-security.md | + + Title: Resource impact from Azure security incidents +description: This article details where to find information from Azure Service Health about how Azure security incidents impact your resources. + Last updated : 3/3/2023+++# Resource impact from Azure security incidents ++To support the experience of viewing impacted resources, Service Health has enabled a new feature to: ++- Display resources impacted by a security incident +- Enable role-based access control (RBAC) for viewing security incident impacted resource information ++This article details what is communicated to users and where they can view information about their impacted resources. ++>[!Note] +>This feature will be rolled out in phases. The rollout will gradually expand to 100 percent of subscription and tenant customers. ++## Role-Based Access Control (RBAC) for Security Incident Resource Impact ++[Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) helps you manage who has access to Azure resources, what they can do with those resources, and what areas they have access to. Given the sensitive nature of security incidents, role-based access is used to limit the audience of their impacted resource information. ++Users authorized with the following roles can view security incident impacted resource information: ++**Subscription level** +- Subscription Owner +- Subscription Admin +- Service Health Security Reader (New custom role) ++**Tenant level** +- Security Admin/Security Reader +- Global Admin/Tenant Admin +- Azure Service Health Privacy reader (New custom role) ++## Viewing Impacted Resources for Security Incidents on the Service Health Portal ++In the Azure portal, the **Impacted Resources** tab under **Service Health** > **Security Advisories** displays resources that are impacted by a security incident. Along with resource information, Service Health provides the below information to users whose resources are impacted by a security incident: ++|Column |Description | +||| +|**Subscription ID**|Unique ID for the subscription that contains the impacted resource| +|**Subscription Name**|Subscription name for the subscription that contains the impacted resource| +|**Tenant Name**|Tenant name for the tenant that contains the impacted resource| +|**Tenant ID**|Unique ID for the tenant that contains the impacted resource| ++The following examples show a security incident with impacted resources from the subscription and tenant scope. ++**Subscription** +++**Tenant** ++++## Accessing Impacted Resources programmatically via an API ++Impacted resource information for security incidents can be retrieved programmatically using the Events API. To access the list of resources impacted by a security incident, users authorized with the above-mentioned roles can use the following endpoints. 
++**Subscription** ++```HTTP +https://management.azure.com/subscriptions/("Subscription ID")/providers/microsoft.resourcehealth/events/("Tracking ID")/listSecurityAdvisoryImpactedResources?api-version=2022-10-01 +``` ++**Tenant** ++```HTTP +https://management.azure.com/providers/microsoft.resourcehealth/events/("Tracking ID")/listSecurityAdvisoryImpactedResources?api-version=2022-10-01 +``` ++A scripted example of calling these endpoints follows this entry. ++## Next steps +- [Introduction to the Azure Service Health dashboard](service-health-overview.md) +- [Introduction to Azure Resource Health](resource-health-overview.md) +- [Frequently asked questions about Azure Resource Health](resource-health-faq.yml) |
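A minimal sketch of calling the subscription-scoped endpoint, assuming the `azure-identity` and `requests` packages and placeholder subscription and tracking IDs; the list action is invoked as a POST, as is conventional for ARM `list*` operations:

```python
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<Subscription ID>"  # placeholder
tracking_id = "<Tracking ID>"          # placeholder, from the security advisory

# The caller must hold one of the roles listed above.
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/providers/microsoft.resourcehealth/events/{tracking_id}"
    "/listSecurityAdvisoryImpactedResources"
)
response = requests.post(
    url,
    params={"api-version": "2022-10-01"},
    headers={"Authorization": f"Bearer {token}"},
)
response.raise_for_status()
print(response.json())
```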
virtual-desktop | Store Fslogix Profile | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/store-fslogix-profile.md | The following tables compare the storage solutions Azure Storage offers for Azur |Redundancy|Locally redundant/zone-redundant/geo-redundant/geo-zone-redundant|Locally redundant/geo-redundant [with cross-region replication](../azure-netapp-files/cross-region-replication-introduction.md)|Locally redundant/zone-redundant/geo-redundant| |Tiers and performance| Standard (Transaction optimized)<br>Premium<br>Up to max 100K IOPS per share with 10 GBps per share at about 3-ms latency|Standard<br>Premium<br>Ultra<br>Up to max 460K IOPS per volume with 4.5 GBps per volume at about 1 ms latency. For IOPS and performance details, see [Azure NetApp Files performance considerations](../azure-netapp-files/azure-netapp-files-performance-considerations.md) and [the FAQ](../azure-netapp-files/faq-performance.md#how-do-i-convert-throughput-based-service-levels-of-azure-netapp-files-to-iops).|Standard HDD: up to 500 IOPS per-disk limits<br>Standard SSD: up to 4k IOPS per-disk limits<br>Premium SSD: up to 20k IOPS per-disk limits<br>We recommend Premium disks for Storage Spaces Direct| |Capacity|100 TiB per share, Up to 5 PiB per general purpose account |100 TiB per volume, up to 12.5 PiB per NetApp account|Maximum 32 TiB per disk|-|Required infrastructure|Minimum share size 1 GiB|Minimum capacity pool 4 TiB, min volume size 100 GiB|Two VMs on Azure IaaS (+ Cloud Witness) or at least three VMs without and costs for disks| +|Required infrastructure|Minimum share size 1 GiB|Minimum capacity pool 2 TiB, min volume size 100 GiB|Two VMs on Azure IaaS (+ Cloud Witness) or at least three VMs without one, plus costs for disks| |Protocols|SMB 3.0/2.1, NFSv4.1 (preview), REST|[NFSv3, NFSv4.1](../azure-netapp-files/azure-netapp-files-create-volumes.md), [SMB 3.x/2.x](../azure-netapp-files/azure-netapp-files-create-volumes-smb.md), [dual-protocol](../azure-netapp-files/create-volumes-dual-protocol.md)|NFSv3, NFSv4.1, SMB 3.1| ## Azure management details |
virtual-machines | Disk Bursting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disk-bursting.md | Title: Managed disk bursting description: Learn about disk bursting for Azure disks and Azure virtual machines. Previously updated : 09/10/2022 Last updated : 02/22/2023 Bursting for Azure VMs and disk resources aren't dependent on each other. You do ## Common scenarios The following scenarios can benefit greatly from bursting:-- **Improve startup times** – With bursting, your instance will startup at a faster rate. For example, the default OS disk for premium enabled VMs is the P4 disk, which is a provisioned performance of up to 120 IOPS and 25 MB/s. With bursting, the P4 can go up to 3500 IOPS and 170 MB/s allowing for startup to accelerate by up to 6X.+- **Improve startup times** – With bursting, your instance will start up at a faster rate. For example, the default OS disk for premium-enabled VMs is the P4 disk, which has provisioned performance of up to 120 IOPS and 25 MB/s. With bursting, the P4 can go up to 3,500 IOPS and 170 MB/s, allowing startup to accelerate by up to 6x. - **Handle batch jobs** – Some application workloads are cyclical in nature. They require a baseline performance most of the time, and higher performance for short periods of time. An example of this is an accounting program that processes daily transactions that require a small amount of disk traffic. At the end of the month, this program completes reconciliation reports that need a much higher amount of disk traffic. - **Traffic spikes** – Web servers and their applications can experience traffic surges at any time. If your web server is backed by VMs or disks that use bursting, the servers would be better equipped to handle traffic spikes. |
virtual-machines | Disks Shared | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-shared.md | description: Learn about sharing Azure managed disks across multiple Linux VMs. Previously updated : 01/26/2023 Last updated : 02/22/2023 When you share a disk, your billing could be impacted in two different ways, dep For shared premium SSD disks, in addition to cost of the disk's tier, there's an extra charge that increases with each VM the SSD is mounted to. See [managed disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/) for details. -Ultra disks don't have an extra charge for each VM that they're mounted to. They're billed on the total IOPS and MBps that the disk is configured for. Normally, an ultra disk has two performance throttles that determine its total IOPS/MBps. However, when configured as a shared ultra disk, two more performance throttles are exposed, for a total of four. These two additional throttles allow for increased performance at an extra expense and each meter has a default value, which raises the performance and cost of the disk. +Ultra disks don't have an extra charge for each VM that they're mounted to. They're billed on the total IOPS and MB/s that the disk is configured for. Normally, an ultra disk has two performance throttles that determine its total IOPS and MB/s. However, when configured as a shared ultra disk, two more performance throttles are exposed, for a total of four. These two additional throttles allow for increased performance at an extra expense, and each meter has a default value, which raises the performance and cost of the disk. -The four performance throttles a shared ultra disk has are diskMBpsReadWrite, diskIOPSReadOnly and diskMBpsReadOnly. Each performance throttle can be configured to change the performance of your disk. The performance for shared ultra disk is calculated in the following ways: total provisioned IOPS (diskIOPSReadWrite + diskIOPSReadOnly) and for total provisioned throughput MBps (diskMBpsReadWrite + diskMBpsReadOnly). +The four performance throttles a shared ultra disk has are diskIOPSReadWrite, diskMBpsReadWrite, diskIOPSReadOnly, and diskMBpsReadOnly. Each performance throttle can be configured to change the performance of your disk. The performance for a shared ultra disk is calculated in the following ways: total provisioned IOPS (diskIOPSReadWrite + diskIOPSReadOnly) and total provisioned throughput in MB/s (diskMBpsReadWrite + diskMBpsReadOnly). Once you've determined your total provisioned IOPS and total provisioned throughput, you can use them in the [pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=managed-disks) to determine the cost of an ultra shared disk. Both Ultra Disks and Premium SSD v2 managed disks have the unique capability of |Attribute |Description | ||| |DiskIOPSReadWrite (Read/write disk IOPS) |The total number of IOPS allowed across all VMs mounting the shared disk with write access. |-|DiskMBpsReadWrite (Read/write disk throughput) |The total throughput (MB/s) allowed across all VMs mounting the shared disk with write access. | +|DiskMBpsReadWrite (Read/write disk throughput) |The total throughput (MB/s) allowed across all VMs mounting the shared disk with write access. | |DiskIOPSReadOnly* (Read-only disk IOPS) |The total number of IOPS allowed across all VMs mounting the shared disk as `ReadOnly`. |-|DiskMBpsReadOnly* (Read-only disk throughput) |The total throughput (MB/s) allowed across all VMs mounting the shared disk as `ReadOnly`. 
| +|DiskMBpsReadOnly* (Read-only disk throughput) |The total throughput (MB/s) allowed across all VMs mounting the shared disk as `ReadOnly`. | \* Applies to shared Ultra Disks and shared Premium SSD v2 managed disks only The following formulas explain how the performance attributes can be set, since - Has a baseline minimum IOPS of 100, for disks 100 GiB and smaller. - For disks larger than 100 GiB, the baseline minimum IOPS you can set increases by 1 per GiB. So the lowest you can set DiskIOPSReadWrite for a 101 GiB disk is 101 IOPS. - The maximum you can set this attribute is determined by the size of your disk, the formula is 300 * GiB, up to a maximum of 160,000.-- DiskMBpsReadWrite (Read/write disk throughput)- - The minium throughput (MB/s) of this attribute is determined by your IOPS, the formula is 4 KiB per second per IOPS. So if you had 101 IOPS, the minium MB/s you can set is 1. +- DiskMBpsReadWrite (Read/write disk throughput) + - The minimum throughput (MB/s) of this attribute is determined by your IOPS; the formula is 4 KiB per second per IOPS. So if you had 101 IOPS, the minimum MB/s you can set is 1. - The maximum you can set this attribute is determined by the amount of IOPS you set, the formula is 256 KiB per second per IOPS, up to a maximum of 4,000 MB/s. - DiskIOPSReadOnly (Read-only disk IOPS) - The minimum baseline IOPS for this attribute is 100. For DiskIOPSReadOnly, the baseline doesn't increase with disk size. - The maximum you can set this attribute is determined by the size of your disk, the formula is 300 * GiB, up to a maximum of 160,000.-- DiskMBpsReadOnly (Read-only disk throughput)- - The minimum throughput (MB/s) for this attribute is 1. For DiskMBpsReadOnly, the baseline doesn't increase with IOPS. +- DiskMBpsReadOnly (Read-only disk throughput) + - The minimum throughput (MB/s) for this attribute is 1. For DiskMBpsReadOnly, the baseline doesn't increase with IOPS. - The maximum you can set this attribute is determined by the amount of IOPS you set, the formula is 256 KiB per second per IOPS, up to a maximum of 4,000 MB/s. #### Examples The following is an example of a 4-node Linux cluster with a single writer and t ##### Shared Ultra Disk and Premium SSD v2 pricing -Both shared Ultra Disks and shared Premium SSD v2 managed disks are priced based on provisioned capacity, total provisioned IOPS (diskIOPSReadWrite + diskIOPSReadOnly) and total provisioned Throughput MBps (diskMBpsReadWrite + diskMBpsReadOnly). There's no extra charge for each additional VM mount. For example, a shared Ultra Disk with the following configuration (diskSizeGB: 1024, DiskIOPSReadWrite: 10000, DiskMBpsReadWrite: 600, DiskIOPSReadOnly: 100, DiskMBpsReadOnly: 1) is charged with 1024 GiB, 10100 IOPS, and 601 MBps regardless of whether it is mounted to two VMs or five VMs. +Both shared Ultra Disks and shared Premium SSD v2 managed disks are priced based on provisioned capacity, total provisioned IOPS (diskIOPSReadWrite + diskIOPSReadOnly), and total provisioned throughput in MB/s (diskMBpsReadWrite + diskMBpsReadOnly). There's no extra charge for each additional VM mount. For example, a shared Ultra Disk with the following configuration (diskSizeGB: 1024, DiskIOPSReadWrite: 10000, DiskMBpsReadWrite: 600, DiskIOPSReadOnly: 100, DiskMBpsReadOnly: 1) is charged with 1024 GiB, 10100 IOPS, and 601 MB/s regardless of whether it is mounted to two VMs or five VMs. A sketch that reproduces this calculation follows this entry. ## Next steps |
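As a check of the formulas and the pricing example above, a minimal sketch; the function name and assertions are illustrative, not an official calculator:

```python
def shared_ultra_billing_totals(size_gib, iops_rw, mbps_rw, iops_ro, mbps_ro):
    """Compute the billable totals for a shared Ultra Disk per the text above."""
    size_cap = min(300 * size_gib, 160_000)
    # Read/write IOPS: baseline 100, rising 1 per GiB beyond 100 GiB.
    assert max(100, size_gib) <= iops_rw <= size_cap
    # Read-only IOPS: baseline stays at 100 regardless of disk size.
    assert 100 <= iops_ro <= size_cap
    total_iops = iops_rw + iops_ro   # billed total provisioned IOPS
    total_mbps = mbps_rw + mbps_ro   # billed total provisioned throughput (MB/s)
    return total_iops, total_mbps

# The pricing example above: 1024 GiB, 10000 IOPS / 600 MB/s read-write,
# 100 IOPS / 1 MB/s read-only -> billed as 10100 IOPS and 601 MB/s.
print(shared_ultra_billing_totals(1024, 10_000, 600, 100, 1))  # (10100, 601)
```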
virtual-machines | Disks Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-types.md | The following table provides a comparison of the five disk types to help you dec | - | - | -- | | | | | **Disk type** | SSD | SSD |SSD | SSD | HDD | | **Scenario** | IO-intensive workloads such as [SAP HANA](workloads/sap/hana-vm-operations-storage.md), top tier databases (for example, SQL, Oracle), and other transaction-heavy workloads. | Production and performance-sensitive workloads that consistently require low latency and high IOPS and throughput | Production and performance sensitive workloads | Web servers, lightly used enterprise applications and dev/test | Backup, non-critical, infrequent access |-| **Max disk size** | 65,536 gibibyte (GiB) | 65,536 GiB |32,767 GiB | 32,767 GiB | 32,767 GiB | +| **Max disk size** | 65,536 gibibytes (GiB) | 65,536 GiB |32,767 GiB | 32,767 GiB | 32,767 GiB | | **Max throughput** | 4,000 MB/s | 1,200 MB/s | 900 MB/s | 750 MB/s | 500 MB/s | | **Max IOPS** | 160,000 | 80,000 | 20,000 | 6,000 | 2,000 | | **Usable as OS Disk?** | No | No | Yes | Yes | Yes | Azure ultra disks offer up to 32 TiB per region per subscription by default, but The following table provides a comparison of disk sizes and performance caps to help you decide which to use. -|Disk Size (GiB) |IOPS Cap |Throughput Cap (MBps) | +|Disk Size (GiB) |IOPS Cap |Throughput Cap (MB/s) | |||| |4 |1,200 |300 | |8 |2,400 |600 | For more information about IOPS, see [Virtual machine and disk performance](disk ### Ultra disk throughput -The throughput limit of a single ultra disk is 256-KiB/s for each provisioned IOPS, up to a maximum of 4000 MBps per disk (where MBps = 10^6 Bytes per second). The minimum guaranteed throughput per disk is 4KiB/s for each provisioned IOPS, with an overall baseline minimum of 1 MBps. +The throughput limit of a single ultra disk is 256 KiB/s for each provisioned IOPS, up to a maximum of 4,000 MB/s per disk (where MB/s = 10^6 bytes per second). The minimum guaranteed throughput per disk is 4 KiB/s for each provisioned IOPS, with an overall baseline minimum of 1 MB/s. A sketch of these limits follows this entry. You can adjust ultra disk IOPS and throughput performance at runtime without detaching the disk from the virtual machine. After a performance resize operation has been issued on a disk, it can take up to an hour for the change to take effect. Up to four performance resize operations are permitted during a 24-hour window. |
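A minimal sketch of the throughput formula above; the function name is illustrative, and MB/s uses the decimal definition stated above:

```python
import math

def ultra_disk_throughput_limits_mbps(provisioned_iops):
    """Guaranteed and maximum throughput in MB/s (10**6 bytes per second):
    4 KiB/s per IOPS guaranteed (floor 1 MB/s), 256 KiB/s per IOPS maximum
    (capped at 4,000 MB/s)."""
    guaranteed = max(1, math.ceil(provisioned_iops * 4 * 1024 / 10**6))
    maximum = min(4_000, provisioned_iops * 256 * 1024 // 10**6)
    return guaranteed, maximum

# For 10,000 provisioned IOPS: guaranteed ~41 MB/s, maximum ~2,621 MB/s.
print(ultra_disk_throughput_limits_mbps(10_000))
```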
virtual-machines | Expand Disks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/expand-disks.md | Filesystem Type Size Used Avail Use% Mounted on <truncated> /dev/sdd1 ext4 32G 30G 727M 98% /opt/db/data /dev/sde1 ext4 32G 49M 30G 1% /opt/db/log++ > [!NOTE] + > If you are using an ext3 file system, you can use the resize2fs command instead. ``` Here we can see, for example, that the `/opt/db/data` filesystem is nearly full, and is located on the `/dev/sdd1` partition. The output of `df` will show the device path regardless of whether the disk is mounted by device path or the (preferred) UUID in the fstab. Also take note of the Type column, indicating the format of the filesystem. This will be important later. |