Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-b2c | Data Residency | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/data-residency.md | Azure AD B2C is **generally available worldwide** with the option for **data residency**. If you enable [Go-Local add-on](#go-local-add-on), you can store your data exclusively in a specific country/region. +> [!NOTE] +> Azure AD B2C is generally available in the Microsoft Azure global cloud and Microsoft Azure operated by 21Vianet. Azure AD B2C is not available in Microsoft Azure Government. ## Region availability |
api-management | Api Management Howto App Insights | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-app-insights.md | To emit custom metrics, perform the following configuration steps. ### Limits for custom metrics -Azure Monitor imposes [usage limits](/azure/azure-monitor/essentials/metrics-custom-overview#quotas-and-limits) for custom metrics that may affect your ability to emit metrics from API Management. For example, Azure Monitor currently sets a limit of 10 dimension keys per metric, and a limit of 50,000 total active time series per region in a subscription (within a 12 hour period). --These limits have the following implications for configuring custom metrics in API Management: --* You can configure a maximum of 10 custom dimensions per `emit-metric` policy. --* The number of active time series generated by the `emit-metric` policy within a 12 hour period is the product of the number of unique values of each configured dimension during the period. For example, if three custom dimensions were configured in the policy, and each dimension had 10 possible values within the period, the `emit-metric` policy would contribute 1,000 (10 x 10 x 10) active time series. --* If you configure the `emit-metric` policy in multiple API Management instances that are in the same region in a subscription, all instances can contribute to the regional active time series limit. + ## Performance implications and log sampling |
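The active time series arithmetic described in this entry is just the product of per-dimension cardinalities. The short Python sketch below restates the 10 x 10 x 10 example from the text; the dimension names and counts are hypothetical placeholders, not values taken from the article.

```python
from math import prod

# Illustrative arithmetic for the Azure Monitor custom-metric limit described above:
# the active time series contributed by one emit-metric policy in a 12-hour window
# is the product of the number of unique values observed for each configured dimension.
dimension_cardinalities = {
    "API ID": 10,        # hypothetical: 10 distinct APIs seen in the window
    "Operation ID": 10,  # hypothetical: 10 distinct operations
    "Region": 10,        # hypothetical: 10 distinct values
}

active_time_series = prod(dimension_cardinalities.values())
print(active_time_series)  # 1000, matching the 10 x 10 x 10 example in the text
```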
api-management | Azure Openai Emit Token Metric Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/azure-openai-emit-token-metric-policy.md | -The `azure-openai-emit-token-metric` policy sends metrics to Application Insights about consumption of large language model tokens through Azure OpenAI Service APIs. Token count metrics include: Total Tokens, Prompt Tokens, and Completion Tokens. +The `azure-openai-emit-token-metric` policy sends custom metrics to Application Insights about consumption of large language model tokens through Azure OpenAI Service APIs. Token count metrics include: Total Tokens, Prompt Tokens, and Completion Tokens. [!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)] [!INCLUDE [api-management-azure-openai-models](../../includes/api-management-azure-openai-models.md)] +## Limits for custom metrics ++ ## Prerequisites * One or more Azure OpenAI Service APIs must be added to your API Management instance. For more information, see [Add an Azure OpenAI Service API to Azure API Management](./azure-openai-api-from-specification.md).-* Your API Management instance must be integrated with Application insights. For more information, see [How to integrate Azure API Management with Azure Application Insights](./api-management-howto-app-insights.md#create-a-connection-using-the-azure-portal). +* Your API Management instance must be integrated with Application insights. For more information, see [How to integrate Azure API Management with Azure Application Insights](./api-management-howto-app-insights.md). * Enable Application Insights logging for your Azure OpenAI APIs. * Enable custom metrics with dimensions in Application Insights. For more information, see [Emit custom metrics](api-management-howto-app-insights.md#emit-custom-metrics). The `azure-openai-emit-token-metric` policy sends metrics to Application Insight | Attribute | Description | Required | Default value | | | -- | | -- | | namespace | A string. Namespace of metric. Policy expressions aren't allowed. | No | API Management |-| value | Value of metric expressed as a double. Policy expressions are allowed. | No | 1 | ## Elements The `azure-openai-emit-token-metric` policy sends metrics to Application Insight ## Example -The following example sends Azure OpenAI token count metrics to Application Insights along with User ID, Client IP, and API ID as dimensions. +The following example sends Azure OpenAI token count metrics to Application Insights along with API ID as a custom dimension. ```xml <policies> <inbound> <azure-openai-emit-token-metric namespace="AzureOpenAI"> - <dimension name="User ID" /> - <dimension name="Client IP" value="@(context.Request.IpAddress)" /> <dimension name="API ID" /> </azure-openai-emit-token-metric> </inbound> |
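As a rough illustration of where these token counts come from, the hedged sketch below calls a chat-completions operation through an API Management gateway that has this policy in its inbound section, then prints the usage fields that correspond to the Prompt Tokens, Completion Tokens, and Total Tokens metrics. The gateway URL, API path prefix, deployment name, and subscription key are assumptions; the actual path depends on how the Azure OpenAI API was imported into API Management.

```python
import requests

# Hypothetical values: an API Management gateway fronting an Azure OpenAI API
# with the azure-openai-emit-token-metric policy applied to its inbound section.
APIM_GATEWAY = "https://contoso-apim.azure-api.net"   # assumption
DEPLOYMENT = "gpt-4o-mini"                            # assumption
SUBSCRIPTION_KEY = "<apim-subscription-key>"          # assumption

url = f"{APIM_GATEWAY}/openai/deployments/{DEPLOYMENT}/chat/completions"
resp = requests.post(
    url,
    params={"api-version": "2024-02-01"},
    headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
    json={"messages": [{"role": "user", "content": "Hello"}]},
    timeout=30,
)
resp.raise_for_status()

# The usage block mirrors the three token counts the policy reports to Application Insights.
usage = resp.json()["usage"]
print(usage["prompt_tokens"], usage["completion_tokens"], usage["total_tokens"])
```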
api-management | Emit Metric Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/emit-metric-policy.md | -> [!NOTE] -> * Custom metrics are a [preview feature](/azure/azure-monitor/essentials/metrics-custom-overview) of Azure Monitor and subject to [limitations](/azure/azure-monitor/essentials/metrics-custom-overview#design-limitations-and-considerations). -> * For more information about the API Management data added to Application Insights, see [How to integrate Azure API Management with Azure Application Insights](./api-management-howto-app-insights.md#what-data-is-added-to-application-insights). - [!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)] +## Limits for custom metrics +++## Prerequisites ++* Your API Management instance must be integrated with Application insights. For more information, see [How to integrate Azure API Management with Azure Application Insights](./api-management-howto-app-insights.md). +* Enable Application Insights logging for your APIs. +* Enable custom metrics with dimensions in Application Insights. For more information, see [Emit custom metrics](api-management-howto-app-insights.md#emit-custom-metrics). + ## Policy statement ```xml The `emit-metric` policy sends custom metrics in the specified format to Applica * You can configure at most 10 custom dimensions for this policy. -* Invoking the `emit-metric` policy counts toward the usage limits for custom metrics per region in a subscription. [Learn more](api-management-howto-app-insights.md#limits-for-custom-metrics) - ## Example -The following example sends a custom metric to count the number of API requests along with user ID, client IP, and API ID as custom dimensions. +The following example sends a custom metric to count the number of API requests along with API ID as a custom dimension. ```xml <policies> <inbound> <emit-metric name="Request" value="1" namespace="my-metrics"> - <dimension name="User ID" /> - <dimension name="Client IP" value="@(context.Request.IpAddress)" /> <dimension name="API ID" /> </emit-metric> </inbound> |
api-management | Llm Emit Token Metric Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/llm-emit-token-metric-policy.md | -The `llm-emit-token-metric` policy sends metrics to Application Insights about consumption of large language model (LLM) tokens through LLM APIs. Token count metrics include: Total Tokens, Prompt Tokens, and Completion Tokens. +The `llm-emit-token-metric` policy sends custom metrics to Application Insights about consumption of large language model (LLM) tokens through LLM APIs. Token count metrics include: Total Tokens, Prompt Tokens, and Completion Tokens. > [!NOTE] > Currently, this policy is in preview. The `llm-emit-token-metric` policy sends metrics to Application Insights about c [!INCLUDE [api-management-llm-models](../../includes/api-management-llm-models.md)] +## Limits for custom metrics ++ ## Prerequisites * One or more LLM APIs must be added to your API Management instance. The `llm-emit-token-metric` policy sends metrics to Application Insights about c | Attribute | Description | Required | Default value | | | -- | | -- | | namespace | A string. Namespace of metric. Policy expressions aren't allowed. | No | API Management |-| value | Value of metric expressed as a double. Policy expressions are allowed. | No | 1 | ## Elements The `llm-emit-token-metric` policy sends metrics to Application Insights about c ## Example -The following example sends LLM token count metrics to Application Insights along with User ID, Client IP, and API ID as dimensions. +The following example sends LLM token count metrics to Application Insights along with API ID as a custom dimension. ```xml <policies> <inbound> <llm-emit-token-metric namespace="MyLLM"> - <dimension name="User ID" /> - <dimension name="Client IP" value="@(context.Request.IpAddress)" /> <dimension name="API ID" /> </llm-emit-token-metric> </inbound> |
app-service | Deploy Intelligent Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-intelligent-apps.md | zone_pivot_groups: app-service-openai :::zone pivot="openai-java" [!INCLUDE [deploy-intelligent-apps-linux-java-pivot.md](includes/deploy-intelligent-apps/deploy-intelligent-apps-linux-java-pivot.md)] ::: zone-end+ |
application-gateway | Application Gateway Backend Health Troubleshooting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-backend-health-troubleshooting.md | The status retrieved by any of these methods can be any one of the following sta The Application Gateway forwards a request to a server from the backend pool if its status is healthy. If all the servers in a backend pool are unhealthy or unknown, the clients could encounter problems accessing the backend application. Read further to understand the different messages reported by Backend Health, their causes, and their resolution. > [!NOTE]-> If your user doesn't have permission to see backend health statuses, `No results.` will be shown. +> If your user doesn't have permission to see backend health statuses, the output `No results.` is displayed. ## Backend health status: Unhealthy The message displayed in the **Details** column provides more detailed insights **Message:** Time taken by the backend to respond to application gateway's health probe is more than the timeout threshold in the probe setting. -**Cause:** After Application Gateway sends an HTTP(S) probe request to the -backend server, it waits for a response from the backend server for a configured period. If the backend server doesn't -respond within the configured period (the timeout value), it's marked as Unhealthy until it starts responding within the configured timeout period again. +**Cause:** After Application Gateway sends an HTTP(S) probe request to the backend server, it waits for a response from the backend server for a configured period. If the backend server doesn’t respond within this period (the timeout value), it is marked as Unhealthy until it responds within the configured timeout period again. **Resolution:** Check why the backend server or application isn't responding within the configured timeout period, and also check the application dependencies. For example, check whether the database has any issues that might trigger a delay in response. If you're aware of the application's behavior and it should respond only after the timeout value, increase the timeout value from the custom probe settings. You must have a custom probe to change the timeout value. For information about how to configure a custom probe, [see the documentation page](./application-gateway-create-probe-portal.md). To increase the timeout value, follow these steps: 1. Access the backend server directly and check the time taken for the server to respond on that page. You can use any tool to access the backend server, including a browser using developer tools.-2. After you've figured out the time taken for the application to respond, select the **Health Probes** tab, then select the probe that's associated with your HTTP settings. +2. After you figure out the time taken for the application to respond, select the **Health Probes** tab, then select the probe associated with your HTTP settings. 3. Enter any timeout value that's greater than the application response time, in seconds. 4. Save the custom probe settings and check whether the backend health shows as Healthy now. ### DNS resolution error -**Message:** Application Gateway could not create a probe for this backend. This usually happens when the FQDN of the backend has not been entered correctly.  +**Message:** Application Gateway could not create a probe for this backend. This usually happens when the FQDN of the backend is not entered correctly.  
**Cause:** If the backend pool is of type IP Address, FQDN (fully qualified domain name) or App Service, Application Gateway resolves to the IP address of the FQDN entered through DNS (custom or Azure default). The application gateway then tries to connect to the server on the TCP port mentioned in the HTTP settings. But if this message is displayed, it suggests that Application Gateway couldn't successfully resolve the IP address of the FQDN entered. To increase the timeout value, follow these steps: 1. Verify that the FQDN entered in the backend pool is correct and that it's a public domain, then try to resolve it from your local machine. 2. If you can resolve the IP address, there might be something wrong with the DNS configuration in the virtual network. 3. Check whether the virtual network is configured with a custom DNS server. If it is, check the DNS server about why it can't resolve to the IP address of the specified FQDN.-4. If you're using Azure default DNS, check with your domain name registrar about whether proper A record or CNAME record mapping has been completed. +4. If you're using Azure default DNS, verify with your domain name registrar that proper A record or CNAME record mapping is complete. 5. If the domain is private or internal, try to resolve it from a VM in the same virtual network. If you can resolve it, restart Application Gateway and check again. To restart Application Gateway, you need to [stop](/powershell/module/az.network/stop-azapplicationgateway) and [start](/powershell/module/az.network/start-azapplicationgateway) by using the PowerShell commands described in these linked resources. ### TCP connect error **Message:** Application Gateway could not connect to the backend. Check that the backend responds on the port used for the probe. Also check whether any NSG/UDR/Firewall is blocking access to the Ip and port of this backend. -**Cause:** After the DNS resolution phase, Application Gateway tries to connect to the backend server on the TCP port that's configured in the HTTP settings. If Application Gateway can't establish a TCP session on the port specified, the probe is marked as Unhealthy with this message. +**Cause:** After the DNS resolution phase, Application Gateway tries to connect to the backend server on the TCP port configured in the HTTP settings. If Application Gateway can't establish a TCP session on the port specified, the probe is marked as Unhealthy with this message. **Solution:** If you receive this error, follow these steps: To increase the timeout value, follow these steps: a. Open a command prompt (Win+R -> cmd), enter **netstat**, and select Enter. - b. Check whether the server is listening on the port that's configured. For example: + b. Check whether the server is listening on the configured port. For example: ``` Proto Local Address Foreign Address State PID To increase the timeout value, follow these steps: **Message:** Status code of the backend's HTTP response did not match the probe setting. Expected:{HTTPStatusCode0} Received:{HTTPStatusCode1}. -**Cause:** After the TCP connection has been established and a TLS handshake is done (if TLS is enabled), Application Gateway will send the probe as an HTTP GET request to the backend server. As described earlier, the default probe will be to `<protocol>://127.0.0.1:<port>/`, and it considers response status codes in the range 200 through 399 as Healthy. If the server returns any other status code, it is marked as Unhealthy with this message. 
+**Cause:** After the TCP connection is established and a TLS handshake is done (if TLS is enabled), Application Gateway sends the probe as an HTTP GET request to the backend server. As described earlier, the default probe is set to `<protocol>://127.0.0.1:<port>/`, and it considers response status codes in the range 200 through 399 as Healthy. If the server returns any other status code, it is marked as Unhealthy with this message. **Solution:** Depending on the backend server's response code, you can take the following steps. A few of the common status codes are listed here: To increase the timeout value, follow these steps: | Probe status code mismatch: Received 500 | Internal server error. Check the backend server's health and whether the services are running. | | Probe status code mismatch: Received 503 | Service unavailable. Check the backend server's health and whether the services are running. | -Or, if you think the response is legitimate and you want Application Gateway to accept other status codes as Healthy, you can create a custom probe. This approach is useful in situations where the backend website needs authentication. Because the probe requests don't carry any user credentials, they will fail, and an HTTP 401 status code will be returned by the backend server. +Or, if you think the response is legitimate and you want Application Gateway to accept other status codes as Healthy, you can create a custom probe. This approach is useful in situations where the backend website needs authentication. Because the probe requests don't carry any user credentials, they will fail, and an HTTP 401 status code is returned by the backend server. To create a custom probe, follow [these steps](./application-gateway-create-probe-portal.md). Learn more about [Application Gateway probe matching](./application-gateway-prob (For V1) The Common Name (CN) of the backend certificate doesn’t match. **Cause:**-(For V2) This occurs when you have selected HTTPS protocol in the backend setting, and neither the Custom Probe’s nor Backend Setting’s hostname (in that order) matches the Common Name (CN) of the backend server’s certificate.</br> +(For V2) This occurs when you select HTTPS protocol in the backend setting, and neither the Custom Probe’s nor Backend Setting’s hostname (in that order) matches the Common Name (CN) of the backend server’s certificate.</br> (For V1) The FQDN of the backend pool target doesn’t match the Common Name (CN) of the backend server’s certificate. **Solution:** The hostname information is critical for backend HTTPS connection since that value is used to set the Server Name Indication (SNI) during TLS handshake. You can fix this problem in the following ways based on your gateway’s configuration. For V2, For V1, verify the backend pool target's FQDN is same the Common Name (CN). -**Tips:** To determine the Common Name (CN) of the backend server(s)’ certificate, you can use any of these methods. Also note, as per [**RFC 6125**](https://www.rfc-editor.org/rfc/rfc6125#section-6.4.4) if a SAN exists the SNI verification is done only against that field. The common name field is matched if there's no SAN in the certificate. +**Tips:** To determine the Common Name (CN) of the backend server certificate, you can use any of these methods. Also note, as per [**RFC 6125**](https://www.rfc-editor.org/rfc/rfc6125#section-6.4.4) if a SAN exists the SNI verification is done only against that field. The common name field is matched if there's no SAN in the certificate. 
* By using browser or any client:-Access the backend server directly (not through Application Gateway) and click on the certificate padlock in the address bar to view the certificate details. You will find it under the “Issued To” section. +Access the backend server directly (not through Application Gateway) and click on the certificate padlock in the address bar to view the certificate details. You can find it under the “Issued To” section. [ ![Screenshot that shows certificate details in a browser.](./media/application-gateway-backend-health-troubleshooting/browser-cert.png) ](./media/application-gateway-backend-health-troubleshooting/browser-cert.png#lightbox) * By logging into the backend server (Windows): Run this OpenSSL command by specifying the right certificate filename ` openssl **Cause:** An expired certificate is deemed unsafe and hence the application gateway marks the backend server with an expired certificate as unhealthy. -**Solution:** The solution depends on which part of the certificate chain has expired on the backend server. +**Solution:** The solution depends on which part of the certificate chain expired on the backend server. For V2 SKU,-* Expired Leaf (also known as Domain or Server) certificate – Renew the server certificate with certificate provider and install the new certificate on the backend server. Ensure that you have installed the complete certificate chain comprising of `Leaf (topmost) > Intermediate(s) > Root`. Based on the type of Certificate Authority (CA), you may take the following actions on your gateway. +* Expired Leaf (also known as Domain or Server) certificate – Renew the server certificate with certificate provider and install the new certificate on the backend server. Ensure that you install the complete certificate chain comprised of `Leaf (topmost) > Intermediate(s) > Root`. Based on the type of Certificate Authority (CA), you may take the following actions on your gateway. * Publicly known CA: If the certificate issuer is a well-known CA, you need not take any action on the application gateway. * Private CA: If the leaf certificate is issued by a private CA, you need to check if the signing Root CA certificate has changed. In such cases, you must upload the new Root CA certificate (.CER) to the associated Backend setting of your gateway. -* Expired Intermediate or Root certificate – Typically, these certificates have relatively extended validity periods (a decade or two). When Root/Intermediate certificate expires, we recommend you check with your certificate provider for the renewed certificate files. Ensure you have installed this updated and complete certificate chain comprising `Leaf (topmost) > Intermediate(s) > Root` on the backend server. +* Expired Intermediate or Root certificate – Typically, these certificates have relatively extended validity periods (a decade or two). When Root/Intermediate certificate expires, we recommend you check with your certificate provider for the renewed certificate files. Ensure you install this updated and complete certificate chain comprising `Leaf (topmost) > Intermediate(s) > Root` on the backend server. * If the Root certificate remains unchanged or if the issuer is a well-known CA, you need NOT take any action on the application gateway. * When using a Private CA, if the Root CA certificate itself or the root of the renewed Intermediate certificate has changed, you must upload the new Root certificate to the application gateway’s Backend Setting. 
For V1 SKU, **Solution:** An Intermediate certificate is used to sign the Leaf certificate and is thus needed to complete the chain. Check with your Certificate Authority (CA) for the necessary Intermediate certificate(s) and install them on your backend server. This chain must start with the Leaf Certificate, then the Intermediate certificate(s), and finally, the Root CA certificate. We recommend installing the complete chain on the backend server, including the Root CA certificate. For reference, look at the certificate chain example under [Leaf must be topmost in chain](application-gateway-backend-health-troubleshooting.md#leaf-must-be-topmost-in-chain). > [!NOTE] -> A self-signed certificate which is NOT a Certificate Authority will also result in the same error. This is because application gateway considers such self-signed certificate as "Leaf" certificate and looks for its signing Intermediate certificate. You can follow this article to correctly [generate a self-signed certificate](./self-signed-certificates.md). +> A self-signed certificate which is NOT a Certificate Authority also results in the same error. This is because application gateway considers such self-signed certificate as "Leaf" certificate and looks for its signing Intermediate certificate. You can follow this article to correctly [generate a self-signed certificate](./self-signed-certificates.md). These images show the difference between the self-signed certificates. [ ![Screenshot showing difference between self-signed certificates.](./media/application-gateway-backend-health-troubleshooting/self-signed-types.png) ](./media/application-gateway-backend-health-troubleshooting/self-signed-types.png#lightbox) These images show the difference between the self-signed certificates. **Tips:** To identify and download the root certificate, you can use any of these methods. * Using a browser: Access the backend server directly (not through Application Gateway) and click on the certificate padlock in the address bar to view the certificate details. - 1. Choose the root certificate in the chain and click on Export. By default, this will be a .CRT file. + 1. Choose the root certificate in the chain and click on Export. By default, this is a .CRT file. 2. Open that .CRT file. 3. Go to the Details tab and click on “Copy to File”, 4. On Certificate Export Wizard page, click Next, OR </br> Check and fix the DNS servers to ensure it's serving a response for the given FDQN's DNS lookup. You must also check if the DNS servers are reachable through your application gateway's Virtual Network. ### Other reasons-If the backend health is shown as Unknown, the portal view will resemble the following screenshot: +If the backend health is shown as Unknown, the portal view resembles the following screenshot: ![Application Gateway backend health - Unknown](./media/application-gateway-backend-health-troubleshooting/appgwunknown.png) This behavior can occur for one or more of the following reasons: 1. Check whether your NSG is blocking access to the ports 65503-65534 (v1 SKU) or 65200-65535 (v2 SKU) from **Internet**: a. On the Application Gateway **Overview** tab, select the **Virtual Network/Subnet** link.- b. On the **Subnets** tab of your virtual network, select the subnet where Application Gateway has been deployed. + b. On the **Subnets** tab of your virtual network, select the subnet where Application Gateway is deployed. c. Check whether any NSG is configured. d. 
If an NSG is configured, search for that NSG resource on the **Search** tab or under **All resources**. e. In the **Inbound Rules** section, add an inbound rule to allow destination port range 65503-65534 for v1 SKU or 65200-65535 v2 SKU with the **Source** set as **GatewayManager** service tag. This behavior can occur for one or more of the following reasons: a. Follow steps 1a and 1b to determine your subnet. b. Check to see if a UDR is configured. If there is, search for the resource on the search bar or under **All resources**.- c. Check to see if there are any default routes (0.0.0.0/0) with the next hop not set as **Internet**. If the setting is either **Virtual Appliance** or **Virtual Network Gateway**, you must make sure that your virtual appliance, or the on-premises device, can properly route the packet back to the Internet destination without modifying the packet. If probes are routed through a virtual appliance and modified, the backend resource will display a **200** status code and the Application Gateway health status can display as **Unknown**. This doesn't indicate an error. Traffic should still be routing through the Application Gateway without issue. + c. Check to see if there are any default routes (0.0.0.0/0) with the next hop not set as **Internet**. If the setting is either **Virtual Appliance** or **Virtual Network Gateway**, you must make sure that your virtual appliance, or the on-premises device, can properly route the packet back to the Internet destination without modifying the packet. If probes are routed through a virtual appliance and modified, the backend resource displays a **200** status code and the Application Gateway health status can display as **Unknown**. This doesn't indicate an error. Traffic should still be routing through the Application Gateway without issue. d. Otherwise, change the next hop to **Internet**, select **Save**, and verify the backend health. 3. Default route advertised by the ExpressRoute/VPN connection to the virtual network over BGP (Border Gateway Protocol): This behavior can occur for one or more of the following reasons: Address Prefix: Backend pool subnet<br> Next hop: Azure Firewall private IP address +> [!NOTE] +> If the application gateway is not able to access the CRL endpoints, it marks the backend health status as "unknown", causing fast update failures. To prevent these issues, check that your application gateway subnet is able to access `crl.microsoft.com` and `crl3.digicert.com`. This can be done by configuring your Network Security Groups to allow traffic to the CRL endpoints. + ## Next steps Learn more about [Application Gateway diagnostics and logging](./application-gateway-diagnostics.md). |
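For the "TCP connect error" case covered in this entry, a quick way to confirm that a backend pool member is listening on the configured port is a plain TCP connect test, run from a machine in the same virtual network as the gateway. The host and port below are placeholders; this is only a reachability sketch, not a substitute for the gateway's own probe.

```python
import socket

# Minimal reachability check, analogous to the probe's TCP connect step described above.
# Host and port are placeholders for your backend server and the backend setting's port.
BACKEND_HOST = "10.0.1.4"   # assumption: a backend pool member
BACKEND_PORT = 443          # assumption: the port configured in the backend setting

try:
    with socket.create_connection((BACKEND_HOST, BACKEND_PORT), timeout=5):
        print(f"TCP connect to {BACKEND_HOST}:{BACKEND_PORT} succeeded")
except OSError as exc:
    # Mirrors the "TCP connect error" probe result: check NSGs, UDRs, firewalls,
    # and whether the service is actually listening (for example, with netstat).
    print(f"TCP connect failed: {exc}")
```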
application-gateway | Configuration Http Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-http-settings.md | The application gateway routes traffic to the backend servers by using the confi ## Cookie-based affinity -Azure Application Gateway uses gateway-managed cookies for maintaining user sessions. When a user sends the first request to Application Gateway, it sets an affinity cookie in the response with a hash value which contains the session details, so that the subsequent requests carrying the affinity cookie will be routed to the same backend server for maintaining stickiness. +Azure Application Gateway uses gateway-managed cookies for maintaining user sessions. When a user sends the first request to Application Gateway, it sets an affinity cookie in the response with a hash value which contains the session details, so that the subsequent requests carrying the affinity cookie are routed to the same backend server for maintaining stickiness. This feature is useful when you want to keep a user session on the same server and when session state is saved locally on the server for a user session. If the application can't handle cookie-based affinity, you can't use this feature. To use it, make sure that the clients support cookies. > [!NOTE] The [Chromium browser](https://www.chromium.org/Home) [v80 update](https://chrom To support this change, starting February 17 2020, Application Gateway (all the SKU types) will inject another cookie called *ApplicationGatewayAffinityCORS* in addition to the existing *ApplicationGatewayAffinity* cookie. The *ApplicationGatewayAffinityCORS* cookie has two more attributes added to it (*"SameSite=None; Secure"*) so that sticky sessions are maintained even for cross-origin requests. -Note that the default affinity cookie name is *ApplicationGatewayAffinity* and you can change it. In case you're using a custom affinity cookie name, an additional cookie is added with CORS as suffix. For example, *CustomCookieNameCORS*. +Note that the default affinity cookie name is *ApplicationGatewayAffinity* and you can change it. If you deploy multiple application gateway instances in the same network topology, you must set unique cookie names for each instance. If you're using a custom affinity cookie name, an additional cookie is added with `CORS` as suffix. For example: *CustomCookieNameCORS*. > [!NOTE] > If the attribute *SameSite=None* is set, it is mandatory that the cookie also contains the *Secure* flag, and must be sent over HTTPS. If session affinity is required over CORS, you must migrate your workload to HTTPS. You can apply this setting to all backend pool members by enabling Connection Dr |Default value when Connection Draining is not enabled in Backend Setting| 30 seconds | |User-defined value when Connection Draining is enabled in Backend Setting | 1 to 3600 seconds | -The only exception to this are requests bound for deregistering instances because of gateway-managed session affinity and will continue to be forwarded to the deregistering instances. +The only exception to this are requests bound for deregistering instances because of gateway-managed session affinity. These requests continue to be forwarded to the deregistering instances. 
## Protocol This setting specifies the port where the backend servers listen to traffic from ## Trusted root certificate -If you select HTTPS as the backend protocol, the Application Gateway requires a trusted root certificate to trust the backend pool for end-to-end SSL. By default, the **Use well known CA certificate** option is set to **No**. If you plan to use a self-signed certificate, or a certificate signed by an internal Certificate Authority, then you must provide the Application Gateway the matching public certificate that the backend pool will be using. This certificate must be uploaded directly to the Application Gateway in .CER format. +If you select HTTPS as the backend protocol, the Application Gateway requires a trusted root certificate to trust the backend pool for end-to-end SSL. By default, the **Use well known CA certificate** option is set to **No**. If you plan to use a self-signed certificate, or a certificate signed by an internal Certificate Authority, then you must provide the Application Gateway the matching public certificate used by the backend pool. This certificate must be uploaded directly to the Application Gateway in .CER format. If you plan to use a certificate on the backend pool that is signed by a trusted public Certificate Authority, then you can set the **Use well known CA certificate** option to **Yes** and skip uploading a public certificate. This setting associates a [custom probe](application-gateway-probe-overview.md#c ## Configuring the host name -Application Gateway allows for the connection established to the backend to use a *different* hostname than the one used by the client to connect to Application Gateway. While this configuration can be useful in some cases, overriding the hostname to be different between the client and application gateway and application gateway to backend target, should be done with care. +Application Gateway allows for the connection established to the backend to use a *different* hostname than the one used by the client to connect to Application Gateway. While this configuration can be useful in some cases, exercise caution when overriding the hostname such that it is different between the application gateway and the client compared to the backend target. In production, it is recommended to keep the hostname used by the client towards the application gateway as the same hostname used by the application gateway to the backend target. This avoids potential issues with absolute URLs, redirect URLs, and host-bound cookies. |
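A minimal sketch of the cookie behavior described in this entry: with cookie-based affinity enabled, the first response sets the gateway-managed affinity cookie and a cookie-aware client resends it on later requests, keeping the session on the same backend server. The gateway URL is a placeholder, and the cookie names assume the default ApplicationGatewayAffinity naming discussed above.

```python
import requests

# Hypothetical endpoint behind an Application Gateway with cookie-based affinity enabled.
GATEWAY_URL = "https://app.contoso.com/"  # assumption

with requests.Session() as session:
    first = session.get(GATEWAY_URL, timeout=10)

    # With affinity enabled, the gateway sets ApplicationGatewayAffinity (and, for
    # cross-origin scenarios, ApplicationGatewayAffinityCORS) on the first response.
    affinity_cookies = {
        name: value
        for name, value in session.cookies.items()
        if "ApplicationGatewayAffinity" in name
    }
    print(affinity_cookies)

    # The Session resends those cookies automatically, so this request should be
    # routed to the same backend server ("stickiness").
    second = session.get(GATEWAY_URL, timeout=10)
    print(second.status_code)
```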
application-gateway | Powershell Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/powershell-samples.md | - Title: Azure PowerShell examples for Azure Application Gateway -description: This article has links to Azure PowerShell examples so you can quickly deploy Azure Application Gateway configured in various ways. ----- Previously updated : 11/16/2019---# Azure PowerShell examples for Azure Application Gateway (AG) --The following table includes links to Azure PowerShell script examples for Azure Application Gateway. --| Example | Description | -|-- | -- | -| [Manage web traffic](./scripts/create-vmss-powershell.md) | Creates an Application Gateway and all related resources.| -| [Restrict web traffic](./scripts/create-vmss-waf-powershell.md) | Creates an Application Gateway that restricts traffic using OWASP rules.| -|[WAF v2 custom rules](scripts/waf-custom-rules-powershell.md)|Creates an Application Gateway Web Application Firewall v2 with custom rules.| |
application-gateway | Rewrite Http Headers Url | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/rewrite-http-headers-url.md | description: This article provides an overview of rewriting HTTP headers and URL Previously updated : 09/10/2024 Last updated : 09/30/2024 Application gateway supports the following server variables: | server_port | The port of the server that accepted a request. | | ssl_connection_protocol | The protocol of an established TLS connection. | | ssl_enabled | "On" if the connection operates in TLS mode. Otherwise, an empty string. |-| uri_path | Identifies the specific resource in the host that the web client wants to access. This is the part of the request URI without the arguments. Example: In the request `http://contoso.com:8080/article.aspx?id=123&title=fabrikam`, uri_path value is `/article.aspx` | +| uri_path | Identifies the specific resource in the host that the web client wants to access. The variable refers to the original URL path, prior to any manipulation. This is the part of the request URI without the arguments. For example, in the request `http://contoso.com:8080/article.aspx?id=123&title=fabrikam`, the uri_path value is `/article.aspx`. | ### Mutual authentication server variables |
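The uri_path description above can be checked with a few lines of standard-library Python; this sketch simply re-derives the entry's own example URL, splitting the path (what uri_path exposes) from the query-string arguments.

```python
from urllib.parse import urlparse

# Reproduces the uri_path example from the entry above: the variable holds the
# request path without the query-string arguments.
request_url = "http://contoso.com:8080/article.aspx?id=123&title=fabrikam"

parsed = urlparse(request_url)
print(parsed.path)    # /article.aspx -> what the uri_path server variable exposes
print(parsed.query)   # id=123&title=fabrikam -> the arguments, not part of uri_path
```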
azure-government | Compare Azure Government Global Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compare-azure-government-global-azure.md | Table below lists API endpoints in Azure vs. Azure Government for accessing and ||API Management Portal|portal.azure-api.net|portal.azure-api.us|| ||App Configuration|azconfig.io|azconfig.azure.us|| ||App Service|azurewebsites.net|azurewebsites.us||-||Azure AI Search|search.windows.net|search.windows.us|| +||Azure AI Search|search.windows.net|search.azure.us|| ||Azure Functions|azurewebsites.net|azurewebsites.us|| ## Service availability |
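A small, hedged sketch of how the endpoint differences in this entry translate into code: a lookup of the DNS suffixes quoted above plus a hypothetical helper that builds an Azure AI Search endpoint for either cloud. It covers only the rows shown here and is not an exhaustive mapping.

```python
# Illustrative lookup of service DNS suffixes from the endpoint table above.
ENDPOINT_SUFFIXES = {
    "public": {
        "app_configuration": "azconfig.io",
        "app_service": "azurewebsites.net",
        "azure_ai_search": "search.windows.net",
        "azure_functions": "azurewebsites.net",
    },
    "government": {
        "app_configuration": "azconfig.azure.us",
        "app_service": "azurewebsites.us",
        "azure_ai_search": "search.azure.us",
        "azure_functions": "azurewebsites.us",
    },
}

def search_endpoint(service_name: str, cloud: str = "public") -> str:
    """Build an Azure AI Search endpoint URL for the given cloud (hypothetical helper)."""
    return f"https://{service_name}.{ENDPOINT_SUFFIXES[cloud]['azure_ai_search']}"

print(search_endpoint("contoso-search", "government"))  # https://contoso-search.search.azure.us
```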
azure-health-insights | Deploy Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/deploy-portal.md | Once the Azure AI services account is successfully created, configure private en To get started using Azure AI Health Insights, get started with one of the following models: ->[!div class="nextstepaction"] -> [Onco-Phenotype](oncophenotype/index.yml) >[!div class="nextstepaction"] > [Trial Matcher](trial-matcher/index.yml) |
azure-health-insights | Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/oncophenotype/faq.md | - Title: Onco-Phenotype frequently asked questions- -description: Onco-Phenotype frequently asked questions ----- Previously updated : 02/02/2023-----# Onco-Phenotype Frequently Asked Questions --- What does inference value `None` mean?-- `None` implies that the model couldn't find enough relevant information to make a meaningful prediction. --- How is the `description` property populated for tumor site inference?-- It is populated on ICD-O-3 SEER Site/Histology Validation List [here](https://seer.cancer.gov/icd-o-3/). --- Do you support behavior code along with histology code?-- No, only four digit histology code is supported. --- What does inference value `N+` mean for clinical/pathologic N category? Why don't you have `N1, N2, N3` inference values?-- `N+` means there's involvement of regional lymph nodes without explicitly mentioning the extent of spread. Microsoft trained the models to classify whether or not there's regional lymph node involvement but not the extent of spread and hence `N1, N2, N3` inference values aren't supported. --- Do you support subcategories for clinical/pathologic TNM categories?-- No, subcategories or isolated tumor cell modifiers aren't supported. For instance, 'T3 a' would be predicted as T3, and N0(i+) would be predicted as N0. --- Do you have plans to support I-IV stage grouping?-- No, Microsoft doesn't have any plans to support I-IV stage grouping at this time. --- Do you check if the tumor site and histology inference values are a valid combination?-- No, the OncoPhenotype API doesn't validate if the tumor site and histology inference values are a valid combination. --- Are the inference values exhaustive for tumor site and histology?-- No, the inference values are only as exhaustive as the training data set labels. |
azure-health-insights | Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/oncophenotype/get-started.md | - Title: Use Onco-Phenotype - -description: This article describes how to use the Onco-Phenotype ----- Previously updated : 05/05/2024-----# Quickstart: Use the Onco-Phenotype model --This quickstart provides an overview on how to use the Onco-Phenotype. --## Prerequisites -To use the Onco-Phenotype model, you must have an Azure AI services account created. If you haven't already created an Azure AI services account, see [Deploy Azure AI Health Insights using the Azure portal.](../deploy-portal.md) --Once deployment is complete, you use the Azure portal to navigate to the newly created Azure AI services account to see the details, including your Service URL. The Service URL to access your service is: https://```YOUR-NAME```.cognitiveservices.azure.com/. ---## Example request and results --To send an API request, you need your Azure AI services account endpoint and key. You can also find a full view of the [request parameters here](/rest/api/cognitiveservices/healthinsights/onco-phenotype/create-job). --![[Screenshot of the Keys and Endpoints for the Onco-Phenotype.](../media/keys-and-endpoints.png)](../media/keys-and-endpoints.png#lightbox) --> [!IMPORTANT] -> Prediction is performed upon receipt of the API request and the results will be returned asynchronously. The API results are available for 24 hours from the time the request was ingested, and is indicated in the response. After this time period, the results are purged and are no longer available for retrieval. --## Example request --### Starting with a request that contains a case --You can use the data from this example, to test your first request to the Onco-Phenotype model. 
--```url -POST http://{cognitive-services-account-endpoint}/healthinsights/oncophenotype/jobs?api-version=2023-03-01-preview -Content-Type: application/json -Ocp-Apim-Subscription-Key: {cognitive-services-account-key} -``` -```json -{ - "configuration": { - "checkForCancerCase": true, - "includeEvidence": false - }, - "patients": [ - { - "id": "patient1", - "data": [ - { - "kind": "note", - "clinicalType": "pathology", - "id": "document1", - "language": "en", - "createdDateTime": "2022-01-01T00:00:00", - "content": { - "sourceType": "inline", - "value": "Laterality: Left \n Tumor type present: Invasive duct carcinoma; duct carcinoma in situ \n Tumor site: Upper inner quadrant \n Invasive carcinoma \n Histologic type: Ductal \n Size of invasive component: 0.9 cm \n Histologic Grade - Nottingham combined histologic score: 1 out of 3 \n In situ carcinoma (DCIS) \n Histologic type of DCIS: Cribriform and solid \n Necrosis in DCIS: Yes \n DCIS component of invasive carcinoma: Extensive \n" - } - } - ] - } - ] -} -``` -### Evaluating a response that contains a case --You get the status of the job by sending a request to the Onco-Phenotype model and adding the job ID from the initial request in the URL, as seen in the code snippet: --```url -GET http://{cognitive-services-account-endpoint}/healthinsights/oncophenotype/jobs/385903b2-ab21-4f9e-a011-43b01f78f04e?api-version=2023-03-01-preview -``` --```json -{ - "results": { - "patients": [ - { - "id": "patient1", - "inferences": [ - { - "kind": "tumorSite", - "value": "C50.2", - "description": "BREAST", - "confidenceScore": 0.9214 - }, - { - "kind": "histology", - "value": "8500", - "confidenceScore": 0.9973 - }, - { - "kind": "clinicalStageT", - "value": "T1", - "confidenceScore": 0.9956 - }, - { - "kind": "clinicalStageN", - "value": "N0", - "confidenceScore": 0.9931 - }, - { - "kind": "clinicalStageM", - "value": "None", - "confidenceScore": 0.5217 - }, - { - "kind": "pathologicStageT", - "value": "T1", - "confidenceScore": 0.9477 - }, - { - "kind": "pathologicStageN", - "value": "N0", - "confidenceScore": 0.7927 - }, - { - "kind": "pathologicStageM", - "value": "M0", - "confidenceScore": 0.9208 - } - ] - } - ], - "modelVersion": "2023-03-01-preview" - }, - "jobId": "385903b2-ab21-4f9e-a011-43b01f78f04e", - "createdDateTime": "2023-03-08T17:02:46Z", - "expirationDateTime": "2023-03-08T17:19:26Z", - "lastUpdateDateTime": "2023-03-08T17:02:53Z", - "status": "succeeded" -} -``` --You can also find a full view of the [response parameters here](/rest/api/cognitiveservices/healthinsights/onco-phenotype/get-job) ---## Request validation --Every request has required and optional fields that should be provided to the Onco-Phenotype model. 
-When you're sending data to the model, make sure that you take the following properties into account: --Within a request: -- ```patients``` should be set-- ```patients``` should contain at least one entry-- ```id``` in patients entries should be unique--For each patient: -- ```data``` should be set-- ```data``` should contain at least one document of clinical type ```pathology```-- ```id``` in data entries should be unique--For each clinical document within a patient: -- ```createdDateTime``` should be set-- if set, ```language``` should be ```en``` (default is ```en``` if not set)-- ```documentType``` should be set to ```Note```-- ```clinicalType``` should be set to one of ```imaging```, ```pathology```, ```procedure```, ```progress```-- content ```sourceType``` should be set to ```inline```--## Data limits --| **Limit** | **Value** | -| - | -- | -| Maximum # patients per request | 1 | -| Maximum # characters per patient | 50,000 for data[i].content.value all combined | ---## Next steps --To get better insights into the request and responses, you can read more on following pages: -->[!div class="nextstepaction"] -> [Model configuration](model-configuration.md) -->[!div class="nextstepaction"] -> [Inference information](inferences.md) |
azure-health-insights | Inferences | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/oncophenotype/inferences.md | - Title: Onco-Phenotype inference information- -description: This article provides Onco-Phenotype inference information. ----- Previously updated : 05/05/2024-----# Onco-Phenotype inference information --Azure AI Health Insights Onco-Phenotype model was trained with labels that conform to the following standards. -- Tumor site and histology inferences: **WHO ICD-O-3** representation.-- Clinical and pathologic stage TNM category inferences: **American Joint Committee on Cancer (AJCC)'s 7th edition** of the cancer staging manual.--You can find an overview of the response values here: --**Inference type** |**Description** |**Values** --|--|--tumorSite |The tumor site |`None, ICD-O-3 tumor site code (e.g. C34.2)` -histology |The histology code |`None, 4-digit ICD-O-3 histology code` -clinicalStageT |The T category of the clinical stage |`None, T0, Tis, T1, T2, T3, T4` -clinicalStageN |The N category of the clinical stage |`None, N0, N+` -clinicalStageM |The M category of the clinical stage |`None, M0, M1` -pathologicStageT |The T category of the pathologic stage|`None, T0, Tis, T1, T2, T3, T4` -pathologicStageN |The N category of the pathologic stage|`None, N0, N+` -pathologicStageM |The M category of the pathologic stage|`None, M0, M1` ---## Confidence score --Each inference has an attribute called ```confidenceScore``` that expresses the confidence level for the inference value, ranging from 0 to 1. The higher the confidence score is, the more certain the model was about the inference value provided. The inference values should **not** be consumed without human review, no matter how high the confidence score is. --## Importance --When you set the ```includeEvidence``` property to ```true```, each evidence property has an ```importance``` attribute that expresses how important that evidence was to predicting the inference value, ranging from 0 to 1. A higher importance value indicates that the model relied more on that specific evidence. --## Next steps --To get better insights into the request and responses, read more on following page: -->[!div class="nextstepaction"] -> [Model configuration](model-configuration.md) |
azure-health-insights | Model Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/oncophenotype/model-configuration.md | - Title: Onco-Phenotype model configuration- -description: This article provides Onco-Phenotype model configuration information. ----- Previously updated : 05/05/2024-----# Onco-Phenotype model configuration --To interact with the Onco-Phenotype model, you can provide several model configurations parameters that modify the outcome of the responses and reflects the preferences of the user. --> [!NOTE] -> The examples in this article are based on API version: 2023-03-01-preview. For a specific API version, please follow the reference to the REST API to see full description of each API version. ---> [!IMPORTANT] -> Model configuration is applied to ALL the patients within a request. --```json -"configuration": { - "checkForCancerCase": false, - "includeEvidence": false -} -``` --## Case finding ---The Onco-Phenotype model configuration helps you find if any cancer cases exist. The API allows you to explicitly check if a cancer case exists in the provided clinical documents. --**Check for cancer case** |**Did the model find a case?** |**Behavior** -- |--|--true |Yes |Inferences are returned -true |No |No inferences are returned -false |N/A |Inferences are always returned but they aren't meaningful if there's no cancer case. --Set ```checkForCancerCase``` to ```false``` if -- you're sure that the provided clinical documents definitely contain a case-- the model is unable to find a case in a valid scenario--If a case is found in the provided clinical documents and the model is able to find that case, the inferences are always returned. --## Case finding examples --### With case finding --The following example represents a case finding. The ```checkForCancerCase``` is set to ```true``` and ```includeEvidence``` is set to ```false```. Meaning the model checks for a cancer case but not include the evidence. 
--Request that contains a case: -```json -{ - "configuration": { - "checkForCancerCase": true, - "includeEvidence": false - }, - "patients": [ - { - "id": "patient1", - "data": [ - { - "kind": "note", - "clinicalType": "pathology", - "id": "document1", - "language": "en", - "createdDateTime": "2022-01-01T00:00:00", - "content": { - "sourceType": "inline", - "value": "Laterality: Left \n Tumor type present: Invasive duct carcinoma; duct carcinoma in situ \n Tumor site: Upper inner quadrant \n Invasive carcinoma \n Histologic type: Ductal \n Size of invasive component: 0.9 cm \n Histologic Grade - Nottingham combined histologic score: 1 out of 3 \n In situ carcinoma (DCIS) \n Histologic type of DCIS: Cribriform and solid \n Necrosis in DCIS: Yes \n DCIS component of invasive carcinoma: Extensive \n" - } - } - ] - } - ] -} -``` -Response: -```json -{ - "results": { - "patients": [ - { - "id": "patient1", - "inferences": [ - { - "kind": "tumorSite", - "value": "C50.2", - "description": "BREAST", - "confidenceScore": 0.9214 - }, - { - "kind": "histology", - "value": "8500", - "confidenceScore": 0.9973 - }, - { - "kind": "clinicalStageT", - "value": "T1", - "confidenceScore": 0.9956 - }, - { - "kind": "clinicalStageN", - "value": "N0", - "confidenceScore": 0.9931 - }, - { - "kind": "clinicalStageM", - "value": "None", - "confidenceScore": 0.5217 - }, - { - "kind": "pathologicStageT", - "value": "T1", - "confidenceScore": 0.9477 - }, - { - "kind": "pathologicStageN", - "value": "N0", - "confidenceScore": 0.7927 - }, - { - "kind": "pathologicStageM", - "value": "M0", - "confidenceScore": 0.9208 - } - ] - } - ], - "modelVersion": "2023-03-01-preview" - }, - "jobId": "385903b2-ab21-4f9e-a011-43b01f78f04e", - "createdDateTime": "2023-03-08T17:02:46Z", - "expirationDateTime": "2023-03-08T17:19:26Z", - "lastUpdateDateTime": "2023-03-08T17:02:53Z", - "status": "succeeded" -} -``` -Request that does not contain a case: -```json -{ - "configuration": { - "checkForCancerCase": true, - "includeEvidence": false - }, - "patients": [ - { - "id": "patient1", - "data": [ - { - "kind": "note", - "clinicalType": "pathology", - "id": "document1", - "language": "en", - "createdDateTime": "2022-01-01T00:00:00", - "content": { - "sourceType": "inline", - "value": "Test document" - } - } - ] - } - ] -} -``` -Response: -```json -{ - "results": { - "patients": [ - { - "id": "patient1", - "inferences": [] - } - ], - "modelVersion": "2023-03-01-preview" - }, - "jobId": "abe71219-b3ce-4def-9e12-3dc511096c88", - "createdDateTime": "2023-03-08T17:05:23Z", - "expirationDateTime": "2023-03-08T17:22:03Z", - "lastUpdateDateTime": "2023-03-08T17:05:23Z", - "status": "succeeded" -} -``` --## Evidence --Through the model configuration, the API allows you to seek evidence from the provided clinical documents as part of the inferences. --**Include evidence** | **Behavior** -- | --true | Evidence is returned as part of each inference -false | No evidence is returned ---## Evidence example --The following example represents a case finding. The ```checkForCancerCase``` is set to ```true``` and ```includeEvidence``` is set to ```true```. Meaning the model checks for a cancer case and include the evidence. 
--Request that contains a case: -```json -{ - "configuration": { - "checkForCancerCase": true, - "includeEvidence": true - }, - "patients": [ - { - "id": "patient1", - "data": [ - { - "kind": "note", - "clinicalType": "pathology", - "id": "document1", - "language": "en", - "createdDateTime": "2022-01-01T00:00:00", - "content": { - "sourceType": "inline", - "value": "Laterality: Left \n Tumor type present: Invasive duct carcinoma; duct carcinoma in situ \n Tumor site: Upper inner quadrant \n Invasive carcinoma \n Histologic type: Ductal \n Size of invasive component: 0.9 cm \n Histologic Grade - Nottingham combined histologic score: 1 out of 3 \n In situ carcinoma (DCIS) \n Histologic type of DCIS: Cribriform and solid \n Necrosis in DCIS: Yes \n DCIS component of invasive carcinoma: Extensive \n" - } - } - ] - } - ] -} -``` -Response: -```json -{ - "results": { - "patients": [ - { - "id": "patient1", - "inferences": [ - { - "type": "tumorSite", - "evidence": [ - { - "patientDataEvidence": { - "id": "document1", - "text": "Upper inner", - "offset": 108, - "length": 11 - }, - "importance": 0.5563 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "duct", - "offset": 68, - "length": 4 - }, - "importance": 0.0156 - } - ], - "value": "C50.2", - "description": "BREAST", - "confidenceScore": 0.9214 - }, - { - "type": "histology", - "evidence": [ - { - "patientDataEvidence": { - "id": "document1", - "text": "Ductal", - "offset": 174, - "length": 6 - }, - "importance": 0.2937 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "Invasive duct", - "offset": 43, - "length": 13 - }, - "importance": 0.2439 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "invasive", - "offset": 193, - "length": 8 - }, - "importance": 0.1588 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "duct", - "offset": 68, - "length": 4 - }, - "importance": 0.1483 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "solid", - "offset": 368, - "length": 5 - }, - "importance": 0.0694 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "Cribriform", - "offset": 353, - "length": 10 - }, - "importance": 0.043 - } - ], - "value": "8500", - "confidenceScore": 0.9973 - }, - { - "type": "clinicalStageT", - "evidence": [ - { - "patientDataEvidence": { - "id": "document1", - "text": "Invasive duct carcinoma; duct", - "offset": 43, - "length": 29 - }, - "importance": 0.2613 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "invasive", - "offset": 193, - "length": 8 - }, - "importance": 0.1341 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "Laterality: Left", - "offset": 0, - "length": 17 - }, - "importance": 0.0874 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "Invasive", - "offset": 133, - "length": 8 - }, - "importance": 0.0722 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "situ", - "offset": 86, - "length": 4 - }, - "importance": 0.0651 - } - ], - "value": "T1", - "confidenceScore": 0.9956 - }, - { - "type": "clinicalStageN", - "evidence": [ - { - "patientDataEvidence": { - "id": "document1", - "text": "Invasive duct carcinoma; duct carcinoma in situ", - "offset": 43, - "length": 47 - }, - "importance": 0.1529 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "invasive carcinoma: Extensive", - "offset": 423, - "length": 30 - }, - "importance": 0.0782 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "Invasive", - "offset": 133, - 
"length": 8 - }, - "importance": 0.0715 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "Tumor", - "offset": 95, - "length": 5 - }, - "importance": 0.0513 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "Left", - "offset": 13, - "length": 4 - }, - "importance": 0.0325 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "Tumor", - "offset": 22, - "length": 5 - }, - "importance": 0.0174 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "Histologic", - "offset": 156, - "length": 10 - }, - "importance": 0.0066 - } - ], - "value": "N0", - "confidenceScore": 0.9931 - }, - { - "type": "clinicalStageM", - "evidence": [ - { - "patientDataEvidence": { - "id": "document1", - "text": "Laterality: Left", - "offset": 0, - "length": 17 - }, - "importance": 0.1579 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "Invasive duct", - "offset": 43, - "length": 13 - }, - "importance": 0.1493 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "Histologic Grade - Nottingham", - "offset": 225, - "length": 29 - }, - "importance": 0.1038 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "Invasive", - "offset": 133, - "length": 8 - }, - "importance": 0.089 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "duct carcinoma", - "offset": 68, - "length": 14 - }, - "importance": 0.0807 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "invasive", - "offset": 423, - "length": 8 - }, - "importance": 0.057 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "Extensive", - "offset": 444, - "length": 9 - }, - "importance": 0.0494 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "Tumor", - "offset": 22, - "length": 5 - }, - "importance": 0.0311 - } - ], - "value": "None", - "confidenceScore": 0.5217 - }, - { - "type": "pathologicStageT", - "evidence": [ - { - "patientDataEvidence": { - "id": "document1", - "text": "Invasive duct", - "offset": 43, - "length": 13 - }, - "importance": 0.3125 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "Left", - "offset": 13, - "length": 4 - }, - "importance": 0.201 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "invasive", - "offset": 193, - "length": 8 - }, - "importance": 0.1244 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "invasive", - "offset": 423, - "length": 8 - }, - "importance": 0.0961 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "Invasive", - "offset": 133, - "length": 8 - }, - "importance": 0.0623 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "Tumor", - "offset": 22, - "length": 5 - }, - "importance": 0.0583 - } - ], - "value": "T1", - "confidenceScore": 0.9477 - }, - { - "type": "pathologicStageN", - "evidence": [ - { - "patientDataEvidence": { - "id": "document1", - "text": "invasive component:", - "offset": 193, - "length": 19 - }, - "importance": 0.1402 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "Nottingham combined histologic score:", - "offset": 244, - "length": 37 - }, - "importance": 0.1096 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "Invasive carcinoma", - "offset": 133, - "length": 18 - }, - "importance": 0.1067 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "Ductal", - "offset": 174, - "length": 6 - }, - "importance": 0.0896 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "Invasive duct carcinoma;", 
- "offset": 43, - "length": 24 - }, - "importance": 0.0831 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "Histologic", - "offset": 156, - "length": 10 - }, - "importance": 0.0447 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "in situ", - "offset": 83, - "length": 7 - }, - "importance": 0.042 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "Tumor", - "offset": 22, - "length": 5 - }, - "importance": 0.0092 - } - ], - "value": "N0", - "confidenceScore": 0.7927 - }, - { - "type": "pathologicStageM", - "evidence": [ - { - "patientDataEvidence": { - "id": "document1", - "text": "In situ carcinoma (DCIS)", - "offset": 298, - "length": 24 - }, - "importance": 0.1111 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "Nottingham combined histologic", - "offset": 244, - "length": 30 - }, - "importance": 0.0999 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "invasive carcinoma:", - "offset": 423, - "length": 19 - }, - "importance": 0.0787 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "invasive", - "offset": 193, - "length": 8 - }, - "importance": 0.0617 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "Invasive duct carcinoma;", - "offset": 43, - "length": 24 - }, - "importance": 0.0594 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "Tumor", - "offset": 22, - "length": 5 - }, - "importance": 0.0579 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "of DCIS:", - "offset": 343, - "length": 8 - }, - "importance": 0.0483 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "Laterality:", - "offset": 0, - "length": 11 - }, - "importance": 0.0324 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "Invasive carcinoma", - "offset": 133, - "length": 18 - }, - "importance": 0.0269 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "carcinoma in", - "offset": 73, - "length": 12 - }, - "importance": 0.0202 - }, - { - "patientDataEvidence": { - "id": "document1", - "text": "Tumor", - "offset": 95, - "length": 5 - }, - "importance": 0.0112 - } - ], - "value": "M0", - "confidenceScore": 0.9208 - } - ] - } - ], - "modelVersion": "2023-03-01-preview" - }, - "jobId": "5f975105-6f11-4985-b5cd-896215fb5cd3", - "createdDateTime": "2023-03-08T17:10:39Z", - "expirationDateTime": "2023-03-08T17:27:19Z", - "lastUpdateDateTime": "2023-03-08T17:10:41Z", - "status": "succeeded" -} -``` --## Next steps --Refer to the following page to get better insights into the request and responses: -->[!div class="nextstepaction"] -> [Inference information](inferences.md) |
azure-health-insights | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/oncophenotype/overview.md | - Title: What is Onco-Phenotype (Preview)- -description: Enable healthcare organizations to rapidly identify key cancer attributes within their patient populations. ----- Previously updated : 05/05/2024-----# What is Onco-Phenotype (Preview)? -> [!IMPORTANT] -> Onco-Phenotype will be retired on July 31st, 2024, at which time the Onco-Phenotype model will no longer be available. -> -> The Onco-Phenotype model is being retired, but please note that all other models within Azure Health Insights will remain available. The container image for Onco-Phenotype will also be removed from the [Microsoft Artifact Registry](https://mcr.microsoft.com). If you’ve downloaded the image and have it deployed in your own hosting environment, the Onco-phenotype model will cease to function. -> -> If you have Azure AI Health Insights deployed via the Azure Portal, it will continue to work as usual, but the Onco-Phenotype endpoint will no longer be available. As per the standard operating procedure for the Onco-Phenotype model, API results are available for 24 hours from the time the request was created, after which the results are purged. We will honor this commitment up until the model is retired. -> -> We understand that you may have questions regarding this retirement. Please reach out to our Customer Service and Support (CSS) team for assistance. If you don’t currently have CSS support, you can purchase support [here](https://azure.microsoft.com/support/plans/). --Onco-Phenotype is an AI model that’s offered within the context of the broader Azure AI Health Insights. It augments traditional clinical natural language processing tools by enabling healthcare organizations to rapidly identify key cancer attributes within their patient populations. ---> [!IMPORTANT] -> The Onco-Phenotype model is a capability provided “AS IS” and “WITH ALL FAULTS.” The Onco-Phenotype model isn't intended or made available for use as a medical device, clinical support, diagnostic tool, or other technology intended to be used in the diagnosis, cure, mitigation, treatment, or prevention of disease or other conditions, and no license or right is granted by Microsoft to use this capability for such purposes. This capability isn't designed or intended to be implemented or deployed as a substitute for professional medical advice or healthcare opinion, diagnosis, treatment, or the clinical judgment of a healthcare professional, and should not be used as such. The customer is solely responsible for any use of the Onco-Phenotype model. The customer is responsible for ensuring compliance with those license terms, including any geographic or other applicable restrictions. ---## Onco-Phenotype features -The Onco-Phenotype model, available in the Azure AI Health Insights cognitive service as an API, augments traditional clinical natural language processing (NLP) tools by helping healthcare providers rapidly identify key attributes of a cancer within their patient populations with an existing cancer diagnosis. You can use this model to infer tumor site; histology; clinical stage tumor (T), node (N), and metastasis (M) categories; and pathologic stage TNM categories from unstructured clinical documents, along with confidence scores and relevant evidence. --- **Tumor site** refers to the primary tumor location. 
--- **Histology** refers to the cell type of a given tumor.--The following paragraph is adapted from [American Joint Committee on Cancer (AJCC)'s Cancer Staging System](https://www.facs.org/quality-programs/cancer/ajcc/cancer-staging). --Cancer staging describes the severity of an individual's cancer based on the magnitude of the original tumor, as well as on the extent cancer has spread in the body. The Onco-Phenotype model supports inferring two types of staging from the clinical documents - clinical staging and pathologic staging. They’re both expressed in the form of TNM categories, where TNM indicates the extent of the tumor (T), the extent of spread to the lymph nodes (N), and the presence of metastasis (M). --- **Clinical staging** determines the nature and extent of cancer based on the physical examination, imaging tests, and biopsies of affected areas. --- **Pathologic staging** can only be determined from individual patients who have had surgery to remove a tumor or otherwise explore the extent of the cancer. Pathologic staging combines the results of clinical staging (physical exam, imaging test) with surgical results. --The Onco-Phenotype model enables cancer registrars to efficiently abstract cancer patients as it infers the above-mentioned key cancer attributes from unstructured clinical documents along with evidence that are relevant to those attributes. Leveraging this API can reduce the manual time spent combing through large amounts of patient documentation by focusing on the most relevant content in support of a clinician. ---## Language support --The service currently supports the English language. --## Limits and quotas --For the Public Preview, you can select the Free F0 SKU. The official pricing will be released after Public Preview. --## Next steps --Get started using the Onco-Phenotype model: -->[!div class="nextstepaction"] -> [Deploy the service via the portal](../deploy-portal.md) |
azure-health-insights | Patient Info | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/oncophenotype/patient-info.md | - Title: Onco-Phenotype patient info - -description: This article describes how and which patient information can be sent to the Onco-Phenotype model ----- Previously updated : 05/05/2024-----# Onco-Phenotype patient info --The Onco-Phenotype currently can receive patient information in the form of unstructured clinical notes. -The payload should contain a ```patients``` section with one or more objects where the ```data``` property contains one or more JSON object of ```kind``` "note". - --## Example request --In this example, the Onco-Phenotype model receives patient information in the form of unstructured clinical notes. --> [!NOTE] -> The examples in this article are based on API version: 2023-03-01-preview. For a specific API version, please follow the reference to the REST API to see full description of each API version. --```json -{ - "configuration": { - "checkForCancerCase": true, - "includeEvidence": false - }, - "patients": [ - { - "id": "patient1", - "data": [ - { - "kind": "note", - "clinicalType": "pathology", - "id": "document1", - "language": "en", - "createdDateTime": "2022-01-01T00:00:00", - "content": { - "sourceType": "inline", - "value": "Laterality: Left \n Tumor type present: Invasive duct carcinoma; duct carcinoma in situ \n Tumor site: Upper inner quadrant \n Invasive carcinoma \n Histologic type: Ductal \n Size of invasive component: 0.9 cm \n Histologic Grade - Nottingham combined histologic score: 1 out of 3 \n In situ carcinoma (DCIS) \n Histologic type of DCIS: Cribriform and solid \n Necrosis in DCIS: Yes \n DCIS component of invasive carcinoma: Extensive \n" - } - } - ] - } - ] -} -``` ----## Next steps --To get started using the Onco-Phenotype model: -->[!div class="nextstepaction"] -> [Deploy the service via the portal](../deploy-portal.md) |
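Because the payload shape shown above is regular, a small helper can assemble it from plain note text. The sketch below is illustrative only: the helper name and its defaults are hypothetical, and the field names simply mirror the example request in this article.

```python
# Hypothetical helper that wraps unstructured note text into the documented
# patients/data structure with objects of kind "note". Field names mirror the
# example request above; the helper itself is not part of any service SDK.
from datetime import datetime, timezone


def build_onco_phenotype_request(notes, check_for_cancer_case=True, include_evidence=False):
    """notes: iterable of (patient_id, document_id, clinical_type, text) tuples."""
    patients = {}
    for patient_id, document_id, clinical_type, text in notes:
        patients.setdefault(patient_id, []).append(
            {
                "kind": "note",
                "clinicalType": clinical_type,
                "id": document_id,
                "language": "en",
                "createdDateTime": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S"),
                "content": {"sourceType": "inline", "value": text},
            }
        )
    return {
        "configuration": {
            "checkForCancerCase": check_for_cancer_case,
            "includeEvidence": include_evidence,
        },
        "patients": [{"id": pid, "data": data} for pid, data in patients.items()],
    }


request_body = build_onco_phenotype_request(
    [("patient1", "document1", "pathology", "Laterality: Left ...")]
)
```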
azure-health-insights | Support And Help | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/oncophenotype/support-and-help.md | - Title: Onco-Phenotype support and help options- -description: How to obtain help and support for questions and problems when you create applications that use the Onco-Phenotype model ----- Previously updated : 05/05/2024-----# Onco-Phenotype model support and help options --Are you just starting to explore the functionality of the Onco-Phenotype model? Perhaps you're implementing a new feature in your application. Or after using the service, do you have suggestions on how to improve it? Here are options for where you can get support, stay up-to-date, give feedback, and report bugs for Azure AI Health Insights. --## Create an Azure support request --Explore the range of [Azure support options and choose the plan](https://azure.microsoft.com/support/plans) that best fits, whether you're a developer just starting your cloud journey or a large organization deploying business-critical, strategic applications. Azure customers can create and manage support requests in the Azure portal. --* [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) -* [Azure portal for the United States government](https://portal.azure.us) ---## Post a question on Microsoft Q&A --For quick and reliable answers on your technical product questions from Microsoft Engineers, Azure Most Valuable Professionals (MVPs), or our expert community, engage with us on [Microsoft Q&A](/answers/products/azure?product=all), Azure's preferred destination for community support. |
azure-health-insights | Transparency Note | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/oncophenotype/transparency-note.md | - Title: Transparency Note for Onco-Phenotype -description: Transparency Note for Onco-Phenotype ---- Previously updated : 05/05/2024----# Transparency Note for Onco-Phenotype --## What is a Transparency Note? --An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Creating a system that is fit for its intended purpose requires an understanding of how the technology works, what its capabilities and limitations are, and how to achieve the best performance. Microsoft’s Transparency Notes are intended to help you understand how our AI technology works, the choices system owners can make that influence system performance and behavior, and the importance of thinking about the whole system, including the technology, the people, and the environment. You can use Transparency Notes when developing or deploying your own system, or share them with the people who will use or be affected by your system. --Microsoft’s Transparency Notes are part of a broader effort at Microsoft to put our AI Principles into practice. To find out more, see the [Microsoft AI principles](https://www.microsoft.com/ai/responsible-ai). --## The basics of Onco-Phenotype --### Introduction --The Onco-Phenotype model, available in the Azure AI Health Insights cognitive service as an API, augments traditional clinical natural language processing (NLP) tools by helping healthcare providers rapidly identify key attributes of a cancer within their patient populations with an existing cancer diagnosis. You can use this model to infer tumor site; histology; clinical stage tumor (T), lymph node (N), and metastasis (M) categories; and pathologic stage TNM categories from unstructured clinical documents, along with confidence scores and relevant evidence. --### Key terms --| Term | Definition | -| | - | -| Tumor site | The location of the primary tumor. | -| Histology | The cell type of a given tumor. | -| Clinical stage | Clinical stage helps users determine the nature and extent of cancer based on the physical examination, imaging tests, and biopsies of affected areas. | -| Pathologic stage | Pathologic stage can be determined only from individual patients who have had surgery to remove a tumor or otherwise to explore the extent of the cancer. Pathologic stage combines the results of clinical stage (physical exam, imaging test) with surgical results. | -| TNM categories | TNM categories indicate the extent of the tumor (T), the extent of spread to the lymph nodes (N), and the presence of metastasis (M). | -| ICD-O-3 | _International Classification of Diseases for Oncology, Third Edition_. The worldwide standard coding system for cancer diagnoses. | --## Capabilities --### System behavior --The Onco-Phenotype model, available in the Azure AI Health Insights cognitive service as an API, takes in unstructured clinical documents as input and returns inferences for cancer attributes along with confidence scores as output. Through the model configuration as part of the API request, it also allows the user to seek evidence with the inference values and to explicitly check for the existence of a cancer case before generating the inferences for cancer attributes. 
---Upon receiving a valid API request to process the unstructured clinical documents, a job is created and the request is processed asynchronously. The status of the job and the inferences (upon successful job completion) can be accessed by using the job ID. The job results are available for only 24 hours and are purged thereafter. --### Use cases --#### Intended uses --The Onco-Phenotype model can be used in the following scenario. The system’s intended uses include: --- **Assisted annotation and curation:** To support healthcare systems and cancer registrars identify and extract cancer attributes for regulatory purposes and for downstream tasks such as clinical trials matching, research cohort discovery, and molecular tumor board discussions.--#### Considerations when choosing a use case --We encourage customers to use the Onco-Phenotype model in their innovative solutions or applications. However, here are some considerations when choosing a use case: --- **Avoid scenarios that use personal health information for a purpose not permitted by patient consent or applicable law.** Health information has special protections regarding privacy and consent. Make sure that all data you use has patient consent for the way you use the data in your system or you're otherwise compliant with applicable law as it relates to the use of health information.-- **Facilitate human review and inference error corrections.** Given the sensitive nature of health information, it's essential that a human review the source data and correct any inference errors.-- **Avoid scenarios that use this service as a medical device, for clinical support, or as a diagnostic tool or workflow without a human in the loop.** The system wasn't designed for use as a medical device, for clinical support, or as a diagnostic tool for the diagnosis, cure, mitigation, treatment, or prevention of disease or other conditions without human intervention. A qualified professional should always verify the inferences and relevant evidence before finalizing or relying on the information.--## Limitations --### Technical limitations, operational factors, and ranges --Specific characteristics and limitations of the Onco-Phenotype model include: --- **Multiple cancer cases for a patient:** The model infers only a single set of phenotype values (tumor site, histology, and clinical/pathologic stage TNM categories) per patient. If the model is given an input with multiple primary cancer diagnoses, the behavior is undefined and might mix elements from the separate diagnoses.-- **Inference values for tumor site and histology:** The inference values are only as exhaustive as the training dataset labels. If the model is presented with a cancer case for which the true tumor site or histology wasn't encountered during training (for example, a rare tumor site or histology), the model will be unable to produce a correct inference result.-- **Clinical/pathologic stage (TNM categories):** The model doesn't currently identify the initiation of a patient's definitive treatment. Therefore, it might use clinical stage evidence to infer a pathologic stage value or vice-versa. Manual review should verify that appropriate evidence supports clinical and pathologic stage results. The model doesn't predict subcategories or isolated tumor cell modifiers. 
For instance, T3a would be predicted as T3, and N0(i+) would be predicted as N0.--## System performance --In many AI systems, performance is often defined in relation to accuracy or by how often the AI system offers a correct prediction or output. Depending on the workflow or scenario, you can leverage the confidence scores that are returned with each inference and choose to set thresholds based on the tolerance for incorrect inferences. The performance of the system can be assessed by computing statistics based on true positive, true negative, false positive, and false negative instances. For example, in the tumor site predictions, one can consider a tumor site (like lung) being the positive class and other sites, including not having one, being the negative class. Using the lung tumor site as an example positive class, the following table illustrates different outcomes. --| **Outcome** | **Correct/Incorrect** | **Definition** | **Example** | -| -- | | -- | -- | -| True Positive | Correct | The system returns the tumor site as lung and that would be expected from a human judge. | The system correctly infers the tumor site as lung on the clinical documents of a lung cancer patient. | -| True Negative | Correct | The system doesn't return the tumor site as lung, and this aligns with what would be expected from a human judge. | The system returns the tumor site as breast on the clinical documents of a breast cancer patient. | -| False Positive | Incorrect | The system returns the tumor site as lung where a human judge wouldn't. | The system returns the tumor site as lung on the clinical documents of a breast cancer patient. | -| False Negative | Incorrect | The system doesn't return the tumor site as lung where a human judge would identify it as lung. | The system returns the tumor site as breast on the clinical documents of a lung cancer patient. | --### Best practices for improving system performance --For each inference, the Onco-Phenotype model returns a confidence score that expresses how confident the model is with the response. Confidence scores range from 0 to 1. The higher the confidence score, the more certain the model is about the inference value it provided. However, the system isn't designed for workflows or scenarios without a human in the loop. Also, inference values can't be consumed without human review, irrespective of the confidence score. You can choose to completely discard an inference value if its confidence score is below a confidence score threshold that best suits the scenario. --## Evaluation of Onco-Phenotype --### Evaluation methods --The Onco-Phenotype model was evaluated on a held-out dataset that shares the same characteristics as the training dataset. The training and held-out datasets consist of patients located only in the United States. The patient races include White or Caucasian, Black or African American, Asian, Native Hawaiian or Pacific Islander, American Indian or Alaska native, and Other. During model development and training, a separate development dataset was used for error analysis and model improvement. --### Evaluation results --Although the Onco-Phenotype model makes mistakes on the held-out dataset, it was observed that the inferences, and the evidence spans identified by the model are helpful in speeding up manual curation effort. --Microsoft has also tested the generalizability of the model by evaluating the trained model on a secondary dataset that was collected from a different hospital system, and which was unavailable during training. 
A limited performance decrease was observed on the secondary dataset. --#### Fairness considerations --At Microsoft, we strive to empower every person on the planet to achieve more. An essential part of this goal is working to create technologies and products that are fair and inclusive. Fairness is a multi-dimensional, sociotechnical topic and impacts many different aspects of our product development. You can learn more about Microsoft’s approach to fairness [here](https://www.microsoft.com/ai/responsible-ai?rtc=1&activetab=pivot1:primaryr6). --One dimension we need to consider is how well the system performs for different groups of people. This might include looking at the accuracy of the model and measuring the performance of the complete system. Research has shown that without conscious effort focused on improving performance for all groups, it's often possible for the performance of an AI system to vary across groups based on factors such as race, ethnicity, language, gender, and age. --The evaluation performance of the Onco-Phenotype model was stratified by race to ensure minimal performance discrepancy between different patient racial groups. The lowest performance by racial group is well within 80% of the highest performance by racial group. When the evaluation performance was stratified by gender, there was no significant difference. --However, each use case is different, and our testing might not perfectly match your context or cover all scenarios that are required for your use case. We encourage you to thoroughly evaluate error rates for the service by using real-world data that reflects your use case, including testing with users from different demographic groups. --## Evaluating and integrating Onco-Phenotype for your use --As Microsoft works to help customers safely develop and deploy solutions that use the Onco-Phenotype model, we offer guidance for considering the AI systems' fairness, reliability & safety, privacy &security, inclusiveness, transparency, and human accountability. These considerations are in line with our commitment to developing responsible AI. --When getting ready to integrate and use AI-powered products or features, the following activities help set you up for success: --- **Understand what it can do:** Fully vet and review the capabilities of Onco-Phenotype to understand its capabilities and limitations.-- **Test with real, diverse data:** Understand how Onco-Phenotype will perform in your scenario by thoroughly testing it by using real-life conditions and data that reflects the diversity in your users, geography, and deployment contexts. Small datasets, synthetic data, and tests that don't reflect your end-to-end scenario are unlikely to sufficiently represent your production performance.-- **Respect an individual's right to privacy:** Collect data and information from individuals only for lawful and justifiable purposes. Use data and information that you have consent to use only for this purpose.-- **Legal review:** Obtain appropriate legal advice to review your solution, particularly if you'll use it in sensitive or high-risk applications. Understand what restrictions you might need to work within and your responsibility to resolve any issues that might come up in the future.-- **System review:** If you're planning to integrate and responsibly use an AI-powered product or feature in an existing system of software or in customer and organizational processes, take the time to understand how each part of your system will be affected. 
Consider how your AI solution aligns with Microsoft's Responsible AI principles.-- **Human in the loop:** Keep a human in the loop. This means ensuring constant human oversight of the AI-powered product or feature and maintaining the role of humans in decision-making. Ensure that you can have real-time human intervention in the solution to prevent harm. This enables you to manage where the AI model doesn't perform as expected.-- **Security:** Ensure that your solution is secure and that it has adequate controls to preserve the integrity of your content and prevent unauthorized access.-- **Customer feedback loop:** Provide a feedback channel that allows users and individuals to report issues with the service after it's deployed. After you've deployed an AI-powered product or feature, it requires ongoing monitoring and improvement. Be ready to implement any feedback and suggestions for improvement.--## Learn more about responsible AI --[Microsoft AI Principles](https://www.microsoft.com/ai/responsible-ai?activetab=pivot1%3aprimaryr6) --[Microsoft responsible AI resources](https://www.microsoft.com/ai/responsible-ai-resources) --[Microsoft Azure Learning courses on responsible AI](/training/paths/responsible-ai-business-principles/) --## Learn more about Onco-Phenotype --[Overview of Onco-Phenotype](overview.md) --## Contact us --[Give us feedback on this document](mailto:health-ai-feedback@microsoft.com). --## About this document --© 2023 Microsoft Corporation. All rights reserved. This document is provided "as-is" and for informational purposes only. Information and views expressed in this document, including URL and other Internet Web site references, may change without notice. You bear the risk of using it. Some examples are for illustration only and are fictitious. No real association is intended or inferred. |
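As a minimal illustration of the confidence-score thresholding described in this transparency note, the Python sketch below sorts returned inferences into a "likely reliable" bucket and a "needs close review" bucket. The threshold values are arbitrary examples, not recommendations, and every inference still requires human review regardless of bucket.

```python
# Illustrative sketch of confidence-score thresholding. Threshold values are
# hypothetical examples; all inferences still require human review.
EXAMPLE_THRESHOLDS = {"tumorSite": 0.90, "histology": 0.95}  # hypothetical values
DEFAULT_THRESHOLD = 0.85                                      # hypothetical value


def triage_inferences(inferences):
    """Split model inferences into 'likely reliable' and 'needs close review' lists."""
    likely_reliable, needs_close_review = [], []
    for inference in inferences:
        kind = inference.get("kind") or inference.get("type")
        score = inference.get("confidenceScore", 0.0)
        threshold = EXAMPLE_THRESHOLDS.get(kind, DEFAULT_THRESHOLD)
        (likely_reliable if score >= threshold else needs_close_review).append(inference)
    return likely_reliable, needs_close_review


# Example using values taken from the response shown earlier in this document.
reliable, review = triage_inferences(
    [
        {"kind": "tumorSite", "value": "C50.2", "confidenceScore": 0.9214},
        {"kind": "clinicalStageM", "value": "None", "confidenceScore": 0.5217},
    ]
)
```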
azure-health-insights | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/overview.md | Azure AI Health Insights is a Cognitive Service that provides prebuilt models th ## Available models -There are currently three models available in Azure AI Health Insights: +There are currently two models available in Azure AI Health Insights: The [Trial Matcher](./trial-matcher/overview.md) model receives patients' data and clinical trials protocols, and provides relevant clinical trials based on eligibility criteria. -The [Onco-Phenotype](./oncophenotype/overview.md) receives clinical records of oncology patients and outputs cancer staging, such as **clinical stage TNM** categories and **pathologic stage TNM categories** as well as **tumor site** and **histology**. - The [Radiology Insights](./radiology-insights/overview.md) model receives patients' radiology report and provides quality checks with feedback on errors and mismatches. The Radiology Insights model ensures critical findings are surfaced and presented using the full context of a radiology report. In addition, the model is highlighting follow-up recommendations and clinical findings with measurements documented by the radiologist. ## Architecture The [Radiology Insights](./radiology-insights/overview.md) model receives patien [ ![Diagram that shows Azure AI Health Insights architecture.](media/architecture.png)](media/architecture.png#lightbox) Azure AI Health Insights service receives patient data in different modalities, such as unstructured healthcare data, FHIR resources or specific JSON format data. In addition, the service receives a model configuration, such as ```includeEvidence``` parameter. -With these input patient data and configuration, the service can run the data through the selected health insights AI model, such as Trial Matcher, Onco-Phenotype or Radiology Insights. +With these input patient data and configuration, the service can run the data through the selected health insights AI model, such as Trial Matcher or Radiology Insights. ## Next steps Review the following information to learn how to deploy Azure AI Health Insights >[!div class="nextstepaction"] > [Deploy Azure AI Health Insights using Azure portal](deploy-portal.md) ->[!div class="nextstepaction"] -> [Onco-Phenotype](oncophenotype/overview.md) - >[!div class="nextstepaction"] > [Trial Matcher](trial-matcher//overview.md) |
azure-health-insights | Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/get-started.md | To submit a request to the Trial Matcher, you need to make a POST request to the In the example below, the patients are matched to the ```Clinicaltrials_gov``` source, for a ```lung cancer``` condition with facility locations for the city ```Orlando```. ```http-POST https://{your-cognitive-service-endpoint}/healthinsights/trialmatcher/jobs?api-version=2022-01-01-preview +PUT https://{your-cognitive-service-endpoint}/health-insights/trial-matcher/jobs/id?api-version=2024-08-01-preview Content-Type: application/json Ocp-Apim-Subscription-Key: {your-cognitive-services-api-key} { |
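The diff above only shows the beginning of the request. The hedged Python sketch below makes the equivalent call: it treats the `id` segment as a caller-supplied job identifier and generates one with `uuid4`, leaves the request body as a placeholder because the full Trial Matcher schema isn't reproduced in this excerpt, and assumes the same URL can be polled with GET to read the job status.

```python
# Hedged sketch of creating a Trial Matcher job with PUT. The request body is a
# placeholder, and polling the job URL with GET is an assumption.
import uuid

import requests

endpoint = "https://<your-cognitive-service-endpoint>"   # placeholder
api_key = "<your-cognitive-services-api-key>"            # placeholder
job_id = str(uuid.uuid4())                               # caller-supplied job ID
url = (
    f"{endpoint}/health-insights/trial-matcher/jobs/{job_id}"
    "?api-version=2024-08-01-preview"
)
headers = {"Content-Type": "application/json", "Ocp-Apim-Subscription-Key": api_key}

body = {
    # Fill in the patients and configuration sections described in this article,
    # for example the Clinicaltrials_gov source, a lung cancer condition, and
    # Orlando facility locations. The exact schema is omitted here.
}

response = requests.put(url, json=body, headers=headers)
response.raise_for_status()
print(requests.get(url, headers=headers).json().get("status"))
```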
azure-health-insights | Use Containers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/use-containers.md | Title: How to use Azure AI Health Insights containers -description: Learn how to use Project Health Insight models on premises using Docker containers. +description: Learn how to use Azure AI Health Insight models on premises using Docker containers. The following table describes the minimum and recommended specifications for the | Model | Minimum cpu | Maximum cpu | Minimum memory | Maximum memory| |-|--|--|--|--| | Trial Matcher | 4000m |4000m |5G | 7G | -| OncoPhenotype | 4000m |8000m |2G | 12G | CPU core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the `docker run` command. CPU core and memory correspond to the `--cpus` and `--memory` settings, which ar Azure AI Health Insights container images can be found on the `mcr.microsoft.com` container registry syndicate. They reside within the `azure-cognitive-services/health-insights/` repository and can be found by their model name. - Clinical Trial Matcher: The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/health-insights/clinical-matching`-- Onco-Phenotype: The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/health-insights/cancer-profiling` -To use the latest version of the container, you can use the `latest` tag. You can find a full list of tags on the MCR via `https://mcr.microsoft.com/v2/azure-cognitive-services/health-insights/clinical-matching/tags/list` and `https://mcr.microsoft.com/v2/azure-cognitive-services/health-insights/cancer-profiling/tags/list`. +To use the latest version of the container, you can use the `latest` tag. You can find a full list of tags on the MCR via `https://mcr.microsoft.com/v2/azure-cognitive-services/health-insights/clinical-matching/tags/list`. ++- Use the [`docker pull`](https://docs.docker.com/engine/reference/commandline/pull/) command to download this container image from the Microsoft public container registry. +You can find the featured tags on the [docker hub clinical matching page](https://hub.docker.com/r/microsoft/azure-cognitive-services-health-insights-clinical-matching). + ``` docker pull mcr.microsoft.com/azure-cognitive-services/health-insights/<model-name>:<tag-name> When you use Azure AI Health Insights container, the data contained in your API ### Run the container locally > [!IMPORTANT]-> The docker run command can only be used of the cancer-profiling model, to use the clinical-matching model, you should use the docker compose command. see Example Docker compose file. +> To use the clinical-matching model, you should use the docker compose command. see Example Docker compose file. To run the container in your own environment after downloading the container image, execute the following `docker run` command. Replace the placeholders below with your own values: This command: Use the example cURL request as a reference how to submit a query to the container you have deployed replacing the `serverURL` variable with the appropriate value. 
```bash-curl -X POST 'http://<serverURL>:5000/health-insights/<model>/jobs?api-version=<version>/' --header 'Content-Type: application/json' --header 'accept: application/json' --data-binary @example.json +curl -X PUT 'http://<serverURL>:5000/health-insights/<model>/jobs/id?api-version=<version>/' --header 'Content-Type: application/json' --header 'accept: application/json' --data-binary @example.json ``` #### Example docker compose file -The below example shows how a [docker compose](https://docs.docker.com/compose/) file can be created to deploy the health-insights containers. +The below example shows how a [docker compose](https://docs.docker.com/reference/compose-file/) file can be created to deploy the health-insights containers. ```yaml version: "3" |
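For reference, here's a rough Python equivalent of the cURL example above for a locally hosted container. It assumes the container listens on port 5000 as shown and that no subscription key is required for local calls; the model name, job ID, and API version are the same placeholders used in the cURL command.

```python
# Hedged Python equivalent of the cURL example above for a locally hosted container.
# Assumes the container listens on port 5000 and that no subscription key is needed
# for local requests; substitute your own model name, job ID, and API version.
import json

import requests

server_url = "<serverURL>"   # placeholder, e.g. "localhost"
model = "<model>"            # placeholder, e.g. "clinical-matching"
api_version = "<version>"    # placeholder
job_id = "id"                # placeholder job identifier

with open("example.json", encoding="utf-8") as f:
    body = json.load(f)

url = f"http://{server_url}:5000/health-insights/{model}/jobs/{job_id}?api-version={api_version}"
headers = {"Content-Type": "application/json", "accept": "application/json"}
response = requests.put(url, json=body, headers=headers)
print(response.status_code, response.text)
```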
azure-maps | Power Bi Visual Geocode | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-geocode.md | Geocoding is the process of taking an address and returning the corresponding la ## The location field -The **Location** field in the Azure Maps Power BI Visual can accept multiple values, such as country/region, state, city, street address and zip code. By providing multiple sources of location information in the Location field, you help to guarantee more accurate results and eliminate ambiguity that would prevent a specific location to be determined. For example, there are over 20 different cities in the United States named *Franklin*. +The **Location** field in the Azure Maps Power BI Visual can accept multiple values, such as country/region, state, city, street address, and zip code. Providing multiple sources of location information in the Location field enhances the accuracy of results and removes any ambiguity that might limit the identification of a specific location. For example, there are over 20 different cities in the United States named *Franklin*. ## Use geo-hierarchies to drill down When entering multiple values into the **Location** field, you create a geo-hierarchy. Geo-hierarchies enable the hierarchical drill-down features in the map, allowing you to drill down to different "levels" of location. | Button | Description | |:-:|-| | 1 | The drill button on the far right, called Drill Mode, allows you to select a map Location and drill down into that specific location one level at a time. For example, if you turn on the drill-down option and select North America, you move down in the hierarchy to the next level--states in North America. For geocoding, Power BI sends Azure Maps country and state data for North America only. The button on the left goes back up one level. | | 2 | The double arrow drills to the next level of the hierarchy for all locations at once. For example, if you're currently looking at countries/regions and then use this option to move to the next level, states, Power BI displays state data for all countries/regions. For geocoding, Power BI sends Azure Maps state data (no country/region data) for all locations. This option is useful if each level of your hierarchy is unrelated to the level above it. |-| 3 | Similar to the drill-down option, except that you don't need to select the map. It expands down to the next level of the hierarchy remembering the current level's context. For example, if you're currently looking at countries/regions and select this icon, you move down in the hierarchy to the next level--states. For geocoding, Power BI sends data for each state and its corresponding country/region to help Azure Maps geocode more accurately. In most maps, you'll either use this option or the drill-down option on the far right. This sends Azure as much information as possible and result in more accurate location information. | +| 3 | Similar to the drill-down option, except that you don't need to select the map. It expands down to the next level of the hierarchy remembering the current level's context. For example, if you're currently looking at countries/regions and select this icon, you move down in the hierarchy to the next level--states. For geocoding, Power BI sends data for each state and its corresponding country/region to help Azure Maps geocode more accurately. In most maps, either use this option or the drill-down option on the far right. 
This sends Azure as much information as possible and result in more accurate location information. | ## Categorize geographic fields in Power BI -To ensure fields are correctly geocoded, you can set the Data Category on the data fields in Power BI. In Data view, select the desired column. From the ribbon, select the Modeling tab and then set the Data Category to one of the following properties: Address, City, Continent, Country, Region, County, Postal Code, State, or Province. These data categories help Azure correctly encode the data. To learn more, see [Data categorization in Power BI Desktop]. If you're live connecting to SQL Server Analysis Services, set the data categorization outside of Power BI using [SQL Server Data Tools (SSDT)]. +To ensure fields are correctly geocoded, you can set the Data Category on the data fields in Power BI. In Data view, select the desired column. From the ribbon, select the Modeling tab and then set the Data Category to one of the following properties: Address, Place, City, County, State or Province, Postal Code, Country, Continent, Latitude, or Longitude. These data categories help Azure correctly encode the data. To learn more, see [Data categorization in Power BI Desktop]. If you're live connecting to SQL Server Analysis Services, set the data categorization outside of Power BI using [SQL Server Data Tools (SSDT)]. :::image type="content" source="media/power-bi-visual/data-category.png" alt-text="A screenshot showing the data category drop-down list in Power BI desktop."::: To ensure fields are correctly geocoded, you can set the Data Category on the da > When categorizing geographic fields in Power BI, be sure to enter **State** and **County** data separately for accurate geocoding. Incorrect categorization, such as entering both **State** and **County** data into either category, might work currently but can lead to issues in the future. > > For instance:+> > - Correct Usage: State = GA, County = Decatur County > - Incorrect Usage: State = Decatur County, GA or County = Decatur County, GA |
azure-maps | Power Bi Visual Understanding Layers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-understanding-layers.md | The general layer section of the **Format** pane are common settings that apply > > **General layer settings retirement** >-> The **Show zeros** and **Show negatives** Power BI Visual General layer settings were deprecated starting in the September 2023 release of Power BI. You can no longer create new reports using these settings, but existing reports will continue to work. It is recomended that you upgrade existing reports. To upgrade to the new **range scaling** property, select the desired option in the **Range scaling** drop-down list: +> The **Show zeros** and **Show negatives** Power BI Visual General layer settings were deprecated starting in the September 2023 release of Power BI. You can no longer create new reports using these settings, but existing reports will continue to work. It is recommended that you upgrade existing reports. To upgrade to the new **range scaling** property, select the desired option in the **Range scaling** drop-down list: > > :::image type="content" source="./media/power-bi-visual/range-scaling-drop-down.png" alt-text="A screenshot of the range scaling drop-down"::: > |
azure-maps | Release Notes Map Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-map-control.md | This document contains information about new features and other changes to the M ## v3 (latest) -### [3.3.0] (Aug 8, 2024) +### [3.4.0] (CDN: September 30, 2024, npm: TBA) -#### New features (3.3.0) +#### New features +- Add support for PMTiles. ++#### Bug fixes +- Accessibility: Fix overflow issue with the style picker label in small containers. +- Fix attribution not updating after style changes with a GeoJSON data source. +- Fix `setCamera` with bounds and min/max zoom. +- Use `ResizeObserver` instead of window resize events. +- Fix footer logo width. ++#### Other changes +- Add `@types/geojson` as a dependency. +- Update dependency `@microsoft/applicationinsights-web` to `^3.3.0` ++### [3.3.0] (August 8, 2024) ++#### New features - Update the Copyright control - Make the copyright text smaller and ensure it fits on one line. - Use different types of Microsoft logos for different CSS themes to improve visibility. - Implement RWD to hide part of the component (MS logo) when the map canvas is relatively small. - Enhance base layer class by adding abstract `getOptions` and `setOptions` functions. -#### Bug fixes (3.3.0) +#### Bug fixes - Skip existing sources when copying user layers. - **\[BREAKING\]** Address the incorrect ordering of latitude and longitude values in `Position.fromLatLng()`. - Fix hidden accessible element visible issue on control buttons. ### [3.2.1] (May 13, 2024) -#### New features (3.2.1) +#### New features - Constrain horizontal panning when `renderWorldCopies` is set to `false`. - Make `easeTo` and `flyTo` animation smoother when the target point is close to the limits: maxBounds, vertical world edges, or antimeridian. -#### Bug fixes (3.2.1) +#### Bug fixes - Correct accessible numbers for hidden controls while using 'Show numbers' command. - Fix memory leak in worker when the map is removed. - Fix unwanted zoom and panning changes at the end of a panning motion. -#### Other changes (3.2.1) +#### Other changes - Improve the format of inline code in the document. ### [3.2.0] (March 29, 2024) -#### Other changes (3.2.0) +#### Other changes - Upgrade MapLibre to [V4](https://github.com/maplibre/maplibre-gl-js/releases/tag/v4.0.0). This document contains information about new features and other changes to the M ### [3.1.2] (February 22, 2024) -#### New features (3.1.2) +#### New features - Added `fillAntialias` option to `PolygonLayer` for enabling MSAA antialiasing on polygon fills. -#### Other changes (3.1.2) +#### Other changes - Update the feedback icon and link. ### [3.1.1] (January 26, 2024) -#### New features (3.1.1) +#### New features - Added a new option, `enableAccessibilityLocationFallback`, to enable or disable reverse-geocoding API fallback for accessibility (screen reader). -#### Bug fixes (3.1.1) +#### Bug fixes - Resolved an issue where ApplicationInsights v3.0.5 was potentially sending a large number of requests. ### [3.1.0] (January 12, 2024) -#### New features (3.1.0) +#### New features - Added a new control, `atlas.control.ScaleControl`, to display a scale bar on the map. - Introduced functions for accessing, updating, and deleting a feature state. -#### Bug fixes (3.1.0) +#### Bug fixes - Addressed the issue of layer ordering after a style update, when a user layer is inserted before another user layer. 
This document contains information about new features and other changes to the M ### [3.0.3] (November 29, 2023) -#### New features (3.0.3) +#### New features - Included ESM support. -#### Other changes (3.0.3) +#### Other changes - The accessibility feature for screen reader has been upgraded to utilize the Search V2 API (reverse geocoding). This document contains information about new features and other changes to the M ### [3.0.2] (November 1, 2023) -#### Bug fixes (3.0.2) +#### Bug fixes - Addressed several errors in the type declaration file and added a dependency for `@maplibre/maplibre-gl-style-spec`. -#### Other changes (3.0.2) +#### Other changes - Removed Authorization headers from style, thumbnail, sprite, and glyph requests to enhance CDN caching for static assets. This document contains information about new features and other changes to the M ### [3.0.1] (October 6, 2023) -#### Bug fixes (3.0.1) +#### Bug fixes - Various accessibility improvements. This document contains information about new features and other changes to the M - Fixed missing event names in `HtmlMarkerEvents`. -#### Other changes (3.0.1) +#### Other changes - Modified member methods to be protected for the zoom, pitch, and compass controls. This document contains information about new features and other changes to the M ### [3.0.0] (August 18, 2023) -#### Bug fixes (3.0.0) +#### Bug fixes - Fixed zoom control to take into account the `maxBounds` [CameraOptions]. - Fixed an issue that mouse positions are shifted after a css scale transform on the map container. -#### Other changes (3.0.0) +#### Other changes - Phased out the style definition version `2022-08-05` and switched the default `styleDefinitionsVersion` to `2023-01-01`. - Added the `mvc` parameter to encompass the map control version in both definitions and style requests. -#### Installation (3.0.0) +#### Installation The version is available on [npm][3.0.0] and CDN. The version is available on [npm][3.0.0] and CDN. ### [3.0.0-preview.10] (July 11, 2023) -#### Bug fixes (3.0.0-preview.10) +#### Bug fixes - Dynamic pixel ratio fixed in underlying maplibre-gl dependency. - Fixed an issue where `sortKey`, `radialOffset`, `variableAnchor` isn't applied when used in `SymbolLayer` options. -#### Installation (3.0.0-preview.10) +#### Installation The preview is available on [npm][3.0.0-preview.10] and CDN. The preview is available on [npm][3.0.0-preview.10] and CDN. ### [3.0.0-preview.9] (June 27, 2023) -#### New features (3.0.0-preview.9) +#### New features - WebGL2 is used by default. The preview is available on [npm][3.0.0-preview.10] and CDN. - Ability to customize maxPitch / minPitch in `CameraOptions` -#### Bug fixes (3.0.0-preview.9) +#### Bug fixes - Fixed an issue where accessibility-related duplicated DOM elements might result when `map.setServiceOptions` is called -#### Installation (3.0.0-preview.9) +#### Installation The preview is available on [npm][3.0.0-preview.9] and CDN. - **NPM:** Refer to the instructions at [azure-maps-control@3.0.0-preview.9][3.0.0-preview.9] The preview is available on [npm][3.0.0-preview.9] and CDN. ### [3.0.0-preview.8] (June 2, 2023) -#### Bug fixes (3.0.0-preview.8) +#### Bug fixes - Fixed an exception that occurred while updating the property of a layout that no longer exists. The preview is available on [npm][3.0.0-preview.9] and CDN. - Fixed an error in subsequent `map.setStyle()` calls if the raw Maplibre style is retrieved in the `stylechanged` event callback on style serialization. 
-#### Other changes (3.0.0-preview.8) +#### Other changes - Updated attribution logo and link. -#### Installation (3.0.0-preview.8) +#### Installation The preview is available on [npm][3.0.0-preview.8] and CDN. The preview is available on [npm][3.0.0-preview.8] and CDN. ### [3.0.0-preview.7] (May 2, 2023) -#### New features (3.0.0-preview.7) +#### New features - In addition to map configuration, [Map.setServiceOptions()] now supports changing `domain`, `styleAPIVersion`, `styleDefinitionsVersion` on runtime. -#### Bug fixes (3.0.0-preview.7) +#### Bug fixes - Fixed token expired exception on relaunches when using Azure AD / shared token / anonymous authentication by making sure authentication is resolved prior to any style definition request The preview is available on [npm][3.0.0-preview.8] and CDN. - Fixed the possibility of event listener removal called on undefined target in `EventManager.remove()` -#### Installation (3.0.0-preview.7) +#### Installation The preview is available on [npm][3.0.0-preview.7] and CDN. The preview is available on [npm][3.0.0-preview.7] and CDN. ### [3.0.0-preview.6] (March 31, 2023) -#### Installation (3.0.0-preview.6) +#### Installation The preview is available on [npm][3.0.0-preview.6] and CDN. The preview is available on [npm][3.0.0-preview.6] and CDN. <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3.0.0-preview.6/atlas.min.js"></script> ``` -#### New features (3.0.0-preview.6) +#### New features - Optimized the internal style transform performance. -#### Bug fixes (3.0.0-preview.6) +#### Bug fixes - Resolved an issue where the first style set request was unauthenticated for `AAD` authentication. The preview is available on [npm][3.0.0-preview.6] and CDN. ### [3.0.0-preview.5] (March 15, 2023) -#### Installation (3.0.0-preview.5) +#### Installation The preview is available on [npm][3.0.0-preview.5] and CDN. The preview is available on [npm][3.0.0-preview.5] and CDN. <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3.0.0-preview.5/atlas.min.js"></script> ``` -#### New features (3.0.0-preview.5) +#### New features - Support dynamically updating mapConfiguration via `map.setServiceOptions({ mapConfiguration: 'MAP_CONFIG' })` ### [3.0.0-preview.4] (March 10, 2023) -#### Installation (3.0.0-preview.4) +#### Installation The preview is available on [npm][3.0.0-preview.4] and CDN. The preview is available on [npm][3.0.0-preview.4] and CDN. <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3.0.0-preview.4/atlas.min.js"></script> ``` -#### New features (3.0.0-preview.4) +#### New features - Extended map coverage in China, Japan, and Korea. The preview is available on [npm][3.0.0-preview.4] and CDN. - Additional information about the altitude of mountains and the location of waterfalls. -#### Changes (3.0.0-preview.4) +#### Changes - Traffic data now only support relative mode. The preview is available on [npm][3.0.0-preview.4] and CDN. - Changed the default `minZoom` from -2 to 1. -#### Bug fixes (3.0.0-preview.4) +#### Bug fixes - Cleaned up various memory leaks in [Map.dispose()]. The preview is available on [npm][3.0.0-preview.4] and CDN. ### [3.0.0-preview.3] (February 2, 2023) -#### Installation (3.0.0-preview.3) +#### Installation The preview is available on [npm][3.0.0-preview.3] and CDN. The preview is available on [npm][3.0.0-preview.3] and CDN. 
<script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3.0.0-preview.3/atlas.min.js"></script> ``` -#### New features (3.0.0-preview.3) +#### New features - **\[BREAKING\]** Migrated from [adal-angular] to [@azure/msal-browser] used for authentication with Microsoft Azure Active Directory ([Azure AD]). Changes that might be required: The preview is available on [npm][3.0.0-preview.3] and CDN. - Allow pitch and bearing being set with [CameraBoundsOptions] in [Map.setCamera(options)]. -#### Bug fixes (3.0.0-preview.3) +#### Bug fixes - Fixed issue in [language mapping], now `zh-Hant-TW` no longer reverts back to `en-US`. The preview is available on [npm][3.0.0-preview.3] and CDN. ### [3.0.0-preview.2] (December 16, 2022) -#### Installation (3.0.0-preview.2) +#### Installation The preview is available on [npm][3.0.0-preview.2] and CDN. The preview is available on [npm][3.0.0-preview.2] and CDN. <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3.0.0-preview.2/atlas.min.js"></script> ``` -#### New features (3.0.0-preview.2) +#### New features Add `progressiveLoading` and `progressiveLoadingInitialLayerGroups` to [StyleOptions] to enable the capability of loading map layers progressively. This feature improves the perceived loading time of the map. For more information, see [2.2.2 release notes](#222-december-15-2022). -#### Bug fixes (3.0.0-preview.2) +#### Bug fixes - Fixed an issue that the ordering of user layers wasn't preserved after calling `map.layers.move()`. Add `progressiveLoading` and `progressiveLoadingInitialLayerGroups` to [StyleOpt ### [3.0.0-preview.1] (November 18, 2022) -### Installation (3.0.0-preview.1) +### Installation The preview is available on [npm][3.0.0-preview.1]. The preview is available on [npm][3.0.0-preview.1]. npm i azure-maps-control@next ``` -#### New features (3.0.0-preview.1) +#### New features This update is the first preview of the upcoming 3.0.0 release. The underlying [maplibre-gl] dependency has been upgraded from `1.14` to `3.0.0-pre.1`, offering improvements in stability and performance. -#### Bug fixes (3.0.0-preview.1) +#### Bug fixes - Fixed a regression issue that prevents IndoorManager from removing a tileset: This update is the first preview of the upcoming 3.0.0 release. The underlying [ ### [2.3.7] (February 22, 2024) -#### New features (2.3.7) +#### New features - Added `fillAntialias` option to `PolygonLayer` for enabling MSAA antialiasing on polygon fills. - Added a new option, `enableAccessibilityLocationFallback`, to enable or disable reverse-geocoding API fallback for accessibility (screen reader). -#### Other changes (2.3.7) +#### Other changes - Update the feedback icon and link. ### [2.3.6] (January 12, 2024) -#### New features (2.3.6) +#### New features - Added a new control, `atlas.control.ScaleControl`, to display a scale bar on the map. - Introduced functions for accessing, updating, and deleting a feature state. -#### Bug fixes (2.3.6) +#### Bug fixes - Addressed the issue of layer ordering after a style update, when a user layer is inserted before another user layer. ### [2.3.5] (November 29, 2023) -#### Other changes (2.3.5) +#### Other changes - The accessibility feature for screen reader has been upgraded to utilize the Search V2 API (reverse geocoding). ### [2.3.4] (November 1, 2023) -#### Other changes (2.3.4) +#### Other changes - Removed Authorization headers from style, thumbnail, sprite, and glyph requests to enhance CDN caching for static assets. 
This update is the first preview of the upcoming 3.0.0 release. The underlying [ ### [2.3.3] (October 6, 2023) -#### Bug fixes (2.3.3) +#### Bug fixes - Resolved the issue with dynamic attribution when progressive loading is enabled. ### [2.3.2] (August 11, 2023) -#### Bug fixes (2.3.2) +#### Bug fixes - Fixed an issue where accessibility-related duplicated DOM elements might result when `map.setServiceOptions` is called. - Fixed zoom control to take into account the `maxBounds` [CameraOptions]. -#### Other changes (2.3.2) +#### Other changes - Added the `mvc` parameter to encompass the map control version in both definitions and style requests. ### [2.3.1] (June 27, 2023) -#### Bug fixes (2.3.1) +#### Bug fixes - Fix `ImageSpriteManager` icon images might get removed during style change -#### Other changes (2.3.1) +#### Other changes - Security: insecure-randomness fix in UUID generation. ### [2.3.0] (June 2, 2023) -#### New features (2.3.0) +#### New features - **\[BREAKING\]** Refactored the internal StyleManager to replace `_stylePatch` with `transformStyle`. This change will allow road shield icons to update and render properly after a style switch. -#### Bug fixes (2.3.0) +#### Bug fixes - Fixed an exception that occurred while updating the property of a layout that no longer exists. - Fixed an issue where BubbleLayer's accessible indicators didn't update when the data source was modified. -#### Other changes (2.3.0) +#### Other changes - Updated attribution logo and link. ### [2.2.7] (May 2, 2023) -#### New features (2.2.7) +#### New features - In addition to map configuration, [Map.setServiceOptions()] now supports changing `domain`, `styleAPIVersion`, `styleDefinitionsVersion` on runtime. -#### Bug fixes (2.2.7) +#### Bug fixes - Fixed token expired exception on relaunches when using Azure AD / shared token / anonymous authentication by making sure authentication is resolved prior to any style definition request This update is the first preview of the upcoming 3.0.0 release. The underlying [ ### [2.2.6] -#### Bug fixes (2.2.6) +#### Bug fixes - Resolved an issue where the first style set request was unauthenticated for `AAD` authentication. This update is the first preview of the upcoming 3.0.0 release. The underlying [ ### [2.2.5] -#### New features (2.2.5) +#### New features - Support dynamically updating mapConfiguration via `map.setServiceOptions({ mapConfiguration: 'MAP_CONFIG' })` ### [2.2.4] -#### Bug fixes (2.2.4) +#### Bug fixes - Cleaned up various memory leaks in [Map.dispose()]. This update is the first preview of the upcoming 3.0.0 release. The underlying [ ### [2.2.3] -#### New features (2.2.3) +#### New features - Allow pitch and bearing being set with [CameraBoundsOptions] in [Map.setCamera(options)]. -#### Bug fixes (2.2.3) +#### Bug fixes - Fixed issue in [language mapping], now `zh-Hant-TW` no longer reverts back to `en-US`. This update is the first preview of the upcoming 3.0.0 release. The underlying [ ### [2.2.2] (December 15, 2022) -#### New features (2.2.2) +#### New features Add `progressiveLoading` and `progressiveLoadingInitialLayerGroups` to [StyleOptions] to enable the capability of loading map layers progressively. This feature improves the perceived loading time of the map. Add `progressiveLoading` and `progressiveLoadingInitialLayerGroups` to [StyleOpt - Possible values are `base`, `transit`, `labels`, `buildings`, and `labels_places`. - Other layer groups are deferred such that the initial layer groups can be loaded first. 
-#### Bug fixes (2.2.2) +#### Bug fixes - Fixed an issue that the ordering of user layers wasn't preserved after calling `map.layers.move()`. |
azure-netapp-files | Azure Netapp Files Resize Capacity Pools Or Volumes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resize-capacity-pools-or-volumes.md | For information about monitoring a volume's capacity, see [Monitor the capacit * Volume resize operations are nearly instantaneous but not always immediate. There can be a short delay for the volume's updated size to appear in the portal. Verify the size from a host perspective before re-attempting the resize operation. >[!IMPORTANT]->If you are using a capacity pool with a size of 2 TiB or smaller and have `ANFStdToBasicNetworkFeaturesRevert` and `ANFBasicToStdNetworkFeaturesUpgrade` AFECs enabled and want to change the capacity pool's QoS type from auto to manual, you must [perform the operation with the REST API](#resizing-the-capacity-pool-or-a-volume-using-rest-api) using the `2023-07-01` API version or later. +>If you are using a capacity pool with a size of 2 TiB or smaller and have the `ANFStdToBasicNetworkFeaturesRevert` and `ANFBasicToStdNetworkFeaturesUpgrade` AFECs enabled and want to change the capacity pool's QoS type from auto to manual, you must [perform the operation with the REST API](#resizing-the-capacity-pool-or-a-volume-using-rest-api) using the `2023-07-01` API version or later. ## Resize the capacity pool using the Azure portal |
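For the REST API requirement called out in the `azure-netapp-files-resize-capacity-pools-or-volumes` change above, here is a minimal, hedged sketch of changing a capacity pool's QoS type with `az rest`. The subscription, resource group, account, and pool names are placeholders, and the request path and property name should be verified against the article's REST API section.

```azurecli-interactive
# Placeholder names; verify the resource path and property name against the linked REST API section.
az rest --method patch \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.NetApp/netAppAccounts/<account-name>/capacityPools/<pool-name>?api-version=2023-07-01" \
  --body '{"properties": {"qosType": "Manual"}}'
```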
azure-netapp-files | Azure Netapp Files Set Up Capacity Pool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-set-up-capacity-pool.md | Creating a capacity pool enables you to create volumes within it. * If you're using Azure CLI, ensure that you're using the latest version. For more information, see [How to update the Azure CLI](/cli/azure/update-azure-cli). * If you're using PowerShell, ensure that you're using the latest version of the Az.NetAppFiles module. To update to the latest version, use the 'Update-Module Az.NetAppFiles' command. For more information, see [Update-Module](/powershell/module/powershellget/update-module). * If you're using the Azure REST API, ensure that you specify the latest version.-* If you're creating 1-TiB capacity pool, you must first register the feature: - 1. Register the feature: - ```azurepowershell-interactive - Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANF1TiBPoolSize - ``` - 2. Check the status of the feature registration: - > [!NOTE] - > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to `Registered`. Wait until the status is **Registered** before continuing. - ```azurepowershell-interactive - Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANF1TiBPoolSize - ``` - You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status. - >[!IMPORTANT] >To create a 1-TiB capacity pool with a tag, you must use API versions `2023-07-01_preview` to `2024-01-01_preview` or stable releases from `2024-01-01`. |
backup | Azure Kubernetes Service Backup Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-backup-overview.md | After the Backup extension is installed and Trusted Access is enabled, you can c The backup solution enables the backup operations for your AKS datasources that are deployed in the cluster and for the data that's stored in the persistent volume for the cluster, and then store the backups in a blob container. The disk-based persistent volumes are backed up as disk snapshots in a snapshot resource group. The snapshots and cluster state in a blob both combine to form a recovery point that is stored in your tenant called Operational Tier. You can also convert backups (first successful backup in a day, week, month, or year) in the Operational Tier to blobs, and then move them to a Vault (outside your tenant) once a day. > [!NOTE]-> Currently, Azure Backup supports only persistent volumes in CSI driver-based Azure Disk Storage. During backups, the solution skips other persistent volume types, such as Azure File Share and blobs. Also, backups are eligible to be moved to the vault if the persistent volumes are of size less than or equal to 1 TB. +> Currently, Azure Backup supports only persistent volumes in CSI driver-based Azure Disk Storage. During backups, the solution skips other persistent volume types, such as Azure File Share and blobs. Also, if you have defined retention rules for the Vault tier, then backups are only eligible to be moved to the vault if the persistent volumes are of size less than or equal to 1 TB. ## Configure backup You incur charges for: - **Snapshot fee**: Azure Backup for AKS protects a disk-based persistent volume by taking snapshots that are stored in the resource group in your Azure subscription. These snapshots incur snapshot storage charges. Because the snapshots aren't copied to the Backup vault, backup storage cost doesn't apply. For more information on the snapshot pricing, see [Managed Disk pricing](https://azure.microsoft.com/pricing/details/managed-disks/). +- **Backup Storage fee**: Azure Backup for AKS also supports storing backups in the Vault Tier. This can be achieved by defining retention rules for **vault-standard** in the backup policy, with one restore point per day eligible to be moved into the Vault. Restore points stored in the Vault Tier are charged a separate fee called the Backup Storage fee, as per the total data stored (in GBs) and the redundancy type enabled on the Backup Vault. + ## Next step |
backup | Azure Kubernetes Service Backup Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-backup-troubleshoot.md | These error codes can appear while you enable AKS backup to store backups in a v **Recommended action**: Use same cluster version for Target cluster as Source cluster or manually apply the CRs. +### LinkedAuthorizationFailed ++**Error code**: LinkedAuthorizationFailed ++**Cause**: To perform a restore operation, the user needs **read** permission on the backed-up AKS cluster. ++**Recommended action**: Assign the Reader role on the source AKS cluster, and then proceed with the restore operation. + ## Next steps - [About Azure Kubernetes Service (AKS) backup](azure-kubernetes-service-backup-overview.md) |
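For the `LinkedAuthorizationFailed` entry above, a minimal sketch of granting the Reader role on the backed-up (source) cluster before retrying the restore; the assignee and resource IDs are placeholders.

```azurecli-interactive
# Placeholder IDs; grant Reader on the source AKS cluster to the identity performing the restore.
az role assignment create \
  --assignee "<user-or-principal-object-id>" \
  --role "Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ContainerService/managedClusters/<source-aks-cluster>"
```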
backup | Azure Kubernetes Service Cluster Backup Concept | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-backup-concept.md | Your Azure resources access AKS clusters through the AKS regional gateway using For AKS backup, the Backup vault accesses your AKS clusters via Trusted Access to configure backups and restores. The Backup vault is assigned a predefined role **Microsoft.DataProtection/backupVaults/backup-operator** in the AKS cluster, allowing it to only perform specific backup operations. -To enable Trusted Access between a Backup vault and an AKS cluster, you must register the `TrustedAccessPreview` feature flag on `Microsoft.ContainerService` at the subscription level. Learn more [to register the resource provider](azure-kubernetes-service-cluster-manage-backups.md#enable-the-feature-flag). --Learn [how to enable Trusted Access](azure-kubernetes-service-cluster-manage-backups.md#register-the-trusted-access). +To enable Trusted Access between a Backup vault and an AKS cluster, learn [how to enable Trusted Access](azure-kubernetes-service-cluster-manage-backups.md#trusted-access-related-operations). >[!Note] >- You can install the Backup Extension on your AKS cluster directly from the Azure portal under the *Backup* section in AKS portal. |
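As a hedged illustration of the Trusted Access role binding described above, the binding between the Backup vault and the AKS cluster can be created with Azure CLI roughly as follows; all names and resource IDs are placeholders, and the linked article remains the authoritative procedure.

```azurecli-interactive
# Placeholder names; the role matches the predefined backup-operator role mentioned above.
az aks trustedaccess rolebinding create \
  --resource-group <aks-resource-group> \
  --cluster-name <aks-cluster-name> \
  --name backupvault-binding \
  --source-resource-id "/subscriptions/<subscription-id>/resourceGroups/<vault-resource-group>/providers/Microsoft.DataProtection/backupVaults/<backup-vault-name>" \
  --roles Microsoft.DataProtection/backupVaults/backup-operator
```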
backup | Azure Kubernetes Service Cluster Backup Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-backup-policy.md | + + Title: Audit and enforce backup operations for Azure Kubernetes Service clusters using Azure Policy +description: 'An article describing how to use Azure Policy to audit and enforce backup operations for all Azure Kubernetes Service clusters created in a given scope' + Last updated : 08/26/2024++++++# Audit and enforce backup operations for Azure Kubernetes Service clusters using Azure Policy ++One of the key responsibilities of a Backup or Compliance Admin in an organization is to ensure that all business-critical machines are backed up with the appropriate retention. ++Azure Backup provides various built-in policies (using [Azure Policy](../governance/policy/overview.md)) to help you automatically ensure that your Azure Kubernetes Service clusters are ready for backup configuration. Depending on how your backup teams and resources are organized, you can use any one of the below policies: ++## Policy 1 - Azure Backup Extension should be installed in AKS clusters ++Use this [audit-only](../governance/policy/concepts/effects.md#audit) policy to identify the AKS clusters that don't have the backup extension installed. However, this policy doesn't automatically install the backup extension to these AKS clusters. It's useful only to evaluate the overall readiness of the AKS clusters for backup compliance, and not to take action immediately. ++## Policy 2 - Azure Backup should be enabled for AKS clusters ++Use this [audit-only](../governance/policy/concepts/effects.md#audit) policy to identify the clusters that don't have backups enabled. However, this policy doesn't automatically configure backups for these clusters. It's useful only to evaluate the overall compliance of the clusters, and not to take action immediately. ++## Policy 3 - Install Azure Backup Extension in AKS clusters (Managed Cluster) with a given tag. ++A central backup team in an organization can use this policy to install backup extension to any AKS clusters in a region. You can choose to **include** clusters that contain a certain tag, in the scope of this policy. ++## Policy 4 - Install Azure Backup Extension in AKS clusters (Managed Cluster) without a given tag. ++A central backup team in an organization can use this policy to install backup extension to any AKS clusters in a region. You can choose to **exclude** clusters that contain a certain tag, from the scope of this policy. ++## Supported Scenarios ++Before you audit and enforce backups for AKS clusters, see the following scenarios supported: ++* The built-in policy is currently supported only for Azure Kubernetes Service clusters. ++* Users must take care to ensure that the necessary [prerequisites](azure-kubernetes-service-cluster-backup-concept.md#backup-extension) are enabled before Policies 3 and 4 are assigned. ++* Policies 3 and 4 can be assigned to a single region and subscription at a time. ++* For Policies 1, 2, 3 and 4, management group scope is currently unsupported. ++## Using the built-in policies ++This section describes the end-to-end process of assigning Policy 3: **Install Azure Backup Extension in AKS clusters (Managed Cluster) with a given tag**. Similar instructions apply for the other policies. Once assigned, any new AKS cluster created under this scope has backup extension installed automatically. ++To assign Policy 3, follow these steps: ++1. 
Sign in to the Azure portal and navigate to the **Policy** Dashboard. + +2. Select **Definitions** in the left menu to get a list of all built-in policies across Azure Resources. + +3. Filter the list for **Category=Backup** and select the policy named *Install Azure Backup Extension in AKS clusters (Managed Cluster) with a given tag*. + ++4. Select the name of the policy. You're then redirected to the detailed definition for this policy. +++5. Select the **Assign** button at the top of the pane. This redirects you to the **Assign Policy** pane. + +6. Under **Basics**, select the three dots next to the **Scope** field. It opens up a right context pane where you can select the subscription for the policy to be applied on. You can also optionally select a resource group, so that the policy is applied only for AKS clusters in a particular resource group. +++7. In the **Parameters** tab, choose a location from the drop-down, and select the storage account to which the backup extension installed in the AKS cluster in the scope must be associated. You can also choose to specify a tag name and an array of tag values. An AKS cluster that contains any of the specified values for the given tag is included in the scope of the policy assignment. +++8. Ensure that **Effect** is set to deployIfNotExists. + +9. Navigate to **Review+create** and select **Create**. ++> [!NOTE] +> +> - Use [remediation](../governance/policy/how-to/remediate-resources.md) to enable these policies on existing AKS clusters. ++## Next step ++[Learn more about Azure Policy](../governance/policy/overview.md) |
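The portal walkthrough above can also be scripted. The following is a rough sketch of assigning the built-in definition with Azure CLI, assuming a placeholder definition name and parameter names (the real ones come from the definition selected in step 3); `deployIfNotExists` assignments need a managed identity and a location, hence the extra flags.

```azurecli-interactive
# <policy-definition-name-or-id> and the parameter names are placeholders taken from the selected definition.
az policy assignment create \
  --name "install-aks-backup-extension" \
  --scope "/subscriptions/<subscription-id>" \
  --policy "<policy-definition-name-or-id>" \
  --params '{"<parameter-name>": {"value": "<parameter-value>"}}' \
  --mi-system-assigned \
  --location <region>
```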
backup | Azure Kubernetes Service Cluster Backup Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-backup-support-matrix.md | ->[!Note] ->Vaulted backup and Cross Region Restore for AKS using Azure Backup are currently in preview. - ## Supported regions -- Operational Tier support for AKS backup is supported in all the following Azure public cloud regions: East US, North Europe, West Europe, South East Asia, West US 2, East US 2, West US, North Central US, Central US, France Central, Korea Central, Australia East, UK South, East Asia, West Central US, Japan East, South Central US, West US 3, Canada Central, Canada East, Australia South East, Central India, Norway East, Germany West Central, Switzerland North, Sweden Central, Japan West, UK West, Korea South, South Africa North, South India, France South, Brazil South, and UAE North.+- Operational Tier support for AKS backup is supported in all the following Azure public cloud regions: East US, North Europe, West Europe, South East Asia, West US 2, East US 2, West US, North Central US, Central US, France Central, Korea Central, Australia East, UK South, East Asia, West Central US, Japan East, South Central US, West US 3, Canada Central, Canada East, Australia South East, Central India, Norway East, Germany West Central, Switzerland North, Sweden Central, Japan West, UK West, Korea South, South Africa North, South India, France South, Brazil South, UAE North, China East 2, China East 3, China North 2, China North 3, USGov Virginia, USGov Arizona and USGov Texas. ++- Vault Tier and Cross Region Restore support (preview) for AKS backup are available in the following regions: East US, West US, West US 3, North Europe, West Europe, North Central US, South Central US, West Central US, East US 2, Central US, UK South, UK West, East Asia, South-East Asia, Japan East, South India, Central India, Canada Central and Norway East. -- Vault Tier and Cross Region Restore support (preview) for AKS backup are available in the following regions: East US, West US, West US 3, North Europe, West Europe, North Central US, South Central US, East US 2, Central US, UK South, UK West, East Asia, and South-East Asia. >[!Note]- >If Cross Region Restore is enabled, backups stored in Vault Tier will be available in the Azure Paired region. See the [list of Azure Paired Region](../reliability/cross-region-replication-azure.md#azure-paired-regions). + >Vaulted backup and Cross Region Restore for AKS using Azure Backup are currently in preview. + > + >To access backups stored in Vault Tier in the Azure paired region, enable Cross Region Restore capability for your Backup Vault. See the [list of Azure Paired Region](../reliability/cross-region-replication-azure.md#azure-paired-regions). ## Limitations You can use [Azure Backup](./backup-overview.md) to help protect Azure Kubernete - Before you install the backup extension in an AKS cluster, ensure that the CSI drivers and snapshot are enabled for your cluster. If they're disabled, [enable these settings](/azure/aks/csi-storage-drivers#enable-csi-storage-drivers-on-an-existing-cluster). +- Provide a new and empty blob container as input while installing the backup extension in an AKS cluster for the first time. Don't use the same blob container for more than one AKS cluster. + - AKS backups don't support in-tree volumes. You can back up only CSI driver-based volumes.
You can [migrate from tree volumes to CSI driver-based persistent volumes](/azure/aks/csi-migrate-in-tree-volumes). -- Currently, an AKS backup supports only the backup of Azure disk-based persistent volumes (enabled by the CSI driver). The supported Azure Disk SKUs are Standard HDD, Standard SSD, and Premium SSD. The disks belonging to Premium SSD v2 and Ultra Disk SKU are not supported. Both static and dynamically provisioned volumes are supported. For backup of static disks, the persistent volumes specification should have the *storage class* defined in the **YAML** file, otherwise such persistent volumes will be skipped from the backup operation.+- Currently, an AKS backup supports only the backup of Azure disk-based persistent volumes (enabled by the CSI driver). The supported Azure Disk SKUs are Standard HDD, Standard SSD, and Premium SSD. The disks belonging to Premium SSD v2 and Ultra Disk SKU aren't supported. Both static and dynamically provisioned volumes are supported. For backup of static disks, the persistent volumes specification should have the *storage class* defined in the **YAML** file, otherwise such persistent volumes are skipped from the backup operation. -- Azure Files shares and Azure Blob Storage persistent volumes are currently not supported by AKS backup due to lack of CSI Driver-based snapshotting capability. If you're using said persistent volumes in your AKS clusters, you can configure backups for them via the Azure Backup solutions. For more information, see [Azure file share backup](azure-file-share-backup-overview.md) and [Azure Blob Storage backup](blob-backup-overview.md).+- Azure Files shares and Azure Blob Storage persistent volumes are not supported by AKS backup due to lack of CSI Driver-based snapshotting capability. If you're using said persistent volumes in your AKS clusters, you can configure backups for them via the Azure Backup solutions. For more information, see [Azure file share backup](azure-file-share-backup-overview.md) and [Azure Blob Storage backup](blob-backup-overview.md). - Any unsupported persistent volume type is skipped while a backup is being created for the AKS cluster. - Currently, AKS clusters using a service principal aren't supported. If your AKS cluster uses a service principal for authorization, you can update the cluster to use a [system-assigned managed identity](/azure/aks/use-managed-identity#update-an-existing-aks-cluster-to-use-a-system-assigned-managed-identity) or a [user-assigned managed identity](/azure/aks/use-managed-identity#update-an-existing-cluster-to-use-a-user-assigned-managed-identity). -- You can only install the Backup Extension on agent nodes with Ubuntu and Azure Linux as Operating System. AKS Clusters with Windows based agent nodes do not allow Backup Extension installation.+- You can only install the Backup Extension on agent nodes with Ubuntu and Azure Linux as Operating System. AKS Clusters with Windows based agent nodes don't allow Backup Extension installation. -- You cannot install Backup Extension in AKS Cluster with ARM64 based agent nodes irrespective of Operating System (Ubuntu/Azure Linux/Windows) running on these nodes.+- You can't install Backup Extension in AKS Cluster with Arm64 based agent nodes irrespective of Operating System (Ubuntu/Azure Linux/Windows) running on these nodes. - You must install the backup extension in the AKS cluster. If you're using Azure CLI to install the backup extension, ensure that the version is 2.41 or later. 
Use `az upgrade` command to upgrade the Azure CLI. -- The blob container provided as input during installation of the backup extension should be in the same region and subscription as that of the AKS cluster. Only blob containers in a General-purpose V2 Storage Account are supported and Premium Storage Account are not supported. +- The blob container provided as input during installation of the backup extension should be in the same region and subscription as that of the AKS cluster. Only blob containers in a General-purpose V2 Storage Account are supported and Premium Storage Account aren't supported. - The Backup vault and the AKS cluster should be in the same region and subscription. -- Azure Backup for AKS provides both Operation Tier (Snapshot) and Vault Tier backup. Multiple backups per day can be stored in Operational Tier, with only one backup per day to be stored in the Vault.+- Azure Backup for AKS provides both Operational Tier (Snapshot) and Vault Tier backup. Multiple backups per day can be stored in Operational Tier, with only one backup per day to be stored in the Vault as per the retention policy defined. - Currently, the modification of a backup policy and the modification of a snapshot resource group (assigned to a backup instance during configuration of the AKS cluster backup) aren't supported. You can use [Azure Backup](./backup-overview.md) to help protect Azure Kubernete - For successful backup and restore operations, the Backup vault's managed identity requires role assignments. If you don't have the required permissions, permission problems might happen during backup configuration or restore operations soon after you assign roles because the role assignments take a few minutes to take effect. [Learn about role definitions](azure-kubernetes-service-cluster-backup-concept.md#required-roles-and-permissions). -- Backup vault does not support Azure Lighthouse. Thus, cross tenant management cannot be enabled by Lighthouse for Azure Backup for AKS and you cannot backup/restore AKS Clusters across tenant.+- Backup vault doesn't support Azure Lighthouse. Thus, cross tenant management can't be enabled by Lighthouse for Azure Backup for AKS and you can't back up or restore AKS clusters across tenants. ++- The following namespaces are skipped from Backup Configuration and not configured for backups: `kube-system`, `kube-node-lease`, `kube-public`. - Here are the AKS backup limits: You can use [Azure Backup](./backup-overview.md) to help protect Azure Kubernete | Number of backup policies per Backup vault | 5,000 | | Number of backup instances per Backup vault | 5,000 | | Number of on-demand backups allowed in a day per backup instance | 10 |+ | Number of namespaces per backup instance | 800 | | Number of allowed restores per backup instance in a day | 10 | - Configuration of a storage account with private endpoint is supported. |
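For the "new and empty blob container" requirement in the support-matrix change above, a minimal sketch of creating a General-purpose v2 storage account and a dedicated container per cluster; all names are placeholders.

```azurecli-interactive
# Placeholder names; use one empty container per AKS cluster, in the same region and subscription as the cluster.
az storage account create \
  --name <storageaccountname> \
  --resource-group <resource-group> \
  --location <aks-region> \
  --kind StorageV2 \
  --sku Standard_LRS

az storage container create \
  --name <cluster-backup-container> \
  --account-name <storageaccountname> \
  --auth-mode login
```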
backup | Azure Kubernetes Service Cluster Backup Using Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-backup-using-cli.md | Azure Backup now allows you to back up AKS clusters (cluster resources and persi - You must [install the Backup Extension](azure-kubernetes-service-cluster-manage-backups.md#install-backup-extension) to configure backup and restore operations on an AKS cluster. Learn more [about Backup Extension](azure-kubernetes-service-cluster-backup-concept.md#backup-extension). -- Ensure that `Microsoft.KubernetesConfiguration`, `Microsoft.DataProtection`, and the `TrustedAccessPreview` feature flag on `Microsoft.ContainerService` are registered for your subscription before initiating the backup configuration and restore operations.+- Ensure that `Microsoft.KubernetesConfiguration`, `Microsoft.DataProtection`, and `Microsoft.ContainerService` are registered for your subscription before initiating the backup configuration and restore operations. - Ensure to perform [all the prerequisites](azure-kubernetes-service-cluster-backup-concept.md) before initiating backup or restore operation for AKS backup. The configuration of backup is performed in two steps: "snapshot_volumes": true } ```+The following namespaces are skipped from backup configuration and not configured for backups: kube-system, kube-node-lease, kube-public. 2. Prepare the relevant request using the relevant vault, policy, AKS cluster, backup configuration, and snapshot resource group using the `az dataprotection backup-instance initialize` command. |
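As a hedged sketch of step 2 in the CLI article above, the backup instance can be initialized roughly as follows; the resource IDs and file names are placeholders, and flag names and argument formats should be verified against the installed `dataprotection` CLI extension version.

```azurecli-interactive
# Placeholder IDs; backupconfig.json is the configuration prepared in step 1
# (kube-system, kube-node-lease, and kube-public are skipped from configuration).
az dataprotection backup-instance initialize \
  --datasource-type AzureKubernetesService \
  --datasource-id "/subscriptions/<subscription-id>/resourceGroups/<aks-rg>/providers/Microsoft.ContainerService/managedClusters/<aks-cluster>" \
  --datasource-location <region> \
  --policy-id "/subscriptions/<subscription-id>/resourceGroups/<vault-rg>/providers/Microsoft.DataProtection/backupVaults/<vault-name>/backupPolicies/<policy-name>" \
  --backup-configuration backupconfig.json > backupinstance.json
```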
backup | Azure Kubernetes Service Cluster Backup Using Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-backup-using-powershell.md | Azure Backup now allows you to back up AKS clusters (cluster resources and persi - You must [install the Backup Extension](azure-kubernetes-service-cluster-manage-backups.md#install-backup-extension) to configure backup and restore operations on an AKS cluster. Learn more [about Backup Extension](azure-kubernetes-service-cluster-backup-concept.md#backup-extension). -- Ensure that `Microsoft.KubernetesConfiguration`, `Microsoft.DataProtection`, and the `TrustedAccessPreview` feature flag on `Microsoft.ContainerService` are registered for your subscription before initiating the backup configuration and restore operations.+- Ensure that `Microsoft.KubernetesConfiguration`, `Microsoft.DataProtection`, and `Microsoft.ContainerService` are registered for your subscription before initiating the backup configuration and restore operations. - Ensure to perform [all the prerequisites](azure-kubernetes-service-cluster-backup-concept.md) before initiating backup or restore operation for AKS backup. With the created Backup vault and backup policy, and the AKS cluster in *ready-t The configuration of backup is performed in two steps: -1. Prepare backup configuration to define which cluster resources are to be backed up using the `New-AzDataProtectionBackupConfigurationClientObject` cmdlet. In the following example, the configuration is defined as all cluster resources under current, and future namespaces will be backed up with the label as `key-value pair x=y`. Also, all the cluster scoped resources and persistent volumes are backed up. +1. Prepare backup configuration to define which cluster resources are to be backed up using the `New-AzDataProtectionBackupConfigurationClientObject` cmdlet. In the following example, the configuration is defined as all cluster resources under current, and future namespaces will be backed up with the label as `key-value pair x=y`. Also, all the cluster scoped resources and persistent volumes are backed up. The following namespaces are skipped from backup configuration and not configured for backups: kube-system, kube-node-lease, kube-public. ```azurepowershell $backupConfig = New-AzDataProtectionBackupConfigurationClientObject -SnapshotVolume $true -IncludeClusterScopeResource $true -DatasourceType AzureKubernetesService -LabelSelector "env=prod" |
backup | Azure Kubernetes Service Cluster Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-backup.md | You can use Azure Backup to back up AKS clusters (cluster resources and persiste - You must [install the Backup extension](azure-kubernetes-service-cluster-manage-backups.md#install-backup-extension) to configure backup and restore operations for an AKS cluster. Learn more [about the Backup extension](azure-kubernetes-service-cluster-backup-concept.md#backup-extension). -- Ensure that `Microsoft.KubernetesConfiguration`, `Microsoft.DataProtection`, and the `TrustedAccessPreview` feature flag on `Microsoft.ContainerService` are registered for your subscription before you initiate backup configuration and restore operations.+- Ensure that `Microsoft.KubernetesConfiguration`, `Microsoft.DataProtection`, and `Microsoft.ContainerService` are registered for your subscription before you initiate backup configuration and restore operations. - Ensure that you perform [all the prerequisites](azure-kubernetes-service-cluster-backup-concept.md) before you initiate a backup or restore operation for AKS backup. To configure backups for AKS cluster: :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/grant-permission.png" alt-text="Screenshot that shows how to proceed to the next step after granting permission."::: > [!NOTE]- > - Before you enable Trusted Access, enable the `TrustedAccessPreview` feature flag for the `Microsoft.ContainerServices` resource provider on the subscription. - > - If the AKS cluster doesn't have the Backup extension installed, you can perform the installation step that configures backup. + > - If the AKS cluster doesn't have the Backup extension installed, you can perform the installation while configuring backup for the cluster. 1. Select the backup policy, which defines the schedule for backups and their retention period. Then select **Next**. To configure backups for AKS cluster: Azure Backup for AKS allows you to define the application boundary within AKS cluster that you want to back up. You can use the filters that are available within backup configurations to choose the resources to back up and also to run custom hooks. The defined backup configuration is referenced by the value for **Backup Instance Name**. The below filters are available to define your application boundary: -1. **Select Namespaces to backup**, you can either select **All** to back up all existing and future namespaces in the cluster, or you can select **Choose from list** to select specific namespaces for backup. +1. **Select Namespaces to backup**, you can either select **All** to back up all existing and future namespaces in the cluster, or you can select **Choose from list** to select specific namespaces for backup. The following namespaces are skipped from Backup Configuration and not configured for backups: kube-system, kube-node-lease, kube-public. :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/backup-instance-name.png" alt-text="Screenshot that shows how to select namespaces to include in the backup." lightbox="./media/azure-kubernetes-service-cluster-backup/backup-instance-name.png"::: |
backup | Azure Kubernetes Service Cluster Manage Backups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-manage-backups.md | The registration may take up to *10 minutes*. To monitor the registration proces az provider show --name Microsoft.KubernetesConfiguration --output table ``` -### Register the Trusted Access --To enable Trusted Access between the Backup vault and AKS cluster, you must register *TrustedAccessPreview* feature flag on *Microsoft.ContainerService* over the subscription. To perform the registration, run the following commands: --## Enable the feature flag --To enable the feature flag follow these steps: --1. Install the *aks-preview* extension: -- ```azurecli-interactive - az extension add --name aks-preview - ``` --1. Update to the latest version of the extension released: -- ```azurecli-interactive - az extension update --name aks-preview - ``` --1. Register the *TrustedAccessPreview* feature flag: -- ```azurecli-interactive - az feature register --namespace "Microsoft.ContainerService" --name "TrustedAccessPreview" - ``` - - It takes a few minutes for the status to show *Registered*. --1. Verify the registration status: -- ```azurecli-interactive - az feature show --namespace "Microsoft.ContainerService" --name "TrustedAccessPreview" - ``` --1. When the status shows *Registered*, refresh the `Microsoft.ContainerService` resource provider registration: -- ```azurecli-interactive - az provider register --namespace Microsoft.ContainerService - ``` - ## Backup Extension related operations This section provides the set of Azure CLI commands to perform create, update, or delete operations on the Backup Extension. You can use the update command to change compute limits for the underlying Backup Extension Pods. |
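For the Backup Extension operations mentioned in the manage-backups change above, a hedged sketch of the install command; the extension type and configuration-setting keys follow the AKS backup documentation, but all names and IDs here are placeholders.

```azurecli-interactive
# Placeholder names; the storage settings point at the blob container used for cluster-state backups.
az k8s-extension create \
  --name azure-aks-backup \
  --extension-type microsoft.dataprotection.kubernetes \
  --scope cluster \
  --cluster-type managedClusters \
  --cluster-name <aks-cluster-name> \
  --resource-group <aks-resource-group> \
  --release-train stable \
  --configuration-settings blobContainer=<container-name> storageAccount=<storage-account> storageAccountResourceGroup=<storage-account-rg> storageAccountSubscriptionId=<subscription-id>
```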
backup | Azure Kubernetes Service Cluster Restore Using Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-restore-using-cli.md | You can perform both *Original-Location Recovery (OLR)* (restoring in the AKS cl - AKS backup allows you to restore to original AKS cluster (that was backed up) and to an alternate AKS cluster. AKS backup allows you to perform a full restore and item-level restore. You can utilize [restore configurations](#restore-to-an-aks-cluster) to define parameters based on the cluster resources that will be picked up during the restore. -- You must [install the Backup Extension](azure-kubernetes-service-cluster-manage-backups.md#install-backup-extension) in the target AKS cluster. Also, you must [enable Trusted Access](azure-kubernetes-service-cluster-manage-backups.md#register-the-trusted-access) between the Backup vault and the AKS cluster.+- You must [install the Backup Extension](azure-kubernetes-service-cluster-manage-backups.md#install-backup-extension) in the target AKS cluster. Also, you must [enable Trusted Access](azure-kubernetes-service-cluster-manage-backups.md#trusted-access-related-operations) between the Backup vault and the AKS cluster. For more information on the limitations and supported scenarios, see the [support matrix](azure-kubernetes-service-cluster-backup-support-matrix.md). |
backup | Azure Kubernetes Service Cluster Restore Using Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-restore-using-powershell.md | Initialize the variables with required details related to each resource to be us - AKS backup allows you to restore to original AKS cluster (that was backed up) and to an alternate AKS cluster. AKS backup allows you to perform a full restore and item-level restore. You can utilize [restore configurations](#restore-to-an-aks-cluster) to define parameters based on the cluster resources that will be restored. -- You must [install the Backup Extension](azure-kubernetes-service-cluster-manage-backups.md#install-backup-extension) in the target AKS cluster. Also, you must [enable Trusted Access](azure-kubernetes-service-cluster-manage-backups.md#register-the-trusted-access) between the Backup vault and the AKS cluster.+- You must [install the Backup Extension](azure-kubernetes-service-cluster-manage-backups.md#install-backup-extension) in the target AKS cluster. Also, you must [enable Trusted Access](azure-kubernetes-service-cluster-manage-backups.md#trusted-access-related-operations) between the Backup vault and the AKS cluster. For more information on the limitations and supported scenarios, see the [support matrix](azure-kubernetes-service-cluster-backup-support-matrix.md). |
backup | Azure Kubernetes Service Cluster Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-restore.md | Azure Backup now allows you to back up AKS clusters (cluster resources and persi - AKS backup allows you to restore to original AKS cluster (that was backed up) and to an alternate AKS cluster. AKS backup allows you to perform a full restore and item-level restore. You can utilize [restore configurations](#restore-configurations) to define parameters based on the cluster resources that are to be restored. -- You must [install the Backup Extension](azure-kubernetes-service-cluster-manage-backups.md#install-backup-extension) in the target AKS cluster. Also, you must [enable Trusted Access](azure-kubernetes-service-cluster-manage-backups.md#register-the-trusted-access) between the Backup vault and the AKS cluster.+- You must [install the Backup Extension](azure-kubernetes-service-cluster-manage-backups.md#install-backup-extension) in the target AKS cluster. Also, you must [enable Trusted Access](azure-kubernetes-service-cluster-manage-backups.md#trusted-access-related-operations) between the Backup vault and the AKS cluster. - In case you are trying to restore a backup stored in Vault Tier, you need to provide a storage account in input as a staging location. Backup data is stored in the Backup vault as a blob within the Microsoft tenant. During a restore operation, the backup data is copied from one vault to staging storage account across tenants. Ensure that the staging storage account for the restore has the **AllowCrossTenantReplication** property set to **true**. |
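For the **AllowCrossTenantReplication** requirement on the staging storage account noted in the restore change above, a minimal sketch with placeholder names:

```azurecli-interactive
# Placeholder names; enables cross-tenant replication on the staging storage account used for Vault Tier restores.
az storage account update \
  --name <staging-storage-account> \
  --resource-group <resource-group> \
  --allow-cross-tenant-replication true
```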
backup | Backup Azure Database Postgresql Flex Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-database-postgresql-flex-overview.md | Azure Backup and Azure Database Services have come together to build an enterpri To perform the backup operation: -1. Grant permissions to the backup vault MSI on the target ARM resource (PostgreSQL-Flexible server), establishing access and control. +1. Grant permissions to the backup vault MSI on the target ARM resource (PostgreSQL-Flexible server), establishing access and control. 1. Configure backup policies, specify scheduling, retention, and other parameters. Once the configuration is complete: For successful backup operations, the vault MSI needs the following permissions: 1. *PostgreSQL Flexible Server Long Term Retention Backup* role on the server. 1. *Reader* role on the resource group of the server. +## Understand pricing ++You incur charges for: ++- **Protected instance fee**: Azure Backup for PostgreSQL - Flexible servers charges a *protected instance fee* as per the size of the database. When you configure backup for a PostgreSQL Flexible server, a protected instance is created. Each instance is charged based on its size (in GB), per unit of 250 GB. ++- **Backup Storage fee**: Azure Backup for PostgreSQL - Flexible servers stores backups in the Vault Tier. Restore points stored in the vault-standard tier are charged a separate fee called the Backup Storage fee, as per the total data stored (in GBs) and the redundancy type enabled on the Backup Vault. + ## Next steps [Azure Database for PostgreSQL -Flex backup (preview)](backup-azure-database-postgresql-flex.md). |
backup | Backup Azure Database Postgresql Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-database-postgresql-overview.md | Azure Backup and Azure Database Services have come together to build an enterpri You can use this solution independently or in addition to the [native backup solution offered by Azure PostgreSQL](/azure/postgresql/concepts-backup) that offers retention up to 35 days. The native solution is suited for operational recoveries, such as when you want to recover from the latest backups. The Azure Backup solution helps you with your compliance needs and more granular and flexible backup/restore. +>[!Note] +>Azure Database for PostgreSQL - Single Server is on the retirement path and is scheduled for retirement by March 28, 2025. +> +>If you currently have an Azure Database for PostgreSQL - Single Server service hosting production servers, we're glad to inform you that you can migrate your Azure Database for PostgreSQL - Single Server to the Azure Database for PostgreSQL - Flexible Server. +> +>Azure Database for PostgreSQL - Flexible Server is a fully managed, production-ready database service designed for more granular control and flexibility over database management functions and configuration settings with the enterprise grade [backup solution by Azure Backup](backup-azure-database-postgresql-flex-overview.md). For more information about Azure Database for PostgreSQL - Flexible Server, visit Azure Database for PostgreSQL - Flexible Server. + ## Backup process 1. As a backup admin, you can specify the Azure PostgreSQL databases that you intend to back up. Additionally, you can also specify the details of the Azure key vault that stores the credentials needed to connect to the specified database(s). These credentials are securely seeded by the database admin in the Azure key vault. END LOOP; END; $do$ ```- ) > [!NOTE] > If a database for which backup was already configured is failing with **UserErrorMissingDBPermissions**, please refer to this [troubleshooting guide](backup-azure-database-postgresql-troubleshoot.md) for assistance in resolving the issue. |
backup | Backup Azure Mysql Flexible Server About | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-mysql-flexible-server-about.md | The following table lists permissions that the vault MSI requires for successful | **Backup** | - MySQL Flexible Server Long-term Retention Backup Role <br><br> - Reader Role on the server's resource group. | | **Restore** | Storage Blob Data Contributor Role on the target storage account. | +## Understand pricing ++You incur charges for: ++- **Protected instance fee**: Azure Backup for MySQL - Flexible servers charges a *protected instance fee* as per the size of the database. When you configure backup for an Azure MySQL - Flexible server, a protected instance is created. Each instance is charged based on its size (in GB), per unit of 250 GB. ++- **Backup Storage fee**: Azure Backup for MySQL - Flexible servers stores backups in the Vault Tier. Restore points stored in the vault-standard tier are charged a separate fee called the Backup Storage fee, as per the total data stored (in GBs) and the redundancy type enabled on the Backup Vault. ++ ## Next steps - [Support matrix for Azure Database for MySQL - Flexible Server retention for long term (preview)](backup-azure-mysql-flexible-server-support-matrix.md). - [Back up an Azure Database for MySQL - Flexible Server (preview)](backup-azure-mysql-flexible-server.md).-- [Restore an Azure Database for MySQL - Flexible Server (preview)](backup-azure-mysql-flexible-server-restore.md).+- [Restore an Azure Database for MySQL - Flexible Server (preview)](backup-azure-mysql-flexible-server-restore.md). |
backup | Backup Azure Sap Hana Database Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sap-hana-database-troubleshoot.md | Title: Troubleshoot SAP HANA databases back up errors description: Describes how to troubleshoot common errors that might occur when you use Azure Backup to back up SAP HANA databases. Previously updated : 09/10/2024 Last updated : 09/30/2024 See the [prerequisites](tutorial-backup-sap-hana-db.md#prerequisites) and [What ### UserErrorHANAInternalRoleNotPresent -| **Error message** | `Azure Backup does not have required role privileges to carry out Backup and Restore operations` | +| **Error message** | `Azure Backup doesn't have required role privileges to carry out Backup and Restore operations` | | - | |-| **Possible causes** | All operations fail with this error when the Backup user (AZUREWLBACKUPHANAUSER) doesn't have the **SAP_INTERNAL_HANA_SUPPORT** role assigned or the role might have been overwritten. | +| **Possible causes** | All operations fail with this error when the Backup user (AZUREWLBACKUPHANAUSER) doesn't have the **SAP_INTERNAL_HANA_SUPPORT** role assigned or the role might be overwritten. | | **Recommended action** | Download and run the [pre-registration script](https://aka.ms/scriptforpermsonhana) on the SAP HANA instance, or manually assign the **SAP_INTERNAL_HANA_SUPPORT** role to the Backup user (AZUREWLBACKUPHANAUSER).<br><br>**Note**<br><br>If you're using HANA 2.0 SPS04 Rev 46 and later, this error doesn't occur as the use of the **SAP_INTERNAL_HANA_SUPPORT** role is deprecated in these HANA versions. | ### UserErrorInOpeningHanaOdbcConnection | **Error message** | `Failed to connect to HANA system` | | | |-| **Possible causes** | <ul><li>Connection to HANA instance failed</li><li>System DB is offline</li><li>Tenant DB is offline</li><li>Backup user (AZUREWLBACKUPHANAUSER) doesn't have enough permissions/privileges.</li></ul> | +| **Possible causes** | <ul><li>Connection to HANA instance failed</li><li>System database (DB) is offline</li><li>Tenant DB is offline</li><li>Backup user (AZUREWLBACKUPHANAUSER) doesn't have enough permissions/privileges.</li></ul> | | **Recommended action** | Check if the system is running. If one or more databases is running, ensure that the required permissions are set. To do so, download and run the [pre-registration script](https://aka.ms/scriptforpermsonhana) on the SAP HANA instance. 
| ### UserErrorHanaInstanceNameInvalid | **Error message** | `The specified SAP HANA instance is either invalid or can't be found` | | | |-| **Possible causes** | <ul><li>The specified SAP HANA instance is either invalid or can't be found.</li><li>Multiple SAP HANA instances on a single Azure VM can't be backed up.</li></ul> | +| **Possible causes** | <ul><li>The specified SAP HANA instance is either invalid or can't be found.</li><li>Multiple SAP HANA instances on a single Azure Virtual Machine (VM) can't be backed up.</li></ul> | | **Recommended action** | <ul><li>Ensure that only one HANA instance is running on the Azure VM.</li><li> To resolve the issue, run the script from the _Discover DB_ pane (you can also find the script [here](https://aka.ms/scriptforpermsonhana)) with the correct SAP HANA instance.</li></ul> | ### UserErrorHANALSNValidationFailure | **Error message** | `Backup log chain is broken` | | | |-| **Possible causes** | HANA LSN Log chain break can be triggered for various reasons, including:<ul><li>Azure Storage call failure to commit backup.</li><li>The Tenant DB is offline.</li><li>Extension upgrade has terminated an in-progress Backup job.</li><li>Unable to connect to Azure Storage during backup.</li><li>SAP HANA has rolled back a transaction in the backup process.</li><li>A backup is complete, but catalog isn't yet updated with success in HANA system.</li><li>Backup failed from Azure Backup perspective, but success from the perspective of HANA - the log backup/catalog destination might have been updated from Backint-to-file system, or the Backint executable might have been changed.</li></ul> | -| **Recommended action** | To resolve this issue, Azure Backup triggers an autoheal Full backup. While this auto-heal backup is in progress, all log backups are triggered by HANA fail with **OperationCancelledBecauseConflictingAutohealOperationRunningUserError**. Once the autoheal Full backup is complete, logs and all other backups start working as expected.<br>If you don't see an autoheal full backup triggered or any successful backup (Full/Differential/ Incremental) in 24 hours, contact Microsoft support.</br> | +| **Possible causes** | HANA LSN Log chain break can be triggered for various reasons, including:<ul><li>Azure Storage call failure to commit backup.</li><li>The Tenant DB is offline.</li><li>Extension upgrade terminated an in-progress Backup job.</li><li>Unable to connect to Azure Storage during backup.</li><li>SAP HANA has rolled back a transaction in the backup process.</li><li>A backup is complete, but catalog isn't yet updated with success in HANA system.</li><li>Backup failed from Azure Backup perspective, but success from the perspective of HANA - the log backup/catalog destination might have been updated from Backint-to-file system, or the Backint executable might have been changed.</li></ul> | +| **Recommended action** | To resolve this issue, Azure Backup triggers an auto-heal Full backup. While this auto-heal backup is in progress, all log backups triggered by HANA fail with **OperationCancelledBecauseConflictingAutohealOperationRunningUserError**. 
Once the auto-heal Full backup is complete, logs and all other backups start working as expected.<br>If you don't see an auto-heal full backup triggered or any successful backup (Full/Differential/ Incremental) in 24 hours, contact Microsoft support.</br> | ### UserErrorSDCtoMDCUpgradeDetected See the [prerequisites](tutorial-backup-sap-hana-db.md#prerequisites) and [What |**Error message** | `The source and target systems for restore are incompatible.` | ||| |**Possible causes** | The restore flow fails with this error when the source and target HANA databases, and systems are incompatible. |-|Recommended action | Ensure that your restore scenario isn't in the following list of possible incompatible restores:<br> **Case 1:** SYSTEMDB can't be renamed during restore.<br>**Case 2:** Source - SDC and target - MDC: The source database can't be restored as SYSTEMDB or tenant DB on the target. <br> **Case 3:** Source - MDC and target - SDC: The source database (SYSTEMDB or tenant DB) can't be restored to the target.<br>To learn more, see the note **1642148** in the [SAP support launchpad](https://launchpad.support.sap.com). | +|Recommended action | Ensure that your restore scenario isn't in the following list of possible incompatible restores:<br> **Case 1:** SYSTEMDB can't be renamed during restore.<br>**Case 2:** Source - SDC and target - MDC: The source database can't be restored as SYSTEMDB or tenant DB on the target. <br> **Case 3:** Source - MDC and target - SDC: The source database (SYSTEMDB or tenant DB) can't be restored to the target.<br>To learn more, see the note **1642148** in the [SAP support launchpad](https://launchpad.support.sap.com). | ### UserErrorHANAPODoesNotExist -**Error message** | `Database configured for backup does not exist.` +**Error message** | `Database configured for backup doesn't exist.` | -- **Possible causes** | If you delete a database that is configured for backup, all scheduled and on-demand backups on this database will fail. **Recommended action** | Verify if the database is deleted. Re-create the database or [stop protection](sap-hana-db-manage.md#stop-protection-for-an-sap-hana-database) (with or without retain data) for the database. See the [prerequisites](tutorial-backup-sap-hana-db.md#prerequisites) and [What **Possible causes** | - Disk corruption issue. <br> - Memory allocation issues. <br> - Too many databases in use. <br> - Topology update issue. **Recommended action** | Work with the SAP HANA team to fix this issue. However, if the issue persists, you can contact Microsoft support for further assistance. +### UserErrorRestoreTargetDirectoriesAbsent ++| **Error Message** | `PreRestoreDataParamsPrep: Target directory` doesn't exist. | +| | | +| **Possible Causes** | Restore as files fails because the *directory* selected for restore doesn't exist on the target server or isn't accessible. | +| **Recommended action** | Verify that the directory you selected is available on the target server, and ensure that you selected the correct target server at the time of restore. | + ## Restore checks ### Single Container Database (SDC) restore |
backup | Backup Azure Sap Hana Database | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sap-hana-database.md | Title: Back up an SAP HANA database to Azure with Azure Backup description: In this article, learn how to back up an SAP HANA database to Azure virtual machines with the Azure Backup service. Previously updated : 04/26/2024 Last updated : 09/30/2024 You can similarly create NSG outbound security rules for Azure Storage and Micro If you're using Azure Firewall, create an application rule by using the *AzureBackup* [Azure Firewall FQDN tag](../firewall/fqdn-tags.md). This allows all outbound access to Azure Backup. +>[!Note] +>Azure Backup currently doesn't support the *TLS inspection enabled* **Application Rule** on Azure Firewall. + #### Allow access to service IP ranges If you choose to allow access service IPs, refer to the IP ranges in the JSON file available [here](https://www.microsoft.com/download/confirmation.aspx?id=56519). You'll need to allow access to IPs corresponding to Azure Backup, Azure Storage, and Microsoft Entra ID. |
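For the Azure Firewall application rule with the *AzureBackup* FQDN tag referenced in the SAP HANA backup change above, a hedged sketch using classic firewall rules; the names, priority, and source address are placeholders, and the syntax differs if you manage rules through Firewall Policy.

```azurecli-interactive
# Placeholder names; allows outbound HTTPS from the SAP HANA VM subnet to Azure Backup endpoints via the FQDN tag.
az network firewall application-rule create \
  --firewall-name <firewall-name> \
  --resource-group <resource-group> \
  --collection-name AllowAzureBackup \
  --name AzureBackupRule \
  --priority 200 \
  --action Allow \
  --source-addresses <hana-subnet-prefix> \
  --protocols Https=443 \
  --fqdn-tags AzureBackup
```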
backup | Backup Managed Disks Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-managed-disks-policy.md | + + Title: Audit and enforce backup for Managed Disks using Azure Policy +description: 'An article describing how to use Azure Policy to audit and enforce backup for all Disks created in a given scope' + Last updated : 08/26/2024++++++# Audit and enforce backup for Managed Disks using Azure Policy ++One of the key responsibilities of a Backup or Compliance Admin in an organization is to ensure that all business-critical machines are backed up with the appropriate retention. ++Today, Azure Backup provides various built-in policies (using [Azure Policy](../governance/policy/overview.md)) to help you automatically ensure that your Azure Managed Disks are configured for backup. Depending on how your backup teams and resources are organized, you can use any one of the below policies: ++## Policy 1 - Azure Backup should be enabled for Managed Disks ++Use an [audit-only](../governance/policy/concepts/effects.md#audit) policy to identify disks that don't have backup enabled. However, this policy doesn't automatically configure backups for these disks. It is useful when you're only looking to evaluate the overall compliance of the disks but not looking to take action immediately. ++## Policy 2 - Configure backup for Azure Disks (Managed Disks) with a given tag to an existing backup vault in the same region ++A central backup team of an organization can use this policy to configure backup to an existing central Backup vault in the same subscription and location as the Managed Disks being governed. You can choose to **include** Disks that contain a certain tag, in the scope of this policy. ++## Policy 3 - Configure backup for Azure Disks (Managed Disks) without a given tag to an existing backup vault in the same region ++This policy works the same as Policy 2 above, with the only difference being that you can use this policy to **exclude** Disks that contain a certain tag, from the scope of this policy. ++## Supported Scenarios ++Before you audit and enforce backups for Managed Disks, see the following scenarios supported: ++* The built-in policy is currently supported only for Azure Managed Disks. Ensure that the Backup Vault and backup policy specified during assignment is a Disk backup policy. ++* The Policies 2 and 3 can be assigned to a single location and subscription at a time. To enable backup for Disks across locations and subscriptions, multiple instances of the policy assignment need to be created, one for each combination of location and subscription. ++* For the Policies 1, 2 and 3, management group scope is currently unsupported. ++* For the Policies 2 and 3, the specified vault and the disks configured for backup can be under different resource groups. +++## Using the built-in policies ++The below steps describe the end-to-end process of assigning Policy 2: **Configure backup on Managed Disks with a given tag to an existing backup vault in the same location to a given scope**. Similar instructions are applicable for the other policies. Once assigned, any new Managed Disk created in the scope is automatically configured for backup. ++To assign Policy 2, follow these steps: ++1. Sign in to the Azure portal and navigate to the **Policy** Dashboard. ++2. Select **Definitions** in the left menu to get a list of all built-in policies across Azure Resources. ++3. 
Filter the list for **Category=Backup** and select the policy named *Configure backup on Managed Disks with a given tag to an existing backup vault in the same location to a given scope*. + ++4. Select the name of the policy. You're then redirected to the detailed definition for this policy. +++5. Select the **Assign** button at the top of the pane. This redirects you to the **Assign Policy** pane. ++6. Under **Basics**, select the three dots next to the **Scope** field. It opens a context pane on the right where you can select the subscription that the policy is applied to. You can also optionally select a resource group, so that the policy is applied only for Disks in a particular resource group. +++7. In the **Parameters** tab, choose a location from the drop-down, and then select the vault, the backup policy to which the Disks in the scope must be associated, and the resource group where these disk snapshots are stored. You can also choose to specify a tag name and an array of tag values. A Disk that contains any of the specified values for the given tag is included in the scope of the policy assignment. +++8. Ensure that **Effect** is set to deployIfNotExists. ++9. Navigate to **Review+create** and select **Create**. ++> [!NOTE] +> +> - Use [remediation](../governance/policy/how-to/remediate-resources.md) to enable backup for existing Managed Disks. +> - It's recommended that this policy not be assigned to more than 200 Disks at a time. If the policy is assigned to more than 200 Disks, it can result in the backup being triggered a few hours later than that specified by the schedule. ++## Next step ++[Learn more about Azure Policy](../governance/policy/overview.md) |
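The policy assignment described in the Managed Disks entry above can also be scripted. The following Azure CLI sketch is a hedged example: the policy definition ID, scope, and parameter names (`vaultLocation`, `backupPolicyId`) are assumptions for illustration and differ per built-in definition; a *DeployIfNotExists* assignment also needs a managed identity and a location, as shown.

```azurecli
# Assign a DeployIfNotExists disk backup policy at subscription scope (illustrative IDs and parameter names).
az policy assignment create \
  --name "enable-disk-backup" \
  --scope "/subscriptions/00000000-0000-0000-0000-000000000000" \
  --policy "<built-in-policy-definition-id>" \
  --mi-system-assigned \
  --location westeurope \
  --params '{ "vaultLocation": { "value": "westeurope" }, "backupPolicyId": { "value": "<disk-backup-policy-resource-id>" } }'
```

After assignment, use a remediation task to bring existing disks into compliance, as noted in the article.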
backup | Backup Sql Server Database Azure Vms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-sql-server-database-azure-vms.md | Title: Back up multiple SQL Server VMs from the vault description: In this article, learn how to back up SQL Server databases on Azure virtual machines with Azure Backup from the Recovery Services vault Previously updated : 04/17/2024 Last updated : 09/30/2024 SQL Server databases are critical workloads that require a low recovery-point ob This article shows how to back up a SQL Server database that's running on an Azure VM to an Azure Backup Recovery Services vault. -In this article, you'll learn how to: --> [!div class="checklist"] -> -> * Create and configure a vault. -> * Discover databases and set up backups. -> * Set up auto-protection for databases. - >[!Note] >See the [SQL backup support matrix](sql-support-matrix.md) to know more about the supported configurations and scenarios. Before you back up a SQL Server database, check the following criteria: 1. Identify or create a [Recovery Services vault](backup-sql-server-database-azure-vms.md#create-a-recovery-services-vault) in the same region and subscription as the VM hosting the SQL Server instance. 1. Verify that the VM has [network connectivity](backup-sql-server-database-azure-vms.md#establish-network-connectivity). 1. Make sure that the [Azure Virtual Machine Agent](/azure/virtual-machines/extensions/agent-windows) is installed on the VM.-1. Make sure that .NET 4.5.2 version or above is installed on the VM. +1. Make sure that .NET 4.6.2 version or above is installed on the VM. ++ >[!Caution] + >The support for backups of SQL VMs running .NET Framework 4.6.1 or below will soon be deprecated because these versions are [officially out of support](/lifecycle/products/microsoft-net-framework). We recommend that you upgrade the .NET Framework to version 4.6.2 or above to ensure that there are no backup failures. + 1. Make sure that the SQL Server databases follow the [database naming guidelines for Azure Backup](#database-naming-guidelines-for-azure-backup). 1. Ensure that the combined length of the SQL Server VM name and the resource group name doesn't exceed 84 characters for Azure Resource Manager VMs (or 77 characters for classic VMs). This limitation is because some characters are reserved by the service. 1. Check that you don't have any other backup solutions enabled for the database. Disable all other SQL Server backups before you back up the database. You can similarly create NSG outbound security rules for Azure Storage and Micro If you're using Azure Firewall, create an application rule by using the *AzureBackup* [Azure Firewall FQDN tag](../firewall/fqdn-tags.md). This allows all outbound access to Azure Backup. +>[!Note] +>Azure Backup currently doesn't support the *TLS inspection enabled* **Application Rule** on Azure Firewall. + #### Allow access to service IP ranges If you choose to allow access service IPs, refer to the IP ranges in the JSON file available [here](https://www.microsoft.com/download/confirmation.aspx?id=56519). You'll need to allow access to IPs corresponding to Azure Backup, Azure Storage, and Microsoft Entra ID. |
backup | Disk Backup Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/disk-backup-troubleshoot.md | Error Message: The subscription isn't registered to use namespace Microsoft.Comp Recommended Action: The required resource provider is not registered for your subscription. Register both resource provider namespaces (_Microsoft.Compute_ and _Microsoft.Storage_) using the steps in [Solution 3](../azure-resource-manager/templates/error-register-resource-provider.md#solution-3azure-portal). ++++### Error code: LinkedAuthorizationFailed ++Error Message: To perform a restore operation, user needs to have a **read** permission over the backed up Managed Disk. ++Recommended Action: Assign the Reader role on the source Disk, and then retry the restore operation. + ## Next steps [Azure Disk Backup support matrix](disk-backup-support-matrix.md) |
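Both remediation steps from the troubleshooting entry above can be performed from the Azure CLI. This is a minimal sketch; the subscription, resource group, disk name, and assignee are illustrative placeholders.

```azurecli
# Register the resource providers required for Azure Disk Backup.
az provider register --namespace Microsoft.Compute
az provider register --namespace Microsoft.Storage

# Grant the Reader role on the backed-up source disk before retrying the restore (illustrative values).
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/disks/<disk-name>"
```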
backup | Quick Kubernetes Backup Terraform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-kubernetes-backup-terraform.md | To implement the Terraform code for AKS backup flow, run the following scripts: #Assign Role to Extension Identity over Storage Account resource "azurerm_role_assignment" "extensionrole" { scope = azurerm_storage_account.backupsa.id- role_definition_name = "Storage Blob Data Contributor" + role_definition_name = "Storage Account Contributor" principal_id = azurerm_kubernetes_cluster_extension.dataprotection.aks_assigned_identity[0].principal_id depends_on = [azurerm_kubernetes_cluster_extension.dataprotection] } |
backup | Restore Azure Database Postgresql Flex | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-azure-database-postgresql-flex.md | Follow these steps: - The fifth file **_tablespaces** has the tablespaces backed up using pg_dumpall -1. Post restoration completion to the target storage account, you can use pg_restore utility to restore an Azure Database for PostgreSQL flexible server database from the target. Use the following command to connect to an existing postgresql flexible server and an existing database +1. After the restore to the target storage account completes, you can use the pg_restore utility to restore the database and other files to a PostgreSQL flexible server. Use the following command to connect to an existing PostgreSQL flexible server and an existing database: - `pg_restore -h <hostname> -U <username> -d <db name> -Fd -j <NUM> -C <dump directory>` + `az storage blob download --container-name <container-name> --name <blob-name> --account-name <storage-account-name> --account-key <storage-account-key> --file - | pg_restore -h <postgres-server-url> -p <port> -U <username> -d <database-name> -v -` + * `--account-name`: Name of the Target Storage Account. + * `--container-name`: Name of the blob container. + * `--name`: Name of the blob. + * `--account-key`: Storage Account Key. * `-Fd`: The directory format. * `-j`: The number of jobs. * `-C`: Begin the output with a command to create the database itself and then reconnect to it. - Here's an example of how this syntax might appear: -- `pg_restore -h <hostname> -U <username> -j <Num of parallel jobs> -Fd -C -d <databasename> sampledb_dir_format` - If you have more than one database to restore, re-run the earlier command for each database. Also, by using multiple concurrent jobs **-j**, you can reduce the time it takes to restore a large database on a multi-vCore target server. The number of jobs can be equal to or less than the number of vCPUs that are allocated for the target server.++1. To restore the other three files (roles, schema, and tablespaces), use the psql utility to restore them to a PostgreSQL flexible server. ++ `az storage blob download --container-name <container-name> --name <blob-name> --account-name <storage-account-name> --account-key <storage-account-key> --file - + | psql -h <hostname> -U <username> -d <db name> -f <dump directory> -v -` ++ Re-run the above command for each file. ## Next steps |
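To make the restore flow in the PostgreSQL entry above more concrete, the sketch below strings the two commands together with sample values. The storage account, container, blob, server, and user names are illustrative placeholders, and the authentication method and job count should be adjusted for your environment.

```bash
# Stream the database dump blob from the target storage account straight into pg_restore (illustrative values).
az storage blob download --account-name mytargetstorage --container-name restoredfiles --name sampledb_database --account-key "<storage-account-key>" --file - | pg_restore -h myflexserver.postgres.database.azure.com -p 5432 -U pgadmin -d sampledb -v -

# Restore the roles, schema, and tablespaces files the same way with psql, once per file.
az storage blob download --account-name mytargetstorage --container-name restoredfiles --name sampledb_roles --account-key "<storage-account-key>" --file - | psql -h myflexserver.postgres.database.azure.com -p 5432 -U pgadmin -d sampledb
```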
container-apps | Dapr Components | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-components.md | scopes: #### Referencing Dapr secret store components -Once you [create a Dapr secret store using one of the previous approaches](#creating-a-dapr-secret-store-component), you can reference that secret store from other Dapr components in the same environment. In the following example, the `secretStoreComponent` field is populated with the name of the secret store specified in the previous examples, where the `sb-root-connectionstring` is stored. +Once you [create a Dapr secret store using one of the previous approaches](#creating-a-dapr-secret-store-component), you can reference that secret store from other Dapr components in the same environment. The following example demonstrates using Microsoft Entra ID authentication. ```yaml componentType: pubsub.azure.servicebus.queue version: v1-secretStoreComponent: "my-secret-store" +secretStoreComponent: "[your_secret_store_name]" metadata:- - name: connectionString - secretRef: sb-root-connectionstring + - name: namespaceName + # Required when using Azure Authentication. + # Must be a fully-qualified domain name + value: "[your_servicebus_namespace.servicebus.windows.net]" + - name: azureTenantId + value: "[your_tenant_id]" + - name: azureClientId + value: "[your_client_id]" + - name: azureClientSecret + secretRef: azClientSecret scopes: - publisher-app - subscriber-app componentType: pubsub.azure.servicebus.queue version: v1 secretStoreComponent: "my-secret-store" metadata:- - name: connectionString - secretRef: sb-root-connectionstring + - name: namespaceName + # Required when using Azure Authentication. + # Must be a fully-qualified domain name + value: "[your_servicebus_namespace.servicebus.windows.net]" + - name: azureTenantId + value: "[your_tenant_id]" + - name: azureClientId + value: "[your_client_id]" + - name: azureClientSecret + secretRef: azClientSecret scopes: - publisher-app - subscriber-app resource daprComponent 'daprComponents@2022-03-01' = { secretStoreComponent: 'my-secret-store' metadata: [ {- name: 'connectionString' - secretRef: 'sb-root-connectionstring' + name: 'namespaceName' + // Required when using Azure Authentication. + // Must be a fully-qualified domain name + value: '[your_servicebus_namespace.servicebus.windows.net]' + } + { + name: 'azureTenantId' + value: '[your_tenant_id]' + } + { + name: 'azureClientId' + value: '[your_client_id]' + } + { + name: 'azureClientSecret' + secretRef: 'azClientSecret' } ] scopes: [ This resource defines a Dapr component called `dapr-pubsub` via ARM. "secretStoreComponent": "my-secret-store", "metadata": [ {- "name": "connectionString", - "secretRef": "sb-root-connectionstring" + "name": "namespaceName", + "value": "[your_servicebus_namespace.servicebus.windows.net]" + }, + { + "name": "azureTenantId", + "value": "[your_tenant_id]" + }, + { + "name": "azureClientId", + "value": "[your_client_id]" + }, + { + "name": "azureClientSecret", + "secretRef": "azClientSecret" } ], "scopes": ["publisher-app", "subscriber-app"] |
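After saving a component manifest like the pub/sub example above to a YAML file, you can apply it to the Container Apps environment. The following Azure CLI sketch assumes a file named `dapr-pubsub.yaml` and illustrative environment and resource group names.

```azurecli
# Apply the Dapr component manifest to a Container Apps environment (illustrative names).
az containerapp env dapr-component set \
  --resource-group my-resource-group \
  --name my-environment \
  --dapr-component-name dapr-pubsub \
  --yaml ./dapr-pubsub.yaml
```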
cost-management-billing | Tutorial Export Acm Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-export-acm-data.md | In the export list, select the storage account name. On the storage account page In Storage Explorer, navigate to the container that you want to open and select the folder corresponding to the current month. A list of CSV files is shown. Select one and then select **Open**. - The file opens with the program or application set to open CSV file extensions. Here's an example in Excel. :::image type="content" border="true" source="./media/tutorial-export-acm-data/example-export-data.png" alt-text="Screenshot showing exported CSV data in Excel."::: |
cost-management-billing | Tutorial Improved Exports | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-improved-exports.md | The improved exports experience currently has the following limitations. - Shared access signature (SAS) key-based cross tenant export is only supported for Microsoft partners at the billing account scope. It isn't supported for other partner scenarios like any other scope, EA indirect contract, or Azure Lighthouse. +- EA price sheet: Reservation prices are only available for the current month price sheet and cannot be retrieved for historical exports. To retain historical reservation prices, set up recurring exports. + ## FAQ #### Why is file partitioning enabled in exports? The file partitioning is a feature that is activated by default to facilitate th In the new export experience, missing attributes such as subscription IDs will be set to null or empty, rather than using a default empty GUID (00000000-0000-0000-0000-000000000000), to more accurately indicate the absence of a value. This affects charges pertaining to unused reservations, unused savings plans, and rounding adjustments. +#### How much historical data can I retrieve using Exports? ++You can retrieve up to 13 months of historical data through the portal UI for all datasets, except for RI recommendations, which are limited to the current recommendation snapshot. To access data older than 13 months, you can use the REST API. ++- Cost and usage (Actual), Cost and usage (Amortized), Cost and usage (FOCUS): Up to 7 years of data. ++- Reservation transactions: Up to 7 years of data across all channels. ++- Reservation recommendations, Reservation details: Up to 13 months of data. ++- All available prices: ++ - MCA/MPA: Up to 13 months. + + - EA: Up to 25 months (starting from December 2022). + ## Next steps - Learn more about exports at [Tutorial: Create and manage exported data](tutorial-export-acm-data.md). |
cost-management-billing | Ea Azure Marketplace | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-azure-marketplace.md | This article explains how EA customers and partners can view marketplace charges ## Azure Marketplace for EA customers -Azure Marketplace charges are visible on the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/BillingAccounts). Azure Marketplace purchases and consumption are billed outside of Azure Prepayment on a quarterly or monthly cadence and in arrears. See [Manage Azure Marketplace on Azure portal](direct-ea-administration.md#enable-azure-marketplace-purchases). +Azure Marketplace charges are visible on the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/BillingAccounts). Most Azure Marketplace purchases and consumption are billed outside of Azure Prepayment on a quarterly or monthly cadence and in arrears. Some third-party reseller services available on Azure Marketplace now consume your Enterprise Agreement (EA) Azure Prepayment balance. Previously these services were billed outside of EA Azure Prepayment and were invoiced separately. EA Azure Prepayment for these services in Azure Marketplace helps simplify customer purchase and payment management. For a complete list of services that now consume Azure Prepayment, see the [March 06, 2018 update on the Azure website](https://azure.microsoft.com/updates/azure-marketplace-third-party-reseller-services-now-use-azure-monetary-commitment/). ++For more information, see [Manage Azure Marketplace on Azure portal](direct-ea-administration.md#enable-azure-marketplace-purchases). Customers should contact their Licensing Solutions Provider (LSP) for information on Azure Marketplace charges. New monthly or annually recurring Azure Marketplace purchases are billed in full Existing, monthly recurring charges continue to renew on the first of each calendar month. Annual charges renew on the anniversary of the purchase date. -Some third-party reseller services available on Azure Marketplace now consume your Enterprise Agreement (EA) Azure Prepayment balance. Previously these services were billed outside of EA Azure Prepayment and were invoiced separately. EA Azure Prepayment for these services in Azure Marketplace helps simplify customer purchase and payment management. For a complete list of services that now consume Azure Prepayment, see the [March 06, 2018 update on the Azure website](https://azure.microsoft.com/updates/azure-marketplace-third-party-reseller-services-now-use-azure-monetary-commitment/). ### Enabling Azure Marketplace purchases |
cost-management-billing | Programmatically Create Subscription Enterprise Agreement | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-enterprise-agreement.md | If you have multiple user roles in addition to the Account Owner role, then you Call the PUT API to create a subscription creation request/alias. ```json-PUT https://management.azure.com/providers/Microsoft.Subscription/aliases/sampleAlias?api-version=2021-10-01 +PUT https://management.azure.com/providers/Microsoft.Subscription/aliases/{{guid}}?api-version=2021-10-01 ``` In the request body, provide as the `billingScope` the `id` from one of your `enrollmentAccounts`. You can do a GET on the same URL to get the status of the request. ### Request ```json-GET https://management.azure.com/providers/Microsoft.Subscription/aliases/sampleAlias?api-version=2021-10-01 +GET https://management.azure.com/providers/Microsoft.Subscription/aliases/{{guid}}?api-version=2021-10-01 ``` ### Response |
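For reference, the PUT request in the Enterprise Agreement entry above takes a body whose `billingScope` points at an enrollment account. The following body is a sketch with placeholder identifiers; the display name, workload, and account IDs are illustrative only.

```json
{
  "properties": {
    "displayName": "Dev Team subscription",
    "workload": "Production",
    "billingScope": "/providers/Microsoft.Billing/billingAccounts/1234567/enrollmentAccounts/7654321"
  }
}
```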
cost-management-billing | Programmatically Create Subscription Microsoft Customer Agreement | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-microsoft-customer-agreement.md | The following example creates a subscription named *Dev Team subscription* for t Replace the placeholder value `sampleAlias` as needed. For more information on these REST calls, see [Create](/rest/api/subscription/alias/create) and [Get](/rest/api/subscription/alias/get). ```json-PUT https://management.azure.com/providers/Microsoft.Subscription/aliases/sampleAlias?api-version=2021-10-01 +PUT https://management.azure.com/providers/Microsoft.Subscription/aliases/{{guid}}?api-version=2021-10-01 ``` ### Request body You can do a GET on the same URL to get the status of the request. ### Request ```json-GET https://management.azure.com/providers/Microsoft.Subscription/aliases/sampleAlias?api-version=2021-10-01 +GET https://management.azure.com/providers/Microsoft.Subscription/aliases/{{guid}}?api-version=2021-10-01 ``` ### Response |
cost-management-billing | Programmatically Create Subscription Microsoft Partner Agreement | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-microsoft-partner-agreement.md | The following example creates a subscription named *Dev Team subscription* for ### [REST](#tab/rest) ```json-PUT https://management.azure.com/providers/Microsoft.Subscription/aliases/sampleAlias?api-version=2021-10-01 +PUT https://management.azure.com/providers/Microsoft.Subscription/aliases/{{guid}}?api-version=2021-10-01 ``` ### Request body You can do a GET on the same URL to get the status of the request. ### Request ```json-GET https://management.azure.com/providers/Microsoft.Subscription/aliases/sampleAlias?api-version=2021-10-01 +GET https://management.azure.com/providers/Microsoft.Subscription/aliases/{{guid}}?api-version=2021-10-01 ``` ### Response |
cost-management-billing | Limited Time Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/limited-time-linux.md | + + Title: Save on select Linux VMs for a limited time +description: Learn how to save up to 56% on select Linux VMs with a limited-time offer by purchasing a one-year Azure Reserved Virtual Machine Instance. +++++ Last updated : 09/17/2024+++#customer intent: I want to learn how to save money with reservations and buy one. +++# Save on select Linux VMs for a limited time ++Save up to 15 percent in addition to the existing one-year [Azure Reserved Virtual Machine (VM) Instances](/azure/virtual-machines/prepay-reserved-vm-instances?toc=%2Fazure%2Fcost-management-billing%2Freservations%2Ftoc.json&source=azlto3) discount for select Linux VMs for a limited period. Customers could potentially see total savings of to up to 56% compared to running an Azure VM on a pay-as-you-go basis. This offer is available between October 1, 2024, and March 31, 2025. ++## Purchase the offer ++To take advantage of this promotional offer, [purchase](https://portal.azure.com/#view/Microsoft_Azure_Reservations/CreateBlade) a one-year Azure Reserved Virtual Machine Instance for a qualified VM SKU and region. ++## Buy a reservation ++1. Sign in to the [Azure portal](https://portal.azure.com/). +2. Select **All services** > **Reservations**. +3. Select **Add** and then select a qualified product listed in the [Terms and conditions of the offer](#terms-and-conditions-of-the-offer) section. +4. Select the [scope](prepare-buy-reservation.md#reservation-scoping-options), and then a billing subscription that you want to use for the reservation. You can change the reservation scope after purchase. +5. Set the **Region** to one supported by the offer. For more information, see the [Qualifying regions](#qualifying-regions) section. +6. Select a reservation term and billing frequency. +7. Select **Add to cart**. +8. In the cart, you can change the quantity. After you review your cart and you're ready to purchase, select **Next: Review + buy**. +9. Select **Buy now**. ++You can view the reservation in the [Reservations](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade/Reservations) page in the Azure portal. ++## Terms and conditions of the offer ++These terms and conditions (hereinafter referred to as "terms") govern the promotional offer ("offer") provided by Microsoft to customers purchasing a one-year Azure Reserved VM Instance in a qualified region between October 1, 2024 (12 AM Pacific Standard Time) and March 31, 2025 (11:59 PM Pacific Standard Time), for any of the following VM series: ++- Dadsv6¹ +- Daldsv6¹ +- Dalsv6¹ +- Dasv6¹ +- DCadsv5 +- DCasv5 +- Ddsv5² +- Ddv5 +- Dlsv5 +- Dpdsv6 +- Dpldsv6 +- Dplsv6 +- Dpsv6 +- Dsv5² +- Dv5² +- Eadsv6¹ +- Easv6¹ +- ECadsv5 +- ECasv5 +- Edsv5 +- Edv5 +- Epdsv6 +- Epsv6 +- Esv5 +- Ev5 +- Lasv3 +- Lsv3 ++*¹ VM isn't available for the offer in US East.* ++*² VM isn't available for the offer in US Government Iowa and US Government Virginia.* ++Instance size flexibility is available for these VMs. For more information about instance size flexibility, see [Virtual machine size flexibility](/azure/virtual-machines/reserved-vm-instance-size-flexibility?source=azlto7). 
++### Qualifying regions ++The offer applies to all regions where the Azure VM Reserved Instances are generally available (GA) for the VM series listed in the section above, except EU West (Amsterdam), EU North (Dublin), AP Southeast (Singapore), and QA Central (Qatar). ++The offer provides an additional discount of up to 15 percent in addition to the existing one-year [Azure Reserved Virtual Machine (VM) Instances](/azure/virtual-machines/prepay-reserved-vm-instances?toc=%2Fazure%2Fcost-management-billing%2Freservations%2Ftoc.json&source=azlto3) rates. The savings don’t include operating system costs. Actual savings might vary based on instance type or usage. Customers could potentially see total savings of to up to 56% compared to running an Azure VM on a pay-as-you-go basis. ++### Eligibility ++The offer is available based on the following criteria: ++- To buy a reservation, you must have the owner role or reservation purchaser role on an Azure subscription that's of one of the following types: + - Enterprise (MS-AZR-0017P or MS-AZR-0148P) + - Pay-as-you-go (MS-AZR-0003P or MS-AZR-0023P) + - Microsoft Customer Agreement +- Cloud solution providers can use the Azure portal or [Partner Center](/partner-center/azure-reservations?source=azlto1) to purchase Azure Reservations. You can't purchase a reservation if you have a custom role that mimics owner role or reservation purchaser role on an Azure subscription. You must use the built-in owner or built-in reservation purchaser role. +- For more information about who can purchase a reservation, see [Buy an Azure reservation](prepare-buy-reservation.md?source=azlto2). ++### Offer details ++Upon successful purchase and payment for the one-year Azure Reserved VM Instance in a qualified region for one or more of the qualifying VMs during the specified period, the discount applies automatically to the number of running virtual machines. You don't need to assign a reservation to a virtual machine to benefit from the discounts. A reserved instance purchase covers only the compute part of your VM usage. For more information about how to pay and save with an Azure Reserved VM Instance, see [Prepay for Azure virtual machines to save money](/azure/virtual-machines/prepay-reserved-vm-instances?toc=%2Fazure%2Fcost-management-billing%2Freservations%2Ftoc.json&source=azlto3). ++- Other taxes might apply. +- Payment is processed using the payment method on file for the selected subscriptions. +- Estimated savings are calculated based on your current on-demand rate. ++### Charge back promotional offer costs ++Enterprise Agreement and Microsoft Customer Agreement billing readers can view amortized cost data for reservations. They can use the cost data to charge back the monetary value for a subscription, resource group, resource, or a tag to their partners. In amortized data, the effective price is the prorated hourly reservation cost. The cost is the total cost of reservation usage by the resource on that day. Users with an individual subscription can get the amortized cost data from their usage file. For more information, see [Charge back Azure Reservation costs](charge-back-usage.md). ++### Discount limitations ++- The discount automatically applies to the number of running virtual machines in qualified regions that match the reservation scope and attributes. +- The discount applies for one year after the date of purchase. 
+- The discount only applies to resources associated with subscriptions purchased through Enterprise, Cloud Solution Provider (CSP), Microsoft Customer Agreement, and individual plans with pay-as-you-go rates. +- A reservation discount is "use-it-or-lose-it." So, if you don't have matching resources for any hour, then you lose a reservation quantity for that hour. You can't carry forward unused reserved hours. +- When you deallocate, delete, or scale the number of VMs, the reservation discount automatically applies to another matching resource in the specified scope. If no matching resources are found in the specified scope, then the reserved hours are lost. +- Stopped VMs are billed and continue to use reservation hours. To use your available reservation hours with other workloads, deallocate or delete VM resources or scale-in other VMs. +- For more information about how Azure Reserved VM Instance discounts are applied, see [How the Azure reservation discount is applied to virtual machines](../manage/understand-vm-reservation-charges.md). ++### Exchanges and refunds ++The offer follows standard exchange and refund policies for reservations. For more information about exchanges and refunds, see [Self-service exchanges and refunds for Azure Reservations](exchange-and-refund-azure-reservations.md?source=azlto6). ++### Renewals ++- The renewal price **will not be** the limited time offer price, but the price available at time of renewal. +- For more information about renewals, see [Automatically renew Azure reservations](reservation-renew.md?source=azlto5). ++### Termination or modification ++Microsoft reserves the right to modify, suspend, or terminate the offer at any time without prior notice. ++If you purchased the one-year Azure Reserved Virtual Machine Instances for the qualified VMs in qualified regions between October 1, 2024, and March 31, 2025, you’ll continue to get the discount throughout the one-year term, even if the offer is canceled. ++By participating in the offer, customers agree to be bound by these terms and the decisions of Microsoft. Microsoft reserves the right to disqualify any customer who violates these terms or engages in any fraudulent or harmful activities related to the offer. ++## Related content ++- [How the Azure reservation discount is applied to virtual machines](../manage/understand-vm-reservation-charges.md) +- [Purchase Azure Reserved VM instances in the Azure portal](https://portal.azure.com/#view/Microsoft_Azure_Reservations/CreateBlade) +- [Linux on Azure tech community blog](https://aka.ms/linuxpromoffer_techcommunityblog) |
data-factory | Connector Deprecation Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-deprecation-plan.md | This article describes future deprecations for some connectors of Azure Data Fac | [Salesforce (legacy)](connector-salesforce-legacy.md) | [Link](connector-salesforce.md#upgrade-the-salesforce-linked-service) | End of support announced and new version available | October 11, 2024 | January 10, 2025| | [Salesforce Service Cloud (legacy)](connector-salesforce-service-cloud-legacy.md) | [Link](connector-salesforce-service-cloud.md#upgrade-the-salesforce-service-cloud-linked-service) | End of support announced and new version available | October 11, 2024 |January 10, 2025 | | [PostgreSQL (legacy)](connector-postgresql-legacy.md) | [Link](connector-postgresql.md#upgrade-the-postgresql-linked-service)| End of support announced and new version available |October 31, 2024 | January 10, 2025 | +| [ServiceNow (legacy)](connector-servicenow-legacy.md) | [Link](connector-servicenow.md#upgrade-your-servicenow-linked-service) | End of support announced and new version available | December 31, 2024 | March 1, 2025 | | [Snowflake (legacy)](connector-snowflake-legacy.md) | [Link](connector-snowflake.md#upgrade-the-snowflake-linked-service) | End of support announced and new version available | October 31, 2024 | January 10, 2025 | | [Azure Database for MariaDB](connector-azure-database-for-mariadb.md) |/ | End of support announced |December 31, 2024 | December 31, 2024 | | [Concur (Preview)](connector-concur.md) |/ | End of support announced | December 31, 2024 | December 31, 2024 | The following connector was deprecated. If legacy connectors are deprecated with no updated connectors available, you can still use the [ODBC Connector](connector-odbc.md) which enables you to continue using these data sources with their native ODBC drivers, or other alternatives. This can enable you to continue using them indefinitely into the future. +## How to find your impacted objects in your data factory ++Here are the steps to find the objects that still rely on deprecated connectors, or on connectors that have an announced end-of-support date. We recommend that you upgrade those objects to the new connector version before the end-of-support date. ++1. Open your Azure Data Factory. +2. Go to the **Manage** > **Linked services** page. +3. Linked services that are still on the legacy version show an alert next to them. +4. Select the number under the 'Related' column to see the related objects that use this particular linked service. +5. To learn more about the upgrade guidance and the comparison between the legacy and new versions, see the connector upgrade section within each connector page. ++ ## Related content - [Azure Data Factory connectors overview](connector-overview.md) |
data-factory | Connector Salesforce | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-salesforce.md | Here are steps that help you upgrade your linked service and related queries: 1. Configure the connected apps in Salesforce portal by referring to [Prerequisites](connector-salesforce.md#prerequisites). -1. Create a new Salesforce linked service and configure it by referring to [Linked service properties](connector-salesforce.md#linked-service-properties). You also need to manually update existing datasets that rely on the old linked service, editing each dataset to use the new linked service instead. Pipeline activities that reference the updated datasets automatically use the updated linked service reference. +1. Create a new Salesforce linked service and configure it by referring to [Linked service properties](connector-salesforce.md#linked-service-properties). You also need to manually update existing datasets that rely on the old linked service, editing each dataset to use the new linked service instead. 1. If you use a SQL query in the copy activity source or the lookup activity that refers to the legacy linked service, you need to convert it to a SOQL query. Learn more about SOQL queries from [Salesforce as a source type](connector-salesforce.md#salesforce-as-a-source-type) and [Salesforce Object Query Language (SOQL)](https://developer.salesforce.com/docs/atlas.en-us.soql_sosl.meta/soql_sosl/sforce_api_calls_soql.htm). |
data-factory | Connector Servicenow Legacy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-servicenow-legacy.md | Last updated 10/20/2023 This article outlines how to use the Copy Activity in Azure Data Factory and Synapse Analytics pipelines to copy data from ServiceNow. It builds on the [copy activity overview](copy-activity-overview.md) article that presents a general overview of copy activity. >[!IMPORTANT]->The service has released a new ServiceNow connector which provides better native ServiceNow support, refer to [ServiceNow connector](connector-servicenow.md) article on details. +>The new ServiceNow connector provides improved native ServiceNow support. If you are using the legacy ServiceNow connector in your solution, please [upgrade your ServiceNow connector](connector-servicenow.md#upgrade-your-servicenow-linked-service) before **December 31, 2024**. Refer to this [section](connector-servicenow.md#differences-between-servicenow-and-servicenow-legacy) for details on the difference between the legacy and latest version. ## Supported capabilities |
data-factory | Connector Servicenow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-servicenow.md | Last updated 08/23/2024 This article outlines how to use the Copy Activity in Azure Data Factory and Synapse Analytics pipelines to copy data from ServiceNow. It builds on the [copy activity overview](copy-activity-overview.md) article that presents a general overview of copy activity. >[!IMPORTANT]->The new ServiceNow connector provides improved native ServiceNow support. If you are using the legacy ServiceNow connector in your solution, supported as-is for backward compatibility only, refer to [ServiceNow connector (legacy)](connector-servicenow-legacy.md) article. -+>The new ServiceNow connector provides improved native ServiceNow support. If you are using the legacy ServiceNow connector in your solution, please [upgrade your ServiceNow connector](#upgrade-your-servicenow-linked-service) before **December 31, 2024**. Refer to this [section](#differences-between-servicenow-and-servicenow-legacy) for details on the difference between the legacy and latest version. ## Supported capabilities To copy data from ServiceNow, set the source type in the copy activity to **Serv To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md). -## Upgrade your ServiceNow linked service +## <a name="upgrade-your-servicenow-linked-service"></a> Upgrade the ServiceNow connector -Here are the steps that help you to upgrade your ServiceNow linked service: +Here are the steps that help you to upgrade your ServiceNow connector: 1. Create a new linked service by referring to [Linked service properties](#linked-service-properties). 2. **Query** in source is upgraded to **Query builder**, which has the same usage as the condition builder in ServiceNow. Learn how to configure it referring to [ServiceNow as source](#servicenow-as-source). |
data-manager-for-agri | How To Set Up Sensor As Customer And Partner | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-set-up-sensor-as-customer-and-partner.md | Follow the below steps to register as a sensor partner so that you can start pus 1. Sensor integration should be enabled before it can be initiated. This step provisions the required internal Azure resources for sensor integration for your Data Manager for Agriculture instance. You can do this by running the following <a href="https://github.com/projectkudu/ARMClient" target="_blank">armclient</a> command. ```armclient -armclient patch /subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.AgFoodPlatform/farmBeats/<datamanager-instance-name>?api-version=2023-04-01-preview "{properties:{sensorIntegration:{enabled:'true'}}}" +armclient patch /subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.AgFoodPlatform/farmBeats/<datamanager-instance-name>?api-version=2023-06-01-preview "{properties:{sensorIntegration:{enabled:'true'}}}" ``` Sample output: 2. The above job might take a few minutes to complete. To check the status of the job, run the following armclient command: ```armclient -armclient get /subscriptions/<subscription-id>/resourceGroups/<resource-group-name> /providers/Microsoft.AgFoodPlatform/farmBeats/<datamanager-instance-name>?api-version=2023-04-01-preview +armclient get /subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.AgFoodPlatform/farmBeats/<datamanager-instance-name>?api-version=2023-06-01-preview ``` 3. To verify whether it's completed, look at the highlighted attribute. It should be updated from "Creating" in the earlier step to "Succeeded". The attribute that indicates that sensor integration is enabled is **provisioningState** inside the **sensorIntegration** object. You're now all set to start pushing sensor data for all sensors using the respec ## Next steps -* Test our APIs [here](/rest/api/data-manager-for-agri). +* Test our APIs [here](/rest/api/data-manager-for-agri). |
databox-online | Azure Stack Edge Mini R Safety | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-mini-r-safety.md | -Read all the safety information in this article before you use your Azure Stack Edge Mini R device, a composition of one battery pack, one AC/DC plugged power supply, one module power adapter, and one server module. Failure to follow instructions could result in fire, electric shock, injuries, or damage to your properties. Read all safety information below before using Azure Stack Edge Mini R. +Read all the safety information in this article before you use your Azure Stack Edge Mini R device. The device includes a one battery pack, one AC/DC plugged power supply, one module power adapter, and one server module. Failure to follow instructions could result in fire, electric shock, injuries, or damage to your properties. Read all of the following safety information before using Azure Stack Edge Mini R. ## Safety icon conventions The following signal words for hazard alerting signs are: | Icon | Description | |: |: |-| ![Hazard Symbol](./media/azure-stack-edge-mini-r-safety/icon-safety-warning.png)| **DANGER:** Indicates a hazardous situation that, if not avoided, will result in death or serious injury. <br> **WARNING:** Indicates a hazardous situation that, if not avoided, could result in death or serious injury. <br> **CAUTION:** Indicates a hazardous situation that, if not avoided, could result in minor or moderate injury.| +| ![Hazard Symbol](./media/azure-stack-edge-mini-r-safety/icon-safety-warning.png)| **DANGER:** Indicates a hazardous situation that, if not avoided, may result in death or serious injury. <br> **WARNING:** Indicates a hazardous situation that, if not avoided, could result in death or serious injury. <br> **CAUTION:** Indicates a hazardous situation that, if not avoided, could result in minor or moderate injury.| | The following hazard icons are to be observed when setting up and running your Azure Stack Edge Mini R device: The following hazard icons are to be observed when setting up and running your A | ![Hazard Symbol](./media/azure-stack-edge-mini-r-safety/icon-safety-warning.png) | Hazard Symbol | | ![Electrical Shock Icon](./media/azure-stack-edge-mini-r-safety/icon-safety-electric-shock.png) | Electric Shock Hazard | | ![Indoor Use Only](./media/azure-stack-edge-mini-r-safety/icon-safety-indoor-use-only.png) | Indoor Use Only |-| ![No User Serviceable Parts Icon](./media/azure-stack-edge-mini-r-safety/icon-safety-do-not-access.png) | No User Serviceable Parts. Do not access unless properly trained. | +| ![No User Serviceable Parts Icon](./media/azure-stack-edge-mini-r-safety/icon-safety-do-not-access.png) | No User Serviceable Parts. Don't access unless properly trained. | | ## Handling precautions and site selection The Azure Stack Edge Mini R device has the following handling precautions and si ![Electrical Shock Icon](./media/azure-stack-edge-mini-r-safety/icon-safety-electric-shock.png) ![No User Serviceable Parts Icon](./media/azure-stack-edge-mini-r-safety/icon-safety-do-not-access.png) **CAUTION:** -* Inspect the *as-received* device for damages. If the device enclosure is damaged, [contact Microsoft Support](azure-stack-edge-placeholder.md) to obtain a replacement. Do not attempt to operate the device. -* If you suspect the device is malfunctioning, [contact Microsoft Support](azure-stack-edge-placeholder.md) to obtain a replacement. Do not attempt to service the device. 
-* The device contains no user-serviceable parts. Hazardous voltage, current, and energy levels are present inside. Do not open. Return the device to Microsoft for servicing. +* Inspect the *as-received* device for damages. If the device enclosure is damaged, [contact Microsoft Support](azure-stack-edge-placeholder.md) to obtain a replacement. Don't attempt to operate the device. +* If you suspect the device is malfunctioning, [contact Microsoft Support](azure-stack-edge-placeholder.md) to obtain a replacement. Don't attempt to service the device. +* The device contains no user-serviceable parts. Hazardous voltage, current, and energy levels are present inside. Don't open. Return the device to Microsoft for servicing. ![Warning Icon 3](./media/azure-stack-edge-mini-r-safety/icon-safety-warning.png) **CAUTION:** -It is recommended to operate the system: +It's recommended to operate the system: * Away from sources of heat including direct sunlight and radiators. * In locations not exposed to moisture or rain. * Located in a space that minimizes vibration and physical shock. The system is designed for shock and vibration according to MIL-STD-810G. * Isolated from strong electromagnetic fields produced by electrical devices.-* Do not allow any liquid or any foreign object to enter the System. Do not place beverages or any other liquid containers on or near the system. +* Don't allow any liquid or any foreign object to enter the System. Don't place beverages or any other liquid containers on or near the system. ![Warning Icon 4](./media/azure-stack-edge-mini-r-safety/icon-safety-warning.png) ![No User Serviceable Parts Icon](./media/azure-stack-edge-mini-r-safety/icon-safety-do-not-access.png) **CAUTION:** -* This equipment contains a lithium battery. Do not attempt to service the battery pack. Batteries in this equipment are not user serviceable. Risk of Explosion if battery is replaced by an incorrect type. +* This equipment contains a lithium battery. Don't attempt to service the battery pack. Batteries in this equipment aren't user serviceable. Risk of Explosion if battery is replaced by an incorrect type. ![Warning Icon 5](./media/azure-stack-edge-mini-r-safety/icon-safety-warning.png) **CAUTION:** -Only charge the battery pack when it is a part of the Azure Stack Edge Mini R device, do not charge as a separate device. +Only charge the battery pack when it's a part of the Azure Stack Edge Mini R device, don't charge as a separate device. ![Warning Icon 6](./media/azure-stack-edge-mini-r-safety/icon-safety-warning.png) **CAUTION:** Only charge the battery pack when it is a part of the Azure Stack Edge Mini R de ![Warning Icon 7](./media/azure-stack-edge-mini-r-safety/icon-safety-warning.png) **CAUTION:** -* Do not burn or short circuit the battery pack. It must be recycled or disposed of properly. +* Don't burn or short circuit the battery pack. It must be recycled or disposed of properly. ![Warning Icon 8](./media/azure-stack-edge-mini-r-safety/icon-safety-warning.png) **CAUTION:** -* In lieu of using the provided AC/DC power supply, this system also has the option to use a field provided Type 2590 Battery. In this case, the end user shall verify that it meets all applicable safety, transportation, environmental, and any other national/regional and local regulations. +* In lieu of using the provided AC/DC power supply, this system also can use a field provided Type 2590 Battery. 
In this case, the end user shall verify that it meets all applicable safety, transportation, environmental, and any other national/regional and local regulations. * When operating the system with Type 2590 Battery, operate the battery within the conditions of use specified by the battery manufacturer. ![Warning Icon 9](./media/azure-stack-edge-mini-r-safety/icon-safety-warning.png) **CAUTION:** The Azure Stack Edge Mini R device has the following electrical precautions: When used with the power supply adaptor: -* Provide a safe electrical earth connection to the power supply cord. The alternating current (AC) cord has a three-wire grounding plug (a plug that has a grounding pin). This plug fits only a grounded AC outlet. Do not defeat the purpose of the grounding pin. +* Provide a safe electrical earth connection to the power supply cord. The alternating current (AC) cord has a three-wire grounding plug (a plug that has a grounding pin). This plug fits only a grounded AC outlet. Don't defeat the purpose of the grounding pin. * Given that the plug on the power supply cord is the main disconnect device, ensure that the socket outlets are located near the device and are easily accessible. * Unplug the power cord(s) (by pulling the plug, not the cord) and disconnect all cables if any of the following conditions exist: * The power cord or plug becomes frayed or otherwise damaged. * The device is exposed to rain, excess moisture, or other liquids.- * The device has been dropped and the device casing has been damaged. + * The device is dropped and the device casing is damaged. * You suspect the device needs service or repair. * Permanently unplug the unit before you move it or if you think it has become damaged in any way. When used with the power supply adaptor: * Voltage: 100 - 240 Volts AC * Current: 1.7 Amperes-* Frequency: 50 to 60 Hz +* Frequency: 50 Hz to 60 Hz ![Warning Icon 11](./media/azure-stack-edge-mini-r-safety/icon-safety-warning.png) ![Electrical Shock Icon](./media/azure-stack-edge-mini-r-safety/icon-safety-electric-shock.png) **WARNING:** -* Do not attempt to modify or use AC power cord(s) other than the ones provided with the equipment. +* Don't attempt to modify or use AC power cord(s) other than the ones provided with the equipment. ![Warning Icon 12](./media/azure-stack-edge-mini-r-safety/icon-safety-warning.png) ![Electrical Shock Icon](./media/azure-stack-edge-mini-r-safety/icon-safety-electric-shock.png) When used with the power supply adaptor: ## Regulatory information -The following contains regulatory information for Azure Stack Edge Mini R device, regulatory model number: TMA01. +Regulatory information for Azure Stack Edge Mini R device, regulatory model number: TMA01. The Azure Stack Edge Mini R device is designed for use with NRTL Listed (UL, CSA, ETL, etc.), and IEC/EN 60950-1 or IEC/EN 62368-1 compliant (CE marked) Information Technology equipment. The Netgear A6150 WiFi USB Adapter complies with ANSI/IEEE C95.1-1999 and was te Netgear A6150 Specific Absorption Rate (SAR): 1.18 W/kg averaged over 1 g of tissue -The Netgear A6150 WiFi USB Adapter is to be used with approved antennas only. This device and its antenna(s) must not be co-located or operating in conjunction with any other antenna or transmitter except in accordance with FCC multitransmitter product procedures. For products available in the USA market, only channel 1~11 can be operated. Selection of other channels is not possible. 
+The Netgear A6150 WiFi USB Adapter is to be used with approved antennas only. This device and its antenna(s) must not be co-located or operating with any other antenna or transmitter except in accordance with FCC multitransmitter product procedures. For products available in the USA market, only channel 1~11 can be operated. Selection of other channels is not possible. Operation in the band 5150ΓÇô5250 MHz is only for indoor use to reduce the potential for harmful interference to co-channel mobile satellite systems. Operation in the band 5150ΓÇô5250 MHz is only for indoor use to reduce the poten Users are advised that high-power radars are allocated as primary users (priority users) of the bands 5250ΓÇô5350 MHz and 5650ΓÇô5850 MHz, and these radars could cause interference and/or damage to LE-LAN devices. -This equipment generates, uses, and can radiate radio frequency energy and, if not installed and used in accordance with the instructions, may cause harmful interference to radio communications. However, there is no guarantee that interference will not occur in a particular installation. +This equipment generates, uses, and can radiate radio frequency energy and, if not installed and used in accordance with the instructions, may cause harmful interference to radio communications. However, there's no guarantee that interference won't occur in a particular installation. If this equipment does cause harmful interference to radio or television reception, which can be determined by turning the equipment off and on, the user is encouraged to try to correct the interference by one or more of the following measures: For more information about interference issues, go to the FCC website at [fcc.go Additional information about radiofrequency safety can be found on the FCC website at [https://www.fcc.gov/general/radio-frequency-safety-0](https://www.fcc.gov/general/radio-frequency-safety-0) and the Industry Canada website at [http://www.ic.gc.ca/eic/site/smt-gst.nsf/eng/sf01904.html](http://www.ic.gc.ca/eic/site/smt-gst.nsf/eng/sf01904.html). -This product has demonstrated EMC compliance under conditions that included the use of compliant peripheral devices and shielded cables between system components. It is important that you use compliant peripheral devices and shielded cables between system components to reduce the possibility of causing interference to radios, television sets, and other electronic devices. +This product demonstrates EMC compliance under conditions that included the use of compliant peripheral devices and shielded cables between system components. It's important that you use compliant peripheral devices and shielded cables between system components to reduce the possibility of causing interference to radios, television sets, and other electronic devices. This device complies with part 15 of the FCC Rules and Industry Canada license-exempt RSS standard(s). Operation is subject to the following two conditions: (1) this device may not cause harmful interference, and (2) this device must accept any interference received, including interference that may cause undesired operation of the device. Disposal of waste batteries and electrical and electronic equipment: ![Warning Icon 14](./media/azure-stack-edge-mini-r-safety/icon-ewaste-disposal.png) -This symbol on the product or its batteries or its packaging means that this product and any batteries it contains must not be disposed of with your household waste. 
Instead, it is your responsibility to hand this over to an applicable collection point for the recycling of batteries and electrical and electronic equipment. This separate collection and recycling will help to conserve natural resources and prevent potential negative consequences for human health and the environment due to the possible presence of hazardous substances in batteries and electrical and electronic equipment, which could be caused by inappropriate disposal. For more information about where to drop off your batteries and electrical and electronic waste, please contact your local city/municipality office, your household waste disposal service, or the shop where you purchased this product. Contact erecycle@microsoft.com for additional information on WEEE. +This symbol on the product or its batteries or its packaging means that this product and any batteries it contains must not be disposed of with your household waste. Instead, it is your responsibility to hand it over to an applicable collection point for the recycling of batteries and electrical and electronic equipment. This separate collection and recycling helps to conserve natural resources and prevent potential negative consequences for human health and the environment due to the possible presence of hazardous substances in batteries and electrical and electronic equipment, which could be caused by inappropriate disposal. For more information about where to drop off your batteries and electrical and electronic waste, contact your local city/municipality office, your household waste disposal service, or the shop where you purchased this product. Contact erecycle@microsoft.com for additional information on WEEE. + This product contains coin cell battery(ies). -The Netgear A6150 WiFi USB Adapter provided with this equipment is intended to be operated close to the human body and is tested for body-worn Specific Absorption Rate (SAR) compliance (see below values). When carrying the product or using it while worn on your body, maintain a distance of 10mm from the body to ensure compliance with RF exposure requirements. +This product might contain Lithium-Ion and/or Lithium Metal battery(ies). The batteries contained in this product comply with regulatory requirements of EU REGULATION (EU) 2023/1542 as applicable. ++The Netgear A6150 WiFi USB Adapter provided with this equipment is intended to be operated close to the human body and is tested for body-worn Specific Absorption Rate (SAR) compliance (see below values). When carrying the product or using it while worn on your body, maintain a distance of 10 mm from the body to ensure compliance with RF exposure requirements. -**Netgear A6150 Specific Absorption Rate (SAR):** 0.54 W/kg averaged over 10g of tissue +**Netgear A6150 Specific Absorption Rate (SAR):** 0.54 W/kg averaged over 10 g of tissue ΓÇâ This device may operate in all member states of the EU. Observe national/regional and local regulations where the device is used. This device is restricted to indoor use only when operating in the 5150-5350 MHz frequency range in the following countries/regions: |
devtest-labs | Devtest Labs Roadmap | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-labs-roadmap.md | + + Title: Roadmap for Azure DevTest Labs +description: Learn about features in development and coming soon for Azure DevTest Labs. ++++ Last updated : 09/30/2024++#customer intent: As a customer, I want to understand upcoming features and enhancements in Azure DevTest Labs so that I can plan and optimize development and deployment strategies. +++# Azure DevTest Labs roadmap ++This roadmap presents a set of planned feature releases that underscores Microsoft's commitment to empowering enterprise development and testing teams. DevTest Labs enables development teams to self-serve customized, ready-to-test machines in the cloud, all while maintaining adherence to organizational governance policies. This feature list offers a glimpse into our plans for the next six months, highlighting key features we're developing. It isn't exhaustive but shows major investments. Some features might release as previews and evolve based on your feedback before becoming generally available. We always listen to your input, so the timing, design, and delivery of some features might change. ++Key DevTest Labs deliverables can be grouped under the following themes: ++- [Ready-to-test virtual machines](#ready-to-test-virtual-machines) +- [Enterprise management](#enterprise-management) +- [Performance and reliability](#performance-and-reliability) ++## Ready-to-test virtual machines ++The fundamental goal of DevTest Labs is to provide a seamless, intuitive experience that empowers development and testing teams. DevTest Labs enables you to swiftly access ready-to-test machines for deploying and validating the latest versions or features of any version of your application. We're dedicated to advancing this vision by continuously investing in innovative technologies to enhance machine customization and optimize testing efficiency. ++- **Hibernate VM:** Hibernating a machine preserves its exact state upon resumption, allowing developers to effortlessly troubleshoot issues identified during testing. +- **Lab-level Secrets:** Platform engineers and dev managers will be able to set up centralized secrets accessible to the entire team, streamlining virtual machine creation and management. +- **Generation 2 VMs:** Generation 2 VMs will improve virtual machine boot and installation times and enable secure boot by default. ++## Enterprise management ++DevTest Labs delivers a streamlined and optimized experience for end-users while also offering robust enterprise capabilities to enforce organizational governance, covering security, cost management, and monitoring. We're committed to enhancing these capabilities with upcoming features that will fortify the machines and help reduce costs. ++- **Managed Identity and GitHub Apps:** To ensure secure connections to source control repositories and storage accounts, we'll introduce the ability to: + - Attach Azure Repos repositories via managed identities + - Attach GitHub repositories through GitHub Apps + - Access lab storage accounts via managed identities +- **Trusted Launch:** Trusted launch will enhance protection for lab machines against advanced and persistent attacks by enabling features like secure boot, vTPM, virtualization-based security, and integration with Microsoft Defender for Cloud. +- **Spot VMs:** Spot VMs will lower test machine costs for workloads that can handle interruptions. 
+- **Standard Load Balancer:** Standard Load Balancer provides significant improvements over basic such as high performance, ultra-low latency, superior resilient load-balancing, and increased security by default. ++## Performance and Reliability ++DevTest Labs aims to provide a seamless testing experience that is as responsive as a local machine, and we're consistently enhancing the reliability, speed, and performance of DevTest Labs through platform optimization. ++**API and Portal Reliability:** We're continuing to invest in our API performance and aiming to attain higher reliability. ++This roadmap outlines our current priorities, and we remain flexible to adapt based on customer feedback. We invite you to share your thoughts and suggest other capabilities you would like to see. Your insights help us refine our focus and deliver even greater value. +++## Related content ++- [What is DevTest Labs?](./devtest-lab-overview.md) |
event-hubs | Event Hubs Python Get Started Send | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-python-get-started-send.md | Be sure to record the connection string and container name for later use in the ## [Connection String](#tab/connection-string) -[Get the connection string to the storage account](../storage/common/storage-configure-connection-string.md) +Follow instructions from [Get the connection string to the storage account](../storage/common/storage-configure-connection-string.md) to get the connection string to the storage account, which you use in the script (`BLOB_STORAGE_CONNECTION_STRING`). You need the connection string and the name of the blob container you just created. + In this section, you create a Python script to receive events from your event hu In the code, use real values to replace the following placeholders: - * `BLOB_STORAGE_CONNECTION_STRING` - * `BLOB_CONTAINER_NAME` - * `EVENT_HUB_CONNECTION_STR` - * `EVENT_HUB_NAME` + * `BLOB_STORAGE_CONNECTION_STRING` - Connection string to the Blob Storage account that you noted earlier. + * `BLOB_CONTAINER_NAME` - Name of the blob container you created in the blob storage. + * `EVENT_HUB_CONNECTION_STR` - Connection string to the Event Hubs namespace you noted earlier. + * `EVENT_HUB_NAME` - Name of the event hub. ```python import asyncio |
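As a minimal sketch of how those placeholders fit into the receive script (assuming the `azure-eventhub` and `azure-eventhub-checkpointstoreblob-aio` packages; connection strings and names below are placeholders), the consumer client and blob checkpoint store might be wired together as follows:

```python
import asyncio

from azure.eventhub.aio import EventHubConsumerClient
from azure.eventhub.extensions.checkpointstoreblobaio import BlobCheckpointStore

BLOB_STORAGE_CONNECTION_STRING = "<storage-account-connection-string>"
BLOB_CONTAINER_NAME = "<blob-container-name>"
EVENT_HUB_CONNECTION_STR = "<event-hubs-namespace-connection-string>"
EVENT_HUB_NAME = "<event-hub-name>"


async def on_event(partition_context, event):
    # Print the event body, then checkpoint so this partition resumes from here next time.
    print(f"Received: {event.body_as_str()} from partition {partition_context.partition_id}")
    await partition_context.update_checkpoint(event)


async def main():
    # The checkpoint store persists partition ownership and checkpoints in the blob container.
    checkpoint_store = BlobCheckpointStore.from_connection_string(
        BLOB_STORAGE_CONNECTION_STRING, BLOB_CONTAINER_NAME
    )
    client = EventHubConsumerClient.from_connection_string(
        EVENT_HUB_CONNECTION_STR,
        consumer_group="$Default",
        eventhub_name=EVENT_HUB_NAME,
        checkpoint_store=checkpoint_store,
    )
    async with client:
        # "-1" starts from the beginning of each partition when no checkpoint exists yet.
        await client.receive(on_event=on_event, starting_position="-1")


if __name__ == "__main__":
    asyncio.run(main())
```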
expressroute | Expressroute Howto Linkvnet Arm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-linkvnet-arm.md | This article helps you link virtual networks (VNets) to Azure ExpressRoute circu [!INCLUDE [expressroute-cloudshell](../../includes/expressroute-cloudshell-powershell-about.md)] -## Connect a virtual network in the same subscription to a circuit -You can connect a virtual network gateway to an ExpressRoute circuit by using the following cmdlet. Make sure that the virtual network gateway is created and is ready for linking before you run the cmdlet: +## Connect a virtual network ++# [**Maximum Resiliency**](#tab/maximum) ++**Maximum resiliency** (Recommended): provides the highest level of resiliency to your virtual network. It provides two redundant connections from the virtual network gateway to two different ExpressRoute circuits in different ExpressRoute locations. ++### Clone the script ++To create maximum resiliency connections, clone the setup script from GitHub. ++```azurepowershell-interactive +# Clone the setup script from GitHub. +git clone https://github.com/Azure-Samples/azure-docs-powershell-samples/ +# Change to the directory where the script is located. +CD azure-docs-powershell-samples/expressroute/ +``` ++Run the **New-AzHighAvailabilityVirtualNetworkGatewayConnections.ps1** script to create high availability connections. The following example shows how to create two new connections to two ExpressRoute circuits. ++```azurepowershell-interactive +$SubscriptionId = Get-AzureSubscription -SubscriptionName "<SubscriptionName>" +$circuit1 = Get-AzExpressRouteCircuit -Name "MyCircuit1" -ResourceGroupName "MyRG" +$circuit2 = Get-AzExpressRouteCircuit -Name "MyCircuit2" -ResourceGroupName "MyRG" +$gw = Get-AzVirtualNetworkGateway -Name "ExpressRouteGw" -ResourceGroupName "MyRG" ++highAvailabilitySetup/New-AzHighAvailabilityVirtualNetworkGatewayConnections.ps1 -SubscriptionId $SubscriptionId -ResourceGroupName "MyRG" -Location "West EU" -Name1 "ERConnection1" -Name2 "ERConnection2" -Peer1 $circuit1.Peerings[0] -Peer2 $circuit2.Peerings[0] -RoutingWeight1 10 -RoutingWeight2 10 -VirtualNetworkGateway1 $gw +``` ++If you want to create a new connection and use an existing one, you can use the following example. This example creates a new connection to a second ExpressRoute circuit and uses an existing connection to the first ExpressRoute circuit. ++```azurepowershell-interactive +$SubscriptionId = Get-AzureSubscription -SubscriptionName "<SubscriptionName>" +$circuit1 = Get-AzExpressRouteCircuit -Name "MyCircuit1" -ResourceGroupName "MyRG" +$gw = Get-AzVirtualNetworkGateway -Name "ExpressRouteGw" -ResourceGroupName "MyRG" +$connection = Get-AzVirtualNetworkGatewayConnection -Name "ERConnection1" -ResourceGroupName "MyRG" ++highAvailabilitySetup/New-AzHighAvailabilityVirtualNetworkGatewayConnections.ps1 -SubscriptionId $SubscriptionId -ResourceGroupName "MyRG" -Location "West EU" -Name2 "ERConnection2" -Peer2 $circuit1.Peerings[0] -RoutingWeight2 10 -VirtualNetworkGateway1 $gw -ExistingVirtualNetworkGatewayConnection $connection +``` ++# [**Standard/High Resiliency**](#tab/standard) ++**Standard resiliency**: provides a single redundant connection from the virtual network gateway to a single ExpressRoute circuit. +You can connect a virtual network gateway to an ExpressRoute circuit using the **New-AzVirtualNetworkGatewayConnection** cmdlet. 
Make sure that the virtual network gateway is created and is ready for linking before you run the cmdlet. ```azurepowershell-interactive $circuit = Get-AzExpressRouteCircuit -Name "MyCircuit" -ResourceGroupName "MyRG" $gw = Get-AzVirtualNetworkGateway -Name "ExpressRouteGw" -ResourceGroupName "MyR $connection = New-AzVirtualNetworkGatewayConnection -Name "ERConnection" -ResourceGroupName "MyRG" -Location "East US" -VirtualNetworkGateway1 $gw -PeerId $circuit.Id -ConnectionType ExpressRoute ``` +> [!NOTE] +> For **High Resiliency**, you must connect to a Metro circuit instead of a Standard circuit. +++ ## Connect a virtual network in a different subscription to a circuit You can share an ExpressRoute circuit across multiple subscriptions. The following figure shows a simple schematic of how sharing works for ExpressRoute circuits across multiple subscriptions. |
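As a sketch of the typical authorization handshake for the cross-subscription scenario described above (assuming the Az.Network cmdlets; the names, resource groups, circuit resource ID, and authorization key are illustrative), the circuit owner creates an authorization and the circuit user redeems it when creating the connection:

```azurepowershell-interactive
# Circuit owner subscription: add an authorization to the circuit and read back its key.
$circuit = Get-AzExpressRouteCircuit -Name "MyCircuit" -ResourceGroupName "CircuitOwnerRG"
Add-AzExpressRouteCircuitAuthorization -ExpressRouteCircuit $circuit -Name "MyAuthorization1"
Set-AzExpressRouteCircuit -ExpressRouteCircuit $circuit
$auth = Get-AzExpressRouteCircuitAuthorization -ExpressRouteCircuit $circuit -Name "MyAuthorization1"
# Share $circuit.Id and $auth.AuthorizationKey with the circuit user out of band.

# Circuit user subscription: redeem the shared values while creating the connection.
$gw = Get-AzVirtualNetworkGateway -Name "ExpressRouteGw" -ResourceGroupName "ConsumerRG"
New-AzVirtualNetworkGatewayConnection -Name "ERConnectionCrossSub" -ResourceGroupName "ConsumerRG" `
    -Location "East US" -VirtualNetworkGateway1 $gw -PeerId "<circuit-resource-id>" `
    -ConnectionType ExpressRoute -AuthorizationKey "<authorization-key>"
```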
expressroute | Metro | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/metro.md | Title: About ExpressRoute Metro (preview) + Title: About ExpressRoute Metro description: This article provides an overview of ExpressRoute Metro and how it works. -# About ExpressRoute Metro (preview) --> [!IMPORTANT] -> ExpresRoute Metro is currently in PREVIEW. -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. +# About ExpressRoute Metro ExpressRoute facilitates the creation of private connections between your on-premises networks and Azure workloads in a designated peering locations. These locations are colocation facilities housing Microsoft Enterprise Edge (MSEE) devices, serving as the gateway to Microsoft's network. The standard ExpressRoute configuration is set up with a pair of links to enhanc ## ExpressRoute Metro -ExpressRoute Metro (preview) is a high-resiliency configuration designed to provide multi-site redundancy. This configuration allows you to benefit from a dual-homed setup that facilitates diverse connections to two distinct ExpressRoute peering locations within a city. The high resiliency configuration benefits from the redundancy across the two peering locations to offer higher availability and resilience for your connectivity from your on-premises to resources in Azure. +ExpressRoute Metro is a high-resiliency configuration designed to provide multi-site redundancy. This configuration allows you to benefit from a dual-homed setup that facilitates diverse connections to two distinct ExpressRoute peering locations within a city. The high resiliency configuration benefits from the redundancy across the two peering locations to offer higher availability and resilience for your connectivity from your on-premises to resources in Azure. Key features of ExpressRoute Metro include: |
firewall | Firewall Preview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-preview.md | You can now configure a DNAT rule on Azure Firewall Policy with the private IP a This capability helps with connectivity between overlapped IP networks, which is a common scenario for enterprises when onboarding new partners to their network or merging with new acquisitions. This is also relevant for hybrid scenarios, connecting on-premises datacenters to Azure, where DNAT bridges the gap, enabling communication between private resources over nonroutable IP addresses. -For more information, see [Filter inbound Internet or intranet traffic with Azure Firewall DNAT using the Azure portal](tutorial-firewall-dnat.md). +For more information, see [Private IP DNAT Support and Scenarios with Azure Firewall](https://techcommunity.microsoft.com/t5/azure-network-security-blog/private-ip-dnat-support-and-scenarios-with-azure-firewall/ba-p/4230073). ## Next steps |
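As an illustrative sketch of the private IP DNAT scenario described above (assuming the Az.Network firewall policy cmdlets; the names, addresses, and the `$firewallPolicy` object are placeholders, not a definitive configuration), a DNAT rule that uses the firewall's private IP address as the destination might be built like this:

```azurepowershell-interactive
# $firewallPolicy is assumed to hold an existing policy object, for example from Get-AzFirewallPolicy.
# -DestinationAddress is the firewall's private IP in this scenario; -TranslatedAddress is the
# private, possibly otherwise nonroutable, target behind the firewall.
$natRule = New-AzFirewallPolicyNatRule -Name "dnat-private-ip" -Protocol "TCP" `
    -SourceAddress "10.10.0.0/24" -DestinationAddress "10.0.1.4" -DestinationPort "443" `
    -TranslatedAddress "192.168.1.10" -TranslatedPort "443"

$natCollection = New-AzFirewallPolicyNatRuleCollection -Name "PrivateDnatCollection" `
    -Priority 100 -ActionType "Dnat" -Rule $natRule

New-AzFirewallPolicyRuleCollectionGroup -Name "PrivateDnatGroup" -Priority 200 `
    -RuleCollection $natCollection -FirewallPolicyObject $firewallPolicy
```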
governance | Evaluate Impact | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/evaluate-impact.md | Title: Evaluate the impact of a new Azure Policy definition description: Understand the process to follow when introducing a new policy definition into your Azure environment. Previously updated : 08/17/2021 Last updated : 09/30/2024 + # Evaluate the impact of a new Azure Policy definition -Azure Policy is a powerful tool for managing your Azure resources to meet business standards -compliance needs. When people, processes, or pipelines create or update resources, Azure Policy -reviews the request. When the policy definition effect is [Modify](./effects.md#modify), -[Append](./effects.md#deny), or [DeployIfNotExists](./effects.md#deployifnotexists), Policy alters -the request or adds to it. When the policy definition effect is [Audit](./effects.md#audit) or -[AuditIfNotExists](./effects.md#auditifnotexists), Policy causes an Activity log entry to be created -for new and updated resources. And when the policy definition effect is [Deny](./effects.md#deny) or [DenyAction](./effects.md#denyaction), Policy stops the creation or alteration of the request. +Azure Policy is a powerful tool for managing your Azure resources to meet business standards compliance needs. When people, processes, or pipelines create or update resources, Azure Policy reviews the request. When the policy definition effect is [modify](./effect-modify.md), [append](./effect-append.md), or [deployIfNotExists](./effect-deploy-if-not-exists.md), Policy alters the request or adds to it. When the policy definition effect is [audit](./effect-audit.md) or [auditIfNotExists](./effect-audit-if-not-exists.md), Policy causes an Activity log entry to be created for new and updated resources. And when the policy definition effect is [deny](./effect-deny.md) or [denyAction](./effect-deny-action.md), Policy stops the creation or alteration of the request. -These outcomes are exactly as desired when you know the policy is defined correctly. However, it's -important to validate a new policy works as intended before allowing it to change or block work. The -validation must ensure only the intended resources are determined to be non-compliant and no -compliant resources are incorrectly included (known as a _false positive_) in the results. +These outcomes are exactly as desired when you know the policy is defined correctly. However, it's important to validate a new policy works as intended before allowing it to change or block work. The validation must ensure only the intended resources are determined to be non-compliant and no compliant resources are incorrectly included (known as a _false positive_) in the results. The recommended approach to validating a new policy definition is by following these steps: -- Tightly define your policy-- Test your policy's effectiveness-- Audit new or updated resource requests-- Deploy your policy to resources-- Continuous monitoring+- Tightly define your policy. +- Test your policy's effectiveness. +- Audit new or updated resource requests. +- Deploy your policy to resources. +- Continuous monitoring. ## Tightly define your policy -It's important to understand how the business policy is implemented as a policy definition and the -relationship of Azure resources with other Azure services. 
This step is accomplished by -[identifying the requirements](../tutorials/create-custom-policy-definition.md#identify-requirements) -and -[determining the resource properties](../tutorials/create-custom-policy-definition.md#determine-resource-properties). -But it's also important to see beyond the narrow definition of your business policy. Does your -policy state for example "All Virtual Machines must..."? What about other Azure services that make -use of VMs, such as HDInsight or AKS? When defining a policy, we must consider how this policy -impacts resources that are used by other services. +It's important to understand how the business policy is implemented as a policy definition and the relationship of Azure resources with other Azure services. This step is accomplished by [identifying the requirements](../tutorials/create-custom-policy-definition.md#identify-requirements) and [determining the resource properties](../tutorials/create-custom-policy-definition.md#determine-resource-properties). But it's also important to see beyond the narrow definition of your business policy. For example, does your policy state that _All Virtual Machines must..._? What about other Azure services that make use of VMs, such as HDInsight or Azure Kubernetes Service (AKS)? When defining a policy, we must consider how this policy impacts resources that are used by other services. -For this reason, your policy definitions should be as tightly defined and focused on the resources -and the properties you need to evaluate for compliance as possible. +For this reason, your policy definitions should be as tightly defined and focused on the resources and the properties you need to evaluate for compliance as possible. ## Test your policy's effectiveness -Before looking to manage new or updated resources with your new policy definition, it's best to see -how it evaluates a limited subset of existing resources, such as a test resource group. The [Azure Policy VS Code extension](../how-to/extension-for-vscode.md#on-demand-evaluation-scan) allows for isolated testing of definitions against existing Azure resources using the on demand evaluation scan. -You may also assign the definition in a _Dev_ environment using the -[enforcement mode](./assignment-structure.md#enforcement-mode) _Disabled_ (DoNotEnforce) on your -policy assignment to prevent the [effect](./effects.md) from triggering or activity log entries from -being created. +Before looking to manage new or updated resources with your new policy definition, it's best to see how it evaluates a limited subset of existing resources, such as a test resource group. The [Azure Policy VS Code extension](../how-to/extension-for-vscode.md#on-demand-evaluation-scan) allows for isolated testing of definitions against existing Azure resources using the on demand evaluation scan. You might also assign the definition in a _Dev_ environment using the [enforcement mode](./assignment-structure.md#enforcement-mode) _Disabled_ (doNotEnforce) on your policy assignment to prevent the [effect](./effect-basics.md) from triggering or activity log entries from being created. -This step gives you a chance to evaluate the compliance results of the new policy on existing -resources without impacting work flow. Check that no compliant resources show as non-compliant -(_false positive_) and that all the resources you expect to be non-compliant are marked correctly. 
-After the initial subset of resources validates as expected, slowly expand the evaluation to more -existing resources and more scopes. +This step gives you a chance to evaluate the compliance results of the new policy on existing resources without impacting work flow. Check that no compliant resources show as non-compliant (_false positive_) and that all the resources you expect to be non-compliant are marked correctly. After the initial subset of resources validates as expected, slowly expand the evaluation to more existing resources and more scopes. -Evaluating existing resources in this way also provides an opportunity to remediate non-compliant -resources before full implementation of the new policy. This cleanup can be done manually or through -a [remediation task](../how-to/remediate-resources.md) if the policy definition effect is -_DeployIfNotExists_ or _Modify_. +Evaluating existing resources in this way also provides an opportunity to remediate non-compliant resources before full implementation of the new policy. This cleanup can be done manually or through a [remediation task](../how-to/remediate-resources.md) if the policy definition effect is `deployIfNotExists` or `modify`. -Policy definitions with a _DeployIfNotExist_ should leverage the [Azure Resource Manager template what if](../../../azure-resource-manager/templates/deploy-what-if.md) to validate and test the changes that happen when deploying the ARM template. +Policy definitions with a `deployIfNotExists` should use the [Azure Resource Manager template what if](../../../azure-resource-manager/templates/deploy-what-if.md) to validate and test the changes that happen when deploying the ARM template. ## Audit new or updated resources -Once you've validated your new policy definition is reporting correctly on existing resources, it's -time to look at the impact of the policy when resources get created or updated. If the policy -definition supports effect parameterization, use [Audit](./effects.md#audit) or [AuditIfNotExist](./effects.md#auditifnotexists). This configuration -allows you to monitor the creation and updating of resources to see whether the new policy -definition triggers an entry in Azure Activity log for a resource that is non-compliant without -impacting existing work or requests. +After you validate your new policy definition is reporting correctly on existing resources, it's time to look at the effect of the policy when resources are created or updated. If the policy definition supports effect parameterization, use [audit](./effect-audit.md) or [auditIfNotExist](./effect-audit-if-not-exists.md). This configuration allows you to monitor the creation and updating of resources to see whether the new policy definition triggers an entry in Azure Activity log for a resource that's non-compliant without affecting existing work or requests. -It's recommended to both update and create new resources that match your policy definition to see -that the _Audit_ or _AuditIfNotExist_ effect is correctly being triggered when expected. Be on the lookout for resource -requests that shouldn't be affected by the new policy definition that trigger the _Audit_ or _AuditIfNotExist_ effect. -These affected resources are another example of _false positives_ and must be fixed in the policy -definition before full implementation. +The recommendation is to update and create new resources that match your policy definition to see that the `audit` or `auditIfNotExists` effect is correctly being triggered when expected. 
Be on the lookout for resource requests that shouldn't be affected by the new policy definition that trigger the `audit` or `auditIfNotExists` effect. These affected resources are another example of _false positives_ and must be fixed in the policy definition before full implementation. -In the event the policy definition is changed at this stage of testing, it's recommended to begin -the validation process over with the auditing of existing resources. A change to the policy -definition for a _false positive_ on new or updated resources is likely to also have an impact on -existing resources. +In the event the policy definition is changed at this stage of testing, the recommendation is to begin the validation process over with the auditing of existing resources. A change to the policy definition for a _false positive_ on new or updated resources is likely to also have an effect on existing resources. ## Deploy your policy to resources -After completing validation of your new policy definition with both existing resources and new or -updated resource requests, you begin the process of implementing the policy. It's recommended to -create the policy assignment for the new policy definition to a subset of all resources first, such -as a resource group. You can further filter by resource type or location using the [`resourceSelectors`](./assignment-structure.md#resource-selectors) property within the policy assignment.After validating initial deployment, extend the scope of the policy to broader as a resource group. After validating initial deployment, expand the impact of the policy by adjusting the resourceSelector filters to target more locations or resource types, or by removing the assignment and replacing it with a new one at broader scopes like subscriptions and management groups. Continue this gradual rollout until it's assigned to the full scope of resources intended to be covered by your new policy definition. +After completing validation of your new policy definition with both existing resources and new or updated resource requests, you begin the process of implementing the policy. The recommendation is to create the policy assignment for the new policy definition to a subset of all resources first, such as a resource group. You can further filter by resource type or location using the [resourceSelectors](./assignment-structure.md#resource-selectors) property within the policy assignment. After validating initial deployment, expand the policy's effect by adjusting the `resourceSelector` filters to target more locations or resource types, or by removing the assignment and replacing it with a new one at broader scopes like subscriptions and management groups. Continue this gradual rollout until it's assigned to the full scope of resources intended to be covered by your new policy definition. -During rollout, if resources are located that should be exempt from your new policy definition, -address them in one of the following ways: +During rollout, if resources are located that should be exempt from your new policy definition, address them in one of the following ways: -- Update the policy definition to be more explicit to reduce unintended impact-- Change the scope of the policy assignment (by removing and creating a new assignment)-- Add the group of resources to the exclusion list for the policy assignment+- Update the policy definition to be more explicit to reduce unintended effects. 
+- Change the scope of the policy assignment (by removing and creating a new assignment). +- Add the group of resources to the exclusion list for the policy assignment. -Any changes to the scope (level or exclusions) should be fully validated and communicated with your -security and compliance organizations to ensure there are no gaps in coverage. +Any changes to the scope (level or exclusions) should be fully validated and communicated with your security and compliance organizations to ensure there are no gaps in coverage. ## Monitor your policy and compliance -Implementing and assigning your policy definition isn't the final step. Continuously monitor the -[compliance](../how-to/get-compliance-data.md) level of resources to your new policy definition and -setup appropriate -[Azure Monitor alerts and notifications](/azure/azure-monitor/alerts/alerts-overview) for -when non-compliant devices are identified. It's also recommended to evaluate the policy definition -and related assignments on a scheduled basis to validate the policy definition is meeting business -policy and compliance needs. Policies should be removed if no longer needed. Policies also need to update from time to time as the underlying Azure resources evolve and add new properties and -capabilities. +Implementing and assigning your policy definition isn't the final step. Continuously monitor the [compliance](../how-to/get-compliance-data.md) level of resources to your new policy definition and set up appropriate [Azure Monitor alerts and notifications](/azure/azure-monitor/alerts/alerts-overview) for when non-compliant devices are identified. The recommendation is to evaluate the policy definition and related assignments on a scheduled basis to validate the policy definition is meeting business policy and compliance needs. Policies should be removed if no longer needed. Policies also need to be updated from time to time as the underlying Azure resources evolve and add new properties and capabilities. ## Next steps -- Learn about the [policy definition structure](./definition-structure.md).+- Learn about the [policy definition structure](./definition-structure-basics.md). - Learn about the [policy assignment structure](./assignment-structure.md). - Understand how to [programmatically create policies](../how-to/programmatically-create.md). - Learn how to [get compliance data](../how-to/get-compliance-data.md). - Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).-- Review what a management group is with- [Organize your resources with Azure management groups](../../management-groups/overview.md). |
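The validation and rollout guidance above leans on two assignment properties, `enforcementMode` and `resourceSelectors`. As a minimal, hypothetical sketch of an assignment `properties` body that combines them (the definition ID, selector name, and locations are placeholders):

```json
{
  "properties": {
    "displayName": "Validate new policy without enforcement",
    "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/{definitionId}",
    "enforcementMode": "DoNotEnforce",
    "resourceSelectors": [
      {
        "name": "testLocations",
        "selectors": [
          {
            "kind": "resourceLocation",
            "in": [
              "eastus2",
              "westus2"
            ]
          }
        ]
      }
    ]
  }
}
```

With `enforcementMode` set to `DoNotEnforce`, compliance is still evaluated and reported, but the effect doesn't block or alter requests; widening the `resourceSelectors` values is one way to expand the rollout gradually.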
governance | Event Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/event-overview.md | Title: Reacting to Azure Policy state change events description: Use Azure Event Grid to subscribe to Azure Policy events, which allow applications to react to state changes without the need for complicated code. Previously updated : 07/12/2022 Last updated : 09/30/2024 + # Reacting to Azure Policy state change events -Azure Policy events enable applications to react to state changes. This integration is done without -the need for complicated code or expensive and inefficient polling services. Instead, events are -pushed through [Azure Event Grid](../../../event-grid/index.yml) to subscribers such as -[Azure Functions](../../../azure-functions/index.yml), -[Azure Logic Apps](../../../logic-apps/index.yml), or even to your own custom HTTP listener. -Critically, you only pay for what you use. +Azure Policy events enable applications to react to state changes. This integration is done without the need for complicated code or expensive and inefficient polling services. Instead, events are pushed through [Azure Event Grid](../../../event-grid/index.yml) to subscribers such as [Azure Functions](../../../azure-functions/index.yml), [Azure Logic Apps](../../../logic-apps/index.yml), or even to your own custom HTTP listener. Critically, you only pay for what you use. -Azure Policy events are sent to the Azure Event Grid, which provides reliable delivery services to -your applications through rich retry policies and dead-letter delivery. Event Grid takes -care of the proper routing, filtering, and multicasting of the events to destinations via Event Grid subscriptions. -To learn more, see [Event Grid message delivery and retry](../../../event-grid/delivery-and-retry.md). +Azure Policy events are sent to the Azure Event Grid, which provides reliable delivery services to your applications through rich retry policies and dead-letter delivery. Event Grid takes care of the proper routing, filtering, and multicasting of the events to destinations via Event Grid subscriptions. To learn more, see [Event Grid message delivery and retry](../../../event-grid/delivery-and-retry.md). > [!NOTE] > Azure Policy state change events are sent to Event Grid after an Event Grid has a few benefits for customers and services in the Azure ecosystem: - Custom event producer: Event Grid event producers and consumers don't need to be Azure or Microsoft services. External applications can receive an alert, show the creation of a remediation task or collect messages on who responds to the state change. See [Route policy state change events to Event Grid with Azure CLI](../tutorials/route-state-change-events.md) for a full tutorial. There are two primary entities when using Event Grid:-- Events: These events can be anything a user may want to react to that includes if a policy compliance state is created, changed, and deleted of a resource such as a VM or storage accounts.+- Events: These events can be anything a user might want to react to for an Azure resource. For example, if a policy compliance state is created, changed, and deleted for a resource such as a virtual machine or storage accounts. - Event Grid Subscriptions: These event subscriptions are user configured entities that direct the proper set of events from a publisher to a subscriber. Event subscriptions can filter events based on the resource path the event originated from and the type of event. 
Additionally, Event Subscriptions can also filter by scope between Azure subscription and Management group. A common Azure Policy event scenario is tracking when the compliance state of a resource changes during policy evaluation. Event-based architecture is an efficient way to react to these changes and aids in the event based reaction to compliance state changes. -Another scenario is to automatically trigger remediation tasks without manually ticking off _create remediation task_ on the policy page. Event Grid checks for compliance state and resources that are currently noncompliant can be remedied. Learn more about [remediation structure](../concepts/remediation-structure.md). Remediation requires a managed identity and policies must be in Modify or DeployIfNotExists effect. [Learn more about effect types](../how-to/remediate-resources.md). +Another scenario is to automatically trigger remediation tasks without manually selecting _create remediation task_ on the policy page. Event Grid checks for compliance state and resources that are currently noncompliant can be remedied. Learn more about [remediation structure](../concepts/remediation-structure.md). Remediation requires a managed identity and policies must be in `modify` or `deployIfNotExists` effect. [Learn more about effect types](../how-to/remediate-resources.md). -Additionally, Event Grid is helpful as an audit system to store state changes and understand cause of noncompliance over time. The scenarios for Event Grid are endless and based on the motivation, Event Grid is configurable. +Event Grid is helpful as an audit system to store state changes and understand cause of noncompliance over time. The scenarios for Event Grid are endless and based on the motivation, Event Grid is configurable. :::image type="content" source="../../../event-grid/media/overview/functional-model.png" alt-text="Screenshot of Event Grid model of sources and handlers." lightbox="../../../event-grid/media/overview/functional-model-big.png"::: |
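To make the subscriber side concrete, the following is an illustrative, not exhaustive, example of what a policy state change event delivered by Event Grid can look like; the IDs and timestamps are placeholders and the authoritative schema is the Event Grid event schema reference for Azure Policy:

```json
{
  "id": "00000000-0000-0000-0000-000000000000",
  "topic": "/subscriptions/{subscriptionId}",
  "subject": "/subscriptions/{subscriptionId}/resourceGroups/myRG/providers/Microsoft.Compute/virtualMachines/myVM",
  "eventType": "Microsoft.PolicyInsights.PolicyStateChanged",
  "eventTime": "2024-09-30T21:03:07.000Z",
  "data": {
    "timestamp": "2024-09-30T21:03:07.000Z",
    "policyAssignmentId": "/subscriptions/{subscriptionId}/providers/microsoft.authorization/policyassignments/myAssignment",
    "policyDefinitionId": "/providers/microsoft.authorization/policydefinitions/{definitionId}",
    "policyDefinitionReferenceId": "",
    "complianceState": "NonCompliant",
    "subscriptionId": "{subscriptionId}",
    "complianceReasonCode": ""
  },
  "dataVersion": "1",
  "metadataVersion": "1"
}
```

A Function or Logic App bound to the event subscription can branch on `data.complianceState`, for example to create a remediation task or send a notification only for `NonCompliant` resources.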
governance | Remediation Structure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/remediation-structure.md | Title: Details of the policy remediation task structure description: Describes the policy remediation task definition used by Azure Policy to bring resources into compliance. Previously updated : 08/30/2024 Last updated : 09/30/2024 Remediation tasks remediate existing resources that aren't compliant. Resources You use JavaScript Object Notation (JSON) to create a policy remediation task. The policy remediation task contains elements for: -- [display name](#display-name-and-description)-- [description](#display-name-and-description) - [policy assignment](#policy-assignment-id) - [policy definitions within an initiative](#policy-definition-id) - [resource count and parallel deployments](#resource-count-and-parallel-deployments) You use JavaScript Object Notation (JSON) to create a policy remediation task. T - [resource discovery mode](#resource-discovery-mode) - [provisioning state and deployment summary](#provisioning-state-and-deployment-summary) - For example, the following JSON shows a policy remediation task for policy definition named `requiredTags` a part of an initiative assignment named `resourceShouldBeCompliantInit` with all default settings. ```json {- "id": "/subscriptions/{subId}/resourceGroups/ExemptRG/providers/Microsoft.PolicyInsights/remediations/remediateNotCompliant", + "id": "/subscriptions/{subId}/resourceGroups/{resourceGroupName}/providers/Microsoft.PolicyInsights/remediations/remediateNotCompliant", "apiVersion": "2021-10-01", "name": "remediateNotCompliant", "type": "Microsoft.PolicyInsights/remediations", "properties": {- "policyAssignmentId": "/subscriptions/{mySubscriptionID}/providers/Microsoft.Authorization/policyAssignments/resourceShouldBeCompliantInit", + "policyAssignmentId": "/subscriptions/{subID}/providers/Microsoft.Authorization/policyAssignments/resourceShouldBeCompliantInit", "policyDefinitionReferenceId": "requiredTags", "resourceCount": 42, "parallelDeployments": 6, For example, the following JSON shows a policy remediation task for policy defin Steps on how to trigger a remediation task at [how to remediate non-compliant resources guide](../how-to/remediate-resources.md). These settings can't be changed after the remediation task begins. -## Display name and description --You use `displayName` and `description` to identify the policy remediation task and provide context for its use. `displayName` has a maximum length of _128_ characters and `description` a maximum length of _512_ characters. - ## Policy assignment ID This field must be the full path name of either a policy assignment or an initiative assignment. `policyAssignmentId` is a string and not an array. This property defines which assignment the parent resource hierarchy or individual resource to remediate. |
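As a quick, hedged illustration of triggering a task like the one above from PowerShell (assuming the Az.PolicyInsights module; the task name, assignment ID, and reference ID mirror the placeholders in the JSON example):

```azurepowershell-interactive
# Creates a remediation task at subscription scope for one definition within the initiative assignment.
Start-AzPolicyRemediation -Name "remediateNotCompliant" `
    -PolicyAssignmentId "/subscriptions/{subId}/providers/Microsoft.Authorization/policyAssignments/resourceShouldBeCompliantInit" `
    -PolicyDefinitionReferenceId "requiredTags"
```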
governance | Author Policies For Arrays | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/author-policies-for-arrays.md | Title: Author policies for array properties on resources description: Learn to work with array parameters and array language expressions, evaluate the [*] alias, and to append elements with Azure Policy definition rules. Previously updated : 08/17/2021 Last updated : 09/30/2024 + # Author policies for array properties on Azure resources -Azure Resource Manager properties are commonly defined as strings and booleans. When a one-to-many -relationship exists, complex properties are instead defined as arrays. In Azure Policy, arrays are -used in several different ways: +Azure Resource Manager properties are commonly defined as strings and booleans. When a one-to-many relationship exists, complex properties are instead defined as arrays. In Azure Policy, arrays are used in several different ways: -- The type of a [definition parameter](../concepts/definition-structure.md#parameters), to provide- multiple options -- Part of a [policy rule](../concepts/definition-structure.md#policy-rule) using the conditions- **in** or **notIn** -- Part of a policy rule that counts how many array members satisfy a condition-- In the [append](../concepts/effects.md#append) and [modify](../concepts/effects.md#modify) effects- to update an existing array +- The type of a [definition parameter](../concepts/definition-structure.md#parameters), to provide multiple options. +- Part of a [policy rule](../concepts/definition-structure.md#policy-rule) using the conditions `in` or `notIn`. +- Part of a policy rule that counts how many array members satisfy a condition. +- In the [append](../concepts/effect-append.md) and [modify](../concepts/effect-modify.md) effects to update an existing array This article covers each use by Azure Policy and provides several example definitions. This article covers each use by Azure Policy and provides several example defini ### Define a parameter array -Defining a parameter as an array allows the policy flexibility when more than one value is needed. -This policy definition allows any single location for the parameter **allowedLocations** and -defaults to _eastus2_: +Defining a parameter as an array allows the policy flexibility when more than one value is needed. This policy definition allows any single location for the parameter `allowedLocations` and defaults to _eastus2_: ```json "parameters": {- "allowedLocations": { - "type": "string", - "metadata": { - "description": "The list of allowed locations for resources.", - "displayName": "Allowed locations", - "strongType": "location" - }, - "defaultValue": "eastus2" - } + "allowedLocations": { + "type": "string", + "metadata": { + "description": "The list of allowed locations for resources.", + "displayName": "Allowed locations", + "strongType": "location" + }, + "defaultValue": "eastus2" + } } ``` -As **type** was _string_, only one value can be set when assigning the policy. If this policy is -assigned, resources in scope are only allowed within a single Azure region. Most policies -definitions need to allow for a list of approved options, such as allowing _eastus2_, _eastus_, and -_westus2_. +As `type` was _string_, only one value can be set when assigning the policy. If this policy is assigned, resources in scope are only allowed within a single Azure region. 
Most policies definitions need to allow for a list of approved options, such as allowing _eastus2_, _eastus_, and _westus2_. -To create the policy definition to allow multiple options, use the _array_ **type**. The same policy -can be rewritten as follows: +To create the policy definition to allow multiple options, use the _array_ `type`. The same policy can be rewritten as follows: ```json "parameters": {- "allowedLocations": { - "type": "array", - "metadata": { - "description": "The list of allowed locations for resources.", - "displayName": "Allowed locations", - "strongType": "location" - }, - "defaultValue": [ - "eastus2" - ], - "allowedValues": [ - "eastus2", - "eastus", - "westus2" - ] -- } + "allowedLocations": { + "type": "array", + "metadata": { + "description": "The list of allowed locations for resources.", + "displayName": "Allowed locations", + "strongType": "location" + }, + "defaultValue": [ + "eastus2" + ], + "allowedValues": [ + "eastus2", + "eastus", + "westus2" + ] + } } ``` > [!NOTE]-> Once a policy definition is saved, the **type** property on a parameter can't be changed. +> Once a policy definition is saved, the `type` property on a parameter can't be changed. -This new parameter definition takes more than one value during policy assignment. With the array -property **allowedValues** defined, the values available during assignment are further limited to -the predefined list of choices. Use of **allowedValues** is optional. +This new parameter definition takes more than one value during policy assignment. With the array property `allowedValues` defined, the values available during assignment are further limited to the predefined list of choices. Use of `allowedValues` is optional. ### Pass values to a parameter array during assignment -When assigning the policy through the Azure portal, a parameter of **type** _array_ is displayed as -a single textbox. The hint says "Use ; to separate values. (e.g. London;New York)". To pass the -allowed location values of _eastus2_, _eastus_, and _westus2_ to the parameter, use the following -string: +When you assign the policy through the Azure portal, a parameter of `type` _array_ is displayed as a single textbox. The hint says `Use ; to separate values. (e.g. London;New York)`. To pass the allowed location values of _eastus2_, _eastus_, and _westus2_ to the parameter, use the following string: `eastus2;eastus;westus2` -The format for the parameter value is different when using Azure CLI, Azure PowerShell, or the REST -API. The values are passed through a JSON string that also includes the name of the parameter. +The format for the parameter value is different when using Azure CLI, Azure PowerShell, or the REST API. The values are passed through a JSON string that also includes the name of the parameter. 
```json {- "allowedLocations": { - "value": [ - "eastus2", - "eastus", - "westus2" - ] - } + "allowedLocations": { + "value": [ + "eastus2", + "eastus", + "westus2" + ] + } } ``` To use this string with each SDK, use the following commands: -- Azure CLI: Command- [az policy assignment create](/cli/azure/policy/assignment#az-policy-assignment-create) with - parameter **params** -- Azure PowerShell: Cmdlet [New-AzPolicyAssignment](/powershell/module/az.resources/New-Azpolicyassignment)- with parameter **PolicyParameter** -- REST API: In the _PUT_ [create](/rest/api/policy/policy-assignments/create) operation as part of- the Request Body as the value of the **properties.parameters** property +- Azure CLI: Command [az policy assignment create](/cli/azure/policy/assignment#az-policy-assignment-create) with parameter `params`. +- Azure PowerShell: Cmdlet [New-AzPolicyAssignment](/powershell/module/az.resources/New-Azpolicyassignment) with parameter `PolicyParameter`. +- REST API: In the _PUT_ [create](/rest/api/policy/policy-assignments/create) operation as part of the Request Body as the value of the `properties.parameters` property. ## Using arrays in conditions ### In and notIn -The `in` and `notIn` conditions only work with array values. They check the existence of a value in -an array. The array can be a literal JSON array or a reference to an array parameter. For example: +The `in` and `notIn` conditions only work with array values. They check the existence of a value in an array. The array can be a literal JSON array or a reference to an array parameter. For example: ```json {- "field": "tags.environment", - "in": [ "dev", "test" ] + "field": "tags.environment", + "in": [ + "dev", + "test" + ] } ``` ```json {- "field": "location", - "notIn": "[parameters('allowedLocations')]" + "field": "location", + "notIn": "[parameters('allowedLocations')]" } ``` ### Value count -The [value count](../concepts/definition-structure.md#value-count) expression count how many array -members meet a condition. It provides a way to evaluate the same condition multiple times, using -different values on each iteration. For example, the following condition checks whether the resource -name matches any pattern from an array of patterns: +The [value count](../concepts/definition-structure.md#value-count) expression count how many array members meet a condition. It provides a way to evaluate the same condition multiple times, using different values on each iteration. For example, the following condition checks whether the resource name matches any pattern from an array of patterns: ```json {- "count": { - "value": [ "test*", "dev*", "prod*" ], - "name": "pattern", - "where": { - "field": "name", - "like": "[current('pattern')]" - } - }, - "greater": 0 + "count": { + "value": [ + "test*", + "dev*", + "prod*" + ], + "name": "pattern", + "where": { + "field": "name", + "like": "[current('pattern')]" + } + }, + "greater": 0 } ``` -In order to evaluate the expression, Azure Policy evaluates the `where` condition three times, once -for each member of `[ "test*", "dev*", "prod*" ]`, counting how many times it was evaluated to -`true`. On every iteration, the value of the current array member is paired with the `pattern` index -name defined by `count.name`. This value can then be referenced inside the `where` condition by -calling a special template function: `current('pattern')`. 
+In order to evaluate the expression, Azure Policy evaluates the `where` condition three times, once for each member of `[ "test*", "dev*", "prod*" ]`, counting how many times it was evaluated to `true`. On every iteration, the value of the current array member is paired with the `pattern` index name defined by `count.name`. This value can then be referenced inside the `where` condition by calling a special template function: `current('pattern')`. | Iteration | `current('pattern')` returned value | |:|:| calling a special template function: `current('pattern')`. The condition is true only if the resulted count is greater than 0. -To make the condition above more generic, use parameter reference instead of a literal array: +To make the previous condition more generic, use a `parameters` reference instead of a literal array: ```json {- "count": { - "value": "[parameters('patterns')]", - "name": "pattern", - "where": { - "field": "name", - "like": "[current('pattern')]" - } - }, - "greater": 0 + "count": { + "value": "[parameters('patterns')]", + "name": "pattern", + "where": { + "field": "name", + "like": "[current('pattern')]" + } + }, + "greater": 0 } ``` -When the **value count** expression isn't under any other **count** expression, `count.name` is -optional and the `current()` function can be used without any arguments: +When the `value count` expression isn't under any other `count` expression, `count.name` is optional and the `current()` function can be used without any arguments: ```json {- "count": { - "value": "[parameters('patterns')]", - "where": { - "field": "name", - "like": "[current()]" - } - }, - "greater": 0 + "count": { + "value": "[parameters('patterns')]", + "where": { + "field": "name", + "like": "[current()]" + } + }, + "greater": 0 } ``` -**Value count** also support arrays of complex objects, allowing for more complex conditions. For -example, the following condition defines a desired tag value for each name pattern and checks -whether the resource name matches the pattern, but doesn't have the required tag value: +The `value count` also support arrays of complex objects, allowing for more complex conditions. For example, the following condition defines a desired tag value for each name pattern and checks whether the resource name matches the pattern, but doesn't have the required tag value: ```json {- "count": { - "value": [ - { "pattern": "test*", "envTag": "dev" }, - { "pattern": "dev*", "envTag": "dev" }, - { "pattern": "prod*", "envTag": "prod" }, - ], - "name": "namePatternRequiredTag", - "where": { - "allOf": [ - { - "field": "name", - "like": "[current('namePatternRequiredTag').pattern]" - }, - { - "field": "tags.env", - "notEquals": "[current('namePatternRequiredTag').envTag]" - } - ] + "count": { + "value": [ + { + "pattern": "test*", + "envTag": "dev" + }, + { + "pattern": "dev*", + "envTag": "dev" + }, + { + "pattern": "prod*", + "envTag": "prod" + }, + ], + "name": "namePatternRequiredTag", + "where": { + "allOf": [ + { + "field": "name", + "like": "[current('namePatternRequiredTag').pattern]" + }, + { + "field": "tags.env", + "notEquals": "[current('namePatternRequiredTag').envTag]" }- }, - "greater": 0 + ] + } + }, + "greater": 0 } ``` -For useful examples, see -[value count examples](../concepts/definition-structure.md#value-count-examples). +For useful examples, see [value count examples](../concepts/definition-structure.md#value-count-examples). 
## Referencing array resource properties -Many use cases require working with array properties in the evaluated resource. Some scenarios -require referencing an entire array (for example, checking its length). Others require applying a -condition to each individual array member (for example, ensure that all firewall rule block access -from the internet). Understanding the different ways Azure Policy can reference resource properties, -and how these references behave when they refer to array properties is the key for writing -conditions that cover these scenarios. +Many use cases require working with array properties in the evaluated resource. Some scenarios require referencing an entire array (for example, checking its length). Others require applying a condition to each individual array member (for example, ensure that all firewall rule block access from the internet). Understanding the different ways Azure Policy can reference resource properties, and how these references behave when they refer to array properties is the key for writing conditions that cover these scenarios. ### Referencing resource properties -Resource properties can be referenced by Azure Policy using -[aliases](../concepts/definition-structure.md#aliases) There are two ways to reference the values of -a resource property within Azure Policy: +Resource properties can be referenced by Azure Policy using [aliases](../concepts/definition-structure.md#aliases) There are two ways to reference the values of a resource property within Azure Policy: -- Use [field](../concepts/definition-structure.md#fields) condition to check whether **all**- selected resource properties meet a condition. Example: +- Use [field](../concepts/definition-structure.md#fields) condition to check whether all selected resource properties meet a condition. Example: ```json {- "field" : "Microsoft.Test/resourceType/property", + "field": "Microsoft.Test/resourceType/property", "equals": "value" } ``` a resource property within Azure Policy: } ``` -The field condition has an implicit "all of" behavior. If the alias represents a collection of -values, it checks whether all individual values meet the condition. The `field()` function returns -the values represented by the alias as-is, which can then be manipulated by other template -functions. +The field condition has an implicit `allOf` behavior. If the alias represents a collection of values, it checks whether all individual values meet the condition. The `field()` function returns the values represented by the alias as-is, which can then be manipulated by other template functions. ### Referencing array fields -Array resource properties are represented by two different types of aliases. One 'normal' alias and -[array aliases](../concepts/definition-structure.md#understanding-the--alias) that have `[*]` -attached to it: +Array resource properties are represented by two different types of aliases. One normal alias and [array aliases](../concepts/definition-structure-alias.md) that have `[*]` attached to it: - `Microsoft.Test/resourceType/stringArray` - `Microsoft.Test/resourceType/stringArray[*]` #### Referencing the array -The first alias represents a single value, the value of `stringArray` property from the request -content. Since the value of that property is an array, it isn't useful in policy conditions. For -example: +The first alias represents a single value, the value of `stringArray` property from the request content. 
Since the value of that property is an array, it isn't useful in policy conditions. For example: ```json { example: } ``` -This condition compares the entire `stringArray` array to a single string value. Most conditions, -including `equals`, only accept string values, so there's not much use in comparing an array to a -string. The main scenario where referencing the array property is useful is when checking whether it -exists: +This condition compares the entire `stringArray` array to a single string value. Most conditions, including `equals`, only accept string values, so there's not much use in comparing an array to a string. The main scenario where referencing the array property is useful is when checking whether it exists: ```json { exists: } ``` -With the `field()` function, the returned value is the array from the request content, which can -then be used with any of the -[supported template functions](../concepts/definition-structure.md#policy-functions) that accept -array arguments. For example, the following condition checks whether the length of `stringArray` is -greater than 0: +With the `field()` function, the returned value is the array from the request content, which can then be used with any of the [supported template functions](../concepts/definition-structure.md#policy-functions) that accept array arguments. For example, the following condition checks whether the length of `stringArray` is greater than 0: ```json { greater than 0: #### Referencing the array members collection -Aliases that use the `[*]` syntax represent a **collection of property values selected from an array -property**, which is different than selecting the array property itself. In the case of -`Microsoft.Test/resourceType/stringArray[*]`, it returns a collection that has all of the members of -`stringArray`. As mentioned previously, a `field` condition checks that all selected resource -properties meet the condition, therefore the following condition is true only if **all** the members -of `stringArray` are equal to '"value"'. +Aliases that use the `[*]` syntax represent a collection of property values selected from an array property, which is different than selecting the array property itself. For example, `Microsoft.Test/resourceType/stringArray[*]` returns a collection that has all of the members of `stringArray`. As mentioned previously, a `field` condition checks that all selected resource properties meet the condition, therefore the following condition is true only if all the members of `stringArray` are equal to `"value"`. ```json { of `stringArray` are equal to '"value"'. } ``` -If the array is empty, the condition will evaluate to true because no member of the array is in violation. In this scenario, it is recommended to use the [count expression](../concepts/definition-structure.md#count) instead. If the array contains objects, a `[*]` alias can be used to select the value of a specific property -from each array member. Example: +If the array is empty, the condition evaluates to true because no member of the array is in violation. In this scenario, the recommendation is to use the [count expression](../concepts/definition-structure.md#count) instead. If the array contains objects, a `[*]` alias can be used to select the value of a specific property from each array member. Example: ```json { from each array member. Example: } ``` -This condition is true if the values of all `property` properties in `objectArray` are equal to -`"value"`. 
For more examples, see [Additional \[\*\] alias -examples](#additional--alias-examples). +This condition is true if the values of all `property` properties in `objectArray` are equal to `"value"`. For more examples, see [More alias examples](#more-alias-examples). -When using the `field()` function to reference an array alias, the returned value is an array of all -the selected values. This behavior means that the common use case of the `field()` function, the -ability to apply template functions to resource property values, is limited. The only template -functions that can be used in this case are the ones that accept array arguments. For example, it's -possible to get the length of the array with -`[length(field('Microsoft.Test/resourceType/objectArray[*].property'))]`. However, more complex -scenarios like applying template function to each array member and comparing it to a desired value -are only possible when using the `count` expression. For more information, see -[Field count expression](#field-count-expressions). +When using the `field()` function to reference an array alias, the returned value is an array of all the selected values. This behavior means that the common use case of the `field()` function, the ability to apply template functions to resource property values, is limited. The only template functions that can be used in this case are the ones that accept array arguments. For example, it's possible to get the length of the array with `[length(field('Microsoft.Test/resourceType/objectArray[*].property'))]`. However, more complex scenarios like applying template function to each array member and comparing it to a desired value are only possible when using the `count` expression. For more information, see [Field count expression](#field-count-expressions). -To summarize, see the following example resource content and the selected values returned by various -aliases: +To summarize, see the following example resource content and the selected values returned by various aliases: ```json { "tags": { "env": "prod" },- "properties": - { - "stringArray": [ "a", "b", "c" ], + "properties": { + "stringArray": [ + "a", + "b", + "c" + ], "objectArray": [ { "property": "value1",- "nestedArray": [ 1, 2 ] + "nestedArray": [ + 1, + 2 + ] }, { "property": "value2",- "nestedArray": [ 3, 4 ] + "nestedArray": [ + 3, + 4 + ] } ] } } ``` -When using the field condition on the example resource content, the results are as follows: +When you use the field condition on the example resource content, the results are as follows: | Alias | Selected values | |: |:| When using the field condition on the example resource content, the results are | `Microsoft.Test/resourceType/objectArray[*].nestedArray` | `[ 1, 2 ]`, `[ 3, 4 ]` | | `Microsoft.Test/resourceType/objectArray[*].nestedArray[*]` | `1`, `2`, `3`, `4` | -When using the `field()` function on the example resource content, the results are as follows: +When you use the `field()` function on the example resource content, the results are as follows: | Expression | Returned Value | |: |:| When using the `field()` function on the example resource content, the results a ### Field count expressions -[Field count](../concepts/definition-structure.md#field-count) expressions count how many array -members meet a condition and compare the count to a target value. `Count` is more intuitive and -versatile for evaluating arrays compared to `field` conditions. 
The syntax is: +[Field count](../concepts/definition-structure.md#field-count) expressions count how many array members meet a condition and compare the count to a target value. `Count` is more intuitive and versatile for evaluating arrays compared to `field` conditions. The syntax is: ```json { "count": {- "field": <[*] alias>, + "field": <[* + ] alias>, "where": <optional policy condition expression> }, "equals|greater|less|any other operator": <target value> } ``` -When used without a `where` condition, `count` simply returns the length of an array. With the -example resource content from the previous section, the following `count` expression is evaluated to -`true` since `stringArray` has three members: +When used without a `where` condition, `count` simply returns the length of an array. With the example resource content from the previous section, the following `count` expression is evaluated to `true` since `stringArray` has three members: ```json { example resource content from the previous section, the following `count` expres } ``` -This behavior also works with nested arrays. For example, the following `count` expression is -evaluated to `true` since there are four array members in the `nestedArray` arrays: +This behavior also works with nested arrays. For example, the following `count` expression is evaluated to `true` since there are four array members in the `nestedArray` arrays: ```json { evaluated to `true` since there are four array members in the `nestedArray` arra } ``` -The power of `count` is in the `where` condition. When it's specified, Azure Policy enumerates the -array members and evaluates each against the condition, counting how many array members evaluated to -`true`. Specifically, in each iteration of the `where` condition evaluation, Azure Policy selects a -single array member ***i*** and evaluate the resource content against the `where` condition **as if -***i*** is the only member of the array**. Having only one array member available in each iteration -provides a way to apply complex conditions on each individual array member. +The power of `count` is in the `where` condition. When `count` is specified, Azure Policy enumerates the array members and evaluates each against the condition, counting how many array members evaluated to `true`. Specifically, in each iteration of the `where` condition evaluation, Azure Policy selects a single array member `i` and evaluate the resource content against the `where` condition **as if `i` is the only member of the array**. Having only one array member available in each iteration provides a way to apply complex conditions on each individual array member. Example: Example: } ``` -In order to evaluate the `count` expression, Azure Policy evaluates the `where` condition three -times, once for each member of `stringArray`, counting how many times it was evaluated to `true`. -When the `where` condition refers to the `Microsoft.Test/resourceType/stringArray[*]` array members, -instead of selecting all the members of `stringArray`, it will only select a single array member -every time: +In order to evaluate the `count` expression, Azure Policy evaluates the `where` condition three times, once for each member of `stringArray`, counting how many times it was evaluated to `true`. 
When the `where` condition refers to the `Microsoft.Test/resourceType/stringArray[*]` array members, instead of selecting all the members of `stringArray`, it selects only a single array member every time: | Iteration | Selected `Microsoft.Test/resourceType/stringArray[*]` values | `where` Evaluation result | |:|:|:| Here's a more complex expression: The `count` returns `1`. -The fact that the `where` expression is evaluated against the **entire** request content (with -changes only to the array member that is currently being enumerated) means that the `where` -condition can also refer to fields outside the array: +The fact that the `where` expression is evaluated against the **entire** request content (with changes only to the array member that is currently being enumerated) means that the `where` condition can also refer to fields outside the array: ```json { condition can also refer to fields outside the array: | 1 | `tags.env` => `"prod"` | `true` | | 2 | `tags.env` => `"prod"` | `true` | -Nested count expressions can be used to apply conditions to nested array fields. For example, the -following condition checks that the `objectArray[*]` array has exactly two members with -`nestedArray[*]` that contains one or more members: +Nested count expressions can be used to apply conditions to nested array fields. For example, the following condition checks that the `objectArray[*]` array has exactly two members with `nestedArray[*]` that contains one or more members: ```json { following condition checks that the `objectArray[*]` array has exactly two membe | 1 | `Microsoft.Test/resourceType/objectArray[*].nestedArray[*]` => `1`, `2` | `nestedArray[*]` has 2 members => `true` | | 2 | `Microsoft.Test/resourceType/objectArray[*].nestedArray[*]` => `3`, `4` | `nestedArray[*]` has 2 members => `true` | -Since both members of `objectArray[*]` have a child array `nestedArray[*]` with two members, the -outer count expression returns `2`. +Since both members of `objectArray[*]` have a child array `nestedArray[*]` with two members, the outer count expression returns `2`. -More complex example: check that the `objectArray[*]` array has exactly two members with -`nestedArray[*]` with any members equal to `2` or `3`: +More complex example: check that the `objectArray[*]` array has exactly two members with `nestedArray[*]` with any members equal to `2` or `3`: ```json { More complex example: check that the `objectArray[*]` array has exactly two memb "count": { "field": "Microsoft.Test/resourceType/objectArray[*].nestedArray[*]", "where": {- "field": "Microsoft.Test/resourceType/objectArray[*].nestedArray[*]", - "in": [ 2, 3 ] + "field": "Microsoft.Test/resourceType/objectArray[*].nestedArray[*]", + "in": [ + 2, + 3 + ] } }, "greaterOrEquals": 1 or `3`, the outer count expression returns `2`. #### Accessing current array member with template functions -When using template functions, use the `current()` function to access the value of the current array -member or the values of any of its properties. To access the value of the current array member, pass -the alias defined in `count.field` or any of its child aliases as an argument to the `current()` -function. For example: +When using template functions, use the `current()` function to access the value of the current array member or the values of any of its properties. To access the value of the current array member, pass the alias defined in `count.field` or any of its child aliases as an argument to the `current()` function. 
For example: ```json { "count": { "field": "Microsoft.Test/resourceType/objectArray[*]", "where": {- "value": "[current('Microsoft.Test/resourceType/objectArray[*].property')]", - "like": "value*" + "value": "[current('Microsoft.Test/resourceType/objectArray[*].property')]", + "like": "value*" } }, "equals": 2 }- ``` | Iteration | `current()` returned value | `where` Evaluation result | function. For example: #### The field function inside where conditions -The `field()` function can also be used to access the value of the current array member as long as -the **count** expression isn't inside an **existence condition** (`field()` function always refer to -the resource evaluated in the **if** condition). The behavior of `field()` when referring to the -evaluated array is based on the following concepts: +The `field()` function can also be used to access the value of the current array member as long as the **count** expression isn't inside an **existence condition** (`field()` function always refer to the resource evaluated in the **if** condition). The behavior of `field()` when referring to the evaluated array is based on the following concepts: -1. Array aliases are resolved into a collection of values selected from all array members. -1. `field()` functions referencing array aliases return an array with the selected values. -1. Referencing the counted array alias inside the `where` condition returns a collection with a - single value selected from the array member that is evaluated in the current iteration. +- Array aliases are resolved into a collection of values selected from all array members. +- `field()` functions referencing array aliases return an array with the selected values. +- Referencing the counted array alias inside the `where` condition returns a collection with a single value selected from the array member that is evaluated in the current iteration. -This behavior means that when referring to the counted array member with a `field()` function inside -the `where` condition, **it returns an array with a single member**. While this behavior may not be -intuitive, it's consistent with the idea that array aliases always return a collection of selected -properties. Here's an example: +This behavior means that when referring to the counted array member with a `field()` function inside the `where` condition, **it returns an array with a single member**. While this behavior might not be intuitive, it's consistent with the idea that array aliases always return a collection of selected properties. Here's an example: ```json { properties. 
Here's an example: | 2 | `Microsoft.Test/resourceType/stringArray[*]` => `"b"` </br> `[field('Microsoft.Test/resourceType/stringArray[*]')]` => `[ "b" ]` | `false` | | 3 | `Microsoft.Test/resourceType/stringArray[*]` => `"c"` </br> `[field('Microsoft.Test/resourceType/stringArray[*]')]` => `[ "c" ]` | `false` | -Therefore, when there's a need to access the value of the counted array alias with a `field()` -function, the way to do so is to wrap it with a `first()` template function: +Therefore, when there's a need to access the value of the counted array alias with a `field()` function, the way to do so is to wrap it with a `first()` template function: ```json { function, the way to do so is to wrap it with a `first()` template function: | 2 | `Microsoft.Test/resourceType/stringArray[*]` => `"b"` </br> `[first(field('Microsoft.Test/resourceType/stringArray[*]'))]` => `"b"` | `true` | | 3 | `Microsoft.Test/resourceType/stringArray[*]` => `"c"` </br> `[first(field('Microsoft.Test/resourceType/stringArray[*]'))]` => `"c"` | `true` | -For useful examples, see -[Field count examples](../concepts/definition-structure.md#field-count-examples). +For useful examples, see [Field count examples](../concepts/definition-structure.md#field-count-examples). ## Modifying arrays -The [append](../concepts/effects.md#append) and [modify](../concepts/effects.md#modify) alter -properties on a resource during creation or update. When working with array properties, the behavior -of these effects depends on whether the operation is trying to modify the **\[\*\]** alias or not: +The [append](../concepts/effects.md#append) and [modify](../concepts/effects.md#modify) alter properties on a resource during creation or update. When you work with array properties, the behavior of these effects depends on whether the operation is trying to modify the `[*]` alias or not: > [!NOTE] > Using the `modify` effect with aliases is currently in **preview**. of these effects depends on whether the operation is trying to modify the **\[\* For more information, see the [append examples](../concepts/effects.md#append-examples). -## Additional [*] alias examples +## More alias examples -It's recommended to use the [field count expressions](#field-count-expressions) to check whether -'all of' or 'any of' the members of an array in the request content meet a condition. However, for -some simple conditions it's possible to achieve the same result by using a field accessor with an -array alias as described in -[Referencing the array members collection](#referencing-the-array-members-collection). This pattern -can be useful in policy rules that exceed the limit of allowed **count** expressions. Here are -examples for common use cases: +The recommendation is to use the [field count expressions](#field-count-expressions) to check whether `allOf`or `anyOf` the members of an array in the request content meet a condition. For some simple conditions, it's possible to achieve the same result by using a field accessor with an array alias as described in [Referencing the array members collection](#referencing-the-array-members-collection). This pattern can be useful in policy rules that exceed the limit of allowed `count` expressions. 
Here are examples for common use cases: -The example policy rule for the scenario table below: +The example policy rule for the following scenario table: ```json "policyRule": {- "if": { - "allOf": [ - { - "field": "Microsoft.Storage/storageAccounts/networkAcls.ipRules", - "exists": "true" - }, + "if": { + "allOf": [ + { + "field": "Microsoft.Storage/storageAccounts/networkAcls.ipRules", + "exists": "true" + }, <-- Condition (see table below) -->- ] - }, - "then": { - "effect": "[parameters('effectType')]" - } + ] + }, + "then": { + "effect": "[parameters('effectType')]" + } } ``` -The **ipRules** array is as follows for the scenario table below: +The `ipRules` array is as follows for the following scenario table: ```json "ipRules": [- { - "value": "127.0.0.1", - "action": "Allow" - }, - { - "value": "192.168.1.1", - "action": "Allow" - } + { + "value": "127.0.0.1", + "action": "Allow" + }, + { + "value": "192.168.1.1", + "action": "Allow" + } ] ``` -For each condition example below, replace `<field>` with -`"field": "Microsoft.Storage/storageAccounts/networkAcls.ipRules[*].value"`. +For each of the following condition examples, replace `<field>` with `"field": "Microsoft.Storage/storageAccounts/networkAcls.ipRules[*].value"`. -The following outcomes are the result of the combination of the condition and the example policy -rule and array of existing values above: +The following outcomes are the result of the combination of the condition and the example policy rule and array of previous existing values: |Condition |Outcome | Scenario |Explanation | |-|-|-|-|-|`{<field>,"notEquals":"127.0.0.1"}` |Nothing |None match |One array element evaluates as false (127.0.0.1 != 127.0.0.1) and one as true (127.0.0.1 != 192.168.1.1), so the **notEquals** condition is _false_ and the effect isn't triggered. | -|`{<field>,"notEquals":"10.0.4.1"}` |Policy effect |None match |Both array elements evaluate as true (10.0.4.1 != 127.0.0.1 and 10.0.4.1 != 192.168.1.1), so the **notEquals** condition is _true_ and the effect is triggered. | -|`"not":{<field>,"notEquals":"127.0.0.1" }` |Policy effect |One or more match |One array element evaluates as false (127.0.0.1 != 127.0.0.1) and one as true (127.0.0.1 != 192.168.1.1), so the **notEquals** condition is _false_. The logical operator evaluates as true (**not** _false_), so the effect is triggered. | -|`"not":{<field>,"notEquals":"10.0.4.1"}` |Nothing |One or more match |Both array elements evaluate as true (10.0.4.1 != 127.0.0.1 and 10.0.4.1 != 192.168.1.1), so the **notEquals** condition is _true_. The logical operator evaluates as false (**not** _true_), so the effect isn't triggered. | -|`"not":{<field>,"Equals":"127.0.0.1"}` |Policy effect |Not all match |One array element evaluates as true (127.0.0.1 == 127.0.0.1) and one as false (127.0.0.1 == 192.168.1.1), so the **Equals** condition is _false_. The logical operator evaluates as true (**not** _false_), so the effect is triggered. | -|`"not":{<field>,"Equals":"10.0.4.1"}` |Policy effect |Not all match |Both array elements evaluate as false (10.0.4.1 == 127.0.0.1 and 10.0.4.1 == 192.168.1.1), so the **Equals** condition is _false_. The logical operator evaluates as true (**not** _false_), so the effect is triggered. | -|`{<field>,"Equals":"127.0.0.1"}` |Nothing |All match |One array element evaluates as true (127.0.0.1 == 127.0.0.1) and one as false (127.0.0.1 == 192.168.1.1), so the **Equals** condition is _false_ and the effect isn't triggered. 
| -|`{<field>,"Equals":"10.0.4.1"}` |Nothing |All match |Both array elements evaluate as false (10.0.4.1 == 127.0.0.1 and 10.0.4.1 == 192.168.1.1), so the **Equals** condition is _false_ and the effect isn't triggered. | +|`{<field>,"notEquals":"127.0.0.1"}` |Nothing |None match |One array element evaluates as false (`127.0.0.1 != 127.0.0.1`) and one as true (`127.0.0.1 != 192.168.1.1`), so the `notEquals` condition is _false_ and the effect isn't triggered. | +|`{<field>,"notEquals":"10.0.4.1"}` |Policy effect |None match |Both array elements evaluate as true (`10.0.4.1 != 127.0.0.1 and 10.0.4.1 != 192.168.1.1`), so the `notEquals` condition is _true_ and the effect is triggered. | +|`"not":{<field>,"notEquals":"127.0.0.1" }` |Policy effect |One or more match |One array element evaluates as false (`127.0.0.1 != 127.0.0.1`) and one as true (`127.0.0.1 != 192.168.1.1`), so the `notEquals` condition is _false_. The logical operator evaluates as true (not _false_), so the effect is triggered. | +|`"not":{<field>,"notEquals":"10.0.4.1"}` |Nothing |One or more match |Both array elements evaluate as true (`10.0.4.1 != 127.0.0.1 and 10.0.4.1 != 192.168.1.1`), so the `notEquals` condition is _true_. The logical operator evaluates as false (not _true_), so the effect isn't triggered. | +|`"not":{<field>,"Equals":"127.0.0.1"}` |Policy effect |Not all match |One array element evaluates as true (`127.0.0.1 == 127.0.0.1`) and one as false (`127.0.0.1 == 192.168.1.1`), so the `Equals` condition is _false_. The logical operator evaluates as true (not _false_), so the effect is triggered. | +|`"not":{<field>,"Equals":"10.0.4.1"}` |Policy effect |Not all match |Both array elements evaluate as false (`10.0.4.1 == 127.0.0.1 and 10.0.4.1 == 192.168.1.1`), so the `Equals` condition is _false_. The logical operator evaluates as true (not _false_), so the effect is triggered. | +|`{<field>,"Equals":"127.0.0.1"}` |Nothing |All match |One array element evaluates as true (`127.0.0.1 == 127.0.0.1`) and one as false (`127.0.0.1 == 192.168.1.1`), so the `Equals` condition is _false_ and the effect isn't triggered. | +|`{<field>,"Equals":"10.0.4.1"}` |Nothing |All match |Both array elements evaluate as false (`10.0.4.1 == 127.0.0.1 and 10.0.4.1 == 192.168.1.1`), so the `Equals` condition is _false_ and the effect isn't triggered. | ## Next steps - Review examples at [Azure Policy samples](../samples/index.md).-- Review the [Azure Policy definition structure](../concepts/definition-structure.md).-- Review [Understanding policy effects](../concepts/effects.md).+- Review the [Azure Policy definition structure](../concepts/definition-structure-basics.md). +- Review [Understanding policy effects](../concepts/effect-basics.md). - Understand how to [programmatically create policies](programmatically-create.md). - Learn how to [remediate non-compliant resources](remediate-resources.md).-- Review what a management group is with- [Organize your resources with Azure management groups](../../management-groups/overview.md). +- Review what a management group is with [Organize your resources with Azure management groups](../../management-groups/overview.md). |
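To tie together the `ipRules` scenarios above, the "None match" intent that a plain `notEquals` field condition can't express reliably is straightforward with a `count` expression. The following is a sketch only: it reuses the `effectType` parameter and the `ipRules[*].value` alias from the example rule above, and it triggers the effect only when no rule uses `127.0.0.1`.

```json
"policyRule": {
  "if": {
    "allOf": [
      {
        "field": "Microsoft.Storage/storageAccounts/networkAcls.ipRules",
        "exists": "true"
      },
      {
        "count": {
          "field": "Microsoft.Storage/storageAccounts/networkAcls.ipRules[*]",
          "where": {
            "field": "Microsoft.Storage/storageAccounts/networkAcls.ipRules[*].value",
            "equals": "127.0.0.1"
          }
        },
        "equals": 0
      }
    ]
  },
  "then": {
    "effect": "[parameters('effectType')]"
  }
}
```

With the example `ipRules` array above, the inner count is `1`, so the `if` condition is false and the effect isn't triggered; with an array that doesn't contain `127.0.0.1`, the count is `0` and the effect is triggered.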
governance | Export Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/export-resources.md | Title: Export Azure Policy resources description: Learn to export Azure Policy resources to GitHub, such as policy definitions and policy assignments. Previously updated : 04/18/2022 Last updated : 09/30/2024 ms.devlang: azurecli + # Export Azure Policy resources -This article provides information on how to export your existing Azure Policy resources. Exporting -your resources is useful and recommended for backup, but is also an important step in your journey -with Cloud Governance and treating your [policy-as-code](../concepts/policy-as-code.md). Azure -Policy resources can be exported through [REST API](/rest/api/policy), [Azure CLI](#export-with-azure-cli), and [Azure PowerShell](#export-with-azure-powershell). +This article provides information on how to export your existing Azure Policy resources. Exporting your resources is useful and recommended for backup, but is also an important step in your journey with Cloud Governance and treating your [policy-as-code](../concepts/policy-as-code.md). Azure Policy resources can be exported through [REST API](/rest/api/policy), [Azure CLI](#export-with-azure-cli), and [Azure PowerShell](#export-with-azure-powershell). > [!NOTE] > The portal experience for exporting definitions to GitHub was deprecated in April 2023. ## Export with Azure CLI -Azure Policy definitions, initiatives, and assignments can each be exported as JSON with -[Azure CLI](/cli/azure/install-azure-cli). Each of these commands uses a **name** parameter to -specify which object to get the JSON for. The **name** property is often a _GUID_ and isn't the -**displayName** of the object. +Azure Policy definitions, initiatives, and assignments can each be exported as JSON with [Azure CLI](/cli/azure/install-azure-cli). Each of these commands uses a `name` parameter to specify which object to get the JSON for. The `name` property is often a _GUID_ and isn't the `displayName` of the object. -- Definition - [az policy definition show](/cli/azure/policy/definition#az-policy-definition-show)-- Initiative - [az policy set-definition show](/cli/azure/policy/set-definition#az-policy-set-definition-show)-- Assignment - [az policy assignment show](/cli/azure/policy/assignment#az-policy-assignment-show)+The Azure CLI and Azure PowerShell example commands use a built-in Policy definition with the `name` `b2982f36-99f2-4db5-8eff-283140c09693` and the `displayName` _Storage accounts should disable public network access_. -Here's an example of getting the JSON for a policy definition with **name** of -_VirtualMachineStorage_: +- Definition - [az policy definition show](/cli/azure/policy/definition#az-policy-definition-show). +- Initiative - [az policy set-definition show](/cli/azure/policy/set-definition#az-policy-set-definition-show). +- Assignment - [az policy assignment show](/cli/azure/policy/assignment#az-policy-assignment-show). ```azurecli-interactive-az policy definition show --name 'VirtualMachineStorage' +az policy definition show --name 'b2982f36-99f2-4db5-8eff-283140c09693' ``` ## Export with Azure PowerShell -Azure Policy definitions, initiatives, and assignments can each be exported as JSON with [Azure -PowerShell](/powershell/azure/). Each of these cmdlets uses a **Name** parameter to specify which -object to get the JSON for. The **Name** property is often a _GUID_ (Globally Unique Identifier) and isn't the **displayName** of -the object. 
--- Definition - [Get-AzPolicyDefinition](/powershell/module/az.resources/get-azpolicydefinition)-- Initiative - [Get-AzPolicySetDefinition](/powershell/module/az.resources/get-azpolicysetdefinition)-- Assignment - [Get-AzPolicyAssignment](/powershell/module/az.resources/get-azpolicyassignment)+Azure Policy definitions, initiatives, and assignments can each be exported as JSON with [Azure PowerShell](/powershell/azure/). Each of these cmdlets uses a `Name` parameter to specify which object to get the JSON for. The `Name` property is often a _GUID_ (Globally Unique Identifier) and isn't the `displayName` of the object. -Here's an example of getting the JSON for a policy definition with **Name** (as mentioned previously, GUID) of -_d7fff7ea-9d47-4952-b854-b7da261e48f2_: +- Definition - [Get-AzPolicyDefinition](/powershell/module/az.resources/get-azpolicydefinition). +- Initiative - [Get-AzPolicySetDefinition](/powershell/module/az.resources/get-azpolicysetdefinition). +- Assignment - [Get-AzPolicyAssignment](/powershell/module/az.resources/get-azpolicyassignment). ```azurepowershell-interactive-Get-AzPolicyDefinition -Name 'd7fff7ea-9d47-4952-b854-b7da261e48f2' | ConvertTo-Json -Depth 10 +Get-AzPolicyDefinition -Name 'b2982f36-99f2-4db5-8eff-283140c09693' | ConvertTo-Json -Depth 10 ``` -## Export to CSV with Resource Graph in Azure Portal +## Export to CSV with Resource Graph in Azure portal -Azure Resource Graph gives the ability to query at scale with complex filtering, grouping and sorting. Azure Resource Graph supports the policy resources table, which contains policy resources such as definitions, assignments and exemptions. Review our [sample queries.](../../resource-graph/samples/samples-by-table.md#policyresources) -The Resource Graph explorer portal experience allows downloads of query results to CSV using the ["Download to CSV"](../../resource-graph/first-query-portal.md#download-query-results-as-a-csv-file) toolbar option. +Azure Resource Graph gives the ability to query at scale with complex filtering, grouping and sorting. Azure Resource Graph supports the policy resources table, which contains policy resources such as definitions, assignments and exemptions. Review our [sample queries.](../samples/resource-graph-samples.md#azure-policy). The Resource Graph explorer portal experience allows downloads of query results to CSV using the ["Download to CSV"](../../resource-graph/first-query-portal.md#download-query-results-as-a-csv-file) toolbar option. ## Next steps - Review examples at [Azure Policy samples](../samples/index.md).-- Review the [Azure Policy definition structure](../concepts/definition-structure.md).-- Review [Understanding policy effects](../concepts/effects.md).+- Review the [Azure Policy definition structure](../concepts/definition-structure-basics.md). +- Review [Understanding policy effects](../concepts/effect-basics.md). - Understand how to [programmatically create policies](programmatically-create.md). - Learn how to [remediate noncompliant resources](remediate-resources.md). - Review what a management group is with [Organize your resources with Azure management groups](../../management-groups/overview.md). |
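To round out the export workflow described above, a minimal policy-as-code sketch is to redirect the JSON output of the `show` commands to files that can be committed to source control. The file names are arbitrary, and `<assignmentName>` is a placeholder for the name (often a GUID) of one of your own assignments.

```azurecli-interactive
# Capture the built-in definition used in the examples above as a local JSON file.
az policy definition show --name 'b2982f36-99f2-4db5-8eff-283140c09693' > storage-disable-public-network.definition.json

# Capture an existing assignment; replace <assignmentName> with the assignment's name.
az policy assignment show --name '<assignmentName>' > storage-disable-public-network.assignment.json
```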
governance | Programmatically Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/programmatically-create.md | Title: Programmatically create policies description: This article walks you through programmatically creating and managing policies for Azure Policy with Azure CLI, Azure PowerShell, and REST API. Previously updated : 08/17/2021 Last updated : 09/30/2024 + # Programmatically create policies -This article walks you through programmatically creating and managing policies. Azure Policy -definitions enforce different rules and effects over your resources. Enforcement makes sure that -resources stay compliant with your corporate standards and service-level agreements. +This article walks you through programmatically creating and managing policies. Azure Policy definitions enforce different rules and effects over your resources. Enforcement makes sure that resources stay compliant with your corporate standards and service-level agreements. For information about compliance, see [getting compliance data](get-compliance-data.md). For information about compliance, see [getting compliance data](get-compliance-d Before you begin, make sure that the following prerequisites are met: -1. If you haven't already, install the [ARMClient](https://github.com/projectkudu/ARMClient). It's a - tool that sends HTTP requests to Azure Resource Manager-based APIs. +1. If you haven't already, install the [ARMClient](https://github.com/projectkudu/ARMClient). It's a tool that sends HTTP requests to Azure Resource Manager-based APIs. -1. Update your Azure PowerShell module to the latest version. See - [Install Azure PowerShell module](/powershell/azure/install-azure-powershell) for detailed information. For - more information about the latest version, see - [Azure PowerShell](https://github.com/Azure/azure-powershell/releases). +1. Update your Azure PowerShell module to the latest version. See [Install Azure PowerShell module](/powershell/azure/install-azure-powershell) for detailed information. For more information about the latest version, see [Azure PowerShell](https://github.com/Azure/azure-powershell/releases). -1. Register the Azure Policy Insights resource provider using Azure PowerShell to validate that your - subscription works with the resource provider. To register a resource provider, you must have - permission to run the register action operation for the resource provider. This operation is - included in the Contributor and Owner roles. Run the following command to register the resource - provider: +1. Register the Azure Policy Insights resource provider using Azure PowerShell to validate that your subscription works with the resource provider. To register a resource provider, you must have permission to run the register action operation for the resource provider. This operation is included in the Contributor and Owner roles. Run the following command to register the resource provider: ```azurepowershell-interactive Register-AzResourceProvider -ProviderNamespace 'Microsoft.PolicyInsights' ``` - For more information about registering and viewing resource providers, see - [Resource Providers and Types](../../../azure-resource-manager/management/resource-providers-and-types.md). + For more information about registering and viewing resource providers, see [Resource Providers and Types](../../../azure-resource-manager/management/resource-providers-and-types.md). -1. If you haven't already, install Azure CLI. 
You can get the latest version at - [Install Azure CLI on Windows](/cli/azure/install-azure-cli-windows). +1. If you haven't already, install Azure CLI. You can get the latest version at [Install Azure CLI on Windows](/cli/azure/install-azure-cli-windows). ## Create and assign a policy definition -The first step toward better visibility of your resources is to create and assign policies over your -resources. The next step is to learn how to programmatically create and assign a policy. The example -policy audits storage accounts that are open to all public networks using PowerShell, Azure CLI, and -HTTP requests. +The first step toward better visibility of your resources is to create and assign policies over your resources. The next step is to learn how to programmatically create and assign a policy. The example policy audits storage accounts that are open to all public networks using PowerShell, Azure CLI, and HTTP requests. ### Create and assign a policy definition with PowerShell HTTP requests. ```json {- "if": { - "allOf": [{ - "field": "type", - "equals": "Microsoft.Storage/storageAccounts" - }, - { - "field": "Microsoft.Storage/storageAccounts/networkAcls.defaultAction", - "equals": "Allow" - } - ] - }, - "then": { - "effect": "audit" - } + "if": { + "allOf": [ + { + "field": "type", + "equals": "Microsoft.Storage/storageAccounts" + }, + { + "field": "Microsoft.Storage/storageAccounts/networkAcls.defaultAction", + "equals": "Allow" + } + ] + }, + "then": { + "effect": "audit" + } } ``` - For more information about authoring a policy definition, see [Azure Policy Definition - Structure](../concepts/definition-structure.md). + For more information about authoring a policy definition, see [Azure Policy Definition Structure](../concepts/definition-structure.md). -1. Run the following command to create a policy definition using the AuditStorageAccounts.json file. +1. Run the following command to create a policy definition using the _AuditStorageAccounts.json_ file. ```azurepowershell-interactive New-AzPolicyDefinition -Name 'AuditStorageAccounts' -DisplayName 'Audit Storage Accounts Open to Public Networks' -Policy 'AuditStorageAccounts.json' ``` - The command creates a policy definition named _Audit Storage Accounts Open to Public Networks_. - For more information about other parameters that you can use, see - [New-AzPolicyDefinition](/powershell/module/az.resources/new-azpolicydefinition). + The command creates a policy definition named _Audit Storage Accounts Open to Public Networks_. For more information about other parameters that you can use, see [New-AzPolicyDefinition](/powershell/module/az.resources/new-azpolicydefinition). - When called without location parameters, `New-AzPolicyDefinition` defaults to saving the policy - definition in the selected subscription of the sessions context. To save the definition to a - different location, use the following parameters: + When called without location parameters, `New-AzPolicyDefinition` defaults to saving the policy definition in the selected subscription of the sessions context. To save the definition to a different location, use the following parameters: - **SubscriptionId** - Save to a different subscription. Requires a _GUID_ value. - **ManagementGroupName** - Save to a management group. Requires a _string_ value. -1. After you create your policy definition, you can create a policy assignment by running the - following commands: +1. 
After you create your policy definition, you can create a policy assignment by running the following commands: ```azurepowershell-interactive $rg = Get-AzResourceGroup -Name 'ContosoRG' HTTP requests. Replace _ContosoRG_ with the name of your intended resource group. - The **Scope** parameter on `New-AzPolicyAssignment` works with management group, subscription, - resource group, or a single resource. The parameter uses a full resource path, which the - **ResourceId** property on `Get-AzResourceGroup` returns. The pattern for **Scope** for each - container is as follows. Replace `{rName}`, `{rgName}`, `{subId}`, and `{mgName}` with your - resource name, resource group name, subscription ID, and management group name, respectively. - `{rType}` would be replaced with the **resource type** of the resource, such as - `Microsoft.Compute/virtualMachines` for a VM. + The `Scope` parameter on `New-AzPolicyAssignment` works with management group, subscription, resource group, or a single resource. The parameter uses a full resource path, which the `ResourceId` property on `Get-AzResourceGroup` returns. The pattern for `Scope` for each container is as follows. Replace `{rName}`, `{rgName}`, `{subId}`, and `{mgName}` with your resource name, resource group name, subscription ID, and management group name, respectively. `{rType}` would be replaced with the _resource type_ of the resource, such as `Microsoft.Compute/virtualMachines` for a virtual machine. - Resource - `/subscriptions/{subID}/resourceGroups/{rgName}/providers/{rType}/{rName}` - Resource group - `/subscriptions/{subId}/resourceGroups/{rgName}` - Subscription - `/subscriptions/{subId}` - Management group - `/providers/Microsoft.Management/managementGroups/{mgName}` -For more information about managing resource policies using the Resource Manager PowerShell -module, see [Az.Resources](/powershell/module/az.resources/#policy). +For more information about managing resource policies using the Resource Manager PowerShell module, see [Az.Resources](/powershell/module/az.resources/#policy). ### Create and assign a policy definition using ARMClient Use the following procedure to create a policy definition. ```json "properties": {- "displayName": "Audit Storage Accounts Open to Public Networks", - "policyType": "Custom", - "mode": "Indexed", - "description": "This policy ensures that storage accounts with exposure to Public Networks are audited.", - "parameters": {}, - "policyRule": { - "if": { - "allOf": [{ - "field": "type", - "equals": "Microsoft.Storage/storageAccounts" - }, - { - "field": "Microsoft.Storage/storageAccounts/networkAcls.defaultAction", - "equals": "Allow" - } - ] + "displayName": "Audit Storage Accounts Open to Public Networks", + "policyType": "Custom", + "mode": "Indexed", + "description": "This policy ensures that storage accounts with exposure to Public Networks are audited.", + "parameters": {}, + "policyRule": { + "if": { + "allOf": [ + { + "field": "type", + "equals": "Microsoft.Storage/storageAccounts" },- "then": { - "effect": "audit" + { + "field": "Microsoft.Storage/storageAccounts/networkAcls.defaultAction", + "equals": "Allow" }+ ] + }, + "then": { + "effect": "audit" }+ } } ``` Use the following procedure to create a policy definition. 
armclient PUT "/providers/Microsoft.Management/managementgroups/{managementGroupId}/providers/Microsoft.Authorization/policyDefinitions/AuditStorageAccounts?api-version=2021-09-01" @<path to policy definition JSON file> ``` - Replace the preceding {subscriptionId} with the ID of your subscription or {managementGroupId} - with the ID of your [management group](../../management-groups/overview.md). + Replace the preceding `{subscriptionId}` with the ID of your subscription or `{managementGroupId}` with the ID of your [management group](../../management-groups/overview.md). - For more information about the structure of the query, see - [Azure Policy Definitions - Create or Update](/rest/api/policy/policy-definitions/create-or-update) - and - [Policy Definitions - Create or Update At Management Group](/rest/api/policy/policy-definitions/create-or-update-at-management-group). + For more information about the structure of the query, see [Azure Policy Definitions - Create or Update](/rest/api/policy/policy-definitions/create-or-update) and [Policy Definitions - Create or Update At Management Group](/rest/api/policy/policy-definitions/create-or-update-at-management-group). -Use the following procedure to create a policy assignment and assign the policy definition at the -resource group level. +Use the following procedure to create a policy assignment and assign the policy definition at the resource group level. -1. Copy the following JSON snippet to create a JSON policy assignment file. Replace example - information in <> symbols with your own values. +1. Copy the following JSON snippet to create a JSON policy assignment file. Replace example information in <> symbols with your own values. ```json {- "properties": { - "description": "This policy assignment makes sure that storage accounts with exposure to Public Networks are audited.", - "displayName": "Audit Storage Accounts Open to Public Networks Assignment", - "parameters": {}, - "policyDefinitionId": "/subscriptions/<subscriptionId>/providers/Microsoft.Authorization/policyDefinitions/Audit Storage Accounts Open to Public Networks", - "scope": "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>" - } + "properties": { + "description": "This policy assignment makes sure that storage accounts with exposure to Public Networks are audited.", + "displayName": "Audit Storage Accounts Open to Public Networks Assignment", + "parameters": {}, + "policyDefinitionId": "/subscriptions/<subscriptionId>/providers/Microsoft.Authorization/policyDefinitions/Audit Storage Accounts Open to Public Networks", + "scope": "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>" + } } ``` resource group level. Replace example information in <> symbols with your own values. - For more information about making HTTP calls to the REST API, see - [Azure REST API Resources](/rest/api/resources/). + For more information about making HTTP calls to the REST API, see [Azure REST API Resources](/rest/api/resources/). 
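As a quick sanity check after the preceding ARMClient calls, the definition can be read back with a GET against the same resource path. This is a sketch that assumes the subscription-level path and the api-version shown in the PUT examples above.

```
armclient GET "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/policyDefinitions/AuditStorageAccounts?api-version=2021-09-01"
```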
### Create and assign a policy definition with Azure CLI To create a policy definition, use the following procedure: ```json {- "if": { - "allOf": [{ - "field": "type", - "equals": "Microsoft.Storage/storageAccounts" - }, - { - "field": "Microsoft.Storage/storageAccounts/networkAcls.defaultAction", - "equals": "Allow" - } - ] - }, - "then": { - "effect": "audit" - } + "if": { + "allOf": [ + { + "field": "type", + "equals": "Microsoft.Storage/storageAccounts" + }, + { + "field": "Microsoft.Storage/storageAccounts/networkAcls.defaultAction", + "equals": "Allow" + } + ] + }, + "then": { + "effect": "audit" + } } ``` - For more information about authoring a policy definition, see [Azure Policy Definition - Structure](../concepts/definition-structure.md). + For more information about authoring a policy definition, see [Azure Policy Definition Structure](../concepts/definition-structure.md). 1. Run the following command to create a policy definition: To create a policy definition, use the following procedure: az policy definition create --name 'audit-storage-accounts-open-to-public-networks' --display-name 'Audit Storage Accounts Open to Public Networks' --description 'This policy ensures that storage accounts with exposures to public networks are audited.' --rules '<path to json file>' --mode All ``` - The command creates a policy definition named _Audit Storage Accounts Open to Public Networks_. - For more information about other parameters that you can use, see - [az policy definition create](/cli/azure/policy/definition#az-policy-definition-create). + The command creates a policy definition named _Audit Storage Accounts Open to Public Networks_. For more information about other parameters that you can use, see [az policy definition create](/cli/azure/policy/definition#az-policy-definition-create). - When called without location parameters, `az policy definition creation` defaults to saving the - policy definition in the selected subscription of the sessions context. To save the definition to - a different location, use the following parameters: + When called without location parameters, `az policy definition creation` defaults to saving the policy definition in the selected subscription of the sessions context. To save the definition to a different location, use the following parameters: - - **subscription** - Save to a different subscription. Requires a _GUID_ value for the - subscription ID or a _string_ value for the subscription name. + - **subscription** - Save to a different subscription. Requires a _GUID_ value for the subscription ID or a _string_ value for the subscription name. - **management-group** - Save to a management group. Requires a _string_ value. -1. Use the following command to create a policy assignment. Replace example information in <> - symbols with your own values. +1. Use the following command to create a policy assignment. Replace example information in angle brackets `< >` symbols with your own values. ```azurecli-interactive az policy assignment create --name '<name>' --scope '<scope>' --policy '<policy definition ID>' ``` - The **scope** parameter on `az policy assignment create` works with management group, - subscription, resource group, or a single resource. The parameter uses a full resource path. The - pattern for **scope** for each container is as follows. Replace `{rName}`, `{rgName}`, `{subId}`, - and `{mgName}` with your resource name, resource group name, subscription ID, and management - group name, respectively. 
`{rType}` would be replaced with the **resource type** of the resource, - such as `Microsoft.Compute/virtualMachines` for a VM. + The `scope` parameter on `az policy assignment create` works with management group, subscription, resource group, or a single resource. The parameter uses a full resource path. The pattern for `scope` for each container is as follows. Replace `{rName}`, `{rgName}`, `{subId}`, and `{mgName}` with your resource name, resource group name, subscription ID, and management group name, respectively. `{rType}` would be replaced with the _resource type_ of the resource, such as `Microsoft.Compute/virtualMachines` for a virtual machine. - Resource - `/subscriptions/{subID}/resourceGroups/{rgName}/providers/{rType}/{rName}` - Resource group - `/subscriptions/{subID}/resourceGroups/{rgName}` You can get the Azure Policy Definition ID by using PowerShell with the followin az policy definition show --name 'Audit Storage Accounts with Open Public Networks' ``` -The policy definition ID for the policy definition that you created should resemble the following -example: +The policy definition ID for the policy definition that you created should resemble the following example: ```output "/subscription/<subscriptionId>/providers/Microsoft.Authorization/policyDefinitions/Audit Storage Accounts Open to Public Networks" ``` -For more information about how you can manage resource policies with Azure CLI, see -[Azure CLI Resource Policies](/cli/azure/policy). +For more information about how you can manage resource policies with Azure CLI, see [Azure CLI Resource Policies](/cli/azure/policy). ## Next steps Review the following articles for more information about the commands and queries in this article. -- [Azure REST API Resources](/rest/api/resources/)-- [Azure PowerShell Modules](/powershell/module/az.resources/#policy)-- [Azure CLI Policy Commands](/cli/azure/policy)-- [Azure Policy resource provider REST API reference](/rest/api/policy)+- [Azure REST API Resources](/rest/api/resources/). +- [Azure PowerShell Modules](/powershell/module/az.resources/#policy). +- [Azure CLI Policy Commands](/cli/azure/policy). +- [Azure Policy resource provider REST API reference](/rest/api/policy). - [Organize your resources with Azure management groups](../../management-groups/overview.md). |
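Putting the Azure CLI steps above together, a minimal end-to-end sketch looks like the following. The rules file path and the assignment name are placeholders, and the scope uses the resource group pattern described above.

```azurecli-interactive
# Create the definition from a local rules file (placeholder path).
az policy definition create \
  --name 'audit-storage-accounts-open-to-public-networks' \
  --display-name 'Audit Storage Accounts Open to Public Networks' \
  --description 'This policy ensures that storage accounts with exposures to public networks are audited.' \
  --rules 'AuditStorageAccounts.json' \
  --mode All

# Assign it at resource group scope; <subscriptionId>, <resourceGroupName>, and the assignment name are placeholders.
az policy assignment create \
  --name 'audit-storage-rg' \
  --scope '/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>' \
  --policy 'audit-storage-accounts-open-to-public-networks'
```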
governance | Australia Ism | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/australia-ism.md | Title: Regulatory Compliance details for Australian Government ISM PROTECTED description: Details of the Australian Government ISM PROTECTED Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/23/2024 Last updated : 09/30/2024 |
governance | Azure Security Benchmark | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/azure-security-benchmark.md | Title: Regulatory Compliance details for Microsoft cloud security benchmark description: Details of the Microsoft cloud security benchmark Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/23/2024 Last updated : 09/30/2024 |
governance | Built In Initiatives | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-initiatives.md | Title: List of built-in policy initiatives description: List built-in policy initiatives for Azure Policy. Categories include Regulatory Compliance, Azure Machine Configuration, and more. Previously updated : 09/23/2024 Last updated : 09/30/2024 The name on each built-in links to the initiative definition source on the [!INCLUDE [azure-policy-reference-policysets-network](../../../../includes/policy/reference/bycat/policysets-network.md)] +## Nexus ++ ## Regulatory Compliance [!INCLUDE [azure-policy-reference-policysets-regulatory-compliance](../../../../includes/policy/reference/bycat/policysets-regulatory-compliance.md)] |
governance | Built In Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-policies.md | Title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Azure Machine Configuration, and more. Previously updated : 09/23/2024 Last updated : 09/30/2024 |
governance | Canada Federal Pbmm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/canada-federal-pbmm.md | Title: Regulatory Compliance details for Canada Federal PBMM description: Details of the Canada Federal PBMM Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/23/2024 Last updated : 09/30/2024 |
governance | Cis Azure 1 1 0 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-1-0.md | Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.1.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.1.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/23/2024 Last updated : 09/30/2024 |
governance | Cis Azure 1 3 0 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-3-0.md | Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.3.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.3.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/23/2024 Last updated : 09/30/2024 |
governance | Cis Azure 1 4 0 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-4-0.md | Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.4.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.4.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/23/2024 Last updated : 09/30/2024 |
governance | Cis Azure 2 0 0 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-2-0-0.md | Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 2.0.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 2.0.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/23/2024 Last updated : 09/30/2024 |
governance | Cmmc L3 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cmmc-l3.md | Title: Regulatory Compliance details for CMMC Level 3 description: Details of the CMMC Level 3 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/23/2024 Last updated : 09/30/2024 |
governance | Fedramp High | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/fedramp-high.md | Title: Regulatory Compliance details for FedRAMP High description: Details of the FedRAMP High Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/23/2024 Last updated : 09/30/2024 |
governance | Fedramp Moderate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/fedramp-moderate.md | Title: Regulatory Compliance details for FedRAMP Moderate description: Details of the FedRAMP Moderate Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/23/2024 Last updated : 09/30/2024 |
governance | Gov Azure Security Benchmark | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-azure-security-benchmark.md | Title: Regulatory Compliance details for Microsoft cloud security benchmark (Azure Government) description: Details of the Microsoft cloud security benchmark (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/23/2024 Last updated : 09/30/2024 |
governance | Gov Cis Azure 1 1 0 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cis-azure-1-1-0.md | Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.1.0 (Azure Government) description: Details of the CIS Microsoft Azure Foundations Benchmark 1.1.0 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/23/2024 Last updated : 09/30/2024 |
governance | Gov Cis Azure 1 3 0 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cis-azure-1-3-0.md | Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.3.0 (Azure Government) description: Details of the CIS Microsoft Azure Foundations Benchmark 1.3.0 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/23/2024 Last updated : 09/30/2024 |
governance | Gov Cmmc L3 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cmmc-l3.md | Title: Regulatory Compliance details for CMMC Level 3 (Azure Government) description: Details of the CMMC Level 3 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/23/2024 Last updated : 09/30/2024 |
governance | Gov Fedramp High | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-fedramp-high.md | Title: Regulatory Compliance details for FedRAMP High (Azure Government) description: Details of the FedRAMP High (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/23/2024 Last updated : 09/30/2024 |
governance | Gov Fedramp Moderate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-fedramp-moderate.md | Title: Regulatory Compliance details for FedRAMP Moderate (Azure Government) description: Details of the FedRAMP Moderate (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/23/2024 Last updated : 09/30/2024 |
governance | Gov Irs 1075 Sept2016 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-irs-1075-sept2016.md | Title: Regulatory Compliance details for IRS 1075 September 2016 (Azure Government) description: Details of the IRS 1075 September 2016 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/23/2024 Last updated : 09/30/2024 |
governance | Gov Iso 27001 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-iso-27001.md | Title: Regulatory Compliance details for ISO 27001:2013 (Azure Government) description: Details of the ISO 27001:2013 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/23/2024 Last updated : 09/30/2024 |
governance | Gov Nist Sp 800 171 R2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-nist-sp-800-171-r2.md | Title: Regulatory Compliance details for NIST SP 800-171 R2 (Azure Government) description: Details of the NIST SP 800-171 R2 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/23/2024 Last updated : 09/30/2024 |
governance | Gov Nist Sp 800 53 R4 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-nist-sp-800-53-r4.md | Title: Regulatory Compliance details for NIST SP 800-53 Rev. 4 (Azure Government) description: Details of the NIST SP 800-53 Rev. 4 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/23/2024 Last updated : 09/30/2024 |
governance | Gov Nist Sp 800 53 R5 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-nist-sp-800-53-r5.md | Title: Regulatory Compliance details for NIST SP 800-53 Rev. 5 (Azure Government) description: Details of the NIST SP 800-53 Rev. 5 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/23/2024 Last updated : 09/30/2024 |
governance | Gov Soc 2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-soc-2.md | Title: Regulatory Compliance details for System and Organization Controls (SOC) 2 (Azure Government) description: Details of the System and Organization Controls (SOC) 2 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/23/2024 Last updated : 09/30/2024 |
governance | Hipaa Hitrust 9 2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/hipaa-hitrust-9-2.md | Title: Regulatory Compliance details for HIPAA HITRUST 9.2 description: Details of the HIPAA HITRUST 9.2 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/23/2024 Last updated : 09/30/2024 |
governance | Irs 1075 Sept2016 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/irs-1075-sept2016.md | Title: Regulatory Compliance details for IRS 1075 September 2016 description: Details of the IRS 1075 September 2016 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/23/2024 Last updated : 09/30/2024 |
governance | Iso 27001 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/iso-27001.md | Title: Regulatory Compliance details for ISO 27001:2013 description: Details of the ISO 27001:2013 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/23/2024 Last updated : 09/30/2024 |
governance | Mcfs Baseline Confidential | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/mcfs-baseline-confidential.md | Title: Regulatory Compliance details for Microsoft Cloud for Sovereignty Baseline Confidential Policies description: Details of the Microsoft Cloud for Sovereignty Baseline Confidential Policies Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/23/2024 Last updated : 09/30/2024 |
governance | Mcfs Baseline Global | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/mcfs-baseline-global.md | Title: Regulatory Compliance details for Microsoft Cloud for Sovereignty Baseline Global Policies description: Details of the Microsoft Cloud for Sovereignty Baseline Global Policies Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/23/2024 Last updated : 09/30/2024 |
governance | Nist Sp 800 171 R2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nist-sp-800-171-r2.md | Title: Regulatory Compliance details for NIST SP 800-171 R2 description: Details of the NIST SP 800-171 R2 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/23/2024 Last updated : 09/30/2024 |
governance | Nist Sp 800 53 R4 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nist-sp-800-53-r4.md | Title: Regulatory Compliance details for NIST SP 800-53 Rev. 4 description: Details of the NIST SP 800-53 Rev. 4 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/23/2024 Last updated : 09/30/2024 |
governance | Nist Sp 800 53 R5 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nist-sp-800-53-r5.md | Title: Regulatory Compliance details for NIST SP 800-53 Rev. 5 description: Details of the NIST SP 800-53 Rev. 5 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/23/2024 Last updated : 09/30/2024 |
governance | Nl Bio Cloud Theme | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nl-bio-cloud-theme.md | Title: Regulatory Compliance details for NL BIO Cloud Theme description: Details of the NL BIO Cloud Theme Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/23/2024 Last updated : 09/30/2024 |
governance | Pci Dss 3 2 1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/pci-dss-3-2-1.md | Title: Regulatory Compliance details for PCI DSS 3.2.1 description: Details of the PCI DSS 3.2.1 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/23/2024 Last updated : 09/30/2024 |
governance | Pci Dss 4 0 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/pci-dss-4-0.md | Title: Regulatory Compliance details for PCI DSS v4.0 description: Details of the PCI DSS v4.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/23/2024 Last updated : 09/30/2024 |
governance | Rbi Itf Banks 2016 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/rbi-itf-banks-2016.md | Title: Regulatory Compliance details for Reserve Bank of India IT Framework for Banks v2016 description: Details of the Reserve Bank of India IT Framework for Banks v2016 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/23/2024 Last updated : 09/30/2024 |
governance | Rbi Itf Nbfc 2017 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/rbi-itf-nbfc-2017.md | Title: Regulatory Compliance details for Reserve Bank of India - IT Framework for NBFC description: Details of the Reserve Bank of India - IT Framework for NBFC Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/23/2024 Last updated : 09/30/2024 |
governance | Resource Graph Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/resource-graph-samples.md | Title: Azure Resource Graph sample queries for Azure Policy description: Sample Azure Resource Graph queries for Azure Policy showing use of resource types and tables to access Azure Policy related resources and properties. Previously updated : 06/10/2024 Last updated : 09/30/2024 |
governance | Rmit Malaysia | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/rmit-malaysia.md | Title: Regulatory Compliance details for RMIT Malaysia description: Details of the RMIT Malaysia Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/23/2024 Last updated : 09/30/2024 |
governance | Soc 2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/soc-2.md | Title: Regulatory Compliance details for System and Organization Controls (SOC) 2 description: Details of the System and Organization Controls (SOC) 2 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/23/2024 Last updated : 09/30/2024 |
governance | Spain Ens | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/spain-ens.md | Title: Regulatory Compliance details for Spain ENS description: Details of the Spain ENS Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/23/2024 Last updated : 09/30/2024 |
governance | Swift Csp Cscf 2021 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/swift-csp-cscf-2021.md | Title: Regulatory Compliance details for SWIFT CSP-CSCF v2021 description: Details of the SWIFT CSP-CSCF v2021 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/23/2024 Last updated : 09/30/2024 |
governance | Swift Csp Cscf 2022 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/swift-csp-cscf-2022.md | Title: Regulatory Compliance details for SWIFT CSP-CSCF v2022 description: Details of the SWIFT CSP-CSCF v2022 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/23/2024 Last updated : 09/30/2024 |
governance | Ukofficial Uknhs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/ukofficial-uknhs.md | Title: Regulatory Compliance details for UK OFFICIAL and UK NHS description: Details of the UK OFFICIAL and UK NHS Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/23/2024 Last updated : 09/30/2024 |
hdinsight | Hdinsight Component Retirements And Action Required | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-component-retirements-and-action-required.md | + + Title: Azure HDInsight component retirements and action required +description: Learn about HDInsight retirement versions and their components in Azure HDInsight clusters. ++ Last updated : 09/30/2024+++# Azure HDInsight component retirements and action required ++This page lists all the upcoming retirements of HDInsight versions, including their components and services. If you're currently on one of the versions mentioned on this page, we strongly recommend that you immediately migrate to the latest version. If you choose not to migrate and continue using any of the following versions, be aware that risk remains associated with your continued usage. ++## Call to action ++To maintain the security posture, migrate to the latest image of [HDInsight 5.1](./hdinsight-5x-component-versioning.md#open-source-components-available-with-hdinsight-5x), which has been generally available since November 1, 2023. This release contains all the [latest versions of supported software](./hdinsight-5x-component-versioning.md) along with significant improvements to the security patches on open-source components. +++## Supported HDInsight versions ++This table lists the versions of HDInsight that are available in the Azure portal and other deployment methods like PowerShell, CLI, and the .NET SDK. ++HDInsight bundles open-source components and the HDInsight platform into a package that is deployed on a cluster. For more information, see [how HDInsight versioning works](hdinsight-overview-versioning.md). ++### HDInsight versions ++| HDInsight version | VM OS | Release date| Support type | Support expiration date | Retirement date | High availability | Action Required| +| | | | | | | || +| [HDInsight 5.1](./hdinsight-5x-component-versioning.md) |Ubuntu 18.0.4 LTS |November 1, 2023 | [Standard](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | Not announced |Not announced| Yes |-| +| [HDInsight 5.0](./hdinsight-5x-component-versioning.md) |Ubuntu 18.0.4 LTS |March 11, 2022 | [Basic](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | March 31, 2025 | March 31, 2025| Yes | [HDInsight 5.x component versions](./hdinsight-5x-component-versioning.md)| +| HDInsight 4.0 |Ubuntu 18.0.4 LTS |September 24, 2018 | [Basic](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | March 31, 2025 | March 31, 2025 |Yes |[migrate your HDInsight clusters to 5.1](https://azure.microsoft.com/updates/azure-hdinsight-40-will-be-retired-on-31-march-2025-migrate-your-hdinsight-clusters-to-51/) | +++**Support expiration** means that Microsoft no longer provides support for the specific HDInsight version. You might not be able to create clusters from the Azure portal. ++**Retirement** means that existing clusters of an HDInsight version continue to run as is. You can't create new clusters of this version through any means, which includes the CLI and SDKs. Other control plane features, such as manual scaling and autoscaling, aren't guaranteed to work after the retirement date. Support isn't available for retired versions.
+++|Retirement Item | Retirement Date | Action Required by Customers| Cluster creation required?| +|-|-|-|-| +|[Basic and Standard A-series VMs Retirement](https://azure.microsoft.com/updates/basic-and-standard-aseries-vms-on-hdinsight-will-retire-on-31-august-2024/) |August 31, 2024 |[Av1-series retirement - Azure Virtual Machines](/azure/virtual-machines/sizes/migration-guides/av1-series-retirement) |N| +|[Azure Monitor experience (preview)](https://azure.microsoft.com/updates/v2/hdinsight-azure-monitor-experience-retirement/) | February 01, 2025 |[Azure Monitor Agent (AMA) migration guide for Azure HDInsight clusters](./azure-monitor-agent.md) |Y| +++## Next steps ++- [Supported Apache components and versions in HDInsight](./hdinsight-component-versioning.md) |
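The retirement guidance above hinges on knowing which clusters still run HDInsight 4.0 or 5.0. A minimal Azure CLI sketch for taking that inventory follows; the `clusterVersion` query path is an assumption rather than something stated in the article.

```azurecli
# List every HDInsight cluster in the current subscription with its version,
# so clusters still on 4.0 or 5.0 can be flagged for migration to 5.1.
az hdinsight list \
    --query "[].{name:name, version:properties.clusterVersion}" \
    --output table
```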
hdinsight | Hdinsight Version Release | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-version-release.md | - Title: HDInsight 4.0 overview - Azure -description: Compare HDInsight 3.6 to HDInsight 4.0 features, limitations, and upgrade recommendations. -- Previously updated : 12/05/2023---# Azure HDInsight 4.0 overview --Azure HDInsight is one of the most popular services among enterprise customers for Apache Hadoop and Apache Spark. HDInsight 4.0 is a cloud distribution of Apache Hadoop components. This article provides information about the most recent Azure HDInsight release and how to upgrade. --## What's new in HDInsight 4.0? --### Apache Hive 3.0 and low-latency analytical processing --Apache Hive low-latency analytical processing (LLAP) uses persistent query servers and in-memory caching. This process delivers quick SQL query results on data in remote cloud storage. Hive LLAP uses a set of persistent daemons that execute fragments of Hive queries. Query execution on LLAP is similar to Hive without LLAP, with worker tasks running inside LLAP daemons instead of containers. --Benefits of Hive LLAP include: --* Ability to do deep SQL analytics without sacrificing performance and adaptability. Such as complex joins, subqueries, windowing functions, sorting, user-defined functions, and complex aggregations. --* Interactive queries against data in the same storage where data is prepared, eliminating the need to move data from storage to another engine for analytical processing. --* Caching query results allows previously computed query results to be reused. This cache saves time and resources spent running the cluster tasks required for the query. --### Hive dynamic materialized views --Hive now supports dynamic materialized views, or pre-computation of relevant summaries. The views accelerate query processing in data warehouses. Materialized views can be stored natively in Hive, and can seamlessly use LLAP acceleration. --### Hive transactional tables --HDI 4.0 includes Apache Hive 3. Hive 3 requires atomicity, consistency, isolation, and durability compliance for transactional tables that live in the Hive warehouse. ACID-compliant tables and table data are accessed and managed by Hive. Data in create, retrieve, update, and delete (CRUD) tables must be in Optimized Row Column (ORC) file format. Insert-only tables support all file formats. --> [!Note] -> ACID/transactional support only works for managed tables and not external tables. Hive external tables are designed so that external parties can read and write table data, without Hive perfoming any alteration of the underlying data. For ACID tables, Hive may alter the underlying data with compactions and transactions. --Some benefits of ACID tables are --* ACID v2 has performance improvements in both storage format and the execution engine. --* ACID is enabled by default to allow full support for data updates. --* Improved ACID capabilities allow you to update and delete at row level. --* No Performance overhead. --* No Bucketing required. --* Spark can read and write to Hive ACID tables via Hive Warehouse Connector. ---### Apache Spark --Apache Spark gets updatable tables and ACID transactions with Hive Warehouse Connector. Hive Warehouse Connector allows you to register Hive transactional tables as external tables in Spark to access full transactional functionality. Previous versions only supported table partition manipulation. Hive Warehouse Connector also supports Streaming DataFrames. 
This process streams reads and writes into transactional and streaming Hive tables from Spark. --Spark executors can connect directly to Hive LLAP daemons to retrieve and update data in a transactional manner, allowing Hive to keep control of the data. --Apache Spark on HDInsight 4.0 supports the following scenarios: --* Run machine learning model training over the same transactional table used for reporting. -* Run a Spark streaming job on the change feed from a Hive streaming table. -* Create ORC files directly from a Spark Structured Streaming job. --You no longer have to worry about accidentally trying to access Hive transactional tables directly from Spark. Resulting in inconsistent results, duplicate data, or data corruption. In HDInsight 4.0, Spark tables and Hive tables are kept in separate Metastores. Use Hive Data Warehouse Connector to explicitly register Hive transactional tables as Spark external tables. ---### Apache Oozie --Apache Oozie 4.3.1 is included in HDI 4.0 with the following changes: --* Oozie no longer runs Hive actions. Hive CLI has been removed and replaced with BeeLine. --* You can exclude unwanted dependencies from share lib by including an exclude pattern in your **job.properties** file. --## How to upgrade to HDInsight 4.0 --Thoroughly test your components before implementing the latest version in a production environment. HDInsight 4.0 is available for you to begin the upgrade process. HDInsight 3.6 is the default option to prevent accidental mishaps. --There's no supported upgrade path from previous versions of HDInsight to HDInsight 4.0. Because Metastore and blob data formats have changed, 4.0 isn't compatible with previous versions. It's important you keep your new HDInsight 4.0 environment separate from your current production environment. If you deploy HDInsight 4.0 to your current environment, your Metastore will be permanently upgraded. --## Limitations --* HDInsight 4.0 doesn't support Apache Storm. -* HDInsight 4.0 doesn't support the ML Services cluster type. -* Shell interpreter in Apache Zeppelin isn't supported in Spark and Interactive Query clusters. -* Apache Pig runs on Tez by default. However, you can change it to MapReduce. -* Spark SQL Ranger integration for row and column security is deprecated. --## Next steps --* [HBase migration guide](./hbase/apache-hbase-migrate-new-version.md) -* [Hive migration guide](./interactive-query/apache-hive-migrate-workloads.md) -* [Kafka migration guide](./kafk) -* [Azure HDInsight Documentation](index.yml) -* [Release Notes](hdinsight-release-notes.md) |
iot-hub | Module Twins Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/module-twins-dotnet.md | Now let's communicate to the cloud from your simulated device. Once a module ide To retrieve your module connection string, navigate to your [IoT hub](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Microsoft.Devices%2FIotHubs) then select **Devices**. Find and select **myFirstDevice** to open it and then select **myFirstModule** to open it. In **Module Identity Details**, copy the **Connection string (primary key)** and save it for the console app. - 1. In Visual Studio, add a new project to your solution by selecting **File** > **New** > **Project**. In **Create a new project**, select **Console App (.NET Framework)**, and select **Next**. 1. In **Configure your new project**, name the project *UpdateModuleTwinReportedProperties*, then select **Next**. |
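The module twin walkthrough above copies the module connection string from the portal. As a hedged alternative, the `azure-iot` CLI extension can return the same string; the hub name below is a placeholder, and only the device and module names (`myFirstDevice`, `myFirstModule`) come from the article.

```azurecli
# Requires the azure-iot extension: az extension add --name azure-iot
# Print the module's connection string instead of copying it from the portal.
az iot hub module-identity connection-string show \
    --hub-name myIotHub \
    --device-id myFirstDevice \
    --module-id myFirstModule \
    --output tsv
```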
iot-hub | Module Twins Portal Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/module-twins-portal-dotnet.md | Within one device identity, you can create up to 20 module identities. To add an 1. Enter the name *myFirstModule*. Save your module identity. - :::image type="content" source="./media/module-twins-portal-dotnet/add-module-identity.png" alt-text="Screenshot that shows the 'Module Identity Details' page." lightbox="./media/module-twins-portal-dotnet/add-module-identity.png"::: +1. Your new module identity appears at the bottom of the screen. Select it to see module identity details. - Your new module identity appears at the bottom of the screen. Select it to see module identity details. +1. Save the **Connection string (primary key)**. You use it in the next section to set up your module on the device in a console app. :::image type="content" source="./media/module-twins-portal-dotnet/module-identity-details.png" alt-text="Screenshot that shows the Module Identity Details menu."::: -Save the **Connection string (primary key)**. You use it in the next section to set up your module on the device in a console app. - ## Update the module twin using .NET device SDK Now let's communicate to the cloud from your simulated device. Once a module identity is created, a module twin is implicitly created in IoT Hub. In this section, you create a .NET console app on your simulated device that updates the module twin reported properties. |
load-balancer | Create Custom Http Health Probe Howto | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/create-custom-http-health-probe-howto.md | |
load-balancer | Gateway Deploy Dual Stack Load Balancer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/gateway-deploy-dual-stack-load-balancer.md | description: In this tutorial, you deploy IPv6 configurations to an existing IPv Previously updated : 09/25/2023 Last updated : 09/25/2024 Along with the Gateway Load Balancer, this scenario includes the following alrea ## Add IPv6 address ranges to an existing subnet -This article assumes you already have a Gateway Load Balancer configured for IPv4 traffic, with a corresponding VNET and subnet. In this step, you add IPv6 ranges to your Gateway Load Balancer's VNET and subnet. This range is need when creating an IPv6 frontend configuration for your Gateway Load Balancer using a private IP address from this subnet/VNET. +This article assumes you already have a Gateway Load Balancer configured for IPv4 traffic, with a corresponding virtual network and subnet. In this step, you add IPv6 ranges to your Gateway Load Balancer's virtual network and subnet. This range is need when creating an IPv6 frontend configuration for your Gateway Load Balancer using a private IP address from this subnet/virtual network. # [PowerShell](#tab/powershell) ```powershell-interactive -#Add IPv6 ranges to the VNET and subnet -#Retrieve the VNET object +#Add IPv6 ranges to the virtual network and subnet +#Retrieve the virtual network object $rg = Get-AzResourceGroup -ResourceGroupName "myResourceGroup" $vnet = Get-AzVirtualNetwork -ResourceGroupName $rg.ResourceGroupName -Name "myVNet" -#Add IPv6 prefix to the VNET +#Add IPv6 prefix to the virtual network $vnet.addressspace.addressprefixes.add("fd00:db8:deca::/48") -#Update the running VNET +#Update the running virtual network $vnet | Set-AzVirtualNetwork -#Retrieve the subnet object from the local copy of the VNET +#Retrieve the subnet object from the local copy of the virtual network $subnet= $vnet.subnets[0] #Add IPv6 prefix to the subnet $subnet.addressprefix.add("fd00:db8:deca::/64") -#Update the running VNET with the new subnet configuration +#Update the running virtual network with the new subnet configuration $vnet | Set-AzVirtualNetwork ``` az network vnet subnet update ## Add an IPv6 frontend to gateway load balancer -Now that you've added IPv6 prefix ranges to your Gateway Load Balancer's subnet and VNET, we can create a new IPv6 frontend configuration on the Gateway Load Balancer, with an IPv6 address from your subnet's range. +Now that you've added IPv6 prefix ranges to your Gateway Load Balancer's subnet and virtual network, we can create a new IPv6 frontend configuration on the Gateway Load Balancer, with an IPv6 address from your subnet's range. # [PowerShell](#tab/powershell) |
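The gateway load balancer entry above adds an IPv6 frontend with PowerShell. For illustration, a rough Azure CLI equivalent is sketched below; the load balancer, virtual network, and subnet names are placeholders, not the tutorial's actual values.

```azurecli
# Add an IPv6 frontend IP configuration that draws a private IPv6 address
# from the dual-stack subnet updated earlier in the tutorial.
az network lb frontend-ip create \
    --resource-group myResourceGroup \
    --lb-name myGatewayLoadBalancer \
    --name myIPv6FrontEnd \
    --vnet-name myVNet \
    --subnet mySubnet \
    --private-ip-address-version IPv6
```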
load-balancer | Ipv6 Add To Existing Vnet Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/ipv6-add-to-existing-vnet-cli.md | description: This article shows how to deploy IPv6 addresses to an existing appl Previously updated : 09/27/2023 Last updated : 09/30/2024 ms.devlang: azurecli az network nic ip-config create \ --lb-name myLoadBalancer ``` -## View IPv6 dual stack virtual network in Azure portal +## View IPv6 dual-stack virtual network in Azure portal -You can view the IPv6 dual stack virtual network in Azure portal as follows: +You can view the IPv6 dual-stack virtual network in Azure portal as follows: 1. In the portal's search bar, enter **virtual networks** and 1. In the **Virtual Networks** window, select **myVNet**. 1. Select **Connected devices** under **Settings** to view the attached network interfaces. The dual stack virtual network shows the three NICs with both IPv4 and IPv6 configurations. |
load-balancer | Ipv6 Add To Existing Vnet Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/ipv6-add-to-existing-vnet-powershell.md | $NIC_3 | Add-AzNetworkInterfaceIpConfig -Name MyIPv6Config -Subnet $vnet.Subnets $NIC_3 | Set-AzNetworkInterface ``` -## View IPv6 dual stack virtual network in Azure portal +## View IPv6 dual-stack virtual network in Azure portal -You can view the IPv6 dual stack virtual network in Azure portal as follows: +You can view the IPv6 dual-stack virtual network in Azure portal as follows: 1. In the portal's search bar, enter **virtual networks** and 1. In the **Virtual Networks** window, select **myVNet**. 1. Select **Connected devices** under **Settings** to view the attached network interfaces. The dual stack virtual network shows the three NICs with both IPv4 and IPv6 configurations. |
load-balancer | Load Balancer Basic Upgrade Guidance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-basic-upgrade-guidance.md | Use these PowerShell scripts to help with upgrading from Basic to Standard SKU: When manually migrating from a Basic to Standard SKU Load Balancer, there are a couple key considerations to keep in mind: -- It is not possible to mix Basic and Standard SKU IPs or Load Balancers. All Public IPs associated with a Load Balancer and its backend pool members must match.+- It isn't possible to mix Basic and Standard SKU IPs or Load Balancers. All Public IPs associated with a Load Balancer and its backend pool members must match. - Public IP allocation method must be set to 'static' when a Public IP is disassociated from a Load Balancer or Virtual Machine, or the allocated IP will be lost. - Standard SKU public IP addresses are secure by default, requiring that a Network Security Group explicitly allow traffic to any public IPs - Standard SKU Load Balancers block outbound access by default. To enable outbound access, a public load balancer needs an outbound rule for backend members. For private load balancers, either configure a NAT Gateway on the backend pool members' subnet or add instance-level public IP addresses to each backend member. Suggested order of operations for manually upgrading a Basic Load Balancer in co 1. Health probes 1. Load balancing rules - use the temporary frontend configuration 1. NAT rules - use the temporary frontend configuration-1. For public load balancers, if you do not have one already, [create a new Network Security Group](../virtual-network/tutorial-filter-network-traffic.md) with allow rules for the traffic coming through the Load Balancer rules +1. For public load balancers, if you don't have one already, [create a new Network Security Group](../virtual-network/tutorial-filter-network-traffic.md) with allow rules for the traffic coming through the Load Balancer rules 1. For Virtual Machine Scale Set backends, remove the Load Balancer association in the Networking settings and [update the instances](/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-perform-manual-upgrades) 1. Delete the Basic Load Balancer > [!NOTE] Suggested order of operations for manually upgrading a Basic Load Balancer in co ## FAQ ### Will the Basic Load Balancer retirement impact Cloud Services Extended Support (CSES) deployments?-No, this retirement will not impact your existing or new deployments on CSES. This means that you can still create and use Basic Load Balancers for CSES deployments. However, we advise using Standard SKU on ARM native resources (those that do not depend on CSES) when possible, because Standard has more advantages than Basic. +No, this retirement won't impact your existing or new deployments on CSES. This means that you can still create and use Basic Load Balancers for CSES deployments. However, we advise using Standard SKU on Azure Resource Manager (ARM) native resources (those that don't depend on CSES) when possible, because Standard has more advantages than Basic. ## Next Steps |
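The upgrade guidance above warns that a public IP must use static allocation before it's disassociated, or the address is lost. A minimal sketch of that step with the Azure CLI, assuming placeholder resource names:

```azurecli
# Switch the public IP to static allocation before disassociating it,
# so the address isn't released back to the pool.
az network public-ip update \
    --resource-group myResourceGroup \
    --name myPublicIP \
    --allocation-method Static
```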
load-balancer | Quickstart Load Balancer Standard Internal Bicep | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-bicep.md | |
load-balancer | Quickstart Load Balancer Standard Internal Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-cli.md | description: This quickstart shows how to create an internal load balancer using Previously updated : 05/01/2023 Last updated : 09/30/2024 #Customer intent: I want to create a load balancer so that I can load balance internal traffic to VMs. az network bastion create \ --name myBastionHost \ --public-ip-address myBastionIP \ --vnet-name myVNet \- --location westus2 + --location westus2 \ + --only-show-errors \ + --no-wait ``` It can take a few minutes for the Azure Bastion host to deploy. |
load-balancer | Quickstart Load Balancer Standard Public Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-cli.md | description: This quickstart shows how to create a public load balancer using th Previously updated : 09/25/2023 Last updated : 09/30/2024 #Customer intent: I want to create a load balancer so that I can load balance internet traffic to VMs. |
load-balancer | Troubleshoot Outbound Connection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/troubleshoot-outbound-connection.md | -Learn troubleshooting guidance for outbound connections in Azure Load Balancer. This includes understanding source network address translation (SNAT) and it's impact on connections, using individual public IPs on VMs, and designing applications for connection efficiency to avoid SNAT port exhaustion. Most problems with outbound connectivity that customers experience is due to SNAT port exhaustion and connection timeouts leading to dropped packets. +Learn troubleshooting guidance for outbound connections in Azure Load Balancer. This includes understanding source network address translation (SNAT) and its impact on connections, using individual public IPs on VMs, and designing applications for connection efficiency to avoid SNAT port exhaustion. Most problems with outbound connectivity that customers experience are due to SNAT port exhaustion and connection timeouts leading to dropped packets. To learn more about SNAT ports, see [Source Network Address Translation for outbound connections](load-balancer-outbound-connections.md). Azure NAT Gateway is a highly resilient and scalable Azure service that provides A NAT gateway selects ports at random from the available pool of ports. If there aren't available ports, SNAT ports are reused as long as there's no existing connection to the same destination public IP and port. This port selection and reuse behavior of a NAT gateway makes it less likely to experience connection timeouts. - To learn more about how SNAT and port usage works for NAT gateway, see [SNAT fundamentals](../virtual-network/nat-gateway/nat-gateway-resource.md#fundamentals). There are a few conditions in which you won't be able to use NAT gateway for outbound connections. For more information on NAT gateway limitations, see [NAT Gateway limitations](../virtual-network/nat-gateway/nat-gateway-resource.md#limitations). + To learn more about how SNAT and port usage works for NAT gateway, see [SNAT fundamentals](../virtual-network/nat-gateway/nat-gateway-resource.md#fundamentals). There are a few conditions where you can't use NAT gateway for outbound connections. For more information on NAT gateway limitations, see [NAT Gateway limitations](../virtual-network/nat-gateway/nat-gateway-resource.md#limitations). If you're unable to use a NAT gateway for outbound connectivity, refer to the other migration options described in this article. ### Configure load balancer outbound rules to maximize SNAT ports per VM -If you're using a public standard load balancer and experience SNAT exhaustion or connection failures, ensure you're using outbound rules with manual port allocation. Otherwise, you're likely relying on load balancer's default port allocation. Default port allocation automatically assigns a conservative number of ports, which is based on the number of instances in your backend pool. Default port allocation isn't a recommended method for enabling outbound connections. When your backend pool scales, your connections may be impacted if ports need to be reallocated. +If you're using a public standard load balancer and experience SNAT exhaustion or connection failures, ensure you're using outbound rules with manual port allocation. Otherwise, you're likely relying on load balancer's default port allocation.
Default port allocation automatically assigns a conservative number of ports, which is based on the number of instances in your backend pool. Default port allocation isn't a recommended method for enabling outbound connections. When your backend pool scales, your connections can be impacted if ports need to be reallocated. To learn more about default port allocation, see [Source Network Address Translation for outbound connections](load-balancer-outbound-connections.md). |
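The troubleshooting entry above recommends outbound rules with manual port allocation instead of default allocation. A minimal Azure CLI sketch of such a rule follows; the resource names and the port count of 10,000 are illustrative assumptions.

```azurecli
# Create an outbound rule with manual port allocation so each backend
# instance receives a predictable number of SNAT ports.
az network lb outbound-rule create \
    --resource-group myResourceGroup \
    --lb-name myLoadBalancer \
    --name myOutboundRule \
    --frontend-ip-configs myFrontEnd \
    --address-pool myBackendPool \
    --protocol All \
    --outbound-ports 10000
```

Each frontend public IP supplies roughly 64,000 SNAT ports, so the ports-per-instance value multiplied by the backend pool size has to fit within the frontend IPs attached to the rule.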
load-balancer | Tutorial Nat Rule Multi Instance Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-nat-rule-multi-instance-portal.md | |
load-balancer | Tutorial Protect Load Balancer Ddos | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-protect-load-balancer-ddos.md | description: Learn how to set up a public load balancer and protect it with Azur Previously updated : 06/06/2023 Last updated : 09/30/2024 |
load-balancer | Virtual Network Ipv4 Ipv6 Dual Stack Standard Load Balancer Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/virtual-network-ipv4-ipv6-dual-stack-standard-load-balancer-cli.md | description: This article shows how to deploy an IPv6 dual stack application in Previously updated : 04/17/2023 Last updated : 09/30/2024 |
load-balancer | Virtual Network Ipv4 Ipv6 Dual Stack Standard Load Balancer Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/virtual-network-ipv4-ipv6-dual-stack-standard-load-balancer-powershell.md | description: This article shows how to deploy an IPv6 dual stack application wit Previously updated : 04/17/2023 Last updated : 09/30/2024 |
load-testing | How To Parameterize Load Tests | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-parameterize-load-tests.md | You also need to grant Azure Load Testing access to your Azure key vault to retr > If you restricted access to your Azure key vault by a firewall or virtual networking, follow these steps to [grant access to trusted Azure services](/azure/key-vault/general/overview-vnet-service-endpoints#grant-access-to-trusted-azure-services). 1. Retrieve the key vault **secret identifier** for your secret. You use this secret identifier to configure your load test.-- :::image type="content" source="media/how-to-parameterize-load-tests/key-vault-secret.png" alt-text="Screenshot that shows the details of a secret in an Azure key vault."::: The **secret identifier** is the full URI of the secret in the Azure key vault. Optionally, you can also include a version number. For example, `https://myvault.vault.azure.net/secrets/mysecret/` or `https://myvault.vault.azure.net/secrets/mysecret/abcdef01-2345-6789-0abc-def012345678`. |
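The parameterization article above asks for the key vault secret identifier (the full secret URI). A one-line Azure CLI sketch that returns it, assuming placeholder vault and secret names:

```azurecli
# Return the secret identifier (the full secret URI) to paste into the
# load test configuration.
az keyvault secret show \
    --vault-name myvault \
    --name mysecret \
    --query id \
    --output tsv
```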
load-testing | How To Test Secured Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-test-secured-endpoints.md | To add a client certificate to your load test in the Azure portal: | **Name** | Name of the certificate. | | **Value** | Matches the Azure Key Vault **Secret identifier** of the certificate. | - :::image type="content" source="media/how-to-test-secured-endpoints/load-test-certificates.png" alt-text="Screenshot that shows how to add a certificate to a load test in the Azure portal." lightbox="media/how-to-test-secured-endpoints/load-test-certificates.png"::: - 1. Select **Apply**, to save the load test configuration changes. # [GitHub Actions](#tab/github) |
network-watcher | Effective Security Rules Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/effective-security-rules-overview.md | Effective security rules view is a feature in Azure Network Watcher that you can You can define a prescriptive set of security rules as a model for security governance in your organization. Then, you can implement a periodic compliance audit in a programmatic way by comparing the prescriptive rules with the effective rules for each of the virtual machines in your network. -The effective security rules applied to a network interface are an aggregation of the rules that exist in the network security group associated to a network interface and the subnet the network interface is in. For more information, see [Network security groups](../virtual-network/network-security-groups-overview.md?toc=%2Fazure%2Fnetwork-watcher%2Ftoc.json) and [How network security groups filter network traffic](../virtual-network/network-security-group-how-it-works.md?toc=%2Fazure%2Fnetwork-watcher%2Ftoc.json). Additionally, the effective security rules include the admin rules that are applied to the virtual network using the Azure Virtual Network Manager. For more information, see [Azure Virtual Network Manager](../virtual-network-manager/overview.md?toc=%2Fazure%2Fnetwork-watcher%2Ftoc.json). +The effective security rules applied to a network interface are an aggregation of rules that exist in the network security group associated to a network interface and the subnet the network interface is in. For more information, see [Network security groups](../virtual-network/network-security-groups-overview.md?toc=%2Fazure%2Fnetwork-watcher%2Ftoc.json) and [How network security groups filter network traffic](../virtual-network/network-security-group-how-it-works.md?toc=%2Fazure%2Fnetwork-watcher%2Ftoc.json). Additionally, the effective security rules include the admin rules that are applied to the virtual network using the Azure Virtual Network Manager. For more information, see [Azure Virtual Network Manager](../virtual-network-manager/overview.md?toc=%2Fazure%2Fnetwork-watcher%2Ftoc.json). ## Effective security rules in the Azure portal You can select a rule to see associated source and destination prefixes. To learn how to use effective security rules, continue to: > [!div class="nextstepaction"]-> [View details of a security rule](diagnose-vm-network-traffic-filtering-problem.md#view-details-of-a-security-rule) +> [View details of a security rule](diagnose-vm-network-traffic-filtering-problem.md#view-details-of-a-security-rule) |
network-watcher | Ip Flow Verify Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/ip-flow-verify-overview.md | -IP flow verify is a feature in Azure Network Watcher that you can use to check if a packet is allowed or denied to or from an Azure virtual machine based on the configured security and admin rules. It helps you to troubleshoot virtual machine connectivity issues by checking network security group (NSG) rules and Azure Virtual Network Manager admin rules. It's a quick and simple tool to diagnose connectivity issues to or from other Azure resources, the internet and on-premises environment. +IP flow verify is a feature in Azure Network Watcher that you can use to check if a packet is allowed or denied to or from an Azure virtual machine based on the configured security and admin rules. It helps you to troubleshoot virtual machine connectivity issues by checking network security group (NSG) rules and Azure Virtual Network Manager admin rules. It's a quick and simple tool to diagnose connectivity issues to or from other Azure resources, the internet, and on-premises environment. IP flow verify looks at the rules of all network security groups applied to a virtual machine's network interface, whether the network security group is associated to the virtual machine's subnet or network interface. It additionally, looks at the Azure Virtual Network Manager rules applied to the virtual network of the virtual machine. IP flow verify returns **Access denied** or **Access allowed**, the name of the - You must have a Network Watcher instance in the Azure subscription and region of the virtual machine. For more information, see [Enable or disable Azure Network Watcher](network-watcher-create.md). - You must have the necessary permissions to access the feature. For more information, see [RBAC permissions required to use Network Watcher capabilities](required-rbac-permissions.md).-- IP flow verify only tests TCP and UDP rules, to test ICMP traffic rules, use [NSG diagnostics](network-watcher-network-configuration-diagnostics-overview.md).-- IP flow verify only tests security and admin rules applied to a virtual machine's network interface, to test rules applied to virtual machine scale sets, use [NSG diagnostics](network-watcher-network-configuration-diagnostics-overview.md).+- IP flow verify only tests TCP and UDP rules. To test ICMP traffic rules, use [NSG diagnostics](network-watcher-network-configuration-diagnostics-overview.md). +- IP flow verify only tests security and admin rules applied to a virtual machine's network interface. To test rules applied to virtual machine scale sets, use [NSG diagnostics](network-watcher-network-configuration-diagnostics-overview.md). ## Next step |
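The IP flow verify entry above checks whether a flow is allowed or denied by the effective security and admin rules. A minimal Azure CLI sketch of the same check, with placeholder resource names and addresses:

```azurecli
# Test whether an outbound TCP flow from the VM is allowed or denied by
# the effective security and admin rules.
az network watcher test-ip-flow \
    --resource-group myResourceGroup \
    --vm myVM \
    --direction Outbound \
    --protocol TCP \
    --local 10.0.0.4:60000 \
    --remote 203.0.113.10:443
```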
network-watcher | Monitor Vm Communication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/monitor-vm-communication.md | In this tutorial, you learn how to: > * Monitor communication between the two virtual machines > * Diagnose a communication problem between the two virtual machines If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. In this section, you create **myVNet** virtual network with two subnets and an A 1. In the search box at the top of the portal, enter *virtual networks*. Select **Virtual networks** from the search results. - :::image type="content" source="./media/monitor-vm-communication/virtual-network-azure-portal.png" alt-text="Screenshot shows searching for virtual networks in the Azure portal."::: + :::image type="content" source="./media/monitor-vm-communication/virtual-network-azure-portal.png" alt-text="Screenshot shows how to search for virtual networks in the Azure portal." lightbox="./media/monitor-vm-communication/virtual-network-azure-portal.png"::: 1. Select **+ Create**. In **Create virtual network**, enter or select the following values in the **Basics** tab: In this section, you create a connection monitor to monitor communication over T 1. Select **+ Create**. - :::image type="content" source="./media/monitor-vm-communication/connection-monitor.png" alt-text="Screenshot shows Connection monitor page in the Azure portal."::: + :::image type="content" source="./media/monitor-vm-communication/connection-monitor.png" alt-text="Screenshot shows Connection monitor page in the Azure portal." lightbox="./media/monitor-vm-communication/connection-monitor.png"::: 1. Enter or select the following information in the **Basics** tab of **Create Connection Monitor**: In this section, you create a connection monitor to monitor communication over T | Region | Select **(US) East US**. | | Workspace configuration | Leave the default. | - :::image type="content" source="./media/monitor-vm-communication/create-connection-monitor-basics.png" alt-text="Screenshot shows the Basics tab of creating a connection monitor in the Azure portal."::: + :::image type="content" source="./media/monitor-vm-communication/create-connection-monitor-basics.png" alt-text="Screenshot shows the Basics tab of creating a connection monitor in the Azure portal." lightbox="./media/monitor-vm-communication/create-connection-monitor-basics.png"::: 1. Select the **Test groups** tab, or select **Next: Test groups** button. In this section, you create a connection monitor to monitor communication over T 1. In the **Add sources** page, select **myVM1** as the source endpoint, and then select **Add endpoints**. - :::image type="content" source="./media/monitor-vm-communication/add-source-endpoint.png" alt-text="Screenshot shows how to add a source endpoint for a connection monitor in the Azure portal."::: + :::image type="content" source="./media/monitor-vm-communication/add-source-endpoint.png" alt-text="Screenshot shows how to add a source endpoint for a connection monitor in the Azure portal." lightbox="./media/monitor-vm-communication/add-source-endpoint.png"::: > [!NOTE] > You can use **Subscription**, **Resource group**, **VNET**, or **Subnet** filters to narrow down the list of virtual machines. In this section, you create a connection monitor to monitor communication over T | Destination port | Enter *22*. | | Test frequency | Select the default **Every 30 seconds**. 
| - :::image type="content" source="./media/monitor-vm-communication/add-test-configuration.png" alt-text="Screenshot shows how to add a test configuration for a connection monitor in the Azure portal."::: + :::image type="content" source="./media/monitor-vm-communication/add-test-configuration.png" alt-text="Screenshot shows how to add a test configuration for a connection monitor in the Azure portal." lightbox="./media/monitor-vm-communication/add-test-configuration.png"::: 1. Select **Add test configuration**. In this section, you create a connection monitor to monitor communication over T 1. In the **Add Destinations** page, select **myVM2** as the destination endpoint, and then select **Add endpoints**. - :::image type="content" source="./media/monitor-vm-communication/add-destination-endpoint.png" alt-text="Screenshot shows how to add a destination endpoint for a connection monitor in the Azure portal."::: + :::image type="content" source="./media/monitor-vm-communication/add-destination-endpoint.png" alt-text="Screenshot shows how to add a destination endpoint for a connection monitor in the Azure portal." lightbox="./media/monitor-vm-communication/add-destination-endpoint.png"::: > [!NOTE] > In addition to the **Subscription**, **Resource group**, **VNET**, and **Subnet** filters, you can use the **Region** filter to narrow down the list of virtual machines. In this section, you view all the details of the connection monitor that you cre 1. Go to the **Connection monitor** page. If you don't see **myConnectionMonitor** in the list of connection monitors, wait a few minutes, then select **Refresh**. - :::image type="content" source="./media/monitor-vm-communication/new-connection-monitor.png" alt-text="Screenshot shows the new connection monitor that you've just created." lightbox="./media/monitor-vm-communication/new-connection-monitor.png"::: + :::image type="content" source="./media/monitor-vm-communication/new-connection-monitor.png" alt-text="Screenshot shows the new connection monitor." lightbox="./media/monitor-vm-communication/new-connection-monitor.png"::: 1. Select **myConnectionMonitor** to see the performance metrics of the connection monitor like round trip time and percentage of failed checks - :::image type="content" source="./media/monitor-vm-communication/connection-monitor-summary.png" alt-text="Screenshot shows the new connection monitor." lightbox="./media/monitor-vm-communication/connection-monitor-summary.png"::: + :::image type="content" source="./media/monitor-vm-communication/connection-monitor-summary.png" alt-text="Screenshot that shows the summary page of the new connection monitor." lightbox="./media/monitor-vm-communication/connection-monitor-summary.png"::: 1. Select **Time Intervals** to adjust the time range to see the performance metrics for a specific time period. Available time intervals are **Last 1 hour**, **Last 6 hours**, **Last 24 hours**, **Last 7 days**, and **Last 30 days**. You can also select **Custom** to specify a custom time range. When no longer needed, delete **myResourceGroup** resource group and all of the To learn how to monitor virtual machine scale set network communication, advance to the next tutorial: > [!div class="nextstepaction"]-> [Monitor network communication with a scale set](diagnose-communication-problem-between-networks.md) +> [Diagnose communication problems between networks](diagnose-communication-problem-between-networks.md) |
network-watcher | Nsg Flow Logs Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/nsg-flow-logs-tutorial.md | Title: 'Tutorial: Log network traffic flow to and from a virtual machine' + Title: 'Tutorial: Log network traffic flow to and from a VM' -description: In this tutorial, you learn how to log network traffic flow to and from a virtual machine (VM) using Network Watcher NSG flow logs capability. +description: In this tutorial, you learn how to log network traffic flow to and from a virtual machine (VM) using Network Watcher NSG flow logs. Previously updated : 09/26/2024 Last updated : 09/30/2024+ # CustomerIntent: As an Azure administrator, I need to log the network traffic to and from a virtual machine (VM) so I can analyze the data for anomalies. Network security group flow logging is a feature of Azure Network Watcher that a This tutorial helps you use NSG flow logs to log a virtual machine's network traffic that flows through the [network security group](../virtual-network/network-security-groups-overview.md) associated to its network interface. In this tutorial, you learn how to: In this section, you create **myVNet** virtual network with one subnet for the v 1. In the search box at the top of the portal, enter ***virtual networks***. Select **Virtual networks** from the search results. - :::image type="content" source="./media/nsg-flow-logs-tutorial/virtual-network-azure-portal.png" alt-text="Screenshot shows searching for virtual networks in the Azure portal."::: + :::image type="content" source="./media/nsg-flow-logs-tutorial/virtual-network-azure-portal.png" alt-text="Screenshot shows searching for virtual networks in the Azure portal." lightbox="./media/nsg-flow-logs-tutorial/virtual-network-azure-portal.png"::: 1. Select **+ Create**. In **Create virtual network**, enter or select the following values in the **Basics** tab: NSG flow logging requires the **Microsoft.Insights** provider. To check its stat 1. Confirm the status of the provider displayed is **Registered**. If the status is **NotRegistered**, select the **Microsoft.Insights** provider then select **Register**. - :::image type="content" source="./media/nsg-flow-logs-tutorial/register-microsoft-insights.png" alt-text="Screenshot of registering Microsoft Insights provider in the Azure portal."::: + :::image type="content" source="./media/nsg-flow-logs-tutorial/register-microsoft-insights.png" alt-text="Screenshot of registering Microsoft Insights provider in the Azure portal." lightbox="./media/nsg-flow-logs-tutorial/register-microsoft-insights.png"::: ## Create a storage account In this section, you go to the storage account you previously selected and downl 6. Select the ellipsis **...** to the right of the PT1H.json file, then select **Download**. - :::image type="content" source="./media/nsg-flow-logs-tutorial/nsg-log-file.png" alt-text="Screenshot showing how to download nsg flow log from the storage account container in the Azure portal."::: + :::image type="content" source="./media/nsg-flow-logs-tutorial/nsg-log-file.png" alt-text="Screenshot showing how to download nsg flow log from the storage account container in the Azure portal." lightbox="./media/nsg-flow-logs-tutorial/nsg-log-file.png"::: > [!NOTE] > You can use Azure Storage Explorer to access and download flow logs from your storage account. Fore more information, see [Get started with Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md). |
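The NSG flow logs tutorial above requires the `Microsoft.Insights` resource provider to be registered. A short Azure CLI sketch for checking and, if needed, registering it:

```azurecli
# Check the registration state of the Microsoft.Insights provider and
# register it if the state is NotRegistered.
az provider show --namespace Microsoft.Insights --query registrationState --output tsv
az provider register --namespace Microsoft.Insights
```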
network-watcher | Quickstart Configure Network Security Group Flow Logs From Bicep | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/quickstart-configure-network-security-group-flow-logs-from-bicep.md | In this quickstart, you learn how to enable [NSG flow logs](nsg-flow-logs-overvi - To deploy the Bicep files, either Azure CLI or PowerShell installed. - # [CLI](#tab/cli) + # [Azure CLI](#tab/cli) 1. [Install Azure CLI locally](/cli/azure/install-azure-cli) to run the commands. The highlighted code in the preceding sample shows an NSG flow log resource defi This quickstart assumes that you have a network security group that you can enable flow logging on. -1. Save the Bicep file as **main.bicep** to your local computer. +1. Save the [Bicep file](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.network/networkwatcher-flowLogs-create/main.bicep) as **main.bicep** to your local computer. 1. Deploy the Bicep file using either Azure CLI or Azure PowerShell. - # [CLI](#tab/cli) + # [Azure CLI](#tab/cli) ```azurecli az group create --name exampleRG --location eastus |
operator-nexus | Howto Cluster Manager | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-cluster-manager.md | Title: How to guide for running commands for Cluster Manager on Azure Operator Nexus + Title: "Cluster description: Learn to create, view, list, update, delete commands for Cluster Manager on Operator Nexus Some arguments that are available for every Azure CLI command ## Cluster Manager Identity -Starting with the 2024-06-01-preview API version, a customer can assign managed identity to a Cluster Manager. Both System-assigned and User-Assigned managed identities are supported. +Starting with the 2024-07-01 API version, a customer can assign managed identity to a Cluster Manager. Both System-assigned and User-Assigned managed identities are supported. If a Cluster Manager is created with the User-assigned managed identity, a customer is required to provision access to that identity for the Nexus platform. Specifically, `Microsoft.ManagedIdentity/userAssignedIdentities/assign/action` permission needs to be added to the User-assigned identity for `AFOI-NC-MGMT-PME-PROD` Microsoft Entra ID. It is a known limitation of the platform that will be addressed in the future. az networkcloud clustermanager update \ ### Update Cluster Manager Identities via APIs Cluster Manager managed identities can be assigned via CLI. The un-assignment of the identities can be done via API calls.-Note, `<APIVersion>` is the API version 2024-06-01-preview or newer. +Note, `<APIVersion>` is the API version 2024-07-01 or newer. - To remove all managed identities, execute: |
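The Cluster Manager entry above says identities are removed through API calls against the 2024-07-01 (or newer) API version, but the request itself isn't shown in the excerpt. The sketch below is only an assumption based on standard ARM conventions; the resource path, payload, and placeholders are not taken from the article.

```azurecli
# Clear all managed identities by patching the identity type to "None".
# The resource path and payload follow generic ARM conventions and are
# assumptions; substitute your own subscription, resource group, and name.
az rest \
    --method patch \
    --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.NetworkCloud/clusterManagers/<cluster-manager-name>?api-version=2024-07-01" \
    --headers "Content-Type=application/json" \
    --body '{"identity": {"type": "None"}}'
```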
operator-nexus | Howto Configure Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-configure-cluster.md | az networkcloud cluster create --name "$CLUSTER_NAME" --location "$LOCATION" \ ## Cluster Identity -Starting with the 2024-06-01-preview API version, a customer can assign managed identity to a Cluster. Both System-assigned and User-Assigned managed identities are supported. +Starting with the 2024-07-01 API version, a customer can assign managed identity to a Cluster. Both System-assigned and User-Assigned managed identities are supported. Managed Identity can be assigned to the Cluster during creation or update operations by providing the following parameters: Cluster create Logs can be viewed in the following locations: ## Update Cluster Identities via APIs Cluster managed identities can be assigned via CLI. The unassignment of the identities can be done via API calls.-Note, `<APIVersion>` is the API version 2024-06-01-preview or newer. +Note, `<APIVersion>` is the API version 2024-07-01 or newer. - To remove all managed identities, execute: |
role-based-access-control | Transfer Subscription | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/transfer-subscription.md | Several Azure resources have a dependency on a subscription or a directory. Depe | User-assigned managed identities | Yes | Yes | [List managed identities](#list-role-assignments-for-managed-identities) | You must delete, re-create, and attach the managed identities to the appropriate resource. You must re-create the role assignments. | | Azure Key Vault | Yes | Yes | [List Key Vault access policies](#list-key-vaults) | You must update the tenant ID associated with the key vaults. You must remove and add new access policies. | | Azure SQL databases with Microsoft Entra authentication integration enabled | Yes | No | [Check Azure SQL databases with Microsoft Entra authentication](#list-azure-sql-databases-with-azure-ad-authentication) | You cannot transfer an Azure SQL database with Microsoft Entra authentication enabled to a different directory. For more information, see [Use Microsoft Entra authentication](/azure/azure-sql/database/authentication-aad-overview). |-| Azure database for MySQL with Microsoft Entra authentication integration enabled | Yes | No | | You cannot transfer an Azure database for MySQL (Single and Flexible server) with Microsoft Entra authentication enabled to a different directory. | +| Azure Database for MySQL with Microsoft Entra authentication integration enabled | Yes | No | | You cannot transfer an Azure database for MySQL (Single and Flexible server) with Microsoft Entra authentication enabled to a different directory. | +| Azure Database for PostgreSQL Flexible Server with Microsoft Entra authentication integration enabled or with Customer Managed Key enabled | Yes | No | | You cannot transfer an Azure Database for PostgreSQL with Microsoft Entra authentication or with Customer Managed Key enabled to a different directory. You have to disable these features first, transfer the server, and then re-enable these features. | | Azure Storage and Azure Data Lake Storage Gen2 | Yes | Yes | | You must re-create any ACLs. | | Azure Files | Yes | In most scenarios | | You must re-create any ACLs. For storage accounts with Entra Kerberos authentication enabled, you must disable and re-enable Entra Kerberos authentication after the transfer. For Entra Domain Services, transferring to another Microsoft Entra directory where Entra Domain Services is not enabled is not supported. | | Azure File Sync | Yes | Yes | | The storage sync service and/or storage account can be moved to a different directory. For more information, see [Frequently asked questions (FAQ) about Azure Files](../storage/files/storage-files-faq.md#azure-file-sync) | |
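To illustrate the kind of inventory step the transfer table above points to (for example, listing role assignments held by managed identities before a transfer), a hedged Azure CLI sketch follows; the filter shown is illustrative rather than the documented procedure:

```azurecli
# List role assignments in the current subscription and keep those granted
# to service principals, which includes managed identities (illustrative).
az role assignment list --all \
  --query "[?principalType=='ServicePrincipal'].{principal:principalName, role:roleDefinitionName, scope:scope}" \
  --output table
```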
security | Secure Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/secure-deploy.md | Title: Deploy secure applications on Microsoft Azure description: This article discusses best practices to consider during the release and response phases of your web application project.-+ - Previously updated : 08/29/2023+ Last updated : 09/29/2024 |
security | Secure Design | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/secure-design.md | Title: Design secure applications on Microsoft Azure description: This article discusses best practices to consider during the requirement and design phases of your web application project.-+ - Previously updated : 09/26/2023+ Last updated : 09/29/2024 |
security | Secure Dev Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/secure-dev-overview.md | Title: Secure development best practices on Microsoft Azure description: Best practices to help you develop more secure code and deploy a more secure application in the cloud.-+ - Previously updated : 09/26/2023+ Last updated : 09/29/2024 applications and to help secure your applications on Azure: [Open Worldwide Application Security Project](https://www.owasp.org/) (OWASP) - OWASP is an online community that produces freely available articles, methodologies, documentation, tools, and technologies in the field of web application security. -[Pushing Left, Like a Boss](https://wehackpurple.com/pushing-left-like-a-boss-part-1/) - A series of online articles that outline different types of application security activities that developers should complete to create more secure code. - [Microsoft identity platform](../../active-directory/develop/index.yml) - The Microsoft identity platform is an evolution of the Microsoft Entra identity service and developer platform. It's a full-featured platform that consists of an authentication service, open-source libraries, application registration and configuration, full developer documentation, code samples, and other developer content. The Microsoft identity platform supports industry-standard protocols like OAuth 2.0 and OpenID Connect. [Azure security best practices and patterns](../fundamentals/best-practices-and-patterns.md) - A collection of security best practices to use when you design, deploy, and manage cloud solutions by using Azure. Guidance is intended to be a resource for IT pros. This might include designers, architects, developers, and testers who build and deploy secure Azure solutions. |
security | Secure Develop | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/secure-develop.md | Title: Develop secure applications on Microsoft Azure description: This article discusses best practices to consider during the implementation and verification phases of your web application project.-+ - Previously updated : 08/30/2023+ Last updated : 09/29/2024 |
security | Antimalware Code Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/antimalware-code-samples.md | Title: Microsoft Antimalware code samples for Azure | Microsoft Docs description: PowerShell code samples to enable and configure Microsoft Antimalware. -+ ms.assetid: 265683c8-30d7-4f2b-b66c-5082a18f7a8b Previously updated : 01/25/2023- Last updated : 09/25/2024+ # Enable and configure Microsoft Antimalware for Azure Resource Manager VMs |
security | Antimalware | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/antimalware.md | Title: Microsoft Antimalware for Azure | Microsoft Docs description: Learn about Microsoft Antimalware for Azure Cloud Services and Virtual Machines. See information about topics like architecture and deployment scenarios. -+ ms.assetid: 265683c8-30d7-4f2b-b66c-5082a18f7a8b |
security | Azure Marketplace Images | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/azure-marketplace-images.md | Title: Security Recommendations for Azure Marketplace Images | Microsoft Docs description: This article provides recommendations for images included in the market place -+ Previously updated : 02/06/2024- Last updated : 09/06/2024+ # Security Recommendations for Azure Marketplace Images |
security | Backup Plan To Protect Against Ransomware | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/backup-plan-to-protect-against-ransomware.md | Title: Azure backup and restore plan to protect against ransomware | Microsoft Docs description: Learn what to do before and during a ransomware attack to protect your critical business systems and ensure a rapid recovery of business operations.-+ -+ Last updated 06/27/2024 |
security | Best Practices And Patterns | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/best-practices-and-patterns.md | Title: Security best practices and patterns - Microsoft Azure | Microsoft Docs description: This article links you to security best practices and patterns for different Azure resources.-+ ms.assetid: 1cbbf8dc-ea94-4a7e-8fa0-c2cb198956c5 Previously updated : 03/27/2024- Last updated : 09/27/2024+ # Azure security best practices and patterns |
security | Cyber Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/cyber-services.md | Title: Microsoft Services in Cybersecurity | Microsoft Docs description: The article provides an introduction about Microsoft services related to cybersecurity and how to obtain more information about these services. -+ ms.assetid: 925ba3c6-fe35-413a-98ea-e1a1461f3022 |
security | Data Encryption Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/data-encryption-best-practices.md | Title: Data security and encryption best practices - Microsoft Azure description: This article provides a set of best practices for data security and encryption using built in Azure capabilities. -+ ms.assetid: 17ba67ad-e5cd-4a8f-b435-5218df753ca4 Previously updated : 03/27/2024- Last updated : 09/27/2024+ # Azure data security and encryption best practices |
security | Database Security Checklist | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/database-security-checklist.md | Title: Azure database security checklist| Microsoft Docs description: Use the Azure database security checklist to make sure that you address important cloud computing security issues. -+ Previously updated : 01/29/2023- Last updated : 09/29/2024+ # Azure database security checklist |
security | Double Encryption | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/double-encryption.md | Title: Double Encryption in Microsoft Azure description: This article describes how Microsoft Azure provides double encryption for data at rest and data in transit. -+ ms.assetid: 9dcb190e-e534-4787-bf82-8ce73bf47dba |
security | End To End | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/end-to-end.md | Title: End-to-end security in Azure | Microsoft Docs description: The article provides a map of Azure services that help you secure and protect your cloud resources and detect and investigate threats. -+ ms.assetid: a5a7f60a-97e2-49b4-a8c5-7c010ff27ef8 |
security | Feature Availability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/feature-availability.md | Title: Cloud feature availability for commercial and US Government customers description: This article describes security feature availability in Azure and Azure Government clouds-+ -+ |
security | Firmware | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/firmware.md | |
security | Hypervisor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/hypervisor.md | |
security | Iaas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/iaas.md | Title: Security best practices for IaaS workloads in Azure | Microsoft Docs description: " The migration of workloads to Azure IaaS brings opportunities to reevaluate our designs " -+ ms.assetid: 02c5b7d2-a77f-4e7f-9a1e-40247c57e7e2 Previously updated : 08/29/2023- Last updated : 09/29/2024+ # Security best practices for IaaS workloads in Azure |
security | Identity Management Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/identity-management-best-practices.md | Title: Azure identity & access security best practices | Microsoft Docs description: This article provides a set of best practices for identity management and access control using built in Azure capabilities. -+ ms.assetid: 07d8e8a8-47e8-447c-9c06-3a88d2713bc1 Previously updated : 08/29/2023- Last updated : 09/29/2024+ # Azure Identity Management and access control security best practices |
security | Identity Management Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/identity-management-overview.md | Title: Azure security features that help with identity management | Microsoft Docs description: Learn about the core Azure security features that help with identity management. See information about topics like single sign-on and reverse proxy. -+ ms.assetid: 5aa0a7ac-8f18-4ede-92a1-ae0dfe585e28 Previously updated : 01/25/2024- Last updated : 09/25/2024+ # Customer intent: As an IT Pro or decision maker, I am trying to learn about identity management capabilities in Azure # Azure identity management security overview |
security | Infrastructure Availability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/infrastructure-availability.md | Title: Azure infrastructure availability - Azure security description: This article provides information about what Microsoft does to secure the Azure infrastructure and provide maximum availability of customers' data. -+ ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e |
security | Infrastructure Components | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/infrastructure-components.md | Title: Azure information system components and boundaries description: This article provides a general description of the Microsoft Azure architecture and management. -+ ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e |
security | Infrastructure Integrity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/infrastructure-integrity.md | Title: Azure infrastructure integrity description: Learn about Azure infrastructure integrity and the steps Microsoft takes to secure it, such as virus scans on software component builds. -+ ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e Previously updated : 01/30/2023- Last updated : 09/29/2024+ |
security | Infrastructure Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/infrastructure-monitoring.md | Title: Azure infrastructure monitoring description: Learn about infrastructure monitoring aspects of the Azure production network, such as vulnerability scanning. -+ ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e Previously updated : 08/29/2023- Last updated : 09/29/2024+ # Azure infrastructure monitoring |
security | Infrastructure Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/infrastructure-network.md | Title: Azure network architecture description: This article provides a general description of the Microsoft Azure infrastructure network. -+ ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e |
security | Infrastructure Operations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/infrastructure-operations.md | Title: Management of Azure production network - Microsoft Azure description: This article describes how Microsoft manages and operates the Azure production network to secure the Azure datacenters.-+ ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e Previously updated : 08/29/2023- Last updated : 09/29/2024+ # Management and operation of the Azure production network |
security | Infrastructure Sql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/infrastructure-sql.md | Title: Azure SQL Database security features description: This article provides a general description of how Azure SQL Database protects customer data in Azure. -+ ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e Previously updated : 08/29/2023- Last updated : 09/29/2024+ |
security | Infrastructure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/infrastructure.md | Title: Azure infrastructure security | Microsoft Docs description: Learn how Microsoft works to secure the Azure datacenters. The datacenters are managed, monitored, and administered by Microsoft operations staff. -+ ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e |
security | Isolation Choices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/isolation-choices.md | Title: Isolation in the Azure Public Cloud | Microsoft Docs description: Learn how Azure provides isolation against both malicious and non-malicious users and offers various isolation choices to architects. -+ Previously updated : 08/29/2023- Last updated : 09/29/2024+ |
security | Log Audit | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/log-audit.md | Title: Azure security logging and auditing | Microsoft Docs description: Learn about the logs available in Azure and the security insights you can gain. -+ Previously updated : 08/29/2023- Last updated : 09/29/2024+ # Azure security logging and auditing |
security | Management Monitoring Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/management-monitoring-overview.md | Title: Management and monitoring security features - Microsoft Azure | Microsoft Docs description: This article provides an overview of the security features and services that Azure provides to aid in the management and monitoring of Azure cloud services and virtual machines. -+ ms.assetid: 5cf2827b-6cd3-434d-9100-d7411f7ed424 |
security | Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/management.md | Title: Enhance remote management security in Azure | Microsoft Docs description: "This article discusses steps for enhancing remote management security while administering Microsoft Azure environments, including cloud services, virtual machines, and custom applications." -+ ms.assetid: 2431feba-3364-4a63-8e66-858926061dd3 Previously updated : 04/03/2023- Last updated : 09/03/2024+ # Security management in Azure |
security | Measured Boot Host Attestation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/measured-boot-host-attestation.md | |
security | Network Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/network-best-practices.md | Title: Best practices for network security - Microsoft Azure description: This article provides a set of best practices for network security using built in Azure capabilities.-+ ms.assetid: 7f6aa45f-138f-4fde-a611-aaf7e8fe56d1 Previously updated : 03/27/2024- Last updated : 09/27/2024+ # Azure best practices for network security |
security | Network Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/network-overview.md | Title: Network security concepts and requirements in Azure | Microsoft Docs description: This article provides basic explanations about core network security concepts and requirements, and information on what Azure offers in each of these areas. -+ ms.assetid: bedf411a-0781-47b9-9742-d524cf3dbfc1 |
security | Operational Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/operational-best-practices.md | Title: Security best practices for your Azure assets description: This article provides a set of operational best practices for protecting your data, applications, and other assets in Azure. -+ Last updated 06/27/2024-+ |
security | Operational Checklist | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/operational-checklist.md | Title: Azure operational security checklist| Microsoft Docs description: Review this checklist to help your enterprise think through Azure operational security considerations. -+ Last updated 06/27/2024-+ |
security | Operational Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/operational-overview.md | Title: Azure operational security overview| Microsoft Docs description: Learn about Azure operational security in this overview. Operational security refers to asset protection services, controls, and features. -+ Previously updated : 08/29/2023- Last updated : 09/29/2024+ |
security | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/overview.md | Title: Introduction to Azure security | Microsoft Docs description: Introduce yourself to Azure Security, its various services, and how it works by reading this overview. -+ Previously updated : 10/22/2023- Last updated : 09/22/2024+ |
security | Paas Applications Using App Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/paas-applications-using-app-services.md | Title: Securing PaaS web & mobile applications description: "Learn about Azure App Service security best practices for securing your PaaS web and mobile applications. " -+ Previously updated : 08/29/2023- Last updated : 09/29/2024+ # Best practices for securing PaaS web and mobile applications using Azure App Service |
security | Paas Applications Using Sql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/paas-applications-using-sql.md | Title: Securing PaaS Databases in Azure | Microsoft Docs description: "Learn about Azure SQL Database and Azure Synapse Analytics security best practices for securing your PaaS web and mobile applications. " -+ Previously updated : 03/31/2023- Last updated : 09/29/2023+ # Best practices for securing PaaS databases in Azure |
security | Paas Applications Using Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/paas-applications-using-storage.md | Title: Securing PaaS applications using Azure Storage | Microsoft Docs description: "Learn about Azure Storage security best practices for securing your PaaS web and mobile applications." -+ Previously updated : 01/23/2023- Last updated : 09/29/2024+ # Best practices for securing PaaS web and mobile applications using Azure Storage |
security | Paas Deployments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/paas-deployments.md | Title: Best practices for secure PaaS deployments - Microsoft Azure description: "Learn best practices for designing, building, and managing secure cloud applications on Azure and understand the security advantages of PaaS versus other cloud service models." -+ Last updated 06/27/2024-+ # Securing PaaS deployments |
security | Pen Testing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/pen-testing.md | Title: Penetration testing | Microsoft Docs description: The article provides an overview of the penetration testing process and how to perform a pen test against your app running in Azure infrastructure. -+ ms.assetid: 695d918c-a9ac-4eba-8692-af4526734ccc Last updated 06/27/2024-+ # Penetration testing |
security | Physical Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/physical-security.md | Title: Physical security of Azure datacenters - Microsoft Azure | Microsoft Docs description: The article describes what Microsoft does to secure the Azure datacenters, including physical infrastructure, security, and compliance offerings. -+ ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e |
security | Platform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/platform.md | |
security | Production Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/production-network.md | Title: Azure production network description: Learn about the Azure production network. See security access methods and protection mechanisms for establishing a connection to the network. -+ ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e Previously updated : 03/31/2023- Last updated : 09/29/2024+ |
security | Project Cerberus | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/project-cerberus.md | |
security | Protection Customer Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/protection-customer-data.md | Title: Protection of customer data in Azure description: Learn how Azure protects customer data through data segregation, data redundancy, and data destruction. -+ ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e Previously updated : 08/29/2023- Last updated : 09/29/2024+ # Azure customer data protection -Access to customer data by Microsoft operations and support personnel is denied by default. When access to data related to a support case is granted, it is only granted using a just-in-time (JIT) model using policies that are audited and vetted against our compliance and privacy policies. The access-control requirements are established by the following Azure Security Policy: +Access to customer data by Microsoft operations and support personnel is denied by default. When access to data related to a support case is granted, it's only granted using a just-in-time (JIT) model using policies that are audited and vetted against our compliance and privacy policies. The access-control requirements are established by the following Azure Security Policy: - No access to customer data, by default. - No user or administrator accounts on customer virtual machines (VMs). - Grant the least privilege that's required to complete task; audit and log access requests. -Azure support personnel are assigned unique corporate Active Directory accounts by Microsoft. Azure relies on Microsoft corporate Active Directory, managed by Microsoft Information Technology (MSIT), to control access to key information systems. Multi-factor authentication is required, and access is granted only from secure consoles. +Azure support personnel are assigned unique corporate Active Directory accounts by Microsoft. Azure relies on Microsoft corporate Active Directory, managed by Microsoft Information Technology (MSIT), to control access to key information systems. Multifactor authentication is required, and access is granted only from secure consoles. ## Data protection Azure provides customers with strong data security, both by default and as customer options. -**Data segregation**: Azure is a multi-tenant service, which means that multiple customer deployments and VMs are stored on the same physical hardware. Azure uses logical isolation to segregate each customer’s data from the data of others. Segregation provides the scale and economic benefits of multi-tenant services while rigorously preventing customers from accessing one another’s data. +**Data segregation**: Azure is a multitenant service, which means that multiple customer deployments and VMs are stored on the same physical hardware. Azure uses logical isolation to segregate each customer’s data from the data of others. Segregation provides the scale and economic benefits of multitenant services while rigorously preventing customers from accessing one another’s data. **At-rest data protection**: Customers are responsible for ensuring that data stored in Azure is encrypted in accordance with their standards. Azure offers a wide range of encryption capabilities, giving customers the flexibility to choose the solution that best meets their needs. Azure Key Vault helps customers easily maintain control of keys that are used by cloud applications and services to encrypt data. Azure Disk Encryption enables customers to encrypt VMs.
Azure Storage Service Encryption makes it possible to encrypt all data placed into a customer's storage account. -**In-transit data protection**: Microsoft provides a number of options that can be utilized by customers for securing data in transit internally within the Azure network and externally across the Internet to the end user. These include communication through Virtual Private Networks (utilizing IPsec/IKE encryption), Transport Layer Security (TLS) 1.2 or later (via Azure components such as Application Gateway or Azure Front Door), protocols directly on the Azure virtual machines (such as Windows IPsec or SMB), and more. +**In-transit data protection**: Microsoft provides many options that can be utilized by customers for securing data in transit internally within the Azure network and externally across the Internet to the end user. These include communication through Virtual Private Networks (utilizing IPsec/IKE encryption), Transport Layer Security (TLS) 1.2 or later (via Azure components such as Application Gateway or Azure Front Door), protocols directly on the Azure virtual machines (such as Windows IPsec or SMB), and more. -Additionally, "encryption by default" using MACsec (an IEEE standard at the data-link layer) is enabled for all Azure traffic traveling between Azure datacenters to ensure confidentiality and integrity of customer data. +Additionally, "encryption by default" using MACsec (an IEEE standard at the data-link layer) is enabled for all Azure traffic traveling between Azure datacenters to ensure confidentiality and integrity of customer data. -**Data redundancy**: Microsoft helps ensure that data is protected if there is a cyberattack or physical damage to a datacenter. Customers may opt for: +**Data redundancy**: Microsoft helps ensure that data is protected if there's a cyberattack or physical damage to a datacenter. Customers may opt for: - In-country/region storage for compliance or latency considerations. - Out-of-country/region storage for security or disaster recovery purposes. -Data can be replicated within a selected geographic area for redundancy but cannot be transmitted outside it. Customers have multiple options for replicating data, including the number of copies and the number and location of replication datacenters. +Data can be replicated within a selected geographic area for redundancy but can't be transmitted outside it. Customers have multiple options for replicating data, including the number of copies and the number and location of replication datacenters. When you create your storage account, select one of the following replication options: - **Locally redundant storage (LRS)**: Locally redundant storage maintains three copies of your data. LRS is replicated three times within a single facility in a single region. LRS protects your data from normal hardware failures, but not from a failure of a single facility. - **Zone-redundant storage (ZRS)**: Zone-redundant storage maintains three copies of your data. ZRS is replicated three times across two to three facilities to provide higher durability than LRS. Replication occurs within a single region or across two regions. ZRS helps ensure that your data is durable within a single region.-- **Geo-redundant storage (GRS)**: Geo-redundant storage is enabled for your storage account by default when you create it. GRS maintains six copies of your data. With GRS, your data is replicated three times within the primary region. 
Your data is also replicated three times in a secondary region hundreds of miles away from the primary region, providing the highest level of durability. In the event of a failure at the primary region, Azure Storage fails over to the secondary region. GRS helps ensure that your data is durable in two separate regions.+- **Geo-redundant storage (GRS)**: Geo-redundant storage is enabled for your storage account by default when you create it. GRS maintains six copies of your data. With GRS, your data is replicated three times within the primary region. Your data is also replicated three times in a secondary region hundreds of miles away from the primary region, providing the highest level of durability. If a failure at the primary region, Azure Storage fails over to the secondary region. GRS helps ensure that your data is durable in two separate regions. -**Data destruction**: When customers delete data or leave Azure, Microsoft follows strict standards for deleting data, as well as the physical destruction of decommissioned hardware. Microsoft executes a complete deletion of data on customer request and on contract termination. For more information, see [Data management at Microsoft](https://www.microsoft.com/trust-center/privacy/data-management). +**Data destruction**: When customers delete data or leave Azure, Microsoft follows strict standards for deleting data and the physical destruction of decommissioned hardware. Microsoft executes a complete deletion of data on customer request and on contract termination. For more information, see [Data management at Microsoft](https://www.microsoft.com/trust-center/privacy/data-management). ## Customer data ownership-Microsoft does not inspect, approve, or monitor applications that customers deploy to Azure. Moreover, Microsoft does not know what kind of data customers choose to store in Azure. Microsoft does not claim data ownership over the customer information that's entered into Azure. +Microsoft doesn't inspect, approve, or monitor applications that customers deploy to Azure. Moreover, Microsoft doesn't know what kind of data customers choose to store in Azure. Microsoft doesn't claim data ownership over the customer information entered into Azure. ## Records management-Azure has established internal records-retention requirements for back-end data. Customers are responsible for identifying their own record retention requirements. For records that are stored in Azure, customers are responsible for extracting their data and retaining their content outside of Azure for a customer-specified retention period. +Azure established internal records-retention requirements for back-end data. Customers are responsible for identifying their own record retention requirements. For records that are stored in Azure, customers are responsible for extracting their data and retaining their content outside of Azure for a customer-specified retention period. Azure allows customers to export data and audit reports from the product. The exports are saved locally to retain the information for a customer-defined retention time period. |
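As a brief illustration of selecting one of the replication options described in that storage passage, a minimal Azure CLI sketch follows; the account and resource group names are placeholders:

```azurecli
# Create a storage account with geo-redundant storage (GRS).
# Swap the --sku value for Standard_LRS or Standard_ZRS to choose a
# different replication option.
az storage account create \
  --name <storage-account-name> \
  --resource-group <resource-group> \
  --location eastus \
  --sku Standard_GRS
```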
security | Ransomware Protection With Azure Firewall | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/ransomware-protection-with-azure-firewall.md | |
security | Ransomware Protection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/ransomware-protection.md | Title: Ransomware protection in Azure description: Ransomware protection in Azure-+ -+ Last updated 06/28/2024 |
security | Secure Boot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/secure-boot.md | -Secure Boot is a feature of the [Unified Extensible Firmware Interface (UEFI)](https://en.wikipedia.org/wiki/Unified_Extensible_Firmware_Interface) that requires all low-level firmware and software components to be verified prior to loading. During boot, UEFI Secure Boot checks the signature of each piece of boot software, including UEFI firmware drivers (also known as option ROMs), Extensible Firmware Interface (EFI) applications, and the operating system drivers and binaries. If the signatures are valid or trusted by the Original Equipment Manufacturer (OEM), the machine boots and the firmware gives control to the operating system. +Secure Boot is a feature of the [Unified Extensible Firmware Interface (UEFI)](https://en.wikipedia.org/wiki/Unified_Extensible_Firmware_Interface) that requires all low-level firmware and software components to be verified before loading. During boot, UEFI Secure Boot checks the signature of each piece of boot software, including UEFI firmware drivers (also known as option ROMs), Extensible Firmware Interface (EFI) applications, and the operating system drivers and binaries. If the signatures are valid or trusted by the Original Equipment Manufacturer (OEM), the machine boots and the firmware gives control to the operating system. ## Components and process Secure Boot relies on these critical components: - Platform key (PK) - Establishes trust between the platform owner (Microsoft) and the firmware. The public half is PKpub and the private half is PKpriv. - Key enrollment key database (KEK) - Establishes trust between the OS and the platform firmware. The public half is KEKpub and the private half is KEKpriv. - Signature database (db) - Holds the digests for trusted signers (public keys and certificates) of the firmware and software code modules authorized to interact with platform firmware.-- Revoked signatures database (dbx) – Holds revoked digests of code modules that have been identified to be malicious, vulnerable, compromised, or untrusted. If a hash is in the signature db and the revoked signatures db, the revoked signatures database takes precedent.+- Revoked signatures database (dbx) – Holds revoked digests of code modules that are identified to be malicious, vulnerable, compromised, or untrusted. If a hash is in the signature db and the revoked signatures db, the revoked signatures database takes precedent. The following figure and process explains how these components are updated: ![Diagram that shows Secure Boot components.](./media/secure-boot/secure-boot.png) -The OEM stores the Secure Boot digests on the machine’s nonvolatile RAM (NV-RAM) at the time of manufacturing. +The OEM stores the Secure Boot digests on the machine's nonvolatile RAM (NV-RAM) at the time of manufacturing. -1. The signature database (db) is populated with the signers or image hashes of UEFI applications, operating system loaders (such as the Microsoft Operating System Loader or Boot Manager), and UEFI drivers that are trusted. -2. The revoked signatures database (dbx) is populated with digests of modules that are no longer trusted.
3. The key enrollment key (KEK) database is populated with signing keys that can be used to update the signature database and revoked signatures database. The databases can be edited via updates that are signed with the correct key or via updates by a physically present authorized user using firmware menus.-4. After the db, dbx, and KEK databases have been added and final firmware validation and testing is complete, the OEM locks the firmware from editing and generates a platform key (PK). The PK can be used to sign updates to the KEK or to turn off Secure Boot. +4. After the db, dbx, and KEK databases are added and final firmware validation and testing is complete, the OEM locks the firmware from editing and generates a platform key (PK). The PK can be used to sign updates to the KEK or to turn off Secure Boot. -During each stage in the boot process, the digests of the firmware, bootloader, operating system, kernel drivers, and other boot chain artifacts are calculated and compared to acceptable values. Firmware and software that are discovered to be untrusted are not allowed to load. Thus, low-level malware injection or pre-boot malware attacks can be blocked. +During each stage in the boot process, the digests of the firmware, bootloader, operating system, kernel drivers, and other boot chain artifacts are calculated and compared to acceptable values. Firmware and software that are discovered to be untrusted aren't allowed to load. Thus, low-level malware injection or preboot malware attacks can be blocked. ## Secure Boot on the Azure fleet-Today, every machine that is onboarded and deployed to the Azure compute fleet to host customer workloads comes from factory floors with Secure Boot enabled. Targeted tooling and processes are in place at every stage in the hardware buildout and integration pipeline to ensure that Secure Boot enablement is not reverted either by accident or by malicious intent. +Today, every machine that is onboarded and deployed to the Azure compute fleet to host customer workloads comes from factory floors with Secure Boot enabled. Targeted tooling and processes are in place at every stage in the hardware buildout and integration pipeline to ensure that Secure Boot enablement isn't reverted by accident or by malicious intent. Validating that the db and dbx digests are correct ensures: - Bootloader is present in one of the db entries-- Bootloader’s signature is valid+- Bootloader's signature is valid - Host boots with trusted software By validating the signatures of KEKpub and PKpub, we can confirm that only trusted parties have permission to modify the definitions of what software is considered trusted. Lastly, by ensuring that secure boot is active, we can validate that these definitions are being enforced. |
security | Service Fabric Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/service-fabric-best-practices.md | Title: Best practices for Azure Service Fabric security description: This article provides a set of best practices for Azure Service Fabric security.-+ -+ Previously updated : 08/29/2023 Last updated : 09/29/2024 # Azure Service Fabric security best practices In addition to this article, please also review [Service Fabric security checklist](/azure/service-fabric/service-fabric-best-practices-security) for more information. |
security | Services Technologies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/services-technologies.md | Title: Azure Security Services and Technologies | Microsoft Docs description: The article provides a curated list of Azure Security services and technologies. -+ ms.assetid: a5a7f60a-97e2-49b4-a8c5-7c010ff27ef8 |
security | Shared Responsibility Ai | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/shared-responsibility-ai.md | |
security | Shared Responsibility | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/shared-responsibility.md | Title: Shared responsibility in the cloud - Microsoft Azure description: "Understand the shared responsibility model and which security tasks are handled by the cloud provider and which tasks are handled by you." -+ Previously updated : 09/28/2023- Last updated : 09/29/2024+ # Shared responsibility in the cloud |
security | Subdomain Takeover | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/subdomain-takeover.md | Title: Prevent subdomain takeovers with Azure DNS alias records and Azure App Service's custom domain verification description: Learn how to avoid the common high-severity threat of subdomain takeover -+ Last updated 03/27/2024-+ # Prevent dangling DNS entries and avoid subdomain takeover |
security | Technical Capabilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/technical-capabilities.md | Title: Security technical capabilities in Azure - Microsoft Azure description: Introduction to security services in Azure that help you protect you data, resources, and applications in the cloud. -+ Last updated 06/28/2024-+ # Azure security technical capabilities |
security | Threat Detection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/threat-detection.md | Title: Azure threat protection | Microsoft Docs description: Learn about built-in threat protection functionality for Azure, such as the Microsoft Entra ID Protection service. -+ Last updated 06/27/2024-+ |
security | Virtual Machines Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/virtual-machines-overview.md | Title: Security features used with Azure VMs description: This article provides an overview of the core Azure security features that can be used with Azure Virtual Machines. -+ ms.assetid: 467b2c83-0352-4e9d-9788-c77fb400fe54 |
security | Zero Trust | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/zero-trust.md | Title: Zero Trust security in Azure description: Learn about the guiding principles of Zero Trust and find resources to help you implement Zero Trust.-+ -+ Last updated 06/28/2024 |
sentinel | Monitor Data Connector Health | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/monitor-data-connector-health.md | The *SentinelHealth* data table is currently supported only for the following da - [Microsoft Defender for Endpoint](connect-microsoft-defender-advanced-threat-protection.md) - [Threat Intelligence - TAXII](connect-threat-intelligence-taxii.md) - [Threat Intelligence Platforms](connect-threat-intelligence-tip.md)+- Any connector based on [Codeless Connector Platform](create-codeless-connector.md) ### Understanding SentinelHealth table events |
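As a quick, hedged sketch of inspecting what lands in the *SentinelHealth* table mentioned above, a Log Analytics query from the command line might look like this; the workspace GUID is a placeholder and the projected column names should be confirmed against the table schema:

```azurecli
# Query recent SentinelHealth events for a workspace. Column names are
# assumptions to verify against the SentinelHealth schema.
az monitor log-analytics query \
  --workspace <workspace-guid> \
  --analytics-query "SentinelHealth | where TimeGenerated > ago(1d) | project TimeGenerated, SentinelResourceName, OperationName, Status" \
  --output table
```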
sentinel | Ueba Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/ueba-reference.md | While the initial synchronization may take a few days, once the data is fully sy - Default retention time in the **IdentityInfo** table is 30 days. +#### Limitations -> [!NOTE] -> - Currently, only built-in roles are supported. -> -> - Data about deleted groups, where a user was removed from a group, is not currently supported. -> -> - There are actually two versions of the *IdentityInfo* table: one serving Microsoft Sentinel, in the *Log Analytics* schema, the other serving the Microsoft Defender portal via Microsoft Defender for Identity, in what's known as the *Advanced hunting* schema. Both versions of this table are fed by Microsoft Entra ID, but the Log Analytics version added a few fields. -> -> [The unified security operations platform in the Defender portal](https://go.microsoft.com/fwlink/p/?linkid=2263690) uses the *Advanced hunting* version of this table, so, to minimize the differences between the versions of the table, most of the unique fields in the Log Analytics version are gradually being added to the *Advanced hunting* version as well. Regardless of in which portal you're using Microsoft Sentinel, you'll have access to nearly all the same information, though there may be a small time lag in synchronization between the versions. +- Currently, only built-in roles are supported. ++- Data about deleted groups, where a user was removed from a group, is not currently supported. ++#### Versions of the IdentityInfo table ++There are actually two versions of the *IdentityInfo* table: +- The *Log Analytics* schema version serves Microsoft Sentinel in the Azure portal. +- The *Advanced hunting* schema version serves Microsoft Sentinel in the Microsoft Defender portal via Microsoft Defender for Identity. ++Both versions of this table are fed by Microsoft Entra ID, but the Log Analytics version added a few fields. ++[The unified security operations platform](https://go.microsoft.com/fwlink/p/?linkid=2263690), being in the Defender portal, uses the *Advanced hunting* version of this table. To minimize the differences between the two versions of the table, most of the unique fields in the Log Analytics version are gradually being added to the *Advanced hunting* version as well. Regardless of in which portal you're using Microsoft Sentinel, you'll have access to nearly all the same information, though there may be a small time lag in synchronization between the versions. For more information, see the [documentation of the *Advanced hunting* version of this table](/defender-xdr/advanced-hunting-identityinfo-table). The following table describes the user identity data included in the **IdentityInfo** table in Log Analytics in the Azure portal. The fourth column shows the corresponding fields in the *Advanced hunting* version of the table, that Microsoft Sentinel uses in the Defender portal. Field names in boldface are named differently in the *Advanced hunting* schema than they are in the Microsoft Sentinel Log Analytics version. The following table describes the user identity data included in the **IdentityI | **AccountUPN** | string | The user principal name of the user account. | AccountUPN | | **AdditionalMailAddresses** | dynamic | The additional email addresses of the user. | -- | | **AssignedRoles** | dynamic | The Microsoft Entra roles the user account is assigned to. 
| AssignedRoles |-| **BlastRadius** | string | A calculation based on the position of the user in the org tree and the user's Microsoft Entra roles and permissions. <br>Possible values: *Low, Medium, High* | -- | +| **BlastRadius** | string | A calculation based on the position of the user in the org tree and the user's Microsoft Entra roles and permissions. <br>Possible values: *Low, Medium, High* | -- | | **ChangeSource** | string | The source of the latest change to the entity. <br>Possible values: <li>*AzureActiveDirectory*<li>*ActiveDirectory*<li>*UEBA*<li>*Watchlist*<li>*FullSync* | ChangeSource | | **CompanyName** | | The company name to which the user belongs. | -- | | **City** | string | The city of the user account. | City | The following table describes the user identity data included in the **IdentityI | **JobTitle** | string | The job title of the user account. | JobTitle | | **MailAddress** | string | The primary email address of the user account. | **EmailAddress** | | **Manager** | string | The manager alias of the user account. | Manager |-| **OnPremisesDistinguishedName** | string | The Microsoft Entra ID distinguished name (DN). A distinguished name is a sequence of relative distinguished names (RDN), connected by commas. | **DistinguishedName** | +| **OnPremisesDistinguishedName** | string | The Microsoft Entra ID distinguished name (DN). A distinguished name is a sequence of relative distinguished names (RDN), connected by commas. | **DistinguishedName** | | **Phone** | string | The phone number of the user account. | Phone | | **SourceSystem** | string | The system where the user is managed. <br>Possible values: <li>*AzureActiveDirectory*<li>*ActiveDirectory*<li>*Hybrid* | **SourceProvider** | | **State** | string | The geographical state of the user account. | State | |
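To make the field listing above concrete, here is a small, hedged sketch that pulls the latest *IdentityInfo* record per user from Log Analytics; it uses only fields shown in the schema table, and the workspace GUID is a placeholder:

```azurecli
# Return the most recent IdentityInfo row per user principal name and show a
# few of the documented fields. Adjust the projected columns as needed.
az monitor log-analytics query \
  --workspace <workspace-guid> \
  --analytics-query "IdentityInfo | summarize arg_max(TimeGenerated, *) by AccountUPN | project AccountUPN, JobTitle, Manager, City, BlastRadius" \
  --output table
```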
spring-apps | Application Observability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/basic-standard/application-observability.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Java ❌ C# |
spring-apps | Concepts For Java Memory Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/basic-standard/concepts-for-java-memory-management.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ❌ Enterprise |
spring-apps | How To Access Data Plane Azure Ad Rbac | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/basic-standard/how-to-access-data-plane-azure-ad-rbac.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ❌ Enterprise |
spring-apps | How To Appdynamics Java Agent Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/basic-standard/how-to-appdynamics-java-agent-monitor.md | ms.devlang: azurecli # How to monitor Spring Boot apps with the AppDynamics Java Agent -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Standard consumption and dedicated (Preview) ✔️ Basic/Standard ❌️ Enterprise |
spring-apps | How To Built In Persistent Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/basic-standard/how-to-built-in-persistent-storage.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Java ✔️ C# |
spring-apps | How To Dynatrace One Agent Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/basic-standard/how-to-dynatrace-one-agent-monitor.md | ms.devlang: azurecli # How to monitor Spring Boot apps with Dynatrace Java OneAgent -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Standard consumption and dedicated (Preview) ✔️ Basic/Standard ❌️ Enterprise |
spring-apps | How To Elastic Apm Java Agent Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/basic-standard/how-to-elastic-apm-java-agent-monitor.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ❌ Enterprise |
spring-apps | How To Launch From Source | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/basic-standard/how-to-launch-from-source.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Java ❌ C# |
spring-apps | How To New Relic Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/basic-standard/how-to-new-relic-monitor.md | ms.devlang: azurecli # How to monitor Spring Boot apps using New Relic Java agent -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Standard consumption and dedicated (Preview) ✔️ Basic/Standard ❌ Enterprise |
spring-apps | How To Service Registration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/basic-standard/how-to-service-registration.md | zone_pivot_groups: programming-languages-spring-apps # Discover and register your Spring Boot applications -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Standard consumption and dedicated (Preview) ✔️ Basic/Standard ❌ Enterprise |
spring-apps | Quickstart Deploy Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/basic-standard/quickstart-deploy-apps.md | zone_pivot_groups: programming-languages-spring-apps # Quickstart: Build and deploy apps to Azure Spring Apps -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ❌ Enterprise |
spring-apps | Quickstart Integrate Azure Database Mysql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/basic-standard/quickstart-integrate-azure-database-mysql.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ❌ Enterprise |
spring-apps | Quickstart Logs Metrics Tracing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/basic-standard/quickstart-logs-metrics-tracing.md | zone_pivot_groups: programming-languages-spring-apps # Quickstart: Monitoring Azure Spring Apps apps with logs, metrics, and tracing -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ❌ Enterprise |
spring-apps | Quickstart Provision Service Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/basic-standard/quickstart-provision-service-instance.md | zone_pivot_groups: programming-languages-spring-apps # Provision an Azure Spring Apps service instance -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ❌ Standard consumption and dedicated (Preview) ✔️ Basic/Standard ❌ Enterprise |
spring-apps | Quickstart Setup Config Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/basic-standard/quickstart-setup-config-server.md | zone_pivot_groups: programming-languages-spring-apps # Quickstart: Set up Spring Cloud Config Server for Azure Spring Apps -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ❌ Enterprise |
spring-apps | Quickstart Setup Log Analytics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/basic-standard/quickstart-setup-log-analytics.md | ms.devlang: azurecli # Quickstart: Set up a Log Analytics workspace -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ❌ Enterprise |
spring-apps | Retirement Announcement | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/basic-standard/retirement-announcement.md | + + Title: Azure Spring Apps retirement announcement +description: Announces the retirement of the Azure Spring Apps service. ++++ Last updated : 09/30/2024++++# Azure Spring Apps retirement announcement ++Azure Spring Apps is a fully managed service for running Java Spring applications, jointly built by Microsoft and VMware by Broadcom. After careful consideration and analysis, Microsoft and Broadcom have made the difficult decision to retire the Azure Spring Apps service. We recommend Azure Container Apps as the primary service for your migration of workloads running on Azure Spring Apps. Azure Container Apps is a strong, enterprise-ready platform that provides a fully managed, serverless container service for polyglot apps, with enhanced Java features to help you manage, monitor, and troubleshoot Java apps at scale. ++We're committed to providing a long-term platform, migration tools, expert resources, and technical support through the end of the service. ++## Timeline ++Azure Spring Apps, including the Standard consumption and dedicated (currently in Public Preview only), Basic, Standard, and Enterprise plans, will be retired through a two-phase retirement plan: ++- On September 30, 2024, the Standard consumption and dedicated plan (preview) will enter a six-month retirement period and will be retired on March 31, 2025. +- In mid-March 2025, all other Azure Spring Apps plans, including the Basic, Standard, and Enterprise plans, will enter a three-year retirement period and will be retired on March 31, 2028. +++## Migration recommendation ++To ensure that you maintain high performance and achieve scalability, flexibility, and cost-efficiency for your business, we recommend Azure Container Apps as the primary service for your migration of workloads running on Azure Spring Apps. ++Azure Container Apps is a fully managed, serverless container service for polyglot apps and offers enhanced Java features to help you manage, monitor, and troubleshoot Java apps at scale. ++Key features of Azure Container Apps: ++- Fully managed, serverless container platform +- Scale-to-zero capability +- Open-source foundation and add-ons +- [Enhanced Java support](../../container-apps/java-overview.md) + - Support for managed Spring components (Eureka Server, Config Server, Spring Boot Admin) + - Built-in JVM metrics + - Diagnostics for Java apps ++For more information about Azure Container Apps, see [Azure Container Apps overview](../../container-apps/overview.md). ++## Migration guidance and tooling for the Azure Spring Apps Standard consumption and dedicated plan ++For the Azure Spring Apps Standard consumption and dedicated plan (preview), new customers will no longer be able to sign up for the service after September 30, 2024, while existing customers will remain operational until this plan is retired on March 31, 2025. ++Migration guidance and tooling will be enabled for the Azure Spring Apps Standard consumption and dedicated plan (preview) by mid-October 2024, providing customers with a transition from Azure Spring Apps to Azure Container Apps. For more information, see [Migrate Azure Spring Apps Standard consumption and dedicated plan to Azure Container Apps](../consumption-dedicated/overview-migration.md). 
++## Migration guidance and tooling for the Azure Spring Apps Basic, Standard, and Enterprise plans ++For the Azure Spring Apps Basic, Standard, and Enterprise plans, new customers will no longer be able to sign up for the service after March 31, 2025, while existing customers will remain operational until the plans are phased out on March 31, 2028. ++We encourage you to start testing Azure Container Apps for your Java Spring workloads so that you're prepared when the retirement of the Basic, Standard, and Enterprise plans starts in mid-March 2025. ++Migration guidance will be ready by the end of December 2024, and the migration tool that assists with Azure Container Apps environment setup will be available by mid-March 2025, before the retirement starts. ++## What is the impact for customers using Tanzu Components with Azure Spring Apps Enterprise? ++If you're interested in obtaining or continuing Spring commercial support and using Tanzu components while migrating to Azure Container Apps, you can download the components and run them as JAR files on top of Azure Container Apps. For more information, work with your Broadcom sales contact. ++## FAQ ++### What are the migration destinations? ++We recommend Azure Container Apps as the primary service for your migration of workloads running on Azure Spring Apps. Azure Container Apps is a fully managed, serverless container service for polyglot apps and offers enhanced Java features to help you manage, monitor, and troubleshoot Java apps at scale. ++Migration guidance and tooling will be enabled for the Azure Spring Apps Standard consumption and dedicated plan (currently in Public Preview only) by mid-October 2024, providing customers with a transition from Azure Spring Apps to Azure Container Apps. For more information, see [Migrate Azure Spring Apps Standard consumption and dedicated plan to Azure Container Apps](../consumption-dedicated/overview-migration.md). ++We're working on migration guidance and tooling for moving from the Azure Spring Apps Basic, Standard, and Enterprise plans to Azure Container Apps. This guidance and tooling will be available by March 2025. ++You might also consider the following alternative solutions: ++- PaaS solution: Azure App Service is a fully managed platform for building, deploying, and scaling web apps, mobile app backends, and RESTful APIs. It supports multiple programming languages (such as Java and .NET), integrates with various development tools, and provides features like autoscaling, load balancing, and security for applications. Learn more: [App Service Overview](../../app-service/overview.md). +- Containerized solution: Azure Kubernetes Service (AKS) is a managed container orchestration service that simplifies the deployment, management, and scaling of containerized applications using Kubernetes. It offers features like automated updates, monitoring, and scaling, enabling developers to focus on application development rather than infrastructure management. Learn more: [What is Azure Kubernetes Service (AKS)?](/azure/aks/what-is-aks). +- If you're currently using Spring commercial support or Tanzu components as part of Azure Spring Apps Enterprise, you need to switch to using Tanzu Platform Spring Essentials on Azure Container Apps. Learn more: [VMware Tanzu Spring](https://tanzu.vmware.com/spring). ++### What is the migration timeline? ++Existing customers are required to migrate their Azure Spring Apps Standard consumption and dedicated workloads to Azure Container Apps by March 31, 2025. 
Customers on the Basic, Standard, and Enterprise plans are required to complete this transition by March 31, 2028. Azure Spring Apps will be entirely retired by March 31, 2028. ++### Will Azure Spring Apps still allow new customer sign-ups? ++For the Azure Spring Apps Standard consumption and dedicated plan (preview), new customers will no longer be able to sign up for the service after September 30, 2024, while existing customers will remain operational until this plan is retired on March 31, 2025. ++For the Azure Spring Apps Basic, Standard, and Enterprise plans, new customers will no longer be able to sign up for the service after March 31, 2025, while existing customers will remain operational until the plans are phased out on March 31, 2028. ++### Will Microsoft continue to support my current workload? ++Yes, support will continue for your workloads on Azure Spring Apps until the retirement date. You'll continue to receive SLA assurance, infrastructure updates/maintenance (VM and AKS), management of OSS/Tanzu components, and updates for the container images of your apps, including the base OS, runtime (JDK, .NET runtime, and so on), and APM agents. You can still raise support tickets as usual for prompt assistance through the end of the service. ++### Will Azure Spring Apps provide any new features during the retirement period? ++No. We won't take up any feature requests from customers and won't build any new features in the Azure Spring Apps service. Instead, we'll prioritize new features and enhancements on Azure Container Apps. ++### What will happen after the retirement date? ++After March 31, 2025, the Azure Spring Apps Standard consumption and dedicated plan (preview) will be completely discontinued. As a result, you will no longer receive support or have access to your workloads and Azure Spring Apps services. ++After March 31, 2028, the Azure Spring Apps Basic, Standard, and Enterprise plans will be completely discontinued. As a result, you will no longer receive support or have access to your workloads and Azure Spring Apps services. We strongly suggest that you migrate your workloads to Azure Container Apps by March 31, 2028. ++### Does Azure Container Apps offer feature parity with Azure Spring Apps? ++Customers should be able to achieve most of the capabilities they need to host their Spring applications on Azure Container Apps. Managed Spring components, Java metrics, and diagnostics support are available for you to use on Azure Container Apps. For more information, see [Java on Azure Container Apps overview](../../container-apps/java-overview.md). If you have any questions, open a support ticket from the Azure portal or open an issue in the [azure-container-apps](https://github.com/microsoft/azure-container-apps/issues) repository on GitHub. ++If you're interested in obtaining or continuing Spring commercial support and using Tanzu components, you can download the components and run them as JAR files on top of Azure Container Apps. Work with your Broadcom sales contact to explore running Tanzu Platform Spring Essentials on top of Azure Container Apps. ++### Will Azure Container Apps be available in the same Azure regions as Azure Spring Apps? ++For the Standard consumption and dedicated plan (preview), Azure Container Apps and Azure Spring Apps are available in the same regions. ++Azure Container Apps will be available in the same Azure regions as Azure Spring Apps for customers under the Basic, Standard, and Enterprise plans before the migration starts in March 2025. 
++### Are there pricing differences across Microsoft solutions? ++Azure Spring Apps operates on a consumption-based model with a basic unit, where you pay only for the vCPU and memory that your apps use. ++[Azure Container Apps](https://azure.microsoft.com/pricing/details/container-apps/) offers the following two pricing models: ++- A consumption model billed based on per-second resource allocation (vCPU and memory) and requests. +- A dedicated model with a single tenancy guarantee, access to specialized hardware, and more predictable pricing. ++Billing for the dedicated plan is based on the number of vCPU seconds and gibibyte (GiB) seconds allocated across Azure Container Apps instances. Azure Container Apps also provides a savings plan. ++The costs for Microsoft solutions vary based on their pricing models and the optimizations that can be enabled. We recommend using the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/), which provides details on meters, usage prices, and available savings plans, to accurately assess anticipated costs. ++### What is the impact for customers using Tanzu Components within Azure Spring Apps Enterprise? ++If you're interested in obtaining or continuing Spring commercial support and using Tanzu components while migrating to Azure Container Apps, you can download the components and run them as JAR files on top of Azure Container Apps. For more information, work with your Broadcom sales contact. ++### How can I stay up to date with Azure Spring Apps retirement guidance? ++The following table indicates the overall release timeline for the whole Azure Spring Apps retirement period. We'll keep it updated as the corresponding guidance and tooling become ready for release. ++| Item | Target plans | Release date | +|--|--|--| +| Official retirement start date | Standard consumption and dedicated plan | September 30, 2024 | +| Block new service instance creation for all customers | Standard consumption and dedicated plan | September 30, 2024 | +| Guidance and tooling for migration to Azure Container Apps | Standard consumption and dedicated plan | October 2024 | +| Guidance for helping switch from Tanzu components to alternative solutions | Enterprise plan | October 2024 | +| Guidance for migrating to Azure Container Apps (without migration tooling support) | Basic, Standard, and Enterprise plans | December 2024 | +| Official retirement date after a half-year retirement period | Standard consumption and dedicated plan | March 31, 2025 | +| Official retirement start date | Basic, Standard, and Enterprise plans | Mid-March 2025 | +| Guidance for migrating to Azure Container Apps with migration tooling support | Basic, Standard, and Enterprise plans | Mid-March 2025 | +| Block new customer sign-ups | Basic, Standard, and Enterprise plans | April 2025 | +| Official retirement date after a three-year retirement period | Basic, Standard, and Enterprise plans | March 31, 2028 | ++### How can I get transition help and support during migration? ++If you have any questions, you can open a support ticket through the Azure portal for technical help: create an [Azure Support Request](/azure/azure-portal/supportability/how-to-create-azure-support-request). |
spring-apps | How To Custom Persistent Storage With Standard Consumption | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/consumption-dedicated/how-to-custom-persistent-storage-with-standard-consumption.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Standard consumption and dedicated (Preview) ❌ Basic/Standard ❌ Enterprise |
spring-apps | Overview Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/consumption-dedicated/overview-migration.md | + + Title: Migrate Azure Spring Apps Standard consumption and dedicated plan to Azure Container Apps +description: The complete overview guide for migrating the Azure Spring Apps Standard consumption and dedicated plan to Azure Container Apps, including steps, benefits, and frequently asked questions. +++ Last updated : 09/30/2024+++#Customer intent: As an Azure Cloud user, I want to deploy, run, and monitor Spring applications. +++# Migrate Azure Spring Apps Standard consumption and dedicated plan to Azure Container Apps ++This article describes when and how to migrate the Azure Spring Apps Standard consumption and dedicated plan (currently in Public Preview only) to Azure Container Apps. To consolidate cloud-native benefits and streamline our offerings, the Azure Spring Apps service is retiring, including the Standard consumption and dedicated (preview), Basic, Standard, and Enterprise plans. The Standard consumption and dedicated plan (preview) enters its six-month sunset period on September 30, 2024, and retires in March 2025. ++We recommend Azure Container Apps as the best destination for your migration. Azure Container Apps is a fully managed, serverless container platform for polyglot apps and offers the enhanced Java features previously available in Azure Spring Apps. ++We've introduced a migration feature to ease the transition from the Azure Spring Apps Standard consumption and dedicated plan (preview) to Azure Container Apps. Select **Migrate** in the Azure portal and confirm the action. ++++This feature will be available in mid-October 2024, and you can start the migration process as soon as it's available. ++After the migration finishes, the app appears as a standard app inside Azure Container Apps, with the Java development stack turned on. With this option enabled, you get access to Java-specific metrics and logs to monitor and troubleshoot your apps. For more information, see [Java metrics for Java apps in Azure Container Apps](../../container-apps/java-metrics.md) and [Set dynamic logger level to troubleshoot Java applications in Azure Container Apps](../../container-apps/java-dynamic-log-level.md). ++The following video announces the general availability of Java experiences on Azure Container Apps: ++<br> ++> [!VIDEO https://www.youtube.com/embed/-T90dC2CCPA] ++## Frequently asked questions ++The following section addresses several questions you might have about the migration process. ++### Are there plans to retire any other Azure Spring Apps SKUs? ++Yes, other Azure Spring Apps plans are also retiring, with a three-year sunset period. For more information, see the [Azure Spring Apps retirement announcement](../basic-standard/retirement-announcement.md?toc=/azure/spring-apps/consumption-dedicated/toc.json&bc=/azure/spring-apps/consumption-dedicated/breadcrumb/toc.json). ++### What happens if I don't take any action by March 30, 2025? ++Your apps are automatically migrated to Azure Container Apps. ++### Can I continue to use the Azure Spring Apps Standard consumption and dedicated plan? ++You can continue to run existing apps until March 30, 2025, but you can't create new apps or service instances after September 30, 2024. ++### How can I get help if the migration process fails? ++Fill out the support request form in the Azure portal, using the following values: ++- For **Issue type**, select **Technical**. 
+- For **Subscription**, select your subscription. +- For **Service**, select **Azure Spring Apps**. +- For **Resource**, select your Azure Spring Apps resource. +- For **Summary**, type a description of your issue. +- For **Problem type**, select **My issue is not listed**. ++### Do I need to manually create Spring Cloud Config Server and Spring Cloud Service Registry instances in Azure Container Apps? ++Yes, you must recreate Spring Cloud Config Server and Spring Cloud Service Registry instances in Azure Container Apps. Both Spring Cloud Config Server and Spring Cloud Service Registry are also managed components in Azure Container Apps, but there are some experiential differences. For more information, see [Tutorial: Connect to a managed Eureka Server for Spring in Azure Container Apps](../../container-apps/java-eureka-server.md) and [Tutorial: Connect to a managed Config Server for Spring in Azure Container Apps](../../container-apps/java-config-server.md). ++If you need assistance creating and migrating Spring Cloud Config Server and Spring Cloud Service Registry to Azure Container Apps, create a support request. ++### Is there any downtime during the migration process? ++There's no downtime unless you're using Spring Cloud Config Server and Spring Cloud Service Registry, which you must manually recreate in Azure Container Apps. ++### What happens to apps that have in-flight transactions during the migration? ++All in-flight transactions execute without any interruptions, unless you're using Spring Cloud Config Server and Spring Cloud Service Registry, which you must manually recreate in Azure Container Apps. ++### Is there any change in IP address/FQDN after the migration? ++There's no change. All IP addresses/FQDNs remain the same after the migration. ++### I'm using persistent storage. How do I recreate it in Azure Container Apps? ++Persistent storage migrates automatically to Azure Container Apps. ++### What are the pricing implications when moving to Azure Container Apps? ++Azure Container Apps has the same pricing structure as Azure Spring Apps for the consumption and dedicated plans. Charges for active and idle CPU/memory use, along with virtual machine SKUs in dedicated workloads, are identical in Azure Spring Apps and Azure Container Apps. The monthly free grant also applies directly to Azure Container Apps. The only exception is that requests made to managed Java components are billed in the Azure Container Apps consumption plan. ++The following table describes the differences: ++| Resources used for managed Java components | Azure Spring Apps Standard consumption plan | Azure Container Apps consumption plan | +|--|--|--| +| Spring Cloud Service Registry active CPU | No change. | No change. | +| Spring Cloud Service Registry idle CPU | No change. | No change. | +| Spring Cloud Config Server active CPU | No change. | No change. | +| Spring Cloud Config Server idle CPU | No change. | No change. | +| One million requests made to Spring Cloud Service Registry | No extra cost. | See [Azure Container Apps pricing](https://azure.microsoft.com/pricing/details/container-apps/). | +| One million requests made to Spring Cloud Config Server | No extra cost. | See [Azure Container Apps pricing](https://azure.microsoft.com/pricing/details/container-apps/). | ++Also, with Azure Container Apps, you can take advantage of the Azure savings plan and benefit from savings through commitment. 
For more information, see [Azure savings plan for compute](https://azure.microsoft.com/pricing/offers/savings-plan-compute/). ++### How do I continue to use my own virtual network in Azure Container Apps? ++There's no change to the virtual network experience. You can continue using your own virtual network. ++### Will my app be migrated to the consumption plan or the consumption and dedicated plan with workload profiles in Azure Container Apps? ++There's a direct mapping between the service plans in Azure Spring Apps and Azure Container Apps. If your app is currently running on the consumption plan, it moves to the consumption-only plan in Azure Container Apps. If your app is currently running on a consumption and dedicated workload profile, it transitions to the corresponding workload profile in Azure Container Apps. ++### How can I keep my deployment pipelines and workflows working? ++Your deployment pipelines and workflows must point to Azure Container Apps to work properly. For more information, see [Introducing more ways to deploy Azure Container Apps](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/introducing-more-ways-to-deploy-azure-container-apps/ba-p/3678390). ++### How do I keep my Azure CLI automation scripts working? ++You must update your Azure CLI scripts to work with Azure Container Apps. For more information, see [az containerapp](/cli/azure/containerapp). ++ |
spring-apps | Quickstart Access Standard Consumption Within Virtual Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/consumption-dedicated/quickstart-access-standard-consumption-within-virtual-network.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Standard consumption and dedicated (Preview) ❌ Basic/Standard ❌ Enterprise |
spring-apps | Quickstart Analyze Logs And Metrics Standard Consumption | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/consumption-dedicated/quickstart-analyze-logs-and-metrics-standard-consumption.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Standard consumption and dedicated (Preview) ❌ Basic/Standard ❌ Enterprise |
spring-apps | Quickstart Apps Autoscale Standard Consumption | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/consumption-dedicated/quickstart-apps-autoscale-standard-consumption.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Standard consumption and dedicated (Preview) ❌ Basic/Standard ❌ Enterprise |
spring-apps | Quickstart Provision Standard Consumption App Environment With Virtual Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/consumption-dedicated/quickstart-provision-standard-consumption-app-environment-with-virtual-network.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Standard consumption and dedicated (Preview) ❌ Basic/Standard ❌ Enterprise |
spring-apps | Quickstart Provision Standard Consumption Service Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/consumption-dedicated/quickstart-provision-standard-consumption-service-instance.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Standard consumption and dedicated (Preview) ❌ Basic/Standard ❌ Enterprise |
spring-apps | Quickstart Standard Consumption Config Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/consumption-dedicated/quickstart-standard-consumption-config-server.md | |
spring-apps | Quickstart Standard Consumption Custom Domain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/consumption-dedicated/quickstart-standard-consumption-custom-domain.md | |
spring-apps | Quickstart Standard Consumption Eureka Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/consumption-dedicated/quickstart-standard-consumption-eureka-server.md | |
spring-apps | Standard Consumption Customer Responsibilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/consumption-dedicated/standard-consumption-customer-responsibilities.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Standard consumption and dedicated (Preview) ❌ Basic/Standard ❌ Enterprise |
spring-apps | Access App Virtual Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/access-app-virtual-network.md | ms.devlang: azurecli # Access an app in Azure Spring Apps in a virtual network -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ✔️ Enterprise |
spring-apps | Breaking Changes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/breaking-changes.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ✔️ Enterprise |
spring-apps | Concept App Customer Responsibilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/concept-app-customer-responsibilities.md | Last updated 08/28/2024 # Version support for Java, Spring Boot, and more -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Standard consumption and dedicated (Preview) ✔️ Basic/Standard ✔️ Enterprise |
spring-apps | Concept App Status | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/concept-app-status.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Java ✔️ C# |
spring-apps | Concept Job | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/concept-job.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Java ✔️ C# |
spring-apps | Concept Manage Monitor App Spring Boot Actuator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/concept-manage-monitor-app-spring-boot-actuator.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Java ❌ C# |
spring-apps | Concept Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/concept-metrics.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ✔️ Enterprise |
spring-apps | Concept Outbound Type | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/concept-outbound-type.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Java ✔️ C# |
spring-apps | Concept Security Controls | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/concept-security-controls.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ✔️ Enterprise |
spring-apps | Concept Understand App And Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/concept-understand-app-and-deployment.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Java ✔️ C# |
spring-apps | Concept Zero Downtime Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/concept-zero-downtime-deployment.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Java ✔️ C# |
spring-apps | Concepts Blue Green Deployment Strategies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/concepts-blue-green-deployment-strategies.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ✔️ Enterprise |
spring-apps | Connect Managed Identity To Azure Sql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/connect-managed-identity-to-azure-sql.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Java ✔️ C# |
spring-apps | Cost Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/cost-management.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Standard consumption and dedicated (Preview) ✔️ Basic/Standard ✔️ Enterprise |
spring-apps | Diagnostic Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/diagnostic-services.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Java ✔️ C# |
spring-apps | Expose Apps Gateway End To End Tls | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/expose-apps-gateway-end-to-end-tls.md | ms.devlang: java # Expose applications with end-to-end TLS in a virtual network -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ✔️ Enterprise |
spring-apps | Expose Apps Gateway Tls Termination | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/expose-apps-gateway-tls-termination.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. This article explains how to expose applications to the internet using Application Gateway. |
spring-apps | Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/faq.md | zone_pivot_groups: programming-languages-spring-apps # Azure Spring Apps FAQ -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ✔️ Enterprise Azure Spring Apps continuously probes port `1025` for customer's applications wi Yes. For more information, see [Monitor app lifecycle events using Azure Activity log and Azure Service Health](./monitor-app-lifecycle-events.md). -### What are the best practices for migrating existing Spring applications to Azure Spring Apps? --For more information, see [Migrate Spring applications to Azure Spring Apps](/azure/developer/java/migration/migrate-spring-cloud-to-azure-spring-apps). - ::: zone pivot="programming-language-csharp" ## .NET Core versions |
spring-apps | Github Actions Key Vault | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/github-actions-key-vault.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Java ✔️ C# |
spring-apps | How To Access App From Internet Virtual Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-access-app-from-internet-virtual-network.md | ms.devlang: azurecli # Expose applications on Azure Spring Apps to the internet from a public network -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. This article describes how to expose applications on Azure Spring Apps to the internet from a public network. |
spring-apps | How To Application Insights | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-application-insights.md | zone_pivot_groups: spring-apps-tier-selection # Use Application Insights Java In-Process Agent in Azure Spring Apps -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. > > With Spring Boot Native Image applications, use the [Azure Monitor OpenTelemetry Distro / Application Insights in Spring Boot native image Java application](https://aka.ms/AzMonSpringNative) project instead of the Application Insights Java agent. |
spring-apps | How To Bind Cosmos | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-bind-cosmos.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Java ✔️ C# |
spring-apps | How To Bind Mysql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-bind-mysql.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Java ✔️ C# |
spring-apps | How To Bind Postgres | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-bind-postgres.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Java ✔️ C# |
spring-apps | How To Bind Redis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-bind-redis.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Java ✔️ C# |
spring-apps | How To Capture Dumps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-capture-dumps.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ✔️ Enterprise |
spring-apps | How To Cicd | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-cicd.md | zone_pivot_groups: programming-languages-spring-apps # Automate application deployments to Azure Spring Apps -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ✔️ Enterprise |
spring-apps | How To Circuit Breaker Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-circuit-breaker-metrics.md | zone_pivot_groups: spring-apps-tier-selection # Collect Spring Cloud Resilience4J Circuit Breaker Metrics with Micrometer (Preview) -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ✔️ Enterprise |
spring-apps | How To Config Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-config-server.md | zone_pivot_groups: spring-apps-tier-selection # Configure a managed Spring Cloud Config Server in Azure Spring Apps -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Java ✔️ C# |
spring-apps | How To Configure Enterprise Spring Cloud Gateway Filters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-configure-enterprise-spring-cloud-gateway-filters.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ❌ Basic/Standard ✔️ Enterprise |
spring-apps | How To Configure Enterprise Spring Cloud Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-configure-enterprise-spring-cloud-gateway.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ❌ Basic/Standard ✔️ Enterprise |
spring-apps | How To Configure Health Probes Graceful Termination | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-configure-health-probes-graceful-termination.md | |
spring-apps | How To Configure Ingress | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-configure-ingress.md | |
spring-apps | How To Configure Palo Alto | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-configure-palo-alto.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Java ✔️ C# |
spring-apps | How To Configure Planned Maintenance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-configure-planned-maintenance.md | Last updated 11/07/2023 # How to configure planned maintenance (preview) -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ❌ Standard consumption and dedicated (Preview) ✔️ Basic/Standard ✔️ Enterprise |
spring-apps | How To Connect To App Instance For Troubleshooting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-connect-to-app-instance-for-troubleshooting.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ✔️ Enterprise |
spring-apps | How To Create User Defined Route Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-create-user-defined-route-instance.md | |
spring-apps | How To Custom Domain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-custom-domain.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Java ✔️ C# |
spring-apps | How To Custom Persistent Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-custom-persistent-storage.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Java ✔️ C# |
spring-apps | How To Deploy In Azure Virtual Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-deploy-in-azure-virtual-network.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Java ✔️ C# |
spring-apps | How To Deploy Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-deploy-powershell.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ✔️ Enterprise |
spring-apps | How To Deploy With Custom Container Image | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-deploy-with-custom-container-image.md | Last updated 06/27/2024 > [!CAUTION] > This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life). -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Standard ✔️ Enterprise |
spring-apps | How To Dump Jvm Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-dump-jvm-options.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Java ❌ C# |
spring-apps | How To Elastic Diagnostic Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-elastic-diagnostic-settings.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Java ✔️ C# |
spring-apps | How To Enable Ingress To App Tls | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-enable-ingress-to-app-tls.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ❌ Basic ✔️ Standard ✔️ Enterprise |
spring-apps | How To Enable System Assigned Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-enable-system-assigned-managed-identity.md | zone_pivot_groups: spring-apps-tier-selection # Enable system-assigned managed identity for an application in Azure Spring Apps -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ✔️ Enterprise |
spring-apps | How To Enterprise Application Configuration Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-enterprise-application-configuration-service.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ❌ Basic/Standard ✔️ Enterprise |
spring-apps | How To Enterprise Build Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-enterprise-build-service.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ❌ Basic/Standard ✔️ Enterprise |
spring-apps | How To Enterprise Configure Apm Integration And Ca Certificates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-enterprise-configure-apm-integration-and-ca-certificates.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ❌ Basic/Standard ✔️ Enterprise |
spring-apps | How To Enterprise Deploy App At Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-enterprise-deploy-app-at-scale.md | Last updated 08/28/2024 # Scale out to deploy over 500 and up to 1000 application instances using Azure Spring Apps Enterprise + This article applies to ❌ Basic/Standard ✔️ Enterprise This article guides you on deploying up to 1000 application instances in Azure Spring Apps Enterprise. The feature supporting deployment of more than 500 instances is currently in Preview. This article outlines the limitations during the Preview stage. The Enterprise plan, crafted for handling substantial production workloads, supports a maximum of 1000 application instances per service. However, we recommend using a maximum of 500 instances in your production environment. |
spring-apps | How To Enterprise Deploy Polyglot Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-enterprise-deploy-polyglot-apps.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ❌ Basic/Standard ✔️ Enterprise |
spring-apps | How To Enterprise Deploy Static File | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-enterprise-deploy-static-file.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ❌ Basic/Standard ✔️ Enterprise |
spring-apps | How To Enterprise Large Cpu Memory Applications | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-enterprise-large-cpu-memory-applications.md | |
spring-apps | How To Enterprise Marketplace Offer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-enterprise-marketplace-offer.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ❌ Basic/Standard ✔️ Enterprise |
spring-apps | How To Enterprise Service Registry | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-enterprise-service-registry.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ❌ Basic/Standard ✔️ Enterprise |
spring-apps | How To Fix App Restart Issues Caused By Out Of Memory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-fix-app-restart-issues-caused-by-out-of-memory.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ✔️ Enterprise |
spring-apps | How To Github Actions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-github-actions.md | zone_pivot_groups: programming-languages-spring-apps # Use Azure Spring Apps CI/CD with GitHub Actions -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ✔️ Enterprise |
spring-apps | How To Integrate Azure Load Balancers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-integrate-azure-load-balancers.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Java ✔️ C# |
spring-apps | How To Intellij Deploy Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-intellij-deploy-apps.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Java ❌ C# |
spring-apps | How To Job Log Streaming | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-job-log-streaming.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ❌ Basic/Standard ✔️ Enterprise |
spring-apps | How To Log Streaming | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-log-streaming.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Java ✔️ C# |
spring-apps | How To Manage Job | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-manage-job.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ❌ Basic/Standard ✔️ Enterprise |
spring-apps | How To Manage User Assigned Managed Identities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-manage-user-assigned-managed-identities.md | zone_pivot_groups: spring-apps-tier-selection # Manage user-assigned managed identities for an application in Azure Spring Apps -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ✔️ Enterprise |
spring-apps | How To Managed Component Log Streaming | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-managed-component-log-streaming.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ❌ Basic/Standard ✔️ Enterprise |
spring-apps | How To Map Dns Virtual Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-map-dns-virtual-network.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ✔️ Enterprise |
spring-apps | How To Maven Deploy Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-maven-deploy-apps.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Java ❌ C# |
spring-apps | How To Migrate Standard Tier To Enterprise Tier | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-migrate-standard-tier-to-enterprise-tier.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ✔️ Enterprise |
spring-apps | How To Move Across Regions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-move-across-regions.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ✔️ Enterprise |
spring-apps | How To Outbound Public Ip | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-outbound-public-ip.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ✔️ Enterprise |
spring-apps | How To Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-permissions.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ✔️ Enterprise |
spring-apps | How To Prepare App Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-prepare-app-deployment.md | zone_pivot_groups: programming-languages-spring-apps # Prepare an application for deployment in Azure Spring Apps -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ✔️ Enterprise Azure Spring Apps supports the latest Spring Boot or Spring Cloud major version The following table lists the supported Spring Boot and Spring Cloud combinations: +### [Basic/Standard plan](#tab/basic-standard-plan) ++| Spring Boot version | Spring Cloud version | End of support | +|||-| +| 3.2.x | 2023.0.x also known as Leyton | 2024-11-23 | +| 3.1.x | 2022.0.3+ also known as Kilburn | 2024-05-18 | +| 3.0.x | 2022.0.3+ also known as Kilburn | 2023-11-24 | +| 2.7.x | 2021.0.3+ also known as Jubilee | 2023-11-24 | + ### [Enterprise plan](#tab/enterprise-plan) | Spring Boot version | Spring Cloud version | End of commercial support | The following table lists the supported Spring Boot and Spring Cloud combination | 2.7.x | 2021.0.3+ also known as Jubilee | 2025-08-24 | | 2.6.x | 2021.0.3+ also known as Jubilee | 2024-02-24 | -### [Basic/Standard plan](#tab/basic-standard-plan) --| Spring Boot version | Spring Cloud version | End of support | -|||-| -| 3.2.x | 2023.0.x also known as Leyton | 2024-11-23 | -| 3.1.x | 2022.0.3+ also known as Kilburn | 2024-05-18 | -| 3.0.x | 2022.0.3+ also known as Kilburn | 2023-11-24 | -| 2.7.x | 2021.0.3+ also known as Jubilee | 2023-11-24 | - For more information, see the following pages: public class GatewayApplication { ### Distributed configuration +#### [Basic/Standard plan](#tab/basic-standard-plan) ++To enable distributed configuration, include the following `spring-cloud-config-client` dependency in the dependencies section of your *pom.xml* file: ++```xml +<dependency> + <groupId>org.springframework.cloud</groupId> + <artifactId>spring-cloud-config-client</artifactId> +</dependency> +<dependency> + <groupId>org.springframework.cloud</groupId> + <artifactId>spring-cloud-starter-bootstrap</artifactId> +</dependency> +``` ++> [!WARNING] +> Don't specify `spring.cloud.config.enabled=false` in your bootstrap configuration. Otherwise, your application stops working with Config Server. + #### [Enterprise plan](#tab/enterprise-plan) To enable distributed configuration in the Enterprise plan, use [Application Configuration Service for VMware Tanzu](https://docs.vmware.com/en/Application-Configuration-Service-for-VMware-Tanzu/2.3/acs/GUID-overview.html), which is one of the proprietary VMware Tanzu components. Application Configuration Service for Tanzu is Kubernetes-native, and different from Spring Cloud Config Server. Application Configuration Service for Tanzu enables the management of Kubernetes-native ConfigMap resources that are populated from properties defined in one or more Git repositories. 
To use Application Configuration Service for Tanzu, do the following steps for e --config-file-pattern <config-file-pattern> ``` -#### [Basic/Standard plan](#tab/basic-standard-plan) --To enable distributed configuration, include the following `spring-cloud-config-client` dependency in the dependencies section of your *pom.xml* file: --```xml -<dependency> - <groupId>org.springframework.cloud</groupId> - <artifactId>spring-cloud-config-client</artifactId> -</dependency> -<dependency> - <groupId>org.springframework.cloud</groupId> - <artifactId>spring-cloud-starter-bootstrap</artifactId> -</dependency> -``` --> [!WARNING] -> Don't specify `spring.cloud.config.enabled=false` in your bootstrap configuration. Otherwise, your application stops working with Config Server. - ### Metrics |
spring-apps | How To Private Network Access Backend Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-private-network-access-backend-storage.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Standard ✔️ Enterprise |
spring-apps | How To Remote Debugging App Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-remote-debugging-app-instance.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ✔️ Enterprise |
spring-apps | How To Scale Manual | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-scale-manual.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Java ✔️ C# |
spring-apps | How To Self Diagnose Running In Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-self-diagnose-running-in-vnet.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ✔️ Enterprise |
spring-apps | How To Self Diagnose Solve | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-self-diagnose-solve.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Java ✔️ C# |
spring-apps | How To Set Up Sso With Azure Ad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-set-up-sso-with-azure-ad.md | |
spring-apps | How To Setup Autoscale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-setup-autoscale.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Java ✔️ C# |
spring-apps | How To Staging Environment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-staging-environment.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Java ❌ C# |
spring-apps | How To Start Stop Delete | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-start-stop-delete.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Java ✔️ C# |
spring-apps | How To Start Stop Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-start-stop-service.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ❌ Standard consumption and dedicated (Preview) ✔️ Basic/Standard ✔️ Enterprise |
spring-apps | How To Troubleshoot Enterprise Spring Cloud Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-troubleshoot-enterprise-spring-cloud-gateway.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ❌ Basic/Standard ✔️ Enterprise |
spring-apps | How To Use Accelerator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-use-accelerator.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ❌ Basic/Standard ✔️ Enterprise |
spring-apps | How To Use Application Live View | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-use-application-live-view.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ❌ Basic/Standard ✔️ Enterprise |
spring-apps | How To Use Dev Tool Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-use-dev-tool-portal.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ❌ Basic/Standard ✔️ Enterprise |
spring-apps | How To Use Enterprise Api Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-use-enterprise-api-portal.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ❌ Basic/Standard ✔️ Enterprise |
spring-apps | How To Use Enterprise Spring Cloud Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-use-enterprise-spring-cloud-gateway.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ❌ Basic/Standard ✔️ Enterprise |
spring-apps | How To Use Flush Dns Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-use-flush-dns-settings.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ❌ Basic ✔️ Standard ✔️ Enterprise |
spring-apps | How To Use Grpc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-use-grpc.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ✔️ Enterprise |
spring-apps | How To Use Managed Identities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-use-managed-identities.md | zone_pivot_groups: spring-apps-tier-selection # Use managed identities for applications in Azure Spring Apps -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ✔️ Enterprise |
spring-apps | How To Use Tls Certificate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-use-tls-certificate.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ✔️ Enterprise |
spring-apps | How To Write Log To Custom Persistent Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-write-log-to-custom-persistent-storage.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Java ❌ C# |
spring-apps | Monitor App Lifecycle Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/monitor-app-lifecycle-events.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ✔️ Enterprise |
spring-apps | Monitor Apps By Application Live View | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/monitor-apps-by-application-live-view.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ❌ Basic/Standard ✔️ Enterprise |
spring-apps | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/overview.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Enterprise ✔️ Standard consumption and dedicated (Preview) ✔️ Basic/Standard The following articles help you get started: * [Deploy your first application to Azure Spring Apps](quickstart.md) * [Introduction to the sample app](quickstart-sample-app-introduction.md) -The following articles help you migrate existing Spring Boot apps to Azure Spring Apps: --* [Migrate Spring Boot applications to Azure Spring Apps](/azure/developer/java/migration/migrate-spring-boot-to-azure-spring-apps) -* [Migrate Spring Cloud applications to Azure Spring Apps](/azure/developer/java/migration/migrate-spring-cloud-to-azure-spring-apps?pivots=sc-standard-tier) - The following quickstarts apply to the Basic/Standard plan only. For Enterprise quickstarts, see the [Get started with the Enterprise plan](#get-started-with-the-enterprise-plan) section. * [Provision an Azure Spring Apps service instance](../basic-standard/quickstart-provision-service-instance.md) |
spring-apps | Plan Comparison | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/plan-comparison.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. This article provides a comparison of plans available in Azure Spring Apps. Each plan is designed to cater to different customer scenarios and purposes, as described in the following list: |
spring-apps | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/policy-reference.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Java ✔️ C# |
spring-apps | Quickstart Automate Deployments Github Actions Enterprise | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-automate-deployments-github-actions-enterprise.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ❌ Basic/Standard ✔️ Enterprise |
spring-apps | Quickstart Configure Single Sign On Enterprise | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-configure-single-sign-on-enterprise.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ❌ Basic/Standard ✔️ Enterprise |
spring-apps | Quickstart Deploy Apps Enterprise | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-deploy-apps-enterprise.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ❌ Basic/Standard ✔️ Enterprise |
spring-apps | Quickstart Deploy Event Driven App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-deploy-event-driven-app.md | zone_pivot_groups: spring-apps-plan-selection # Quickstart: Deploy an event-driven application to Azure Spring Apps -> [!NOTE] -> The first 50 vCPU hours and 100 GB hours of memory are free each month. For more information, see [Price Reduction - Azure Spring Apps does more, costs less!](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/price-reduction-azure-spring-apps-does-more-costs-less/ba-p/3614058) on the [Apps on Azure Blog](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/bg-p/AppsonAzureBlog). --> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Standard consumption and dedicated (Preview) ✔️ Basic/Standard ✔️ Enterprise |
spring-apps | Quickstart Deploy Infrastructure Vnet Azure Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-deploy-infrastructure-vnet-azure-cli.md | Last updated 08/28/2024 # Quickstart: Provision Azure Spring Apps using Azure CLI -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ❌ Basic ✔️ Standard ✔️ Enterprise The Enterprise deployment plan includes the following Tanzu components: The deployment script used in this quickstart is from the [Azure Spring Apps reference architecture](/previous-versions/azure/spring-apps/reference-architecture). -### [Enterprise plan](#tab/azure-spring-apps-enterprise) -- ### [Standard plan](#tab/azure-spring-apps-standard) :::code language="azurecli" source="~/azure-spring-apps-reference-architecture/CLI/brownfield-deployment/azuredeploySpringStandard.sh"::: +### [Enterprise plan](#tab/azure-spring-apps-enterprise) ++ ## Deploy the cluster |
spring-apps | Quickstart Deploy Infrastructure Vnet Bicep | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-deploy-infrastructure-vnet-bicep.md | Last updated 08/28/2024 # Quickstart: Provision Azure Spring Apps using Bicep -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ❌ Basic ✔️ Standard ✔️ Enterprise To deploy the cluster, use the following steps. First, create an *azuredeploy.bicep* file with the following contents: -### [Enterprise plan](#tab/azure-spring-apps-enterprise) -- ### [Standard plan](#tab/azure-spring-apps-standard) :::code language="bicep" source="~/azure-spring-apps-reference-architecture/Bicep/brownfield-deployment/azuredeploySpringStandard.bicep"::: +### [Enterprise plan](#tab/azure-spring-apps-enterprise) ++ Next, open a Bash window and run the following Azure CLI command, replacing the *\<value>* placeholders with the following values: |
spring-apps | Quickstart Deploy Infrastructure Vnet Terraform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-deploy-infrastructure-vnet-terraform.md | Last updated 04/23/2024 # Quickstart: Provision Azure Spring Apps using Terraform -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ❌ Basic ✔️ Standard ✔️ Enterprise For more customization including custom domain support, see the [Azure Spring Ap The configuration file used in this quickstart is from the [Azure Spring Apps reference architecture](/previous-versions/azure/spring-apps/reference-architecture). -### [Enterprise plan](#tab/azure-spring-apps-enterprise) -- ### [Standard plan](#tab/azure-spring-apps-standard) :::code language="hcl" source="~/azure-spring-apps-reference-architecture/terraform/brownfield-deployment/Standard/main.tf"::: +### [Enterprise plan](#tab/azure-spring-apps-enterprise) ++ ## Apply the Terraform plan |
spring-apps | Quickstart Deploy Infrastructure Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-deploy-infrastructure-vnet.md | Last updated 08/28/2024 # Quickstart: Provision Azure Spring Apps using an ARM template -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ❌ Basic ✔️ Standard ✔️ Enterprise The Enterprise deployment plan includes the following Tanzu components: The templates used in this quickstart are from the [Azure Spring Apps Reference Architecture](/previous-versions/azure/spring-apps/reference-architecture). -### [Enterprise plan](#tab/azure-spring-apps-enterprise) -- ### [Standard plan](#tab/azure-spring-apps-standard) :::code language="json" source="~/azure-spring-apps-reference-architecture/ARM/brownfield-deployment/azuredeploySpringStandard.json"::: +### [Enterprise plan](#tab/azure-spring-apps-enterprise) ++ Two Azure resources are defined in the template: To deploy the template, use the following steps. First, select the following image to sign in to Azure and open a template. The template creates an Azure Spring Apps instance in an existing Virtual Network and a workspace-based Application Insights instance in an existing Azure Monitor Log Analytics Workspace. -### [Enterprise plan](#tab/azure-spring-apps-enterprise) -- ### [Standard plan](#tab/azure-spring-apps-standard) :::image type="content" source="~/reusable-content/ce-skilling/azure/media/template-deployments/deploy-to-azure-button.svg" alt-text="Button to deploy the Resource Manager template to Azure." border="false" link="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-spring-apps-landing-zone-accelerator%2Freference-architecture%2FARM%2Fbrownfield-deployment%2fazuredeploySpringStandard.json"::: +### [Enterprise plan](#tab/azure-spring-apps-enterprise) ++ Next, enter values for the following fields: |
spring-apps | Quickstart Deploy Java Native Image App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-deploy-java-native-image-app.md | -> [!NOTE] -> The first 50 vCPU hours and 100 GB hours of memory are free each month. For more information, see [Price Reduction - Azure Spring Apps does more, costs less!](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/price-reduction-azure-spring-apps-does-more-costs-less/ba-p/3614058) on the [Apps on Azure Blog](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/bg-p/AppsonAzureBlog). --> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ❌ Basic/Standard ✔️ Enterprise |
spring-apps | Quickstart Deploy Microservice Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-deploy-microservice-apps.md | zone_pivot_groups: spring-apps-tier-selection # Quickstart: Deploy microservice applications to Azure Spring Apps -> [!NOTE] -> The first 50 vCPU hours and 100 GB hours of memory are free each month. For more information, see [Price Reduction - Azure Spring Apps does more, costs less!](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/price-reduction-azure-spring-apps-does-more-costs-less/ba-p/3614058) on the [Apps on Azure Blog](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/bg-p/AppsonAzureBlog). --> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. This article explains how to deploy microservice applications to Azure Spring Apps using the well-known sample app [PetClinic](https://github.com/spring-petclinic/spring-petclinic-microservices). |
spring-apps | Quickstart Deploy Restful Api App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-deploy-restful-api-app.md | zone_pivot_groups: spring-apps-enterprise-or-consumption-plan-selection # Quickstart: Deploy RESTful API application to Azure Spring Apps -> [!NOTE] -> The first 50 vCPU hours and 100 GB hours of memory are free each month. For more information, see [Price Reduction - Azure Spring Apps does more, costs less!](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/price-reduction-azure-spring-apps-does-more-costs-less/ba-p/3614058) on the [Apps on Azure Blog](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/bg-p/AppsonAzureBlog). --> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. This article describes how to deploy a RESTful API application protected by [Microsoft Entra ID](/entra/fundamentals/whatis) to Azure Spring Apps. The sample project is a simplified version based on the [Simple Todo](https://github.com/Azure-Samples/ASA-Samples-Web-Application) web application, which only provides the backend service and uses Microsoft Entra ID to protect the RESTful APIs. |
spring-apps | Quickstart Deploy Spring Batch App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-deploy-spring-batch-app.md | -> [!NOTE] -> The first 50 vCPU hours and 100 GB hours of memory are free each month. For more information, see [Price Reduction - Azure Spring Apps does more, costs less!](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/price-reduction-azure-spring-apps-does-more-costs-less/ba-p/3614058) on the [Apps on Azure Blog](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/bg-p/AppsonAzureBlog). --> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. This quickstart shows how to deploy a Spring Batch ephemeral application to Azure Spring Apps. The sample project is derived from the Spring Batch sample [Football Job](https://github.com/spring-projects/spring-batch/blob/main/spring-batch-samples/src/main/jav). It's a statistics loading job. In the original sample, a unit test triggers the job. In the adapted sample, the `main` method of `FootballJobApplication` initiates the job. |
spring-apps | Quickstart Deploy Web App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-deploy-web-app.md | zone_pivot_groups: spring-apps-plan-selection # Quickstart: Deploy your first web application to Azure Spring Apps -> [!NOTE] -> The first 50 vCPU hours and 100 GB hours of memory are free each month. For more information, see [Price Reduction - Azure Spring Apps does more, costs less!](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/price-reduction-azure-spring-apps-does-more-costs-less/ba-p/3614058) on the [Apps on Azure Blog](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/bg-p/AppsonAzureBlog). --> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. This quickstart shows how to deploy a Spring Boot web application to Azure Spring Apps. The sample project is a simple ToDo application to add tasks, mark when they're complete, and then delete them. The following screenshot shows the application: |
spring-apps | Quickstart Fitness Store Azure Openai | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-fitness-store-azure-openai.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ❌ Basic/Standard ✔️ Enterprise |
spring-apps | Quickstart Integrate Azure Database And Redis Enterprise | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-integrate-azure-database-and-redis-enterprise.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ❌ Basic/Standard ✔️ Enterprise |
spring-apps | Quickstart Key Vault Enterprise | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-key-vault-enterprise.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ❌ Basic/Standard ✔️ Enterprise |
spring-apps | Quickstart Monitor End To End Enterprise | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-monitor-end-to-end-enterprise.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ❌ Basic/Standard ✔️ Enterprise |
spring-apps | Quickstart Sample App Acme Fitness Store Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-sample-app-acme-fitness-store-introduction.md | -> [!NOTE] -> The first 50 vCPU hours and 100 GB hours of memory are free each month. For more information, see [Price Reduction - Azure Spring Apps does more, costs less!](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/price-reduction-azure-spring-apps-does-more-costs-less/ba-p/3614058) on the [Apps on Azure Blog](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/bg-p/AppsonAzureBlog). --> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ❌ Basic/Standard ✔️ Enterprise |
spring-apps | Quickstart Sample App Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-sample-app-introduction.md | -> [!NOTE] -> The first 50 vCPU hours and 100 GB hours of memory are free each month. For more information, see [Price Reduction - Azure Spring Apps does more, costs less!](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/price-reduction-azure-spring-apps-does-more-costs-less/ba-p/3614058) on the [Apps on Azure Blog](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/bg-p/AppsonAzureBlog). --> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ✔️ Enterprise PetClinic is decomposed into four core Spring apps. All of them are independentl There are several common patterns in distributed systems that support core services. Azure Spring Apps provides tools that enhance Spring Boot applications to implement the following patterns: -### [Enterprise plan](#tab/enterprise-plan) --* **Application Configuration Service for Tanzu**: Application Configuration Service for Tanzu is one of the commercial VMware Tanzu components. It enables the management of Kubernetes-native ConfigMap resources that are populated from properties defined in one or more Git repositories. -* **Tanzu Service Registry**: Tanzu Service Registry is one of the commercial VMware Tanzu components. It provides your apps with an implementation of the Service Discovery pattern, one of the key tenets of a Spring-based architecture. Your apps can use the Service Registry to dynamically discover and call registered services. - ### [Basic/Standard plan](#tab/basic-standard-plan) * **Config service**: Azure Spring Apps Config is a horizontally scalable centralized configuration service for distributed systems. It uses a pluggable repository that currently supports local storage, Git, and Subversion. * **Service discovery**: It allows automatic detection of network locations for service instances, which could have dynamically assigned addresses because of autoscaling, failures, and upgrades. +### [Enterprise plan](#tab/enterprise-plan) ++* **Application Configuration Service for Tanzu**: Application Configuration Service for Tanzu is one of the commercial VMware Tanzu components. It enables the management of Kubernetes-native ConfigMap resources that are populated from properties defined in one or more Git repositories. +* **Tanzu Service Registry**: Tanzu Service Registry is one of the commercial VMware Tanzu components. It provides your apps with an implementation of the Service Discovery pattern, one of the key tenets of a Spring-based architecture. Your apps can use the Service Registry to dynamically discover and call registered services. 
+ ## Database configuration For full implementation details, see our fork of [PetClinic](https://github.com/ ## Next steps -### [Enterprise plan](#tab/enterprise-plan) +### [Basic/Standard plan](#tab/basic-standard-plan) > [!div class="nextstepaction"]-> [Quickstart: Build and deploy apps to Azure Spring Apps using the Enterprise plan](quickstart-deploy-apps-enterprise.md) +> [Quickstart: Provision an Azure Spring Apps service instance](../basic-standard/quickstart-provision-service-instance.md) -### [Basic/Standard plan](#tab/basic-standard-plan) +### [Enterprise plan](#tab/enterprise-plan) > [!div class="nextstepaction"]-> [Quickstart: Provision an Azure Spring Apps service instance](../basic-standard/quickstart-provision-service-instance.md) +> [Quickstart: Build and deploy apps to Azure Spring Apps using the Enterprise plan](quickstart-deploy-apps-enterprise.md) |
spring-apps | Quickstart Set Request Rate Limits Enterprise | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-set-request-rate-limits-enterprise.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ❌ Basic/Standard ✔️ Enterprise |
spring-apps | Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart.md | zone_pivot_groups: spring-apps-plan-selection # Quickstart: Deploy your first application to Azure Spring Apps -> [!NOTE] -> The first 50 vCPU hours and 100 GB hours of memory are free each month. For more information, see [Price Reduction - Azure Spring Apps does more, costs less!](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/price-reduction-azure-spring-apps-does-more-costs-less/ba-p/3614058) on the [Apps on Azure Blog](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/bg-p/AppsonAzureBlog). --> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. This article explains how to deploy a small application to run on Azure Spring Apps. |
spring-apps | Quotas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quotas.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Java ✔️ C# |
spring-apps | Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/resources.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Java ✔️ C# |
spring-apps | Secure Communications End To End | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/secure-communications-end-to-end.md | Azure Spring Apps is jointly built, operated, and supported by Microsoft and VMw - [Deploy Spring microservices to Azure](/training/modules/azure-spring-cloud-workshop/) - [Azure Key Vault Certificates Spring Cloud Azure Starter (GitHub.com)](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/spring/spring-cloud-azure-starter-keyvault-certificates/pom.xml) - [Azure Spring Apps architecture design](/azure/architecture/web-apps/spring-apps?toc=/azure/spring-apps/toc.json&bc=/azure/spring-apps/breadcrumb/toc.json)-- Migrate your [Spring Boot](/azure/developer/java/migration/migrate-spring-boot-to-azure-spring-apps), [Spring Cloud](/azure/developer/java/migration/migrate-spring-cloud-to-azure-spring-apps), and [Tomcat](/azure/developer/java/migration/migrate-tomcat-to-azure-spring-apps) applications to Azure Spring Apps |
spring-apps | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/security-controls-policy.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ✔️ Enterprise |
spring-apps | Structured App Log | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/structured-app-log.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ✔️ Enterprise |
spring-apps | Tools To Troubleshoot Memory Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/tools-to-troubleshoot-memory-issues.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ✔️ Enterprise |
spring-apps | Troubleshoot Build Exit Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/troubleshoot-build-exit-code.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ❌ Basic/Standard ✔️ Enterprise |
spring-apps | Troubleshoot Exit Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/troubleshoot-exit-code.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Enterprise ✔️ Basic/Standard |
spring-apps | Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/troubleshoot.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ✔️ Enterprise |
spring-apps | Troubleshooting Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/troubleshooting-vnet.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ✔️ Enterprise |
spring-apps | Tutorial Alerts Action Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/tutorial-alerts-action-groups.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Java ✔️ C# |
spring-apps | Tutorial Authenticate Client With Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/tutorial-authenticate-client-with-gateway.md | -> [!NOTE] -> The first 50 vCPU hours and 100 GB hours of memory are free each month. For more information, see [Price Reduction - Azure Spring Apps does more, costs less!](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/price-reduction-azure-spring-apps-does-more-costs-less/ba-p/3614058) on the [Apps on Azure Blog](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/bg-p/AppsonAzureBlog). --> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Standard consumption and dedicated (Preview) |
spring-apps | Tutorial Circuit Breaker | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/tutorial-circuit-breaker.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Java ❌ C# |
spring-apps | Tutorial Managed Identities Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/tutorial-managed-identities-functions.md | Last updated 08/29/2024 # Tutorial: Use a managed identity to invoke Azure Functions from an Azure Spring Apps app -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ✔️ Enterprise |
spring-apps | Tutorial Managed Identities Key Vault | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/tutorial-managed-identities-key-vault.md | zone_pivot_groups: spring-apps-tier-selection # Connect Azure Spring Apps to Key Vault using managed identities -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Java ❌ C# |
spring-apps | Vmware Tanzu Components | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/vmware-tanzu-components.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ❌ Basic/Standard ✔️ Enterprise |
spring-apps | Vnet Customer Responsibilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/vnet-customer-responsibilities.md | -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard ✔️ Enterprise |
spring-apps | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/whats-new.md | Last updated 07/29/2024 # What's new in Azure Spring Apps? -> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. Azure Spring Apps is improved on an ongoing basis. To help you stay up to date with the most recent developments, this article provides you with information about the latest releases. |
static-web-apps | Add Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/add-api.md | Before you can deploy your app to Azure, update your repository's GitHub Actions ```yaml ###### Repository/Build Configurations - These values can be configured to match your app requirements. ###### # For more information regarding Static Web App workflow configurations, please visit: https://aka.ms/swaworkflowconfig- app_location: "/" # App source code path + app_location: "src" # App source code path api_location: "api" # Api source code path - optional- output_location: "build" # Built app content directory - optional + output_location: "" # Built app content directory - optional ###### End of Repository/Build Configurations ###### ```+ **Note**: The preceding values of `app_location`, `api_location`, and `output_location` apply when no front-end framework is used; the values change depending on your framework. 1. Save the file. |
static-web-apps | Deploy Blazor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/deploy-blazor.md | +> [!NOTE] +> For steps to deploy a Blazor app via Visual Studio, see [Deploy a Blazor app on Azure Static Web Apps](/aspnet/core/blazor/host-and-deploy/webassembly). + ## Prerequisites - [GitHub](https://github.com) account |
storage | Lifecycle Management Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/lifecycle-management-overview.md | description: Use Azure Blob Storage lifecycle management policies to create auto Previously updated : 05/01/2024 Last updated : 09/30/2024 For more information about pricing, see [Block Blob pricing](https://azure.micro - Each rule can have up to 10 case-sensitive prefixes and up to 10 blob index tag conditions. -- If you enable firewall rules for your storage account, lifecycle management requests may be blocked. You can unblock these requests by providing exceptions for trusted Microsoft services. For more information, see the **Exceptions** section in [Configure firewalls and virtual networks](../common/storage-network-security.md#exceptions).- - A lifecycle management policy can't change the tier of a blob that uses an encryption scope. - The delete action of a lifecycle management policy won't work with any blob in an immutable container. With an immutable policy, objects can be created and read, but not modified or deleted. For more information, see [Store business-critical blob data with immutable storage](./immutable-storage-overview.md). |
storage | Storage Blob Download Go | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-go.md | The following example downloads a blob to a stream, and reads from the stream by :::code language="go" source="~/blob-devguide-go/cmd/download-blob/download_blob.go" id="snippet_download_blob_stream"::: +### Specify data transfer options for download ++You can set configuration options when downloading a blob to optimize performance. The following configuration options are available for download operations: ++- `BlockSize`: The size of each block when downloading a block blob. The default value is 4 MB. +- `Concurrency`: The maximum number of parallel connections to use during download. The default value is 5. ++These options are available when downloading using the following methods: ++- [DownloadBuffer](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob#Client.DownloadBuffer) +- [DownloadFile](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob#Client.DownloadFile) ++The [DownloadStream](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob#Client.DownloadStream) method doesn't support these options, and downloads data in a single request. ++For more information on transfer size limits for Blob Storage, see [Scale targets for Blob storage](scalability-targets.md#scale-targets-for-blob-storage). ++The following code example shows how to specify data transfer options using the [DownloadFileOptions](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob#DownloadFileOptions). The values provided in this sample aren't intended to be a recommendation. To properly tune these values, you need to consider the specific needs of your app. +++To learn more about tuning data transfer options, see [Performance tuning for uploads and downloads with Go](storage-blobs-tune-upload-download-go.md). + [!INCLUDE [storage-dev-guide-code-samples-note-go](../../../includes/storage-dev-guides/storage-dev-guide-code-samples-note-go.md)] ## Resources |
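As a minimal sketch of the download options described above (the container name, blob name, and local file path are placeholder assumptions, and error handling is reduced to returning the error), a `DownloadFileOptions` value can be passed to `DownloadFile` like this:

```go
import (
	"context"
	"os"

	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
)

// downloadWithOptions downloads a blob to a local file in 4-MiB chunks,
// using up to three parallel range requests.
func downloadWithOptions(client *azblob.Client, containerName, blobName, filePath string) error {
	// Create (or truncate) the destination file.
	file, err := os.Create(filePath)
	if err != nil {
		return err
	}
	defer file.Close()

	// BlockSize and Concurrency override the defaults (4 MB and 5) noted above.
	_, err = client.DownloadFile(context.TODO(), containerName, blobName, file,
		&azblob.DownloadFileOptions{
			BlockSize:   int64(4 * 1024 * 1024), // size of each range request
			Concurrency: uint16(3),              // maximum parallel connections
		})
	return err
}
```

A larger `BlockSize` with the same concurrency means fewer, larger range requests; whether that helps depends on the network environment the app runs in.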
storage | Storage Blob Upload Go | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-go.md | The authorization mechanism must have the necessary permissions to upload a blob To upload a blob, call any of the following methods from the client object: +- [Upload](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blockblob#Client.Upload) - [UploadBuffer](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob#Client.UploadBuffer) - [UploadFile](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob#Client.UploadFile) - [UploadStream](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob#Client.UploadStream) You can define client library configuration options when uploading a blob. These You can set configuration options when uploading a blob to optimize performance. The following configuration options are available for upload operations: -- `BlockSize`: The size of each block when uploading a block blob. The default value is 1 MiB.-- `Concurrency`: The maximum number of parallel connections to use during upload. The default value is 1.+- `BlockSize`: The size of each block when uploading a block blob. The default value is 4 MB. +- `Concurrency`: The maximum number of parallel connections to use during upload. The default value is 5. ++These configuration options are available when uploading using the following methods: ++- [UploadBuffer](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob#Client.UploadBuffer) +- [UploadStream](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob#Client.UploadStream) +- [UploadFile](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob#Client.UploadFile) ++The [Upload](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blockblob#Client.Upload) method doesn't support these options, and uploads data in a single request. For more information on transfer size limits for Blob Storage, see [Scale targets for Blob storage](scalability-targets.md#scale-targets-for-blob-storage). The following code example shows how to specify data transfer options using the :::code language="go" source="~/blob-devguide-go/cmd/upload-blob/upload_blob.go" id="snippet_upload_blob_transfer_options"::: +To learn more about tuning data transfer options, see [Performance tuning for uploads and downloads with Go](storage-blobs-tune-upload-download-go.md). + [!INCLUDE [storage-dev-guide-code-samples-note-go](../../../includes/storage-dev-guides/storage-dev-guide-code-samples-note-go.md)] ## Resources |
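For the upload options above, a minimal sketch (again with hypothetical container, blob, and file-path values, and simplified error handling) of passing `UploadFileOptions` to `UploadFile` might look like this:

```go
import (
	"context"
	"os"

	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
)

// uploadWithOptions uploads a local file as a block blob in 8-MiB blocks,
// using up to four parallel connections.
func uploadWithOptions(client *azblob.Client, containerName, blobName, filePath string) error {
	// Open the local file for reading.
	file, err := os.Open(filePath)
	if err != nil {
		return err
	}
	defer file.Close()

	// BlockSize and Concurrency override the defaults (4 MB and 5).
	_, err = client.UploadFile(context.TODO(), containerName, blobName, file,
		&azblob.UploadFileOptions{
			BlockSize:   int64(8 * 1024 * 1024), // size of each staged block
			Concurrency: uint16(4),              // maximum parallel Put Block calls
		})
	return err
}
```

Because the `Upload` method sends the data in a single request, these fields apply only to the chunked methods listed above.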
storage | Storage Blobs Tune Upload Download Go | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-tune-upload-download-go.md | + + Title: Performance tuning for uploads and downloads with Azure Storage client library for Go ++description: Learn how to tune your uploads and downloads for better performance with Azure Storage client library for Go. +++++ Last updated : 09/30/2024+ms.devlang: golang ++++# Performance tuning for uploads and downloads with Go ++When an application transfers data using the Azure Storage client library for Go, there are several factors that can affect speed, memory usage, and even the success or failure of the request. To maximize performance and reliability for data transfers, it's important to be proactive in configuring client library transfer options based on the environment your app runs in. ++This article walks through several considerations for tuning data transfer options. When properly tuned, the client library can efficiently distribute data across multiple requests, which can result in improved operation speed, memory usage, and network stability. ++## Performance tuning for uploads ++Properly tuning data transfer options is key to reliable performance for uploads. Storage transfers are partitioned into several subtransfers based on the values of these properties. The maximum supported transfer size varies by operation and service version, so be sure to check the documentation to determine the limits. For more information on transfer size limits for Blob storage, see [Scale targets for Blob storage](scalability-targets.md#scale-targets-for-blob-storage). ++### Set transfer options for uploads ++If the total blob size is less than or equal to 256 MB, the data is uploaded with a single [Put Blob](/rest/api/storageservices/put-blob) request. If the blob size is greater than 256 MB, or if the blob size is unknown, the blob is uploaded in chunks using a series of [Put Block](/rest/api/storageservices/put-block) calls followed by [Put Block List](/rest/api/storageservices/put-block-list). ++The following properties can be configured and tuned based on the needs of your app: ++- `BlockSize`: The maximum length of a transfer in bytes when uploading a block blob in chunks. Defaults to 4 MB. +- `Concurrency`: The maximum number of subtransfers that can be used in parallel. Defaults to 5. ++These configuration options are available when uploading using the following methods: ++- [UploadBuffer](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob#Client.UploadBuffer) +- [UploadStream](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob#Client.UploadStream) +- [UploadFile](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob#Client.UploadFile) ++The [Upload](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blockblob#Client.Upload) method doesn't support these options, and uploads data in a single request. ++> [!NOTE] +> The client libraries use defaults for each data transfer option, if not provided. These defaults are typically performant in a data center environment, but not likely to be suitable for home consumer environments. Poorly tuned data transfer options can result in excessively long operations and even request timeouts. It's best to be proactive in testing these values, and tuning them based on the needs of your application and environment. 
++#### BlockSize ++The `BlockSize` argument is the maximum length of a transfer in bytes when uploading a block blob in chunks. ++To keep data moving efficiently, the client libraries might not always reach the `BlockSize` value for every transfer. Depending on the operation, the maximum supported value for transfer size can vary. For more information on transfer size limits for Blob storage, see the chart in [Scale targets for Blob storage](scalability-targets.md#scale-targets-for-blob-storage). ++#### Code example ++The following code example shows how to define values for an [UploadFileOptions](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob#UploadFileOptions) instance and pass these configuration options as a parameter to [UploadFile](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob#Client.UploadFile). ++The values provided in this sample aren't intended to be a recommendation. To properly tune these values, you need to consider the specific needs of your app. ++```go +func uploadBlobWithTransferOptions(client *azblob.Client, containerName string, blobName string) { + // Open the file for reading + file, err := os.OpenFile("path/to/sample/file", os.O_RDONLY, 0) + handleError(err) ++ defer file.Close() ++ // Upload the data to a block blob with transfer options + _, err = client.UploadFile(context.TODO(), containerName, blobName, file, + &azblob.UploadFileOptions{ + BlockSize: int64(8 * 1024 * 1024), // 8 MiB + Concurrency: uint16(2), + }) + handleError(err) +} +``` ++In this example, we set the number of parallel transfer workers to 2, using the `Concurrency` field. This configuration opens up to two connections simultaneously, allowing the upload to happen in parallel. If the blob size is larger than 256 MB, the blob is uploaded in chunks with a maximum chunk size of 8 MiB, as set by the `BlockSize` field. ++### Performance considerations for uploads ++During an upload, the Storage client libraries split a given upload stream into multiple subuploads based on the configuration options defined during client construction. Each subupload has its own dedicated call to the REST operation. The Storage client library manages these REST operations in parallel (depending on transfer options) to complete the full upload. ++You can learn how the client library handles buffering in the following sections. ++> [!NOTE] +> Block blobs have a maximum block count of 50,000 blocks. The maximum size of your block blob, then, is 50,000 times `BlockSize`. ++#### Buffering during uploads ++The Storage REST layer doesn't support picking up a REST upload operation where you left off; individual transfers are either completed or lost. To ensure resiliency for stream uploads, the Storage client libraries buffer data for each individual REST call before starting the upload. In addition to network speed limitations, this buffering behavior is a reason to consider a smaller value for `BlockSize`, even when uploading in sequence. Decreasing the value of `BlockSize` decreases the maximum amount of data that is buffered on each request and each retry of a failed request. If you're experiencing frequent timeouts during data transfers of a certain size, reducing the value of `BlockSize` reduces the buffering time, and might result in better performance. ++## Performance tuning for downloads ++Properly tuning data transfer options is key to reliable performance for downloads.
Storage transfers are partitioned into several subtransfers based on the values of these properties. ++### Set transfer options for downloads ++The following properties can be tuned based on the needs of your app: ++- `BlockSize`: The maximum chunk size used for downloading a blob. Defaults to 4 MB. +- `Concurrency`: The maximum number of subtransfers that can be used in parallel. Defaults to 5. ++These options are available when downloading using the following methods: ++- [DownloadBuffer](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob#Client.DownloadBuffer) +- [DownloadFile](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob#Client.DownloadFile) ++The [DownloadStream](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob#Client.DownloadStream) method doesn't support these options, and downloads data in a single request. ++#### Code example ++The following code example shows how to define values for a [DownloadFileOptions](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob#DownloadFileOptions) instance and pass these configuration options as a parameter to [DownloadFile](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob#Client.DownloadFile). ++The values provided in this sample aren't intended to be a recommendation. To properly tune these values, you need to consider the specific needs of your app. ++```go +func downloadBlobTransferOptions(client *azblob.Client, containerName string, blobName string) { + // Create or open a local file where we can download the blob + file, err := os.Create("path/to/sample/file") + handleError(err) ++ defer file.Close() ++ // Download the blob to the local file + _, err = client.DownloadFile(context.TODO(), containerName, blobName, file, + &azblob.DownloadFileOptions{ + BlockSize: int64(4 * 1024 * 1024), // 4 MiB + Concurrency: uint16(2), + }) + handleError(err) +} +``` ++### Performance considerations for downloads ++During a download, the Storage client libraries split a given download request into multiple subdownloads based on the configuration options defined during client construction. Each subdownload has its own dedicated call to the REST operation. Depending on transfer options, the client libraries manage these REST operations in parallel to complete the full download. ++## Related content ++- This article is part of the Blob Storage developer guide for Go. See the full list of developer guide articles at [Build your app](storage-blob-go-get-started.md#build-your-app). +- To understand more about factors that can influence performance for Azure Storage operations, see [Latency in Blob storage](storage-blobs-latency.md). +- To see a list of design considerations to optimize performance for apps using Blob storage, see [Performance and scalability checklist for Blob storage](storage-performance-checklist.md). |
synapse-analytics | Apache Spark External Metastore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-external-metastore.md | Azure Synapse Analytics allows Apache Spark pools in the same workspace to share The feature works with Spark 3.1. The following table shows the supported Hive Metastore versions for each Spark version. -|Spark Version|HMS 0.13.X|HMS 1.2.X|HMS 2.1.X|HMS 2.3.x|HMS 3.1.X| -|--|--|--|--|--|--| -|2.4|Yes|Yes|Yes|Yes|No| -|3.1|Yes|Yes|Yes|Yes|Yes| +|Spark Version|HMS 2.3.x|HMS 3.1.X| +|--|--|--| +|3.3|Yes|Yes| ## Set up linked service to Hive Metastore Here are the configurations and descriptions: |Spark config|Description| |--|--|-|`spark.sql.hive.metastore.version`|Supported versions: <ul><li>`0.13`</li><li>`1.2`</li><li>`2.1`</li><li>`2.3`</li><li>`3.1`</li></ul> Make sure you use the first 2 parts without the 3rd part| -|`spark.sql.hive.metastore.jars`|<ul><li>Version 0.13: `/opt/hive-metastore/lib-0.13/*:/usr/hdp/current/hadoop-client/lib/*:/usr/hdp/current/hadoop-client/*` </li><li>Version 1.2: `/opt/hive-metastore/lib-1.2/*:/usr/hdp/current/hadoop-client/lib/*:/usr/hdp/current/hadoop-client/*` </li><li>Version 2.1: `/opt/hive-metastore/lib-2.1/*:/usr/hdp/current/hadoop-client/lib/*:/usr/hdp/current/hadoop-client/*` </li><li>Version 2.3: `/opt/hive-metastore/lib-2.3/*:/usr/hdp/current/hadoop-client/lib/*:/usr/hdp/current/hadoop-client/*` </li><li>Version 3.1: `/opt/hive-metastore/lib-3.1/*:/usr/hdp/current/hadoop-client/lib/*:/usr/hdp/current/hadoop-client/*`</li></ul>| +|`spark.sql.hive.metastore.version`|Supported versions: <ul><li>`2.3`</li><li>`3.1`</li></ul> Make sure you use the first 2 parts without the 3rd part| +|`spark.sql.hive.metastore.jars`|<ul><li>Version 2.3: `/opt/hive-metastore/lib-2.3/*:/usr/hdp/current/hadoop-client/lib/*:/usr/hdp/current/hadoop-client/*` </li><li>Version 3.1: `/opt/hive-metastore/lib-3.1/*:/usr/hdp/current/hadoop-client/lib/*:/usr/hdp/current/hadoop-client/*`</li></ul>| |`spark.hadoop.hive.synapse.externalmetastore.linkedservice.name`|Name of your linked service|+|`spark.sql.hive.metastore.sharedPrefixes`|`com.mysql.jdbc,com.microsoft.sqlserver,com.microsoft.vegas`| + ### Configure at Spark pool level When creating the Spark pool, under **Additional Settings** tab, put below configurations in a text file and upload it in **Apache Spark configuration** section. You can also use the context menu for an existing Spark pool, choose Apache Spark configuration to add these configurations. 
Update metastore version and linked service name, and save below configs in a te spark.sql.hive.metastore.version <your hms version, Make sure you use the first 2 parts without the 3rd part> spark.hadoop.hive.synapse.externalmetastore.linkedservice.name <your linked service name> spark.sql.hive.metastore.jars /opt/hive-metastore/lib-<your hms version, 2 parts>/*:/usr/hdp/current/hadoop-client/lib/*+spark.sql.hive.metastore.sharedPrefixes com.mysql.jdbc,com.microsoft.sqlserver,com.microsoft.vegas ``` -Here is an example for metastore version 2.1 with linked service named as HiveCatalog21: +Here is an example for metastore version 2.3 with linked service named as HiveCatalog21: ```properties-spark.sql.hive.metastore.version 2.1 +spark.sql.hive.metastore.version 2.3 spark.hadoop.hive.synapse.externalmetastore.linkedservice.name HiveCatalog21-spark.sql.hive.metastore.jars /opt/hive-metastore/lib-2.1/*:/usr/hdp/current/hadoop-client/lib/* +spark.sql.hive.metastore.jars /opt/hive-metastore/lib-2.3/*:/usr/hdp/current/hadoop-client/lib/* +spark.sql.hive.metastore.sharedPrefixes com.mysql.jdbc,com.microsoft.sqlserver,com.microsoft.vegas ``` ### Configure at Spark session level For notebook session, you can also configure the Spark session in notebook using "conf":{ "spark.sql.hive.metastore.version":"<your hms version, 2 parts>", "spark.hadoop.hive.synapse.externalmetastore.linkedservice.name":"<your linked service name>",- "spark.sql.hive.metastore.jars":"/opt/hive-metastore/lib-<your hms version, 2 parts>/*:/usr/hdp/current/hadoop-client/lib/*" + "spark.sql.hive.metastore.jars":"/opt/hive-metastore/lib-<your hms version, 2 parts>/*:/usr/hdp/current/hadoop-client/lib/*", + "spark.sql.hive.metastore.sharedPrefixes":"com.mysql.jdbc,com.microsoft.sqlserver,com.microsoft.vegas" } } ``` |
synapse-analytics | Third Party Notices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/third-party-notices.md | - Title: Legal notices -description: Legal notices for Azure documentation ---- Previously updated : 03/08/2019---# Legal Notices --Microsoft and any contributors grant you a license to the Microsoft documentation and other content -in this repository under the [Creative Commons Attribution 4.0 International Public License](https://creativecommons.org/licenses/by/4.0/legalcode), and grant you a license to any code in the repository under the [MIT License](https://opensource.org/licenses/MIT). --Microsoft, Windows, Microsoft Azure and/or other Microsoft products and services referenced in the documentation -may be either trademarks or registered trademarks of Microsoft in the United States and/or other countries. -The licenses for this project do not grant you rights to use any Microsoft names, logos, or trademarks. -Microsoft's general trademark guidelines can be found at [Microsoft Trademark and Brand Guidelines](https://www.microsoft.com/legal/intellectualproperty/trademarks). --Privacy information can be found at [https://privacy.microsoft.com/en-us/](https://privacy.microsoft.com/en-us/) --Microsoft and any contributors reserve all others rights, whether under their respective copyrights, patents, -or trademarks, whether by implication, estoppel or otherwise. --The Go gopher was designed by [Renee French](https://reneefrench.blogspot.com/). -The design is licensed under the [Creative Commons 3.0 Attributions license](https://creativecommons.org/licenses/by/3.0/us/). |
web-application-firewall | Afds Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/afds-overview.md | If bot protection is enabled, incoming requests that match bot rules are blocked The Bot Manager 1.1 rule set is available on Azure Front Door premium version. +For more information, see [Azure WAF's Bot Manager 1.1 and JavaScript Challenge: Navigating the Bot Threat Terrain](https://techcommunity.microsoft.com/t5/azure-network-security-blog/azure-waf-s-bot-manager-1-1-and-javascript-challenge-navigating/ba-p/4249652). ++ ## Configuration You can configure and deploy all WAF policies by using the Azure portal, REST APIs, Azure Resource Manager templates, and Azure PowerShell. You can also configure and manage Azure WAF policies at scale by using Firewall Manager integration. For more information, see [Use Azure Firewall Manager to manage Azure Web Application Firewall policies](../shared/manage-policies.md). |