Updates from: 11/06/2024 02:04:59
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Openid Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/openid-connect.md
client_id=00001111-aaaa-2222-bbbb-3333cccc4444
| {tenant} | Yes | Name of your [Azure AD B2C tenant](tenant-management-read-tenant-name.md#get-your-tenant-name). If you're using a [custom domain](custom-domain.md), replace `tenant.b2clogin.com` with your domain, such as `fabrikam.com`. |
| {policy} | Yes | The user flow or policy that the app runs. Specify the name of a user flow that you create in your Azure AD B2C tenant. For example: `b2c_1_sign_in`, `b2c_1_sign_up`, or `b2c_1_edit_profile`. |
| client_id | Yes | The application ID that the [Azure portal](https://portal.azure.com/) assigned to your application. |
-| nonce | Yes | A value included in the request (generated by the application) that is included in the resulting ID token as a claim. The application can then verify this value to mitigate token replay attacks. The value is typically a randomized unique string that can be used to identify the origin of the request. |
+| nonce | Recommended | A value included in the request (generated by the application) that is included in the resulting ID token as a claim. The application can then verify this value to mitigate token replay attacks. The value is typically a randomized unique string that can be used to identify the origin of the request. |
| response_type | Yes | Must include an ID token for OpenID Connect. If your web application also needs tokens for calling a web API, you can use `code+id_token`. |
| scope | Yes | A space-separated list of scopes. The `openid` scope indicates a permission to sign in the user and get data about the user in the form of ID tokens. The `offline_access` scope is optional for web applications. It indicates that your application needs a *refresh token* for extended access to resources. The `https://{tenant-name}/{app-id-uri}/{scope}` scope indicates a permission to access protected resources, such as a web API. For more information, see [Request an access token](access-tokens.md#scopes). |
| prompt | No | The type of user interaction that you require. The only valid value at this time is `login`, which forces the user to enter their credentials on that request. |
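
Combining these parameters, a sign-in request to the authorization endpoint looks similar to the following sketch. The `contoso` tenant name, the `b2c_1_sign_in` user flow, and the `https://jwt.ms` redirect URI are placeholder values to replace with your own:

```http
GET https://contoso.b2clogin.com/contoso.onmicrosoft.com/b2c_1_sign_in/oauth2/v2.0/authorize?
client_id=00001111-aaaa-2222-bbbb-3333cccc4444
&response_type=code+id_token
&response_mode=form_post
&redirect_uri=https%3A%2F%2Fjwt.ms
&scope=openid%20offline_access
&nonce=defaultNonce
```
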
active-directory-b2c Partner Web Application Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-web-application-firewall.md
Previously updated : 01/26/2024 Last updated : 10/29/2024
# Tutorial: Configure Azure Active Directory B2C with Azure Web Application Firewall
-Learn how to enable the Azure Web Application Firewall (WAF) service for an Azure Active Directory B2C (Azure AD B2C) tenant, with a custom domain. WAF protects web applications from common exploits and vulnerabilities.
+Learn how to enable the Azure Web Application Firewall (WAF) service for an Azure Active Directory B2C (Azure AD B2C) tenant with a custom domain. WAF protects web applications from common exploits and vulnerabilities such as cross-site scripting, DDoS attacks, and malicious bot activity.
->[!NOTE]
->This feature is in public preview.
-
-See, [What is Azure Web Application Firewall?](../web-application-firewall/overview.md)
+See [What is Azure Web Application Firewall?](../web-application-firewall/overview.md)
## Prerequisites
To get started, you need:
* If you don't have one, get an [Azure free account](https://azure.microsoft.com/free/)
* **An Azure AD B2C tenant** – the authorization server that verifies user credentials using custom policies defined in the tenant
 * Also known as the identity provider (IdP)
- * See, [Tutorial: Create an Azure Active Directory B2C tenant](tutorial-create-tenant.md)
-* **Azure Front Door (AFD)** ΓÇô enables custom domains for the Azure AD B2C tenant
- * See, [Azure Front Door and CDN documentation](../frontdoor/index.yml)
+ * See [Tutorial: Create an Azure Active Directory B2C tenant](tutorial-create-tenant.md)
+* **Azure Front Door premium** – enables custom domains for the Azure AD B2C tenant and is security-optimized with access to WAF managed rulesets
+ * See [Azure Front Door and CDN documentation](../frontdoor/index.yml)
* **WAF** – manages traffic sent to the authorization server
- * [Azure Web Application Firewall](https://azure.microsoft.com/services/web-application-firewall/#overview)
+ * [Azure Web Application Firewall](https://azure.microsoft.com/services/web-application-firewall/#overview) (requires Premium SKU)
## Custom domains in Azure AD B2C
-To use custom domains in Azure AD B2C, use the custom domain features in AFD. See, [Enable custom domains for Azure AD B2C](./custom-domain.md?pivots=b2c-user-flow).
+To use custom domains in Azure AD B2C, use the custom domain features in Azure Front Door. See [Enable custom domains for Azure AD B2C](./custom-domain.md?pivots=b2c-user-flow).
- > [!IMPORTANT]
- > After you configure the custom domain, see [Test your custom domain](./custom-domain.md?pivots=b2c-custom-policy#test-your-custom-domain).
+> [!IMPORTANT]
+> After you configure the custom domain, see [Test your custom domain](./custom-domain.md?pivots=b2c-custom-policy#test-your-custom-domain).
## Enable WAF
-To enable WAF, configure a WAF policy and associate it with the AFD for protection.
+To enable WAF, configure a WAF policy and associate it with your Azure Front Door premium profile. Azure Front Door premium is optimized for security and gives you access to Azure-managed rulesets that protect against common vulnerabilities and exploits, including cross-site scripting and Java exploits. It also includes rulesets that help protect against malicious bot activity, and it offers layer 7 DDoS protection for your application.
### Create a WAF policy
-Create a WAF policy with Azure-managed default rule set (DRS). See, [Web Application Firewall DRS rule groups and rules](../web-application-firewall/afds/waf-front-door-drs.md).
+Create a WAF policy with Azure-managed default rule set (DRS). See [Web Application Firewall DRS rule groups and rules](../web-application-firewall/afds/waf-front-door-drs.md).
1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Select **Create a resource**.
-3. Search for Azure WAF.
-4. Select **Azure Web Application Firewall (WAF)**.
-5. Select **Create**.
-6. Go to the **Create a WAF policy** page.
-7. Select the **Basics** tab.
-8. For **Policy for**, select **Global WAF (Front Door)**.
-9. For **Front Door SKU**, select between **Basic**, **Standard**, or **Premium** SKU.
-10. For **Subscription**, select your Front Door subscription name.
-11. For **Resource group**, select your Front Door resource group name.
-12. For **Policy name**, enter a unique name for your WAF policy.
-13. For **Policy state**, select **Enabled**.
-14. For **Policy mode**, select **Detection**.
-15. Select **Review + create**.
-16. Go to the **Association** tab of the Create a WAF policy page.
-17. Select **+ Associate a Front Door profile**.
-18. For **Front Door**, select your Front Door name associated with Azure AD B2C custom domain.
-19. For **Domains**, select the Azure AD B2C custom domains to associate the WAF policy to.
-20. Select **Add**.
-21. Select **Review + create**.
-22. Select **Create**.
+1. Select **Create a resource**.
+1. Search for Azure WAF.
+1. Select **Web Application Firewall (WAF)** from Microsoft.
+1. Select **Create**.
+1. Go to the **Create a WAF policy** page.
+1. Select the **Basics** tab.
+1. For **Policy for**, select **Global WAF (Front Door)**.
+1. For **Front Door SKU**, select the **Premium** SKU.
+1. For **Subscription**, select your Front Door subscription name.
+1. For **Resource group**, select your Front Door resource group name.
+1. For **Policy name**, enter a unique name for your WAF policy.
+1. For **Policy state**, select **Enabled**.
+1. For **Policy mode**, select **Detection**.
+1. Go to the **Association** tab of the Create a WAF policy page.
+1. Select **+ Associate a Front Door profile**.
+1. For **Front Door**, select the Front Door profile associated with your Azure AD B2C custom domain.
+1. For **Domains**, select the Azure AD B2C custom domains to associate the WAF policy to.
+1. Select **Add**.
+1. Select **Review + create**.
+1. Select **Create**.
+
+### Default Ruleset
+
+When you create a new WAF policy for Azure Front Door, it automatically deploys with the latest version of the Azure-managed default ruleset (DRS). This ruleset protects web applications from common vulnerabilities and exploits. Azure-managed rulesets provide an easy way to deploy protection against a common set of security threats. Because Azure manages these rulesets, the rules are updated as needed to protect against new attack signatures. The DRS includes the Microsoft Threat Intelligence Collection rules that are written in partnership with the Microsoft Threat Intelligence team to provide increased coverage, patches for specific vulnerabilities, and better false positive reduction.
+
+Learn more: [Azure Web Application Firewall DRS rule groups and rules](../web-application-firewall/afds/waf-front-door-drs.md#default-rule-sets)
+
+### Bot Manager Ruleset
+
+By default, the Azure Front Door WAF deploys with the latest version of Azure-managed Bot Manager ruleset. This ruleset categorizes bot traffic into good, bad, and unknown bots. The bot signatures behind this ruleset are managed by the WAF platform and are updated dynamically.
+
+Learn more: [What is Azure Web Application Firewall on Azure Front Door?](../web-application-firewall/afds/afds-overview.md#bot-protection-rule-set)
+
+### Rate Limiting
+
+Rate limiting enables you to detect and block abnormally high levels of traffic from any socket IP address. By using Azure WAF in Azure Front Door, you can mitigate some types of denial-of-service attacks. Rate limiting protects you against clients that were accidentally misconfigured to send large volumes of requests in a short time period. Rate limiting must be configured manually on the WAF using custom rules.
+
+Learn more:
+- [Web application firewall rate limiting for Azure Front Door](../web-application-firewall/afds/waf-front-door-rate-limit.md)
+- [Configure a WAF rate-limit rule for Azure Front Door](../web-application-firewall/afds/waf-front-door-rate-limit-configure.md)
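
For example, once diagnostic logging is enabled, a Log Analytics query similar to the following sketch shows how often a custom rate-limit rule matched in the past 24 hours. The rule name `RateLimitRule` is a hypothetical placeholder for whatever you named your custom rule:

```kusto
AzureDiagnostics
| where TimeGenerated >= ago(24h)
| where Category == "FrontDoorWebApplicationFirewallLog"
| where ruleName_s == "RateLimitRule"
| summarize Hits = count() by Action = action_s, bin(TimeGenerated, 1h)
```
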
### Detection and Prevention modes
-When you create WAF policy, the policy is in Detection mode. We recommend you don't disable Detection mode. In this mode, WAF doesn't block requests. Instead, requests that match the WAF rules are logged in the WAF logs.
+When you create a WAF policy, the policy starts in **Detection mode**. We recommend you leave the WAF policy in **Detection mode** while you tune the WAF for your traffic. In this mode, WAF doesn't block requests. Instead, requests that match the WAF rules are logged by the WAF once logging is enabled.
+
+Enable logging: [Azure Web Application Firewall monitoring and logging](../web-application-firewall/afds/waf-front-door-monitor.md#logs-and-diagnostics)
-Learn more: [Azure Web Application Firewall monitoring and logging](../web-application-firewall/afds/waf-front-door-monitor.md)
+Once logging is enabled and your WAF starts receiving request traffic, you can begin tuning your WAF by reviewing the logs.
+
+Learn more: [Tune Azure Web Application Firewall for Azure Front Door](../web-application-firewall/afds/waf-front-door-tuning.md)
The following query shows the requests blocked by the WAF policy in the past 24 hours. The details include the rule name, request data, action taken by the policy, and the policy mode.
-
- ![Screenshot of blocked requests.](./media/partner-web-application-firewall/blocked-requests-query.png)
- ![Screenshot of blocked requests details, such as Rule ID, Action, Mode, etc.](./media/partner-web-application-firewall/blocked-requests-details.png)
+```kusto
+AzureDiagnostics
+| where TimeGenerated >= ago(24h)
+| where Category == "FrontDoorWebApplicationFirewallLog"
+| where action_s == "Block"
+| project RuleID=ruleName_s, DetailMsg=details_msg_s, Action=action_s, Mode=policyMode_s, DetailData=details_data_s
+```
+
+|RuleID|DetailMsg|Action|Mode|DetailData|
+|--|--|--|--|--|
+|DefaultRuleSet-1.0-SQLI-942430|Restricted SQL Character Anomaly Detection (args): # of special characters exceeded (12)|Block|detection|Matched Data: CfDJ8KQ8bY6D|
Review the WAF logs to determine whether policy rules cause false positives. Then, exclude the affected WAF rules based on the WAF logs.
-Learn more: [Define exclusion rules based on Web Application Firewall logs](../web-application-firewall/afds/waf-front-door-exclusion.md#define-exclusion-based-on-web-application-firewall-logs)
+Learn more:
+- [Configure WAF exclusion lists for Azure Front Door](../web-application-firewall/afds/waf-front-door-exclusion-configure.md)
+- [Web application firewall exclusion lists in Azure Front Door](../web-application-firewall/afds/waf-front-door-exclusion.md)
+
+Once logging is set up and your WAF is receiving traffic, you can assess the effectiveness of your bot manager rules in handling bot traffic. The following query shows the actions taken by your bot manager ruleset, categorized by bot type. While in **Detection mode**, the WAF logs bot traffic actions only. However, once switched to **Prevention mode**, the WAF begins actively blocking unwanted bot traffic.
+
+```kusto
+AzureDiagnostics
+| where Category == "FrontDoorWebApplicationFirewallLog"
+| where action_s in ("Log", "Allow", "Block", "JSChallenge", "Redirect") and ruleName_s contains "BotManager"
+| extend RuleGroup = extract("Microsoft_BotManagerRuleSet-[\\d\\.]+-(.*?)-Bot\\d+", 1, ruleName_s)
+| extend RuleGroupAction = strcat(RuleGroup, " - ", action_s)
+| summarize Hits = count() by RuleGroupAction, bin(TimeGenerated, 30m)
+| project TimeGenerated, RuleGroupAction, Hits
+| render columnchart kind=stacked
+```
#### Switching modes
-To see WAF operating, select **Switch to prevention mode**, which changes the mode from Detection to Prevention. Requests that match the rules in the DRS are blocked and logged in the WAF logs.
-
- ![Screenshot of options and selections for DefaultRuleSet under Web Application Firewall policies.](./media/partner-web-application-firewall/switch-to-prevention-mode.png)
+To see the WAF take action on request traffic, select **Switch to prevention mode** from the Overview page, which changes the mode from Detection to Prevention. The WAF takes the prescribed action when a request matches one or more rules in the DRS, blocking the request and logging the result in the WAF logs. By default, the DRS is set to anomaly scoring mode, which means that the WAF doesn't take any action on a request unless the anomaly score threshold is met.
-To revert to Detection mode, select **Switch to detection mode**.
+Learn more about anomaly scoring: [Azure Web Application Firewall DRS rule groups and rules](../web-application-firewall/afds/waf-front-door-drs.md#anomaly-scoring-mode)
- ![Screenshot of DefaultRuleSet with Switch to detection mode.](./media/partner-web-application-firewall/switch-to-detection-mode.png)
+To revert to **Detection mode**, select **Switch to detection mode** from the Overview page.
## Next steps
-* [Azure Web Application Firewall monitoring and logging](../web-application-firewall/afds/waf-front-door-monitor.md)
-* [Web Application Firewall (WAF) with Front Door exclusion lists](../web-application-firewall/afds/waf-front-door-exclusion.md)
+- [Best practices for Azure Web Application Firewall in Azure Front Door](../web-application-firewall/afds/waf-front-door-best-practices.md)
+- [Manage Web Application Firewall policies](../firewall-manager/manage-web-application-firewall-policies.md)
+- [Tune Azure Web Application Firewall for Azure Front Door](../web-application-firewall/afds/waf-front-door-tuning.md)
api-management Add Api Manually https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/add-api-manually.md
Test the operation in the Azure portal. You can also test it in the **Developer portal**.
This section shows how to add a wildcard operation. A wildcard operation lets you pass an arbitrary value with an API request. Instead of creating separate GET operations as shown in the previous sections, you could create a wildcard GET operation.

> [!CAUTION]
-> Use care when configuring a wildcard operation. This configuration may make an API more vulnerable to certain [API security threats](mitigate-owasp-api-threats.md#improper-assets-management).
+> Use care when configuring a wildcard operation. This configuration may make an API more vulnerable to certain [API security threats](mitigate-owasp-api-threats.md#improper-inventory-management).
### Add the operation
api-management Api Management Howto Api Inspector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-api-inspector.md
Previously updated : 05/05/2024 Last updated : 11/04/2024
In this tutorial, you learn how to:
## Trace a call in the portal
+Follow these steps to trace an API request in the test console in the portal. This example assumes that you [imported](import-and-publish.md) a sample API in a previous tutorial. You can follow similar steps with a different API that you imported.
+ 1. Sign in to the [Azure portal](https://portal.azure.com), and navigate to your API Management instance.
-1. Select **APIs**.
-1. Select **Demo Conference API** from your API list.
+1. Select **APIs** > **APIs**.
+1. Select **Petstore API** from your API list.
1. Select the **Test** tab.
-1. Select the **GetSpeakers** operation.
+1. Select the **Find pet by ID** operation.
+1. For the *petId* **Query parameter**, enter *1*.
1. Optionally check the value for the **Ocp-Apim-Subscription-Key** header used in the request by selecting the "eye" icon.

    > [!TIP]
    > You can override the value of **Ocp-Apim-Subscription-Key** by retrieving a key for another subscription in the portal. Select **Subscriptions**, and open the context menu (**...**) for another subscription. Select **Show/hide keys** and copy one of the keys. You can also regenerate keys if needed. Then, in the test console, select **+ Add header** to add an **Ocp-Apim-Subscription-Key** header with the new key value.
In this tutorial, you learn how to:
:::image type="content" source="media/api-management-howto-api-inspector/response-trace-1.png" alt-text="Review response trace":::
- * **Inbound** - Shows the original request API Management received from the caller and the policies applied to the request. For example, if you added policies in [Tutorial: Transform and protect your API](transform-api.md), they'll appear here.
+ * **Inbound** - Shows the original request API Management received from the caller and the policies applied to the request. For example, if you added policies in [Tutorial: Transform and protect your API](transform-api.md), they appear here.
* **Backend** - Shows the requests API Management sent to the API backend and the response it received.
In this tutorial, you learn how to:
## Enable tracing for an API
-You can enable tracing for an API when making requests to API Management using `curl`, a REST client such as Visual Studio Code with the REST Client extension, or a client app.
+The following high-level steps are required to enable tracing for a request to API Management when using `curl`, a REST client such as Visual Studio Code with the REST Client extension, or a client app. Currently, these steps must be performed by using the [API Management REST API](/rest/api/apimanagement):
+
+1. Obtain a token credential for tracing.
+1. Add the token value in an `Apim-Debug-Authorization` request header to the API Management gateway.
+1. Obtain a trace ID in the `Apim-Trace-Id` response header.
+1. Retrieve the trace corresponding to the trace ID.
-Enable tracing by the following steps using calls to the API Management REST API.
+Detailed steps follow.
> [!NOTE]
-> The following steps require API Management REST API version 2023-05-01-preview or later. You must be assigned the Contributor or higher role on the API Management instance to call the REST API.
+> * These steps require API Management REST API version 2023-05-01-preview or later. You must be assigned the Contributor or higher role on the API Management instance to call the REST API.
+> * For information about authenticating to the REST API, see [Azure REST API reference](/rest/api/azure).
-1. Obtain trace credentials by calling the [List debug credentials](/rest/api/apimanagement/gateway/list-debug-credentials) API. Pass the gateway ID in the URI, or use "managed" for the instance's managed gateway in the cloud. For example, to obtain trace credentials for the managed gateway, use a call similar to the following:
+1. **Obtain a token credential** - Call the API Management gateway's [List debug credentials](/rest/api/apimanagement/gateway/list-debug-credentials) API. In the URI, enter "managed" for the instance's managed gateway in the cloud, or the gateway ID for a self-hosted gateway. For example, to obtain trace credentials for the instance's managed gateway, use a request similar to the following:
    ```http
    POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ApiManagement/service/{serviceName}/gateways/managed/listDebugCredentials?api-version=2023-05-01-preview
    ```
- In the request body, pass the full resource ID of the API that you want to trace, and specify `purposes` as `tracing`. By default the token credential returned in the response expires after 1 hour, but you can specify a different value in the payload.
+ In the request body, pass the full resource ID of the API that you want to trace, and specify `purposes` as `tracing`. By default the token credential returned in the response expires after 1 hour, but you can specify a different value in the payload. For example:
    ```json
    {
        "credentialsExpireAfter": "PT1H",
- "apiId": "<API resource ID>",
+ "apiId": ""/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ApiManagement/service/{serviceName}/apis/{apiName}",
"purposes": ["tracing"] } ```
Enable tracing by the following steps using calls to the API Management REST API
    ```json
    {
- "token": "aid=api-name&p=tracing&ex=......."
+ "token": "aid=api-name&......."
    }
    ```
-1. To enable tracing for a request to the API Management gateway, send the token value in an `Apim-Debug-Authorization` header. For example, to trace a call to the demo conference API, use a call similar to the following:
+1. **Add the token value in a request header** - To enable tracing for a request to the API Management gateway, send the token value in an `Apim-Debug-Authorization` header. For example, to trace a call to the Petstore API that you imported in a previous tutorial, you might use a request similar to the following:
```bash
- curl -v GET https://apim-hello-world.azure-api.net/conference/speakers HTTP/1.1 -H "Ocp-Apim-Subscription-Key: <subscription-key>" -H "Apim-Debug-Authorization: aid=api-name&p=tracing&ex=......."
+ curl -v https://apim-hello-world.azure-api.net/pet/1 -H "Ocp-Apim-Subscription-Key: <subscription-key>" -H "Apim-Debug-Authorization: aid=api-name&......."
```
-1. Depending on the token, the response contains different headers:
- * If the token is valid, the response includes an `Apim-Trace-Id` header whose value is the trace ID.
+
+1. Depending on the token, the response contains one of the following headers:
+ * If the token is valid, the response includes an `Apim-Trace-Id` header whose value is the trace ID, similar to the following:
+
+ ```http
+ Apim-Trace-Id: 0123456789abcdef....
+ ```
+
* If the token is expired, the response includes an `Apim-Debug-Authorization-Expired` header with information about expiration date.
- * If the token was obtained for wrong API, the response includes an `Apim-Debug-Authorization-WrongAPI` header with an error message.
+ * If the token was obtained for a different API, the response includes an `Apim-Debug-Authorization-WrongAPI` header with an error message.
-1. To retrieve the trace, pass the trace ID obtained in the previous step to the [List trace](/rest/api/apimanagement/gateway/list-trace) API for the gateway. For example, to retrieve the trace for the managed gateway, use a call similar to the following:
+1. **Retrieve the trace** - Pass the trace ID obtained in the previous step to the gateway's [List trace](/rest/api/apimanagement/gateway/list-trace) API. For example, to retrieve the trace for the managed gateway, use a request similar to the following:
    ```http
    POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ApiManagement/service/{serviceName}/gateways/managed/listTrace?api-version=2023-05-01-preview
    ```
Enable tracing by the following steps using calls to the API Management REST API
    ```json
    {
- "traceId": "<trace ID>"
+ "traceId": "0123456789abcdef...."
    }
    ```
For information about customizing trace information, see the [trace](trace-policy.md) policy.
In this tutorial, you learned how to: > [!div class="checklist"]
-> * Trace an example call
+> * Trace an example call in the test console
> * Review request processing steps
> * Enable tracing for an API
api-management Mitigate Owasp Api Threats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/mitigate-owasp-api-threats.md
description: Learn how to protect against common API-based vulnerabilities, as identified in the OWASP API Security Top 10.
Previously updated : 04/13/2023 Last updated : 10/29/2024
[!INCLUDE [api-management-availability-all-tiers](../../includes/api-management-availability-all-tiers.md)]
+> [!NOTE]
+> This article has been updated to reflect the latest OWASP API Security Top 10 list for 2023.
+ The Open Web Application Security Project ([OWASP](https://owasp.org/about/)) Foundation works to improve software security through its community-led open source software projects, hundreds of chapters worldwide, tens of thousands of members, and by hosting local and global conferences.
-The OWASP [API Security Project](https://owasp.org/www-project-api-security/) focuses on strategies and solutions to understand and mitigate the unique *vulnerabilities and security risks of APIs*. In this article, we'll discuss recommendations to use Azure API Management to mitigate the top 10 API threats identified by OWASP.
+The OWASP [API Security Project](https://owasp.org/www-project-api-security/) focuses on strategies and solutions to understand and mitigate the unique *vulnerabilities and security risks of APIs*. In this article, we discuss recommendations for using Azure API Management to mitigate the top 10 API threats identified in the OWASP *2023* list.
-> [!NOTE]
-> In addition to following the recommendations in this article, you can enable [Defender for APIs](/azure/defender-for-cloud/defender-for-apis-introduction), a capability of [Microsoft Defender for Cloud](/azure/defender-for-cloud/defender-for-cloud-introduction), for API security insights, recommendations, and threat detection. [Learn more about using Defender for APIs with API Management](protect-with-defender-for-apis.md)
+Even though API Management provides comprehensive controls for API security, other Microsoft services provide complementary functionality to detect or protect against OWASP API threats:
+
+- [Defender for APIs](/azure/defender-for-cloud/defender-for-apis-introduction), a capability of [Microsoft Defender for Cloud](/azure/defender-for-cloud/defender-for-cloud-introduction) [that integrates natively with API Management](/azure/api-management/protect-with-defender-for-apis), provides API security insights, recommendations, and threat detection. [Learn how to protect against OWASP API threats with Defender for APIs](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/protect-against-owasp-api-top-10-security-risks-using-defender/ba-p/4093913).
+- [Azure API Center](/azure/api-center/overview) centralizes management and governance of the organization-wide API inventory.
+- [Azure Front Door](/azure/frontdoor/front-door-overview), [Azure Application Gateway](/azure/application-gateway/overview), and [Azure Web Application Firewall](/azure/web-application-firewall/overview) provide protection against traditional web application threats and bots.
+- [Azure DDoS Protection](/azure/ddos-protection/ddos-protection-overview) helps detect and mitigate DDoS attacks.
+- Azure networking services allow for restricting public access to APIs, thus reducing the attack surface.
+- [Azure Monitor](/azure/azure-monitor/overview) and [Log Analytics](/azure/azure-monitor/logs/log-analytics-overview) provide actionable metrics and logs for investigating threats.
+- [Azure Key Vault](/azure/key-vault/general/overview) allows for secure storage of certificates and secrets used in API Management.
+- [Microsoft Entra](/entra/fundamentals/what-is-entra) provides advanced methods of identity management and authentication and authorization of requests in API Management.
## Broken object level authorization

API objects that aren't protected with the appropriate level of authorization may be vulnerable to data leaks and unauthorized data manipulation through weak object access identifiers. For example, an attacker could exploit an integer object identifier, which can be iterated.
-More information about this threat: [API1:2019 Broken Object Level Authorization](https://github.com/OWASP/API-Security/blob/master/editions/2023/en/0xa1-broken-object-level-authorization.md)
+More information about this threat: [API1:2023 Broken Object Level Authorization](https://github.com/OWASP/API-Security/blob/master/editions/2023/en/0xa1-broken-object-level-authorization.md)
### Recommendations

* The best place to implement object level authorization is within the backend API itself. At the backend, the correct authorization decisions can be made at the request (or object) level, where applicable, using logic applicable to the domain and API. Consider scenarios where a given request may yield differing levels of detail in the response, depending on the requestor's permissions and authorization.
* If a current vulnerable API can't be changed at the backend, then API Management could be used as a fallback. For example:
-
- * Use a custom policy to implement object-level authorization, if it's not implemented in the backend.
-
- * Implement a custom policy to map identifiers from request to backend and from backend to client, so that internal identifiers aren't exposed.
-
- In these cases, the custom policy could be a [policy expression](api-management-policy-expressions.md) with a look-up (for example, a dictionary) or integration with another service through the [send request](send-request-policy.md) policy.
-
-* For GraphQL scenarios, enforce object-level authorization through the [validate GraphQL request](validate-graphql-request-policy.md) policy, using the `authorize` element.
-
-## Broken user authentication
-
-Authentication mechanisms are often implemented incorrectly or missing, allowing attackers to exploit implementation flaws to access data.
-
-More information about this threat: [API2:2019 Broken User Authentication](https://github.com/OWASP/API-Security/blob/master/editions/2019/en/0xa2-broken-user-authentication.md)
-
-### Recommendations
-
-Use API Management for user authentication and authorization:
-
-* **Authentication** - API Management supports the following [authentication methods](api-management-policies.md#authentication-and-authorization):
-
- * [Basic authentication](authentication-basic-policy.md) policy - Username and password credentials.
-
- * [Subscription key](api-management-subscriptions.md) - A subscription key provides a similar level of security as basic authentication and may not be sufficient alone. If the subscription key is compromised, an attacker may get unlimited access to the system.
-
- * [Client certificate](authentication-certificate-policy.md) policy - Using client certificates is more secure than basic credentials or subscription key, but it doesn't allow the flexibility provided by token-based authorization protocols such as OAuth 2.0.
-
-* **Authorization** - API Management supports a [validate JWT](validate-jwt-policy.md) policy to check the validity of an incoming OAuth 2.0 JWT access token based on information obtained from the OAuth identity provider's metadata endpoint. Configure the policy to check relevant token claims, audience, and expiration time. Learn more about protecting an API using [OAuth 2.0 authorization and Microsoft Entra ID](api-management-howto-protect-backend-with-aad.md).
-
-More recommendations:
-
-* Use policies in API Management to increase security. For example, [call rate limiting](rate-limit-policy.md) slows down bad actors using brute force attacks to compromise credentials.
-
-* APIs should use TLS/SSL (transport security) to protect the credentials or tokens. Credentials and tokens should be sent in request headers and not as query parameters.
-
-* In the API Management [developer portal](api-management-howto-developer-portal.md), configure [Microsoft Entra ID](api-management-howto-aad.md) or [Azure Active Directory B2C](api-management-howto-aad-b2c.md) as the identity provider to increase the account security. The developer portal uses CAPTCHA to mitigate brute force attacks.
-
-### Related information
-
-* [Authentication vs. authorization](../active-directory/develop/authentication-vs-authorization.md)
-
-## Excessive data exposure
-
-Good API interface design is deceptively challenging. Often, particularly with legacy APIs that have evolved over time, the request and response interfaces contain more data fields than the consuming applications require.
-
-A bad actor could attempt to access the API directly (perhaps by replaying a valid request), or sniff the traffic between server and API. Analysis of the API actions and the data available could yield sensitive data to the attacker, which isn't surfaced to, or used by, the frontend application.
+
+ * Use a custom policy to implement object-level authorization, if it's not implemented in the backend.
+ * Implement a custom policy to map identifiers from request to backend and from backend to client, so that internal identifiers aren't exposed.
-More information about this threat: [API3:2019 Excessive Data Exposure](https://github.com/OWASP/API-Security/blob/master/editions/2019/en/0xa3-excessive-data-exposure.md)
+ In these cases, the custom policy could be a [policy expression](api-management-policy-expressions.md) with a look-up (for example, a dictionary) or integration with another service through the [send-request](send-request-policy.md) policy.
-### Recommendations
-
-* The best approach to mitigating this vulnerability is to ensure that the external interfaces defined at the backend API are designed carefully and, ideally, independently of the data persistence. They should contain only the fields required by consumers of the API. APIs should be reviewed frequently, and legacy fields deprecated, then removed.
-
- In API Management, use:
- * [Revisions](api-management-revisions.md) to gracefully control nonbreaking changes, for example, the addition of a field to an interface. You may use revisions along with a versioning implementation at the backend.
-
- * [Versions](api-management-versions.md) for breaking changes, for example, the removal of a field from an interface.
-
-* If it's not possible to alter the backend interface design and excessive data is a concern, use API Management [transformation policies](api-management-policies.md#transformation) to rewrite response payloads and mask or filter data. For example, [remove unneeded JSON properties](./policies/filter-response-content.md) from a response body.
-
-* [Response content validation](validate-content-policy.md) in API Management can be used with an XML or JSON schema to block responses with undocumented properties or improper values. The policy also supports blocking responses exceeding a specified size.
-
-* Use the [validate status code](validate-status-code-policy.md) policy to block responses with errors undefined in the API schema.
+* For GraphQL scenarios, enforce object-level authorization through the [validate-graphql-request](validate-graphql-request-policy.md) policy, using the `authorize` element.
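
For example, the following policy fragment is a minimal sketch of the identifier-mapping approach described above. The `orderId` template parameter, the mapping values, and the backend path are hypothetical; a production implementation would more likely resolve the mapping from a cache or an external service through `send-request` rather than an inline dictionary:

```xml
<inbound>
    <base />
    <!-- Translate the caller-facing identifier into the internal identifier -->
    <set-variable name="internalId" value="@{
        var map = new Dictionary<string, string> {
            { "ext-001", "42" },  // hypothetical external-to-internal pairs
            { "ext-002", "57" }
        };
        string internalId;
        return map.TryGetValue(context.Request.MatchedParameters["orderId"], out internalId)
            ? internalId : null;
    }" />
    <choose>
        <!-- Reject identifiers that don't map to a known object -->
        <when condition="@(context.Variables.GetValueOrDefault<string>("internalId") == null)">
            <return-response>
                <set-status code="404" reason="Not Found" />
            </return-response>
        </when>
    </choose>
    <rewrite-uri template="@("/orders/" + context.Variables.GetValueOrDefault<string>("internalId"))" />
</inbound>
```
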
-* Use the [validate headers](validate-headers-policy.md) policy to block responses with headers that aren't defined in the schema or don't comply to their definition in the schema. Remove unwanted headers with the [set header](set-header-policy.md) policy.
+## Broken authentication
-* For GraphQL scenarios, use the [validate GraphQL request](validate-graphql-request-policy.md) policy to validate GraphQL requests, authorize access to specific query paths, and limit response size.
+The authentication mechanism for a site or API is especially vulnerable because it's open to anonymous users. Assets and endpoints required for authentication, including forgotten password or reset password flows, should be protected to prevent exploitation.
-## Lack of resources and rate limiting
+More information about this threat: [API2:2023 Broken Authentication](https://owasp.org/API-Security/editions/2023/en/0xa2-broken-authentication/)
-Lack of rate limiting may lead to data exfiltration or successful DDoS attacks on backend services, causing an outage for all consumers.
-
-More information about this threat: [API4:2019 Lack of resources and rate limiting](https://github.com/OWASP/API-Security/blob/master/editions/2019/en/0xa4-lack-of-resources-and-rate-limiting.md)
-
-### Recommendations
+### Recommendations
-* Use [rate limit](rate-limit-policy.md) (short-term) and [quota limit](quota-policy.md) (long-term) policies to control the allowed number of API calls or bandwidth per consumer.
+- Use Microsoft Entra to implement [API authentication](/azure/api-management/authentication-authorization-overview). Microsoft Entra automatically provides protected, resilient, and geographically distributed login endpoints. Use the [validate-azure-ad-token](validate-azure-ad-token-policy.md) policy to validate Microsoft Entra tokens in incoming API requests.
+- Where authentication is required, API Management supports [validation of OAuth 2 tokens](/azure/api-management/authentication-authorization-overview), [basic authentication](/azure/api-management/authentication-basic-policy), [client certificates](/azure/api-management/api-management-howto-mutual-certificates-for-clients), and API keys.
+ - Ensure proper configuration of authentication methods. For example, set `require-expiration-time` and `require-signed-tokens` to `true` when validating OAuth2 tokens using the [validate-jwt](/azure/api-management/validate-jwt-policy) policy.
+- [Rate limiting](/azure/api-management/api-management-sample-flexible-throttling) can be utilized to reduce the effectiveness of brute force attacks.
+- [Client IP filtering](/azure/api-management/ip-filter-policy) can be used to reduce the attack surface area. Network security groups can be applied to virtual networks [integrated with API Management](/azure/api-management/virtual-network-concepts).
+- If possible, authenticate to backends from API Management through secure protocols, using [managed identity](/azure/api-management/api-management-howto-use-managed-service-identity) or [credential manager](/azure/api-management/credentials-overview).
+- Ensure tokens or keys are passed in headers and not URLs for inbound requests to API Management and outbound requests to backends.
+- Use Microsoft Entra to [secure access](/azure/api-management/api-management-howto-aad) to the API Management developer portal.
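
For example, a minimal `validate-jwt` sketch that enforces signed, unexpired tokens; the `{tenant}` placeholder in the OpenID configuration URL and the audience value are assumptions to replace with your own:

```xml
<validate-jwt header-name="Authorization" failed-validation-httpcode="401"
        require-expiration-time="true" require-signed-tokens="true">
    <!-- Signing keys and issuer are resolved from the identity provider's metadata -->
    <openid-config url="https://login.microsoftonline.com/{tenant}/v2.0/.well-known/openid-configuration" />
    <audiences>
        <audience>api://00001111-aaaa-2222-bbbb-3333cccc4444</audience>
    </audiences>
</validate-jwt>
```
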
-* Define strict request object definitions and their properties in the OpenAPI definition. For example, define the max value for paging integers, maxLength and regular expression (regex) for strings. Enforce those schemas with the [validate content](validate-content-policy.md) and [validate parameters](validate-parameters-policy.md) policies in API Management.
+## Broken object property level authorization
-* Enforce maximum size of the request with the [validate content](validate-content-policy.md) policy.
+Good API interface design is deceptively challenging. Often, particularly with legacy APIs that have evolved over time, the request and response interfaces contain more data fields than the consuming applications require, enabling data injection attacks. Attackers may also discover undocumented interfaces. These vulnerabilities could yield sensitive data to the attacker.
-* Optimize performance with [built-in caching](api-management-howto-cache.md), thus reducing the consumption of CPU, memory, and networking resources for certain operations.
+More information about this threat: [API3:2023 Broken Object Property Level Authorization](https://owasp.org/API-Security/editions/2023/en/0xa3-broken-object-property-level-authorization/)
-* Enforce authentication for API calls (see [Broken user authentication](#broken-user-authentication)). Revoke access for abusive users. For example, deactivate the subscription key, block the IP address with the [restrict caller IPs](ip-filter-policy.md) policy, or reject requests for a certain user claim from a [JWT token](validate-jwt-policy.md).
-* Apply a [CORS](cors-policy.md) policy to control the websites that are allowed to load the resources served through the API. To avoid overly permissive configurations, don't use wildcard values (`*`) in the CORS policy.
+### Recommendations
-* Minimize the time it takes a backend service to respond. The longer the backend service takes to respond, the longer the connection is occupied in API Management, therefore reducing the number of requests that can be served in a given timeframe.
+- The best approach to mitigating this vulnerability is to ensure that the external interfaces defined at the backend API are designed carefully and, ideally, independently of the data persistence. They should contain only the fields required by consumers of the API. APIs should be reviewed frequently, and legacy fields deprecated, then removed.
+- In API Management, use [revisions](/azure/api-management/api-management-revisions) to gracefully control nonbreaking changes, for example, the addition of a field to an interface, and [versions](/azure/api-management/api-management-versions) to implement breaking changes. You should also version backend interfaces, which typically have a different lifecycle than consumer-facing APIs.
+- Decouple external API interfaces from the internal data implementation. Avoid binding API contracts directly to data contracts in backend services.
+- If it's not possible to alter the backend interface design and excessive data is a concern, use API Management [transformation policies](/azure/api-management/api-management-policies#transformation) to rewrite response payloads and mask or filter data. [Content validation](/azure/api-management/validate-content-policy) in API Management can be used with an XML or JSON schema to block responses with undocumented properties or improper values. For example, [remove unneeded JSON properties](/azure/api-management/policies/filter-response-content) from a response body. Blocking requests with undocumented properties mitigates attacks, while blocking responses with undocumented properties makes it harder to reverse-engineer potential attack vectors. The [validate-content](/azure/api-management/validate-content-policy) policy also supports blocking responses exceeding a specified size.
+- Use the [validate-status-code](/azure/api-management/validate-status-code-policy) policy to block responses with errors undefined in the API schema.
+- Use the [validate-headers](/azure/api-management/validate-headers-policy) policy to block responses with headers that aren't defined in the schema or don't comply to their definition in the schema. Remove unwanted headers with the [set-header](/azure/api-management/set-header-policy) policy.
+- For GraphQL scenarios, use the [validate-graphql-request](/azure/api-management/validate-graphql-request-policy) policy to validate GraphQL requests, authorize access to specific query paths, and limit response size.
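
As an illustration, the following `validate-content` sketch rejects oversized bodies and blocks JSON payloads with undocumented properties. The `order-schema` identifier is a hypothetical schema added to the API Management instance; place the policy in the `inbound` section to validate requests, or in `outbound` to validate responses:

```xml
<validate-content unspecified-content-type-action="prevent" max-size="102400"
        size-exceeded-action="prevent" errors-variable-name="validationErrors">
    <!-- Validate JSON bodies against a schema and reject extra properties -->
    <content type="application/json" validate-as="json" action="prevent"
            schema-id="order-schema" allow-additional-properties="false" />
</validate-content>
```
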
- * Define `timeout` in the [forward request](forward-request-policy.md) policy.
+## Unrestricted resource consumption
- * Use the [validate GraphQL request](validate-graphql-request-policy.md) policy for GraphQL APIs and configure `max-depth` and `max-size` parameters.
+APIs require resources to run, like memory or CPU, and may include downstream integrations that represent an operating cost (for example, pay-per-request services). Applying limits can help protect APIs from excessive resource consumption.
- * Limit the number of parallel backend connections with the [limit concurrency](limit-concurrency-policy.md) policy.
+More information about this threat: [API4:2023 Unrestricted Resource Consumption](https://owasp.org/API-Security/editions/2023/en/0xa4-unrestricted-resource-consumption/)
-* While API Management can protect backend services from DDoS attacks, it may be vulnerable to those attacks itself. Deploy a bot protection service in front of API Management (for example, [Azure Application Gateway](api-management-howto-integrate-internal-vnet-appgateway.md), [Azure Front Door](front-door-api-management.md), or [Azure DDoS Protection](protect-with-ddos-protection.md)) to better protect against DDoS attacks. When using a WAF with Azure Application Gateway or Azure Front Door, consider using [Microsoft_BotManagerRuleSet_1.0](../web-application-firewall/afds/afds-overview.md#bot-protection-rule-set).
+### Recommendations
+- Use [rate-limit-by-key](/azure/api-management/rate-limit-by-key-policy) or [rate-limit](/azure/api-management/rate-limit-policy) policies to apply throttling on shorter time windows. Apply stricter rate-limiting policies on sensitive endpoints, like password reset, sign-in, or sign-up operations, or endpoints that consume significant resources.
+- Use [quota-by-key](/azure/api-management/quota-by-key-policy) or [quota](/azure/api-management/quota-policy) policies to control the allowed number of API calls or bandwidth for longer time frames.
+- Optimize performance with [built-in caching](/azure/api-management/api-management-howto-cache), thus reducing the consumption of CPU, memory, and networking resources for certain operations.
+- Apply validation policies.
+ - Use the `max-size` attribute in the [validate-content](/azure/api-management/validate-content-policy) policy to enforce maximum size of requests and responses
+ - Define schemas and properties, such as string length or maximum array size, in the API specification. Use [validate-content](validate-content-policy.md), [validate-parameters](validate-parameters-policy.md), and [validate-headers](validate-headers-policy.md) policies to enforce those schemas for requests and responses.
+ - Use the [validate-graphql-request](/azure/api-management/validate-graphql-request-policy) policy for GraphQL APIs and configure `max-depth` and `max-size` parameters.
+ - Configure alerts in Azure Monitor for excessive consumption of data by users.
+- For generative AI APIs:
+ - Use [semantic caching](/azure/api-management/azure-openai-enable-semantic-caching) to reduce load on the backends.
+ - Use [token limiting](genai-gateway-capabilities.md#token-limit-policy) to control consumption and costs.
+ - Emit [token consumption metrics](genai-gateway-capabilities.md#emit-token-metric-policy) to monitor token utilization and configure alerts.
+- Minimize the time it takes a backend service to respond. The longer the backend service takes to respond, the longer the connection is occupied in API Management, therefore reducing the number of requests that can be served in a given time frame.
+ - Define `timeout` in the [forward-request](/azure/api-management/forward-request-policy) policy and strive for the shortest acceptable value.
+ - Limit the number of parallel backend connections with the [limit-concurrency](/azure/api-management/limit-concurrency-policy) policy.
+- Apply a [CORS](/azure/api-management/cors-policy) policy to control the websites that are allowed to load the resources served through the API. To avoid overly permissive configurations, don't use wildcard values (`*`) in the CORS policy.
+- While Azure has both platform-level protection and [enhanced protection](/azure/ddos-protection/ddos-protection-overview) against distributed denial of service (DDoS) attacks, application (layer 7) protection for APIs can be improved by deploying a bot protection service in front of API Management - for example, [Azure Application Gateway](/azure/api-management/api-management-howto-integrate-internal-vnet-appgateway), [Azure Front Door](/azure/api-management/front-door-api-management), or [Azure DDoS Protection](/azure/ddos-protection/). When using a web application firewall (WAF) policy with Azure Application Gateway or Azure Front Door, consider using [Microsoft_BotManagerRuleSet_1.0](/azure/web-application-firewall/afds/afds-overview#bot-protection-rule-set).
+
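To illustrate how several of these limits combine, here's a minimal sketch; the call counts, periods, and timeout are arbitrary placeholders to tune for your traffic patterns:

```xml
<inbound>
    <base />
    <!-- Short-window throttling per subscription -->
    <rate-limit calls="100" renewal-period="60" />
    <!-- Longer-term cap on call volume, keyed by subscription ID -->
    <quota-by-key calls="10000" renewal-period="86400"
        counter-key="@(context.Subscription.Id)" />
</inbound>
<backend>
    <!-- Fail fast rather than holding connections open for a slow backend -->
    <forward-request timeout="20" />
</backend>
```
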
## Broken function level authorization
-Complex access control policies with different hierarchies, groups, and roles, and an unclear separation between administrative and regular functions lead to authorization flaws. By exploiting these issues, attackers gain access to other users' resources or administrative functions.
+Complex access control policies with different hierarchies, groups, and roles, and an unclear separation between administrative and regular functions, lead to authorization flaws. By exploiting these issues, attackers gain access to other users' resources or administrative functions.
-More information about this threat: [API5:2019 Broken function level authorization](https://github.com/OWASP/API-Security/blob/master/editions/2019/en/0xa5-broken-function-level-authorization.md)
+More information about this threat: [API5:2023 Broken function level authorization](https://owasp.org/API-Security/editions/2023/en/0xa5-broken-function-level-authorization/)
-### Recommendations
-
-* By default, protect all API endpoints in API Management with [subscription keys](api-management-subscriptions.md).
-
-* Define a [validate JWT](validate-jwt-policy.md) policy and enforce required token claims. If certain operations require stricter claims enforcement, define extra `validate-jwt` policies for those operations only.
+### Recommendations
-* Use an Azure virtual network or Private Link to hide API endpoints from the internet. Learn more about [virtual network options](virtual-network-concepts.md) with API Management.
+- By default, protect all API endpoints in API Management with [subscription keys](/azure/api-management/api-management-subscriptions) or all-APIs-level authorization policy. If applicable, define other authorization policies for specific APIs or API operations.
+- Validate OAuth tokens using policies.
+ - Use [validate-azure-ad-token](/azure/api-management/validate-azure-ad-token-policy) policy to validate Microsoft Entra tokens. Specify all required claims and, if applicable, specify authorized applications.
+ - For validating tokens not issued by Microsoft Entra, define a [validate-jwt](/azure/api-management/validate-jwt-policy) policy and enforce required token claims. If possible, require expiration time.
+ - If possible, use encrypted tokens or list specific applications for access.
+ - Monitor and review requests rejected due to lack of authorization.
+- Use an Azure virtual network or Private Link to hide API endpoints from the internet. Learn more about [virtual network options](/azure/api-management/virtual-network-concepts) with API Management.
+- Don't define [wildcard API operations](/azure/api-management/add-api-manually#add-and-test-a-wildcard-operation) (that is, "catch-all" APIs with `*` as the path). Ensure that API Management only serves requests for explicitly defined endpoints, and requests to undefined endpoints are rejected.
+- Don't publish APIs with [open products](/azure/api-management/api-management-howto-add-products#access-to-product-apis) that don't require a subscription.
+- If client IPs are known, use an [ip-filter](ip-filter-policy.md) policy to allow traffic only from authorized IP addresses.
+- Use the [validate-client-certificate](validate-client-certificate-policy.md) policy to enforce that a certificate presented by a client to an API Management instance matches specified validation rules and claims.
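
For example, a minimal `ip-filter` sketch that admits only known client addresses; the addresses shown are documentation-range placeholders:

```xml
<ip-filter action="allow">
    <address>203.0.113.10</address>
    <address-range from="203.0.113.0" to="203.0.113.127" />
</ip-filter>
```
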
-* Don't define [wildcard API operations](add-api-manually.md#add-and-test-a-wildcard-operation) (that is, "catch-all" APIs with `*` as the path). Ensure that API Management only serves requests for explicitly defined endpoints, and requests to undefined endpoints are rejected.
+## Unrestricted access to sensitive business flows
-* Don't publish APIs with [open products](api-management-howto-add-products.md#access-to-product-apis) that don't require a subscription.
+APIs can expose a wide range of functionality to the consuming application. It's important for API authors to understand the business flows the API provides and the associated sensitivity. There's a greater risk to the business if APIs exposing sensitive flows don't implement appropriate protections.
-## Mass assignment
+More information about this threat: [API6:2023 Unrestricted Access to Sensitive Business Flows](https://owasp.org/API-Security/editions/2023/en/0xa6-unrestricted-access-to-sensitive-business-flows/)
-If an API offers more fields than the client requires for a given action, an attacker may inject excessive properties to perform unauthorized operations on data. Attackers may discover undocumented properties by inspecting the format of requests and responses or other APIs, or guessing them. This vulnerability is especially applicable if you don't use strongly typed programming languages.
+### Recommendations
-More information about this threat: [API6:2019 Mass assignment](https://github.com/OWASP/API-Security/blob/master/editions/2019/en/0xa6-mass-assignment.md)
+- Reduce or block access based on client fingerprints. For example, use the [return-response](return-response-policy.md) policy with the [choose](choose-policy.md) policy to block traffic from headless browsers based on the User-Agent header or consistency of other headers.
+- Use [validate-parameters](validate-parameters-policy.md) policy to enforce that request headers match the API specification.
+- Use [ip-filter](ip-filter-policy.md) policy to allow requests only from known IP addresses or deny access from specific IPs.
+- Use private networking features to limit external connectivity to internal APIs.
+- Use [rate-limit-by-key](rate-limit-by-key-policy.md) policy to limit spikes in API consumption based on user identity, IP address, or another value.
+- Front API Management with Azure Application Gateway or Azure DDoS Protection service to detect and block bot traffic.
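
For example, a minimal sketch of a per-client limit on a sensitive operation, keyed by caller IP address; the limit values are placeholders, and the increment condition counts only successful calls:

```xml
<rate-limit-by-key calls="10" renewal-period="60"
    counter-key="@(context.Request.IpAddress)"
    increment-condition="@(context.Response.StatusCode == 200)" />
```
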
-### Recommendations
+## Server side request forgery
-* External API interfaces should be decoupled from the internal data implementation. Avoid binding API contracts directly to data contracts in backend services. Review the API design frequently, and deprecate and remove legacy properties using [versioning](api-management-versions.md) in API Management.
+A server side request forgery vulnerability could occur when the API fetches a downstream resource based on the value of a URL that was passed by the API caller without appropriate validation checks.
-* Precisely define XML and JSON contracts in the API schema and use [validate content](validate-content-policy.md) and [validate parameters](validate-parameters-policy.md) policies to block requests and responses with undocumented properties. Blocking requests with undocumented properties mitigates attacks, while blocking responses with undocumented properties makes it harder to reverse-engineer potential attack vectors.
+More information about this threat: [API7:2023 Server Side Request Forgery](https://owasp.org/API-Security/editions/2023/en/0xa7-server-side-request-forgery/)
-* If the backend interface can't be changed, use [transformation policies](api-management-policies.md#transformation) to rewrite request and response payloads and decouple the API contracts from backend contracts. For example, mask or filter data or [remove unneeded JSON properties](./policies/filter-response-content.md).
+### Recommendations
+- If possible, don't use URLs provided in the client payloads, for example, as parameters for backend URLs, [send-request](send-request-policy.md) policy, or [rewrite-url](rewrite-uri-policy.md) policy.
+- If API Management or backend services use URLs provided in the request payload for business logic, define and enforce a limited list of hostnames, ports, media types, or other attributes with policies in API Management, such as the [choose](choose-policy.md) policy and policy expressions. See the sketch after this list.
+- Define the `timeout` attribute in the [forward-request](forward-request-policy.md) and [send-request](send-request-policy.md) policies.
+- Validate and sanitize request and response data with validation policies. If needed, use the [set-body](set-body-policy.md) policy to process the response and avoid returning raw data.
+- Use private networking to restrict connectivity. For example, if the API doesn't need to be public, restrict connectivity from the internet to reduce the attack surface.
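+
+To sketch the allowlist approach, the following inbound fragment assumes a hypothetical `webhookUrl` query parameter and forwards the call only when the URL parses to an approved HTTPS host. The host name, timeout, and error response are placeholders, not a definitive implementation.
+
+```xml
+<inbound>
+    <base />
+    <choose>
+        <!-- Forward only if the caller-supplied URL targets the approved host -->
+        <when condition="@{
+            Uri uri;
+            var url = context.Request.Url.Query.GetValueOrDefault("webhookUrl", "");
+            return Uri.TryCreate(url, UriKind.Absolute, out uri)
+                && uri.Scheme == "https"
+                && uri.Host == "hooks.contoso.example";
+        }">
+            <send-request mode="new" response-variable-name="webhookResponse" timeout="10" ignore-error="false">
+                <set-url>@(context.Request.Url.Query.GetValueOrDefault("webhookUrl", ""))</set-url>
+                <set-method>POST</set-method>
+            </send-request>
+        </when>
+        <otherwise>
+            <!-- Reject anything outside the allowlist instead of fetching it -->
+            <return-response>
+                <set-status code="400" reason="Bad Request" />
+            </return-response>
+        </otherwise>
+    </choose>
+</inbound>
+```
+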
-## Security misconfiguration
+## Security misconfiguration
Attackers may attempt to exploit security misconfiguration vulnerabilities such as:
-* Missing security hardening
-* Unnecessary enabled features
-* Network connections unnecessarily open to the internet
-* Use of weak protocols or ciphers
-* Other settings or endpoints that may allow unauthorized access to the system
-
-More information about this threat: [API7:2019 Security misconfiguration](https://github.com/OWASP/API-Security/blob/master/editions/2019/en/0xa7-security-misconfiguration.md)
-
-### Recommendations
-
-* Correctly configure [gateway TLS](api-management-howto-manage-protocols-ciphers.MD). Don't use vulnerable protocols (for example, TLS 1.0, 1.1) or ciphers.
-
-* Configure APIs to accept encrypted traffic only, for example through HTTPS or WSS protocols.
-
-* Consider deploying API Management behind a [private endpoint](private-endpoint.md) or attached to a [virtual network deployed in internal mode](api-management-using-with-internal-vnet.md). In internal networks, access can be controlled from within the private network (via firewall or network security groups) and from the internet (via a reverse proxy).
+- Missing security hardening
+- Unnecessarily enabled features
+- Network connections unnecessarily open to the internet
+- Use of weak protocols or ciphers
-* Use Azure API Management policies:
-
- * Always inherit parent policies through the `<base>` tag.
-
- * When using OAuth 2.0, configure and test the [validate JWT](validate-jwt-policy.md) policy to check the existence and validity of the JWT token before it reaches the backend. Automatically check the token expiration time, token signature, and issuer. Enforce claims, audiences, token expiration, and token signature through policy settings.
-
- * Configure the [CORS](cors-policy.md) policy and don't use wildcard `*` for any configuration option. Instead, explicitly list allowed values.
-
- * Set [validation policies](api-management-policies.md#content-validation) to `prevent` in production environments to validate JSON and XML schemas, headers, query parameters, and status codes, and to enforce the maximum size for request or response.
-
- * If API Management is outside a network boundary, client IP validation is still possible using the [restrict caller IPs](ip-filter-policy.md) policy. Ensure that it uses an allowlist, not a blocklist.
-
- * If client certificates are used between caller and API Management, use the [validate client certificate](validate-client-certificate-policy.md) policy. Ensure that the `validate-revocation`, `validate-trust`, `validate-not-before`, and `validate-not-after` attributes are all set to `true`.
-
- * Client certificates (mutual TLS) can also be applied between API Management and the backend. The backend should:
-
- * Have authorization credentials configured
-
- * Validate the certificate chain where applicable
-
- * Validate the certificate name where applicable
-
-* For GraphQL scenarios, use the [validate GraphQL request](validate-graphql-request-policy.md) policy. Ensure that the `authorization` element and `max-size` and `max-depth` attributes are set.
-
-* Don't store secrets in policy files or in source control. Always use API Management [named values](api-management-howto-properties.md) or fetch the secrets at runtime using custom policy expressions.
-
- * Named values should be [integrated with Key Vault](api-management-howto-properties.md#key-vault-secrets) or encrypted within API Management by marking them "secret". Never store secrets in plain-text named values.
-
-* Publish APIs through [products](api-management-howto-add-products.md), which require subscriptions. Don't use [open products](api-management-howto-add-products.md#access-to-product-apis) that don't require a subscription.
-
-* Use Key Vault integration to manage all certificates. This centralizes certificate management and can help to ease operations management tasks such as certificate renewal or revocation.
-
-* When using the [self-hosted-gateway](self-hosted-gateway-overview.md), ensure that there's a process in place to update the image to the latest version periodically.
-
-* Represent backend services as [backend entities](backends.md). Configure authorization credentials, certificate chain validation, and certificate name validation where applicable.
-
-* When using the [developer portal](api-management-howto-developer-portal.md):
-
- * If you choose to [self-host](developer-portal-self-host.md) the developer portal, ensure there's a process in place to periodically update the self-hosted portal to the latest version. Updates for the default managed version are automatic.
-
- * Use [Microsoft Entra ID](api-management-howto-aad.md) or [Azure Active Directory B2C](api-management-howto-aad-b2c.md) for user sign-up and sign-in. Disable the default username and password authentication, which is less secure.
-
- * Assign [user groups](api-management-howto-create-groups.md#-associate-a-group-with-a-product) to products, to control the visibility of APIs in the portal.
-
-* Use [Azure Policy](security-controls-policy.md) to enforce API Management resource-level configuration and role-based access control (RBAC) permissions to control resource access. Grant minimum required privileges to every user.
-
-* Use a [DevOps process](devops-api-development-templates.md) and infrastructure-as-code approach outside of a development environment to ensure consistency of API Management content and configuration changes and to minimize human errors.
-
-* Don't use any deprecated features.
-
-## Injection
-
-Any endpoint accepting user data is potentially vulnerable to an injection exploit. Examples include, but aren't limited to:
-
-* [Command injection](https://owasp.org/www-community/attacks/Command_Injection), where a bad actor attempts to alter the API request to execute commands on the operating system hosting the API
-
-* [SQL injection](https://owasp.org/www-community/attacks/SQL_Injection), where a bad actor attempts to alter the API request to execute commands and queries against the database an API depends on
-
-More information about this threat: [API8:2019 Injection](https://github.com/OWASP/API-Security/blob/master/editions/2019/en/0xa8-injection.md)
+More information about this threat: [API8:2023 Security misconfiguration](https://owasp.org/API-Security/editions/2023/en/0xa8-security-misconfiguration/)
### Recommendations
-* [Modern Web Application Firewall (WAF) policies](https://github.com/SpiderLabs/ModSecurity) cover many common injection vulnerabilities. While API Management doesn't have a built-in WAF component, deploying a WAF upstream (in front) of the API Management instance is strongly recommended. For example, use [Azure Application Gateway](/azure/architecture/reference-architectures/apis/protect-apis) or [Azure Front Door](front-door-api-management.md).
-
- > [!IMPORTANT]
- > Ensure that a bad actor can't bypass the gateway hosting the WAF and connect directly to the API Management gateway or backend API itself. Possible mitigations include: [network ACLs](../virtual-network/network-security-groups-overview.md), using API Management policy to [restrict inbound traffic by client IP](ip-filter-policy.md), removing public access where not required, and [client certificate authentication](api-management-howto-mutual-certificates-for-clients.md) (also known as mutual TLS or mTLS).
-
-* Use schema and parameter [validation](api-management-policies.md#content-validation) policies, where applicable, to further constrain and validate the request before it reaches the backend API service.
-
- The schema supplied with the API definition should have a regex pattern constraint applied to vulnerable fields. Each regex should be tested to ensure that it constrains the field sufficiently to mitigate common injection attempts.
-
-### Related information
-
-* [Deployment stamps pattern with Azure Front Door and API Management](/azure/architecture/patterns/deployment-stamp)
-
-* [Deploy Azure API Management with Azure Application Gateway](api-management-howto-integrate-internal-vnet-appgateway.md)
-
-## Improper assets management
+- Correctly configure [gateway TLS](/azure/api-management/api-management-howto-manage-protocols-ciphers). Don't use vulnerable protocols (for example, TLS 1.0, 1.1) or ciphers.
+- Configure APIs to accept encrypted traffic only, for example through HTTPS or WSS protocols. You can audit and enforce this setting using [Azure Policy](/azure/api-management/policy-reference).
+- Consider deploying API Management behind a [private endpoint](/azure/api-management/private-endpoint) or attached to a [virtual network deployed in internal mode](/azure/api-management/api-management-using-with-internal-vnet). In internal networks, access can be controlled from within the private network (via firewall or network security groups) and from the internet (via a reverse proxy).
+- Use Azure API Management policies:
+ - Always inherit parent policies through the `<base>` tag.
+    - When using OAuth 2.0, configure and test the [validate-jwt](/azure/api-management/validate-jwt-policy) policy to check the existence and validity of the token before it reaches the backend (a minimal sketch follows this list). Automatically check the token expiration time, token signature, and issuer. Enforce claims, audiences, token expiration, and token signature through policy settings. If you use Microsoft Entra, the [validate-azure-ad-token](validate-azure-ad-token-policy.md) policy provides a more comprehensive and easier way to validate security tokens.
+ - Configure the [CORS](/azure/api-management/cors-policy) policy and don't use wildcard `*` for any configuration option. Instead, explicitly list allowed values.
+ - Set [validation policies](/azure/api-management/api-management-policies#content-validation) in production environments to validate JSON and XML schemas, headers, query parameters, and status codes, and to enforce the maximum size for request or response.
+ - If API Management is outside a network boundary, client IP validation is still possible using the [restrict caller IPs](/azure/api-management/ip-filter-policy) policy. Ensure that it uses an allowlist, not a blocklist.
+ - If client certificates are used between caller and API Management, use the [validate-client-certificate](/azure/api-management/validate-client-certificate-policy) policy. Ensure that the `validate-revocation`, `validate-trust`, `validate-not-before`, and `validate-not-after` attributes are all set to `true`.
+- Client certificates (mutual TLS) can also be applied between API Management and the backend. The backend should:
+    - Have authorization credentials configured
+    - Validate the certificate chain where applicable
+    - Validate the certificate name where applicable
+- For GraphQL scenarios, use the [validate-graphql-request](/azure/api-management/validate-graphql-request-policy) policy. Ensure that the `authorization` element and `max-size` and `max-depth` attributes are set.
+- Don't store secrets in policy files or in source control. Always use API Management [named values](/azure/api-management/api-management-howto-properties) or fetch the secrets at runtime using custom policy expressions. Named values should be [integrated with Azure Key Vault](/azure/api-management/api-management-howto-properties#key-vault-secrets) or encrypted within API Management by marking them "secret". Never store secrets in plain-text named values.
+- Publish APIs through [products](/azure/api-management/api-management-howto-add-products), which require subscriptions. Don't use [open products](/azure/api-management/api-management-howto-add-products#access-to-product-apis) that don't require a subscription.
+- Ensure that your APIs require subscription keys, even if all products are already configured to require them. [Learn more](/azure/api-management/api-management-subscriptions#how-api-management-handles-requests-with-or-without-subscription-keys).
+- Require subscription approval for all products and carefully review all subscription requests.
+- Use Key Vault integration to manage all certificates. This centralizes certificate management and can help to ease operations management tasks such as certificate renewal or revocation. Use managed identity to authenticate to key vaults.
+- When using the [self-hosted-gateway](/azure/api-management/self-hosted-gateway-overview), ensure that there's a process in place to update the image to the latest version periodically.
+- Represent backend services as [backend entities](/azure/api-management/backends). Configure authorization credentials, certificate chain validation, and certificate name validation where applicable.
+- Where possible, use credential manager or managed identity to authenticate against backend services.
+- When using the [developer portal](/azure/api-management/api-management-howto-developer-portal):
+ - If you choose to [self-host](/azure/api-management/developer-portal-self-host) the developer portal, ensure there's a process in place to periodically update the self-hosted portal to the latest version. Updates for the default managed version are automatic.
+ - Use [Microsoft Entra ID](/azure/api-management/api-management-howto-aad) or [Azure Active Directory B2C](/azure/api-management/api-management-howto-aad-b2c) for user sign-up and sign-in. Disable the default username and password authentication, which is less secure.
+    - Assign [user groups](/azure/api-management/api-management-howto-create-groups#-associate-a-group-with-a-product) to products to control the visibility of APIs in the portal.
+- Use [Azure Policy](/azure/api-management/security-controls-policy) to enforce API Management resource-level configuration and role-based access control (RBAC) permissions to control resource access. Grant minimum required privileges to every user.
+- Use a [DevOps process](/azure/api-management/devops-api-development-templates) and infrastructure-as-code approach outside of a development environment to ensure consistency of API Management content and configuration changes and to minimize human errors.
+- Don't use any deprecated features.
+
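+As an example of the token validation recommendations in this list, here's a minimal `validate-jwt` sketch that enforces signature, expiration, issuer, audience, and a required scope claim. The tenant ID, audience, and claim values are placeholders for your own configuration.
+
+```xml
+<validate-jwt header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized" require-expiration-time="true" require-signed-tokens="true">
+    <!-- The OpenID configuration endpoint supplies the signing keys; {tenant-id} is a placeholder -->
+    <openid-config url="https://login.microsoftonline.com/{tenant-id}/v2.0/.well-known/openid-configuration" />
+    <audiences>
+        <audience>api://00001111-aaaa-2222-bbbb-3333cccc4444</audience>
+    </audiences>
+    <issuers>
+        <issuer>https://login.microsoftonline.com/{tenant-id}/v2.0</issuer>
+    </issuers>
+    <required-claims>
+        <!-- Require a scope claim appropriate to the operation -->
+        <claim name="scp" match="any">
+            <value>orders.read</value>
+        </claim>
+    </required-claims>
+</validate-jwt>
+```
+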
+## Improper inventory management
Vulnerabilities related to improper inventory management include:
-* Lack of proper API documentation or ownership information
-
-* Excessive numbers of older API versions, which may be missing security fixes
+- Lack of proper API documentation or ownership information
+- Excessive numbers of older API versions, which may be missing security fixes
-More information about this threat: [API9:2019 Improper assets management](https://github.com/OWASP/API-Security/blob/master/editions/2019/en/0xa9-improper-assets-management.md)
+More information about this threat: [API9:2023 Improper inventory management](https://owasp.org/API-Security/editions/2023/en/0xa9-improper-inventory-management/)
### Recommendations
+- Use a well-defined [OpenAPI specification](https://swagger.io/specification/) as the source for importing REST APIs. The specification allows encapsulation of the API definition, including self-documenting metadata.
+- Use API interfaces with precise paths, data schemas, headers, query parameters, and status codes. Avoid [wildcard operations](/azure/api-management/add-api-manually#add-and-test-a-wildcard-operation). Provide descriptions for each API and operation and include contact and license information.
+- Avoid endpoints that don't directly contribute to the business objective. They unnecessarily increase the attack surface area and make it harder to evolve the API.
+- Use [revisions](/azure/api-management/api-management-revisions) and [versions](/azure/api-management/api-management-versions) in API Management to manage API changes. Have a strong backend versioning strategy and commit to a maximum number of supported API versions (for example, 2 or 3 prior versions). Plan to quickly deprecate and ultimately remove older, often less secure, API versions. Ensure security controls are implemented across all available API versions.
+- Separate environments (such as development, test, and production) with different API Management services. Ensure that each API Management service connects to its dependencies in the same environment. For example, in the test environment, the test API Management resource should connect to a test Azure Key Vault resource and the test versions of backend services. Use [DevOps automation and infrastructure-as-code practices](/azure/api-management/devops-api-development-templates) to help maintain consistency and accuracy between environments and reduce human errors.
+- Isolate administrative permissions to APIs and related resources using [workspaces](/azure/api-management/workspaces-overview).
+- Use tags to organize APIs and products and group them for publishing.
+- Publish APIs for consumption through a [developer portal](/azure/api-management/api-management-howto-developer-portal). Make sure the API documentation is up to date.
+- Discover undocumented or unmanaged APIs and expose them through API Management for better control.
+- Use [Azure API Center](/azure/api-center/overview) to maintain a comprehensive, centralized inventory of APIs, versions, and deployments, even if APIs aren't managed in Azure API Management.
-* Use a well-defined [OpenAPI specification](https://swagger.io/specification/) as the source for importing REST APIs. The specification allows encapsulation of the API definition, including self-documenting metadata.
-
- * Use API interfaces with precise paths, data schemas, headers, query parameters, and status codes. Avoid [wildcard operations](add-api-manually.md#add-and-test-a-wildcard-operation). Provide descriptions for each API and operation and include contact and license information.
-
- * Avoid endpoints that don't directly contribute to the business objective. They unnecessarily increase the attack surface area and make it harder to evolve the API.
-
-* Use [revisions](api-management-revisions.md) and [versions](api-management-versions.md) in API Management to govern and control the API endpoints. Have a strong backend versioning strategy and commit to a maximum number of supported API versions (for example, 2 or 3 prior versions). Plan to quickly deprecate and ultimately remove older, often less secure, API versions.
+## Unsafe consumption of APIs
-* Use an API Management instance per environment (such as development, test, and production). Ensure that each API Management instance connects to its dependencies in the same environment. For example, in the test environment, the test API Management resource should connect to a test Azure Key Vault resource and the test versions of backend services. Use [DevOps automation and infrastructure-as-code practices](devops-api-development-templates.md) to help maintain consistency and accuracy between environments and reduce human errors.
+Resources obtained through downstream integrations tend to be trusted more highly than API input from the caller or end user. If appropriate sanitization and security standards aren't applied, the API could be vulnerable, even if the integration is provided through a trusted service.
-* Use tags to organize APIs and products and group them for publishing.
-
-* Publish APIs for consumption through the built-in [developer portal](api-management-howto-developer-portal.md). Make sure the API documentation is up-to-date.
-
-* Discover undocumented or unmanaged APIs and expose them through API Management for better control.
-
-## Insufficient logging and monitoring
-
-Insufficient logging and monitoring, coupled with missing or ineffective integration with incident response, allows attackers to further attack systems, maintain persistence, pivot to more systems to tamper with, and extract or destroy data. Most breach studies demonstrate that the time to detect a breach is over 200 days, typically detected by external parties rather than internal processes or monitoring.
-
-More information about this threat: [API10:2019 Insufficient logging and monitoring](https://github.com/OWASP/API-Security/blob/master/editions/2019/en/0xaa-insufficient-logging-monitoring.md)
+More information about this threat: [API10:2023 Unsafe Consumption of APIs](https://owasp.org/API-Security/editions/2023/en/0xaa-unsafe-consumption-of-apis/)
### Recommendations
-* Understand [observability options](observability.md) in Azure API Management and [best practices](/azure/architecture/best-practices/monitoring) for monitoring in Azure.
-
-* Monitor API traffic with [Azure Monitor](api-management-howto-use-azure-monitor.md).
-
-* Log to [Application Insights](api-management-howto-app-insights.md) for debugging purposes. Correlate [transactions in Application Insights](/azure/azure-monitor/app/search-and-transaction-diagnostics?tabs=transaction-diagnostics) between API Management and the backend API to [trace them end-to-end](/azure/azure-monitor/app/correlation).
-
-* If needed, forward custom events to [Event Hubs](api-management-howto-log-event-hubs.md).
-
-* Set alerts in Azure Monitor and Application Insights - for example, for the [capacity metric](api-management-howto-autoscale.md) or for excessive requests or bandwidth transfer.
-
-* Use the [emit-metric](emit-metric-policy.md) policy for custom metrics.
-
-* Use the Azure Activity log for tracking activity in the service.
-
-* Use custom events in [Azure Application Insights](/azure/azure-monitor/app/api-custom-events-metrics) and [Azure Monitor](/azure/azure-monitor/app/custom-data-correlation) as needed.
-
-* Configure [OpenTelemetry](how-to-deploy-self-hosted-gateway-kubernetes-opentelemetry.md#introduction-to-opentelemetry) for [self-hosted gateways](self-hosted-gateway-overview.md) on Kubernetes.
+- Consider using API Management to act as a façade for downstream dependencies that the backend APIs integrate with.
+- If downstream dependencies are fronted with API Management or if downstream dependencies are consumed with a [send-request](send-request-policy.md) policy in API Management, use the recommendations from other sections of this documentation to ensure their safe and controlled consumption (a combined sketch follows this list), including:
+ - Ensure secure transport is enabled and [enforce TLS/SSL configuration](/azure/api-management/api-management-howto-manage-protocols-ciphers)
+ - If possible, authenticate with credential manager or managed identity
+    - Control consumption with the rate-limit-by-key and quota-by-key policies
+    - Log or block responses that are noncompliant with the API specification using the validate-content and validate-headers policies
+ - Transform responses with the set-body policy, for example to remove unnecessary or sensitive information
+ - Configure timeouts and limit concurrency
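+
+The following fragment sketches this combined approach under assumed names: a hypothetical downstream endpoint is called with a timeout, the status code is checked, and only an expected `quantity` property is returned instead of the raw downstream payload.
+
+```xml
+<!-- Call the downstream dependency with a bounded timeout -->
+<send-request mode="new" response-variable-name="downstreamResponse" timeout="10" ignore-error="false">
+    <set-url>https://downstream.contoso.example/inventory</set-url>
+    <set-method>GET</set-method>
+</send-request>
+<choose>
+    <when condition="@(((IResponse)context.Variables["downstreamResponse"]).StatusCode == 200)">
+        <return-response>
+            <set-status code="200" reason="OK" />
+            <set-header name="Content-Type" exists-action="override">
+                <value>application/json</value>
+            </set-header>
+            <!-- Pass through only the expected property, not the raw downstream body -->
+            <set-body>@{
+                var body = ((IResponse)context.Variables["downstreamResponse"]).Body.As<JObject>();
+                return new JObject(new JProperty("quantity", body["quantity"])).ToString();
+            }</set-body>
+        </return-response>
+    </when>
+    <otherwise>
+        <return-response>
+            <set-status code="502" reason="Bad Gateway" />
+        </return-response>
+    </otherwise>
+</choose>
+```
+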
-## Next steps
+## Related content
Learn more about:

* [Authentication and authorization in API Management](authentication-authorization-overview.md)
* [Security baseline for API Management](/security/benchmark/azure/baselines/api-management-security-baseline)
-* [Security controls by Azure policy](security-controls-policy.md)
+* [Security controls by Azure Policy](security-controls-policy.md)
* [Building a comprehensive API security strategy](https://aka.ms/API-Security-EBook)
* [Landing zone accelerator for API Management](/azure/cloud-adoption-framework/scenarios/app-platform/api-management/landing-zone-accelerator)
* [Microsoft Defender for Cloud](/azure/defender-for-cloud/defender-for-cloud-introduction)
app-service App Service Web Nodejs Best Practices And Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-nodejs-best-practices-and-troubleshoot-guide.md
Follow these links to learn more about Node.js applications on Azure App Service.

* [Using Node.js Modules with Azure applications](/training/modules/create-nodejs-project-dependencies/)
* [Azure App Service Web Apps: Node.js](/archive/blogs/silverlining/windows-azure-websites-node-js)
* [Node.js Developer Center](../nodejs-use-node-modules-azure-apps.md)
-* [Exploring the Super Secret Kudu Debug Console](https://azure.microsoft.com/documentation/videos/super-secret-kudu-debug-console-for-azure-web-sites/)
+* [Exploring the Super Secret Kudu Debug Console](https://www.youtube.com/watch?v=-VjqyvA2XjM)
app-service Deploy Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-github-actions.md
The following examples show the part of the workflow that builds the web app, in
## Frequently Asked Questions

-- [How do I deploy a WAR file through Maven plugin and OpenID Connect](#how-do-i-deploy-a-war-file-through-maven-plugin-and-openid-connect)
-- [How do I deploy a WAR file through Az CLI and OpenID Connect](#how-do-i-deploy-a-war-file-through-az-cli-and-openid-connect)
-- [How do I deploy to a Container](#how-do-i-deploy-to-a-container)
-- [How do I update the Tomcat configuration after deployment](#how-do-i-update-the-tomcat-configuration-after-deployment)
+- [How do I deploy a WAR file through Maven plugin?](#how-do-i-deploy-a-war-file-through-maven-plugin)
+- [How do I deploy a WAR file through Az CLI?](#how-do-i-deploy-a-war-file-through-az-cli)
+- [How do I deploy a startup file?](#how-do-i-deploy-a-startup-file)
+- [How do I deploy to a Container?](#how-do-i-deploy-to-a-container)
+- [How do I update the Tomcat configuration after deployment?](#how-do-i-update-the-tomcat-configuration-after-deployment)
-### How do I deploy a WAR file through Maven plugin and OpenID Connect
+### How do I deploy a WAR file through Maven plugin?
If you configured your Java Tomcat project with the [Maven plugin](https://github.com/microsoft/azure-maven-plugins), you can also deploy to Azure App Service through this plugin. If you use the [Azure CLI GitHub action](https://github.com/Azure/cli), it uses your Azure login credentials.
In case you configured your Java Tomcat project with the [Maven plugin](https://
More information on the Maven plugin and how to use and configure it can be found in the [Maven plugin wiki for Azure App Service](https://github.com/microsoft/azure-maven-plugins/wiki/Azure-Web-App).
-### How do I deploy a WAR file through Az CLI and OpenID Connect
+### How do I deploy a WAR file through Az CLI?
-If you use prefer the Azure CLI to deploy to App Service, you can use the GitHub Action for CLI.
+If you prefer the Azure CLI for deploying to App Service, you can use the GitHub Action for Azure CLI.
```yaml
- - name: Azure CLI script
- uses: azure/cli@v2
- with:
- inlineScript: |
- az webapp deploy --src-path '${{ github.workspace }}/target/yourpackage.war' --name ${{ env.AZURE_WEBAPP_NAME }} --resource-group ${{ env.RESOURCE_GROUP }} --async true --type war
+- name: Azure CLI script
+ uses: azure/cli@v2
+ with:
+ inlineScript: |
+ az webapp deploy --src-path '${{ github.workspace }}/target/yourpackage.war' --name ${{ env.AZURE_WEBAPP_NAME }} --resource-group ${{ env.RESOURCE_GROUP }} --async true --type war
```

More information on the GitHub Action for CLI and how to use and configure it can be found in the [Azure CLI GitHub action](https://github.com/Azure/cli).
-More information on the az webapp deploy command, how to use and the parameter details can be found in the [az webapp deploy documentation](/cli/azure/webapp?view=azure-cli-latest#az-webapp-deploy).
+More information on the `az webapp deploy` command, its usage, and parameter details can be found in the [az webapp deploy documentation](/cli/azure/webapp#az-webapp-deploy).
+
+### How do I deploy a startup file?
+
+Use the GitHub Action for CLI. For example:
+
+```yaml
+- name: Deploy startup script
+ uses: azure/cli@v2
+ with:
+ inlineScript: |
+ az webapp deploy --src-path ${{ github.workspace }}/src/main/azure/createPasswordlessDataSource.sh --name ${{ env.AZURE_WEBAPP_NAME }} --resource-group ${{ env.RESOURCE_GROUP }} --type startup --track-status false
+```
-### How do I deploy to a Container
+### How do I deploy to a Container?
With the Azure Web Deploy action, you can automate your workflow to deploy custom containers to App Service using GitHub Actions. Detailed information on the steps to deploy using GitHub Actions, can be found in the [Deploy to a Container](/azure/app-service/deploy-container-github-action).
-### How do I update the Tomcat configuration after deployment
+### How do I update the Tomcat configuration after deployment?
If you want to update any of your web app's settings after deployment, you can use the [App Service Settings](https://github.com/Azure/appservice-settings) action.
app-service How To Upgrade Preference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-upgrade-preference.md
description: Configure the upgrade preference for the Azure App Service Environm
Previously updated : 06/25/2024 Last updated : 11/05/2024 zone_pivot_groups: app-service-cli-portal
In smaller regions, Early and Late upgrade preferences might be very close to ea
Manual upgrade preference gives you the option to receive a notification when an upgrade is available. The availability is also visible in the Azure portal. After the upgrade is available, you'll have 15 days to start the upgrade process. If you don't start the upgrade within the 15 days, the upgrade is processed with the remaining automatic upgrades in the region.
+> [!IMPORTANT]
+> In rare cases, you might see an upgrade is available in the **Configuration** page for your App Service Environment, but you don't receive a **Service Health** notification (if you [configure notifications](#configure-notifications)). If you don't receive a Service Health notification, this available upgrade isn't required and the 15-day time limit doesn't apply. This is a known bug that we are working to fix.
+>
+ Upgrades normally don't affect the availability of your apps. The upgrade adds extra instances to ensure that the same capacity is available during the upgrade. Patched and restarted instances are added back in rotation. If you have workloads that are sensitive to restarts, plan to start the maintenance during non-business hours. The full upgrade process normally finishes within 18 hours, but could take longer. Once the upgrade starts, it runs until it's complete and isn't paused during standard business hours.

> [!NOTE]
-> In rare cases the upgrade availability might be impacted by a security hotfix superseding the planned upgrade, or a regression found in the planned upgrade before it has been applied to your instance. In these rare cases, the available upgrade will be removed and will transition to automatic upgrade.
+> In rare cases, the upgrade availability might be impacted by a security hotfix superseding the planned upgrade, or a regression found in the planned upgrade before it has been applied to your instance. In these rare cases, the available upgrade will be removed and will transition to automatic upgrade.
>

## Configure notifications
app-service Overview Authentication Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-authentication-authorization.md
When a request fulfills all these conditions, App Service authentication automat
When using Azure App Service with authentication behind Azure Front Door or other reverse proxies, a few additional things have to be taken into consideration.

-- Disable caching for the authentication workflow.
-
- See [Disable cache for auth workflow](../static-web-apps/front-door-manual.md#disable-cache-for-auth-workflow) to learn more on how to configure rules in Azure Front Door to disable caching for authentication and authorization-related pages.
+- Disable [Front Door caching](../frontdoor/front-door-caching.md) for the authentication workflow.
- Use the Front Door endpoint for redirects.
app-service Tutorial Dotnetcore Sqldb App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-dotnetcore-sqldb-app.md
In this tutorial, you learn how to deploy a data-driven ASP.NET Core app to Azur
In this tutorial, you learn how to:

> [!div class="checklist"]
-> * Create a secure-by-default App Service, SQL Database, and Redis cache architecture
+> * Create a secure-by-default App Service, SQL Database, and Redis cache architecture.
> * Secure connection secrets using a managed identity and Key Vault references.
> * Deploy a sample ASP.NET Core app to App Service from a GitHub repository.
> * Access App Service connection strings and app settings in the application code.
app-service Tutorial Java Jboss Mysql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-jboss-mysql-app.md
+
+ Title: 'Tutorial: Linux Java app with JBoss and MySQL'
+description: Learn how to get a data-driven Linux JBoss app working in Azure App Service, with a connection to a MySQL database running in Azure.
++
+ms.devlang: java
+ Last updated : 10/31/2024+
+# zone_pivot_groups: app-service-portal-azd
+++
+# Tutorial: Build a JBoss web app with Azure App Service on Linux and MySQL
+
+This tutorial shows how to build, configure, and deploy a secure JBoss application in Azure App Service that connects to a MySQL database (using [Azure Database for MySQL](/azure/mysql/)). Azure App Service is a highly scalable, self-patching, web-hosting service that can easily deploy apps on Windows or Linux. When you're finished, you'll have a JBoss app running on [Azure App Service on Linux](overview.md).
++
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Create a secure-by-default architecture for Azure App Service and Azure Database for MySQL flexible server.
+> * Secure database connectivity using a passwordless connection string.
+> * Verify JBoss data sources in App Service using JBoss CLI.
+> * Deploy a JBoss sample app to App Service from a GitHub repository.
+> * Access App Service app settings in the application code.
+> * Make updates and redeploy the application code.
+> * Stream diagnostic logs from App Service.
+> * Manage the app in the Azure portal.
+<!-- > * Provision the same architecture and deploy by using Azure Developer CLI.
+> * Optimize your development workflow with GitHub Codespaces and GitHub Copilot. -->
+
+## Prerequisites
+
+<!-- ::: zone pivot="azure-portal" -->
+
+* An Azure account with an active subscription. If you don't have an Azure account, you [can create one for free](https://azure.microsoft.com/free/java/).
+* A GitHub account. You can also [get one for free](https://github.com/join).
+* Knowledge of Java with JBoss development.
+<!-- * **(Optional)** To try GitHub Copilot, a [GitHub Copilot account](https://docs.github.com/copilot/using-github-copilot/using-github-copilot-code-suggestions-in-your-editor). A 30-day free trial is available. -->
+
+<!-- ::: zone-end
++
+* An Azure account with an active subscription. If you don't have an Azure account, you [can create one for free](https://azure.microsoft.com/free/java).
+* [Azure Developer CLI](/azure/developer/azure-developer-cli/install-azd) installed. You can follow the steps with the [Azure Cloud Shell](https://shell.azure.com) because it already has Azure Developer CLI installed.
+* Knowledge of Java with JBoss development.
+* **(Optional)** To try GitHub Copilot, a [GitHub Copilot account](https://docs.github.com/copilot/using-github-copilot/using-github-copilot-code-suggestions-in-your-editor). A 30-day free trial is available.
++
+## Skip to the end
+
+You can quickly deploy the sample app in this tutorial and see it running in Azure. Just run the following commands in the [Azure Cloud Shell](https://shell.azure.com), and follow the prompt:
+
+```bash
+mkdir msdocs-jboss-mysql-sample-app
+cd msdocs-jboss-mysql-sample-app
+azd init --template msdocs-jboss-mysql-sample-app
+azd up
+``` -->
+
+## 1. Run the sample
+
+First, you set up a sample data-driven app as a starting point. For your convenience, the [sample repository](https://github.com/Azure-Samples/msdocs-jboss-mysql-sample-app) includes a [dev container](https://docs.github.com/codespaces/setting-up-your-project-for-codespaces/adding-a-dev-container-configuration/introduction-to-dev-containers) configuration. The dev container has everything you need to develop an application, including the database, cache, and all environment variables needed by the sample application. The dev container can run in a [GitHub codespace](https://docs.github.com/en/codespaces/overview), which means you can run the sample on any computer with a web browser.
+
+ :::column span="2":::
+ **Step 1:** In a new browser window:
+ 1. Sign in to your GitHub account.
+ 1. Navigate to [https://github.com/Azure-Samples/msdocs-jboss-mysql-sample-app/fork](https://github.com/Azure-Samples/msdocs-jboss-mysql-sample-app/fork).
+ 1. Select **Create fork**.
+ <!-- 1. Unselect **Copy the main branch only**. You want all the branches. -->
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-jboss-mysql-app/azure-portal-run-sample-application-1.png" alt-text="A screenshot showing how to create a fork of the sample GitHub repository." lightbox="./media/tutorial-java-jboss-mysql-app/azure-portal-run-sample-application-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2:** In the GitHub fork:
+ <!-- 1. Select **main** > **starter-no-infra** for the starter branch. This branch contains just the sample project and no Azure-related files or configuration. -->
+ Select **Code** > **Create codespace on main**.
+ The codespace takes a few minutes to set up.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-jboss-mysql-app/azure-portal-run-sample-application-2.png" alt-text="A screenshot showing how to create a codespace in GitHub." lightbox="./media/tutorial-java-jboss-mysql-app/azure-portal-run-sample-application-2.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 3:** In the codespace terminal:
+ 1. Run `mvn clean wildfly:run`.
+ 1. When you see the notification `Your application running on port 8080 is available.`, wait a few seconds longer for the WildFly server to finish loading the application. Then, select **Open in Browser**.
+ You should see the sample application in a new browser tab.
+ To stop the WildFly server, type `Ctrl`+`C`.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-jboss-mysql-app/azure-portal-run-sample-application-3.png" alt-text="A screenshot showing how to run the sample application inside the GitHub codespace." lightbox="./media/tutorial-java-jboss-mysql-app/azure-portal-run-sample-application-3.png":::
+ :::column-end:::
+
+> [!TIP]
+> You can ask [GitHub Copilot](https://docs.github.com/copilot/using-github-copilot/using-github-copilot-code-suggestions-in-your-editor) about this repository. For example:
+>
+> * *@workspace What does this project do?*
+> * *@workspace What does the .devcontainer folder do?*
+
+Having issues? Check the [Troubleshooting section](#troubleshooting).
+
+<!-- ::: zone pivot="azure-portal" -->
+
+## 2. Create App Service and MySQL
+
+First, you create the Azure resources. The steps used in this tutorial create a set of secure-by-default resources that include App Service and Azure Database for MySQL. For the creation process, you specify:
+
+* The **Name** for the web app. It's used as part of the DNS name for your app in the form of `https://<app-name>-<hash>.<region>.azurewebsites.net`.
+* The **Region** to run the app physically in the world. It's also used as part of the DNS name for your app.
+* The **Runtime stack** for the app. It's where you select the version of Java to use for your app.
+* The **Hosting plan** for the app. It's the pricing tier that includes the set of features and scaling capacity for your app.
+* The **Resource Group** for the app. A resource group lets you group (in a logical container) all the Azure resources needed for the application.
+
+Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps to create your Azure App Service resources.
+
+ :::column span="2":::
+ **Step 1:** In the Azure portal:
+ 1. In the top search bar, type *app service*.
+ 1. Select the item labeled **App Service** under the **Services** heading.
+ 1. Select **Create** > **Web App**.
+ You can also navigate to the [creation wizard](https://portal.azure.com/#create/Microsoft.WebSite) directly.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-jboss-mysql-app/azure-portal-create-app-mysql-1.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find the Web App creation wizard." lightbox="./media/tutorial-java-jboss-mysql-app/azure-portal-create-app-mysql-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2:** In the **Create Web App** page, fill out the form as follows.
+ 1. *Name*: **msdocs-jboss-mysql**. A resource group named **msdocs-jboss-mysql_group** will be generated for you.
+ 1. *Runtime stack*: **Java 17**.
+ 1. *Java web server stack*: **Red Hat JBoss EAP 8**. If you configured your Red Hat subscription with Azure already, select **Red Hat JBoss EAP 8 BYO License**.
+ 1. *Region*: Any Azure region near you.
+ 1. *Linux Plan*: **Create new** and use the name **msdocs-jboss-mysql**.
+ 1. *Pricing plan*: **Premium V3 P0V3**. When you're ready, you can [scale up](manage-scale-up.md) to a different pricing tier.
+ 1. *Deploy with your app*: Select **Database**. Azure Database for MySQL - Flexible Server is selected for you by default. It's a fully managed MySQL database as a service on Azure, compatible with the latest community editions.
+ 1. Select **Review + create**.
+ 1. After validation completes, select **Create**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-jboss-mysql-app/azure-portal-create-app-mysql-2.png" alt-text="A screenshot showing how to configure a new app and database in the Web App wizard." lightbox="./media/tutorial-java-jboss-mysql-app/azure-portal-create-app-mysql-2.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 3:** The deployment takes a few minutes to complete. Once deployment completes, select the **Go to resource** button. You're taken directly to the App Service app, but the following resources are created:
+ - **Resource group**: The container for all the created resources.
+   - **App Service plan**: Defines the compute resources for App Service. A Linux plan in the *Premium V3* tier is created.
+ - **App Service**: Represents your app and runs in the App Service plan.
+ - **Virtual network**: Integrated with the App Service app and isolates back-end network traffic.
+ - **Azure Database for MySQL flexible server**: Accessible only from the virtual network. A database and a user are created for you on the server.
+ - **Private DNS zones**: Enable DNS resolution of the database server in the virtual network.
+ <!-- Author note: Azure Database for MySQL's networking is not the same as other databases. It integrates with a private DNS zone, not with a private endpoint. -->
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-jboss-mysql-app/azure-portal-create-app-mysql-3.png" alt-text="A screenshot showing the deployment process completed." lightbox="./media/tutorial-java-jboss-mysql-app/azure-portal-create-app-mysql-3.png":::
+ :::column-end:::
+
+Having issues? Check the [Troubleshooting section](#troubleshooting).
+
+## 3. Create a passwordless connection
+
+In this step, you generate a managed identity-based service connection, which you can later use to create a data source in your JBoss server. By using a managed identity to connect to the MySQL database, your code is safe from accidental secrets leakage.
+
+ :::column span="2":::
+ **Step 1: Create a managed identity.**
+ 1. In the top search bar, type *managed identity*.
+ 1. Select the item labeled **Managed Identities** under the **Services** heading.
+ 1. Select **Create**.
+ 1. In **Resource group**, select **msdocs-jboss-mysql_group**.
+ 1. In **Region**, select the same region that you used for your web app.
+ 1. In **Name**, type **msdocs-jboss-mysql-server-identity**.
+ 1. Select **Review + create**.
+ 1. Select **Create**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-jboss-mysql-app/azure-portal-create-passwordless-connection-1.png" alt-text="A screenshot showing how to configure a new managed identity." lightbox="./media/tutorial-java-jboss-mysql-app/azure-portal-create-passwordless-connection-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2: Enable Microsoft Entra authentication in the MySQL server.**
+ 1. In the top search bar, type *msdocs-jboss-mysql-server*.
+ 1. Select the Azure Database for MySQL flexible server resource called **msdocs-jboss-mysql-server**.
+ 1. From the left menu, select **Security** > **Authentication**.
+ 1. In **Assign access to**, select **Microsoft Entra authentication only**.
+ 1. In **User assigned managed identity**, select **Select**.
+ 1. Select **msdocs-jboss-mysql-server-identity**, then select **Add**. It takes a moment for the identity to be assigned to the MySQL server.
+ 1. In **Microsoft Entra Admin Name**, select **Select**.
+ 1. Find your Azure account and select it, then select **Select**.
+ 1. Select **Save** and wait for the operation to complete.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-jboss-mysql-app/azure-portal-create-passwordless-connection-2.png" alt-text="A screenshot showing how to configure Microsoft Entra authentication for Azure Database for MySQL flexible server." lightbox="./media/tutorial-java-jboss-mysql-app/azure-portal-create-passwordless-connection-2.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 3: Add a managed identity-based service connector.**
+ 1. In the top search bar, type *msdocs-jboss-mysql*.
+ 1. Select the App Service resource called **msdocs-jboss-mysql**.
+ 1. In the App Service page, in the left menu, select **Settings > Service Connector**.
+ 1. Select **Create**.
+ 1. In the **Basics** tab:
+ 1. Set **Service type** to **DB for MySQL flexible server**.
+ 1. Set **MySQL flexible server** to **msdocs-jboss-mysql-server**.
+ 1. Set **MySQL database** to **msdocs-jboss-mysql-database**.
+ 1. Set **Client type** to **Java**.
+ 1. Select the **Authentication** tab.
+ 1. Select **System assigned managed identity**.
+ 1. Select the **Review + Create** tab.
+ 1. When validation completes, select **Create on Cloud Shell** and wait for the operation to complete in the Cloud Shell.
+ 1. When you see the output JSON, you can close the Cloud Shell. Also, close the **Create connection** dialog.
+ 1. Select **Refresh** to show the new service connector.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-jboss-mysql-app/azure-portal-create-passwordless-connection-3.png" alt-text="A screenshot showing a completely configured service connector, ready to be created with cloud shell." lightbox="./media/tutorial-java-jboss-mysql-app/azure-portal-create-passwordless-connection-3.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 4: Add authentication plugins to the connection string.**
+ 1. From the left menu, select **Environment variables > Connection strings**.
+ 1. Select **AZURE_MYSQL_CONNECTIONSTRING**. The **Value** field should contain a `user` but no `password`. The user is a managed identity.
+    1. The JBoss server in your App Service app includes the authentication plugins that authenticate the managed identity, but you still need to add them to the connection string. Scroll to the end of the value and append `&defaultAuthenticationPlugin=com.azure.identity.extensions.jdbc.mysql.AzureMysqlAuthenticationPlugin&authenticationPlugins=com.azure.identity.extensions.jdbc.mysql.AzureMysqlAuthenticationPlugin`.
+ 1. Select **Apply**.
+ 1. Select **Apply**, then **Confirm**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-jboss-mysql-app/azure-portal-create-passwordless-connection-4.png" alt-text="A screenshot showing how to change the value of the MySQL environment variable in Azure." lightbox="./media/tutorial-java-jboss-mysql-app/azure-portal-create-passwordless-connection-4.png":::
+ :::column-end:::
+
+Having issues? Check the [Troubleshooting section](#troubleshooting).
+
+## 4. Confirm JNDI data source
+
+If you add an app setting that contains a valid JDBC connection string for Oracle, SQL Server, PostgreSQL, or MySQL, App Service adds a Java Naming and Directory Interface (JNDI) data source for it in the JBoss server. In this step, you use the SSH connection to the app container to verify the JNDI data source. In the process, you learn how to access the SSH shell and run the JBoss CLI.
+
+ :::column span="2":::
+ **Step 1:** Back in the App Service page:
+ 1. In the left menu, select **Development Tools > SSH**.
+ 1. Select **Go**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-jboss-mysql-app/azure-portal-check-config-in-ssh-1.png" alt-text="A screenshot showing how to open the SSH shell for your app from the Azure portal." lightbox="./media/tutorial-java-jboss-mysql-app/azure-portal-check-config-in-ssh-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2:** In the SSH terminal:
+ 1. Run `$JBOSS_HOME/bin/jboss-cli.sh --connect`.
+ 1. In the JBoss CLI connection, run `ls subsystem=datasources/data-source`. You should see the automatically generated data source called `AZURE_MYSQL_CONNECTIONSTRING_DS`.
+ 1. Get the JNDI name of the data source with `/subsystem=datasources/data-source=AZURE_MYSQL_CONNECTIONSTRING_DS:read-attribute(name=jndi-name)`.
+ You now have a JNDI name `java:jboss/env/jdbc/AZURE_MYSQL_CONNECTIONSTRING_DS`, which you can use in your application code later.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-jboss-mysql-app/azure-portal-check-config-in-ssh-2.png" alt-text="A screenshot showing the JBoss CLI commands to run in the SSH shell and their output." lightbox="./media/tutorial-java-jboss-mysql-app/azure-portal-check-config-in-ssh-2.png":::
+ :::column-end:::
+
+> [!NOTE]
+> Only changes to files in `/home` can persist beyond app restarts. For example, if you edit `/opt/eap/standalone/configuration/standalone.xml` or change server configuration in the JBoss CLI, the changes won't persist beyond an app restart. To persist your changes, use a startup script, as demonstrated in [Configure data sources for a Tomcat, JBoss, or Java SE app in Azure App Service](configure-language-java-data-sources.md?tabs=linux&pivots=java-jboss).
+>
+
+Having issues? Check the [Troubleshooting section](#troubleshooting).
+
+## 5. Deploy sample code
+
+In this step, you configure GitHub deployment using GitHub Actions. It's just one of many ways to deploy to App Service, but also a great way to have continuous integration in your deployment process. By default, every `git push` to your GitHub repository kicks off the build and deploy action.
+
+Following the JBoss convention, if you want to deploy to the root context of JBoss, name your built artifact *ROOT.war*.
+
+ :::column span="2":::
+ **Step 1:** Back in the App Service page, in the left menu, select **Deployment > Deployment Center**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-jboss-mysql-app/azure-portal-deploy-sample-code-1.png" alt-text="A screenshot showing how to open the deployment center in App Service." lightbox="./media/tutorial-java-jboss-mysql-app/azure-portal-deploy-sample-code-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2:** In the Deployment Center page:
+ 1. In **Source**, select **GitHub**. By default, **GitHub Actions** is selected as the build provider.
+ 1. Sign in to your GitHub account and follow the prompt to authorize Azure.
+ 1. In **Organization**, select your account.
+ 1. In **Repository**, select **msdocs-jboss-mysql-sample-app**.
+ 1. In **Branch**, select **main**. This is the same branch that you worked in with your sample app, without any Azure-related files or configuration.
+ 1. For **Authentication type**, select **User-assigned identity**.
+ 1. In the top menu, select **Save**. App Service commits a workflow file into the chosen GitHub repository, in the `.github/workflows` directory.
+ By default, the deployment center [creates a user-assigned identity](#i-dont-have-permissions-to-create-a-user-assigned-identity) for the workflow to authenticate using Microsoft Entra (OIDC authentication). For alternative authentication options, see [Deploy to App Service using GitHub Actions](deploy-github-actions.md).
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-jboss-mysql-app/azure-portal-deploy-sample-code-2.png" alt-text="A screenshot showing how to configure CI/CD using GitHub Actions." lightbox="./media/tutorial-java-jboss-mysql-app/azure-portal-deploy-sample-code-2.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 3:** Back in the GitHub codespace of your sample fork, run `git pull origin main`.
+ This pulls the newly committed workflow file into your codespace. You can modify it according to your needs at *.github/workflows/main_msdocs-jboss-mysql.yml*.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-jboss-mysql-app/azure-portal-deploy-sample-code-3.png" alt-text="A screenshot showing git pull inside a GitHub codespace." lightbox="./media/tutorial-java-jboss-mysql-app/azure-portal-deploy-sample-code-3.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 4 (Option 1: with GitHub Copilot):**
+ 1. Start a new chat session by clicking the **Chat** view, then clicking **+**.
+ 1. Ask, "*@workspace How does the app connect to the database?*" Copilot might give you some explanation about the `java:jboss/MySQLDS` data source and how it's configured.
+ 1. Say, "*The data source in JBoss in Azure uses the JNDI name java:jboss/env/jdbc/AZURE_MYSQL_CONNECTIONSTRING_DS.*" Copilot might give you a code suggestion similar to the one in the **Option 2: without GitHub Copilot** steps below and even tell you to make the change in the class.
+    GitHub Copilot doesn't give you the same response every time, and you might need to ask more questions to fine-tune its response. For tips, see [What can I do with GitHub Copilot in my codespace?](#what-can-i-do-with-github-copilot-in-my-codespace).
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="media/tutorial-java-jboss-mysql-app/github-copilot-1.png" alt-text="A screenshot showing how to ask a question in a new GitHub Copilot chat session." lightbox="media/tutorial-java-jboss-mysql-app/github-copilot-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 4 (Option 2: without GitHub Copilot):**
+ 1. Open *src/main/resources/META-INF/persistence.xml* in the explorer. When the application starts, it loads the database settings in this file.
+ 1. Change the value of `<jta-data-source>` to `java:jboss/env/jdbc/AZURE_MYSQL_CONNECTIONSTRING_DS`, which is the data source you found with JBoss CLI earlier in the SSH shell.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-jboss-mysql-app/azure-portal-deploy-sample-code-4.png" alt-text="A screenshot showing a GitHub codespace and the ContextListener.java file opened." lightbox="./media/tutorial-java-jboss-mysql-app/azure-portal-deploy-sample-code-4.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 5:**
+ 1. Select the **Source Control** extension.
+ 1. In the textbox, type a commit message like `Configure Azure JNDI name`.
+ 1. Select **Commit**, then confirm with **Yes**.
+ 1. Select **Sync changes 1**, then confirm with **OK**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-jboss-mysql-app/azure-portal-deploy-sample-code-5.png" alt-text="A screenshot showing the changes being committed and pushed to GitHub." lightbox="./media/tutorial-java-jboss-mysql-app/azure-portal-deploy-sample-code-5.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 6:**
+ Back in the Deployment Center page in the Azure portal:
+    1. Select **Logs**. A new deployment run has already started from your committed changes.
+ 1. In the log item for the deployment run, select the **Build/Deploy Logs** entry with the latest timestamp.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-jboss-mysql-app/azure-portal-deploy-sample-code-6.png" alt-text="A screenshot showing how to open deployment logs in the deployment center." lightbox="./media/tutorial-java-jboss-mysql-app/azure-portal-deploy-sample-code-6.png":::
+ :::column-end:::
+ :::column span="2":::
+    **Step 7:** You're taken to your GitHub repository, where you can see that the GitHub Actions workflow is running. The workflow file defines two separate stages, build and deploy. Wait for the run to show a status of **Complete**. It takes about 5 minutes.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-jboss-mysql-app/azure-portal-deploy-sample-code-7.png" alt-text="A screenshot showing a GitHub run in progress." lightbox="./media/tutorial-java-jboss-mysql-app/azure-portal-deploy-sample-code-7.png":::
+ :::column-end:::
+
+Having issues? Check the [Troubleshooting section](#troubleshooting).
+
+## 6. Browse to the app
+
+ :::column span="2":::
+ **Step 1:** In the App Service page:
+ 1. From the left menu, select **Overview**.
+ 1. In **Default domain**, select the URL of your app.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-jboss-mysql-app/azure-portal-browse-app-1.png" alt-text="A screenshot showing how to launch an App Service from the Azure portal." lightbox="./media/tutorial-java-jboss-mysql-app/azure-portal-browse-app-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2:** Add a few tasks to the list.
+ Congratulations, you're running a web app in Azure App Service, with secure connectivity to Azure Database for MySQL.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-jboss-mysql-app/azure-portal-browse-app-2.png" alt-text="A screenshot of the JBoss web app with MySQL running in Azure." lightbox="./media/tutorial-java-jboss-mysql-app/azure-portal-browse-app-2.png":::
+ :::column-end:::
+
+Having issues? Check the [Troubleshooting section](#troubleshooting).
+
+## 7. Stream diagnostic logs
+
+Azure App Service captures all messages output to the console to help you diagnose issues with your application. The sample application includes standard Log4j logging statements to demonstrate this capability, as shown in the following snippet:
++
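+A representative sketch of such a logging statement follows; the class name and message are illustrative assumptions rather than the sample's exact source:
+
+```java
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+public class TaskController {
+    private static final Logger logger = LogManager.getLogger(TaskController.class);
+
+    public void addTask(String description) {
+        // Messages written through Log4j go to the console, which App Service
+        // captures and surfaces in the log stream.
+        logger.info("Adding task: {}", description);
+    }
+}
+```
+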
+ :::column span="2":::
+ In the App Service page, from the left menu, select **Log stream**. You see the logs for your app, including platform logs and logs from inside the container.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-jboss-mysql-app/azure-portal-stream-diagnostic-logs-1.png" alt-text="A screenshot showing how to view the log stream in the Azure portal." lightbox="./media/tutorial-java-jboss-mysql-app/azure-portal-stream-diagnostic-logs-1.png":::
+ :::column-end:::
+
+Learn more about logging in Java apps in the series on [Enable Azure Monitor OpenTelemetry for .NET, Node.js, Python and Java applications](/azure/azure-monitor/app/opentelemetry-enable?tabs=java).
+
+Having issues? Check the [Troubleshooting section](#troubleshooting).
+
+## 8. Clean up resources
+
+When you're finished, you can delete all of the resources from your Azure subscription by deleting the resource group.
+
+ :::column span="2":::
+ **Step 1:** In the search bar at the top of the Azure portal:
+ 1. Enter the resource group name *msdocs-jboss-mysql_group*.
+ 1. Select the resource group.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-jboss-mysql-app/azure-portal-clean-up-resources-1.png" alt-text="A screenshot showing how to search for and navigate to a resource group in the Azure portal." lightbox="./media/tutorial-java-jboss-mysql-app/azure-portal-clean-up-resources-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2:** In the resource group page, select **Delete resource group**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-jboss-mysql-app/azure-portal-clean-up-resources-2.png" alt-text="A screenshot showing the location of the **Delete Resource Group** button in the Azure portal." lightbox="./media/tutorial-java-jboss-mysql-app/azure-portal-clean-up-resources-2.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 3:**
+ 1. Confirm your deletion by typing the resource group name.
+ 1. Select **Delete**.
+ 1. Confirm with **Delete** again.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-java-jboss-mysql-app/azure-portal-clean-up-resources-3.png" alt-text="A screenshot of the confirmation dialog for deleting a resource group in the Azure portal." lightbox="./media/tutorial-java-jboss-mysql-app/azure-portal-clean-up-resources-3.png":::
+ :::column-end:::
+
+<!-- ::: zone-end
++
+## 2. Create Azure resources and deploy a sample app
+
+In this step, you create the Azure resources and deploy a sample app to App Service on Linux. The steps used in this tutorial create a set of secure-by-default resources that include App Service and Azure Database for MySQL.
+
+The dev container already has the [Azure Developer CLI](/azure/developer/azure-developer-cli/install-azd) (AZD).
+
+1. From the repository root, run `azd init`.
+
+ ```bash
+ azd init --template tomcat-app-service-mysql-infra
+ ```
+
+1. When prompted, give the following answers:
+
+ |Question |Answer |
+ |||
+ |The current directory is not empty. Would you like to initialize a project here in '\<your-directory>'? | **Y** |
+ |What would you like to do with these files? | **Keep my existing files unchanged** |
+ |Enter a new environment name | Type a unique name. The AZD template uses this name as part of the DNS name of your web app in Azure (`<app-name>-<hash>.azurewebsites.net`). Alphanumeric characters and hyphens are allowed. |
+
+1. Sign in to Azure by running the `azd auth login` command and following the prompt:
+
+ ```bash
+ azd auth login
+ ```
+
+1. Create the necessary Azure resources and deploy the app code with the `azd up` command. Follow the prompt to select the desired subscription and location for the Azure resources.
+
+ ```bash
+ azd up
+ ```
+
+    The `azd up` command takes about 15 minutes to complete (the Redis cache takes the most time). It also compiles and deploys your application code, but you'll modify your code later to work with App Service. While it's running, the command provides messages about the provisioning and deployment process, including a link to the deployment in Azure. When it finishes, the command also displays a link to the deployed application.
+
+ This AZD template contains files (*azure.yaml* and the *infra* directory) that generate a secure-by-default architecture with the following Azure resources:
+
+ - **Resource group**: The container for all the created resources.
+ - **App Service plan**: Defines the compute resources for App Service. A Linux plan in the *B1* tier is created.
+ - **App Service**: Represents your app and runs in the App Service plan.
+ - **Virtual network**: Integrated with the App Service app and isolates back-end network traffic.
+ - **Azure Database for MySQL flexible server**: Accessible only from the virtual network through the DNS zone integration. A database is created for you on the server.
+ - **Azure Cache for Redis**: Accessible only from within the virtual network.
+ - **Private endpoints**: Access endpoints for the key vault and the Redis cache in the virtual network.
+ - **Private DNS zones**: Enable DNS resolution of the key vault, the database server, and the Redis cache in the virtual network.
+ - **Log Analytics workspace**: Acts as the target container for your app to ship its logs, where you can also query the logs.
+ - **Key vault**: Used to keep your database password the same when you redeploy with AZD.
+
+    Once the command finishes creating resources and deploying the application code the first time, the deployed sample app doesn't work yet, because you must make small changes for it to connect to the database in Azure.
+
+Having issues? Check the [Troubleshooting section](#troubleshooting).
+
+## 3. Verify connection strings
+
+The AZD template you use already generated the connectivity variables for you as [app settings](configure-common.md#configure-app-settings) and outputs them to the terminal for your convenience. App settings are one way to keep connection secrets out of your code repository.
+
+1. In the AZD output, find the app setting `AZURE_MYSQL_CONNECTIONSTRING`. Only the setting names are displayed. They look like this in the AZD output:
+
+ <pre>
+ App Service app has the following connection strings:
+ - AZURE_MYSQL_CONNECTIONSTRING
+ - AZURE_REDIS_CONNECTIONSTRING
+ - AZURE_KEYVAULT_RESOURCEENDPOINT
+ - AZURE_KEYVAULT_SCOPE
+ </pre>
+
+ `AZURE_MYSQL_CONNECTIONSTRING` contains the connection string to the MySQL database in Azure. You need to use it in your code later.
+
+1. For your convenience, the AZD template shows you the direct link to the app's app settings page. Find the link and open it in a new browser tab.
+
+ If you add an app setting that contains a valid Oracle, SQL Server, PostgreSQL, or MySQL connection string, App Service adds it as a Java Naming and Directory Interface (JNDI) data source in the JBoss server's *context.xml* file.
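+
+    A minimal sketch of how code running inside the server could resolve such a JNDI data source; the class name is an illustrative assumption, and the JNDI name matches the one this tutorial uses later:
+
+    ```java
+    import javax.naming.InitialContext;
+    import javax.naming.NamingException;
+    import javax.sql.DataSource;
+    import java.sql.Connection;
+    import java.sql.SQLException;
+
+    public final class AzureDataSourceLookup {
+        private AzureDataSourceLookup() {}
+
+        // Resolve the data source that App Service registers for the
+        // AZURE_MYSQL_CONNECTIONSTRING app setting. This lookup only works
+        // inside the server container, where java:comp/env exists.
+        public static DataSource lookup() throws NamingException {
+            InitialContext ctx = new InitialContext();
+            return (DataSource) ctx.lookup("java:comp/env/jdbc/AZURE_MYSQL_CONNECTIONSTRING_DS");
+        }
+
+        // Quick connectivity check, callable from a ServletContextListener.
+        public static boolean canConnect() {
+            try (Connection conn = lookup().getConnection()) {
+                return conn.isValid(5);
+            } catch (NamingException | SQLException e) {
+                return false;
+            }
+        }
+    }
+    ```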
+
+Having issues? Check the [Troubleshooting section](#troubleshooting).
+
+## 4. Confirm JNDI data source
+
+In this step, you use the SSH connection to the app container to verify the JNDI data source in the JBoss server. In the process, you learn how to access the SSH shell for the JBoss container.
+
+1. In the AZD output, find the URL for the SSH session and navigate to it in the browser. It looks like this in the output:
+
+ <pre>
+ Open SSH session to App Service container at: https://&lt;app-name>-&lt;hash>.scm.azurewebsites.net/webssh/host
+ </pre>
+
+1. In the SSH terminal, run `cat /usr/local/tomcat/conf/context.xml`. You should see that a JNDI resource called `jdbc/AZURE_MYSQL_CONNECTIONSTRING_DS` was added. You'll use this data source later.
+
+ :::image type="content" source="./media/tutorial-java-jboss-mysql-app/azure-portal-check-config-in-ssh-2.png" alt-text="A screenshot showing the commands to run in the SSH shell and their output.":::
+
+> [!NOTE]
+> Only changes to files in `/home` can persist beyond app restarts. For example, if you edit `/usr/local/tomcat/conf/server.xml`, the changes won't persist beyond an app restart.
+>
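+
+As a small illustration of this behavior (the file path is an arbitrary assumption), an app could persist data across restarts only by writing under `/home`:
+
+```java
+import java.io.IOException;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.nio.file.StandardOpenOption;
+
+public class PersistentNotes {
+    // Files under /home survive app restarts; files written anywhere else
+    // in the container's file system are lost on restart.
+    private static final Path NOTES = Path.of("/home/site/notes.txt");
+
+    public static void append(String line) throws IOException {
+        Files.writeString(NOTES, line + System.lineSeparator(),
+                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
+    }
+}
+```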
+
+Having issues? Check the [Troubleshooting section](#troubleshooting).
+
+## 5. Modify sample code and redeploy
+
+# [With GitHub Copilot](#tab/copilot)
+
+1. In the GitHub codespace, start a new chat session by clicking the **Chat** view, then clicking **+**.
+
+1. Ask, "*@workspace How does the app connect to the database?*" Copilot might give you some explanation about the `jdbc/MYSQLDS` data source and how it's configured.
+
+1. Ask, "*@workspace I want to replace the data source defined in persistence.xml with an existing JNDI data source in JBoss but I want to do it dynamically.*" Copilot might give you a code suggestion similar to the one in the **Option 2: without GitHub Copilot** steps below and even tell you to make the change in the [ContextListener](https://github.com/Azure-Samples/msdocs-jboss-mysql-sample-app/blob/starter-no-infra/src/main/java/com/microsoft/azure/appservice/examples/tomcatmysql/ContextListener.java) class.
+
+1. Open *src/main/java/com/microsoft/azure/appservice/examples/tomcatmysql/ContextListener.java* in the explorer and add the code suggestion in the `contextInitialized` method.
+
+    GitHub Copilot doesn't give you the same response every time; you might need to ask other questions to fine-tune its response. For tips, see [What can I do with GitHub Copilot in my codespace?](#what-can-i-do-with-github-copilot-in-my-codespace).
+
+1. In the codespace terminal, run `azd deploy`.
+
+ ```bash
+ azd deploy
+ ```
+
+# [Without GitHub Copilot](#tab/nocopilot)
+
+1. From the explorer, open *src/main/java/com/microsoft/azure/appservice/examples/tomcatmysql/ContextListener.java*. When the application starts, this class loads the database settings in *src/main/resources/META-INF/persistence.xml*.
+
+1. In the `contextInitialized()` method, find the commented code (lines 29-33) and uncomment it.
+
+ ```java
+ String azureDbUrl= System.getenv("AZURE_MYSQL_CONNECTIONSTRING");
+ if (azureDbUrl!=null) {
+ logger.info("Detected Azure MySQL connection string. Adding JBoss data source...");
+ props.put("jakarta.persistence.nonJtaDataSource", "java:comp/env/jdbc/AZURE_MYSQL_CONNECTIONSTRING_DS");
+ }
+ ```
+
+ This code checks to see if the `AZURE_MYSQL_CONNECTIONSTRING` app setting exists, and changes the data source to `java:comp/env/jdbc/AZURE_MYSQL_CONNECTIONSTRING_DS`, which is the data source you found earlier in *context.xml* in the SSH shell.
+
+1. Back in the codespace terminal, run `azd deploy`.
+
+ ```bash
+ azd deploy
+ ```
+
+--
+
+> [!TIP]
+> You can also use `azd up` every time; it runs `azd package`, `azd provision`, and `azd deploy` together.
+>
+> To find out how the WAR file is packaged, you can run `azd package --debug` by itself.
+
+Having issues? Check the [Troubleshooting section](#troubleshooting).
+
+## 6. Browse to the app
+
+1. In the AZD output, find the URL of your app and navigate to it in the browser. The URL looks like this in the AZD output:
+
+ <pre>
+ Deploying services (azd deploy)
+
+ (Γ£ô) Done: Deploying service web
+ - Endpoint: https://&lt;app-name>-&lt;hash>.azurewebsites.net/
+ </pre>
+
+2. Add a few tasks to the list.
+
+ :::image type="content" source="./media/tutorial-java-jboss-mysql-app/azure-portal-browse-app-2.png" alt-text="A screenshot of the JBoss web app with MySQL running in Azure showing tasks." lightbox="./media/tutorial-java-jboss-mysql-app/azure-portal-browse-app-2.png":::
+
+ Congratulations, you're running a web app in Azure App Service, with secure connectivity to Azure Database for MySQL.
+
+Having issues? Check the [Troubleshooting section](#troubleshooting).
+
+## 7. Stream diagnostic logs
+
+Azure App Service can capture console logs to help you diagnose issues with your application. For convenience, the AZD template already [enabled logging to the local file system](troubleshoot-diagnostic-logs.md#enable-application-logging-linuxcontainer) and is [shipping the logs to a Log Analytics workspace](troubleshoot-diagnostic-logs.md#send-logs-to-azure-monitor).
+
+The sample application includes standard Log4j logging statements to demonstrate this capability, as shown in the following snippet:
++
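+As before, a representative sketch of such a statement, with an assumed class name:
+
+```java
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+public class StartupLogger {
+    private static final Logger logger = LogManager.getLogger(StartupLogger.class);
+
+    public void onStartup() {
+        // Log4j output reaches the console, which App Service writes to the
+        // local file system logs and ships to the Log Analytics workspace.
+        logger.info("Application started");
+        logger.warn("Warnings appear in the log stream too");
+    }
+}
+```
+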
+In the AZD output, find the link to stream App Service logs and navigate to it in the browser. The link looks like this in the AZD output:
+
+<pre>
+Stream App Service logs at: https://portal.azure.com/#@/resource/subscriptions/&lt;subscription-guid>/resourceGroups/&lt;group-name>/providers/Microsoft.Web/sites/&lt;app-name>/logStream
+</pre>
+
+Learn more about logging in Java apps in the series on [Enable Azure Monitor OpenTelemetry for .NET, Node.js, Python and Java applications](/azure/azure-monitor/app/opentelemetry-enable?tabs=java).
+
+Having issues? Check the [Troubleshooting section](#troubleshooting).
+
+## 8. Clean up resources
+
+To delete all Azure resources in the current deployment environment, run `azd down` and follow the prompts.
+
+```bash
+azd down
+```
+
+ -->
+## Troubleshooting
+
+- [I see the error 'not entitled to use the Bring Your Own License feature' in the creation wizard.](#i-see-the-error-not-entitled-to-use-the-bring-your-own-license-feature-in-the-creation-wizard)
+- [The portal deployment view for Azure Database for MySQL Flexible Server shows a Conflict status.](#the-portal-deployment-view-for-azure-database-for-mysql-flexible-server-shows-a-conflict-status)
+- [The Create connection dialog shows a Create On Cloud Shell button but it's not enabled.](#the-create-connection-dialog-shows-a-create-on-cloud-shell-button-but-its-not-enabled)
+- [My app failed to start, and I see 'Access denied for user... (using password: NO)' in the logs.](#my-app-failed-to-start-and-i-see-access-denied-for-user-using-password-no-in-the-logs)
+- [The deployed sample app doesn't show the tasks list app.](#the-deployed-sample-app-doesnt-show-the-tasks-list-app)
+- [I see a "Table 'Task' already exists" error in the diagnostic logs.](#i-see-a-table-task-already-exists-error-in-the-diagnostic-logs)
+
+#### I see the error 'not entitled to use the Bring Your Own License feature' in the creation wizard.
+
+If you see the error: `The subscription '701ea799-fb46-4407-bb67-9cbcf289f1c7' is not entitled to use the Bring Your Own License feature when creating the application`, it means that you selected **Red Hat JBoss EAP 7/8 BYO License** in **Java web server stack** but haven't set up your Azure account in Red Hat Cloud Access or don't have an active JBoss EAP license in Red Hat Cloud Access.
+
+#### The portal deployment view for Azure Database for MySQL Flexible Server shows a Conflict status.
+
+Depending on your subscription and the region you select, you might see the deployment status for Azure Database for MySQL Flexible Server to be `Conflict`, with the following message in Operation details:
+
+`InternalServerError: An unexpected error occured while processing the request.`
+
+This error is most likely caused by a limit on your subscription for the region you select. Try choosing a different region for your deployment.
+
+#### The Create connection dialog shows a Create On Cloud Shell button but it's not enabled.
+
+You might also see an error message in the dialog: `The database server is in Virtual Network and Cloud Shell can't connect to it. Please copy the commands and execute on an environment which can connect to the database server in Virtual Network.`
+
+The service connector automation needs network access to the MySQL server. Look in the networking settings of your MySQL server resource and make sure **Allow public access to this resource through the internet using a public IP address** is selected at a minimum. Service Connector can take it from there.
+
+If you don't see this checkbox, you might have created the deployment using the [Web App + Database wizard](https://portal.azure.com/?feature.customportal=false#create/Microsoft.AppServiceWebAppDatabaseV3) instead, and the deployment locks down all public network access to the MySQL server. There's no way to modify the configuration. Since the app's Linux container can access MySQL through the virtual network integration, you could install the Azure CLI in the app's SSH session and run the supplied Cloud Shell commands there.
+
+#### The deployed sample app doesn't show the tasks list app.
+
+If you see the JBoss splash page instead of the tasks list app, App Service is most likely still loading the updated container from your most recent code deployment. Wait a few minutes and refresh the page.
+
+#### My app failed to start, and I see 'Access denied for user... (using password: NO)' in the logs.
+
+This error is most likely because you didn't add the passwordless authentication plugin to the connection string (see the Java sample code for [Integrate Azure Database for MySQL with Service Connector](../service-connector/how-to-integrate-mysql.md?tabs=java#default-environment-variable-names-or-application-properties-and-sample-code)). Change the MySQL connection string by following the instructions in [3. Create a passwordless connection](#3-create-a-passwordless-connection).
+
+#### I see a "Table 'Task' already exists" error in the diagnostic logs.
+
+You can ignore this Hibernate error because it indicates that the application code is connected to the MySQL database. The application is configured to create the necessary tables when it starts (see *src/main/resources/META-INF/persistence.xml*). When the application starts the first time, it should create the tables successfully, but on subsequent restarts, you would see this error because the tables already exist.
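+
+A sketch of the kind of configuration that produces this behavior, using the standard Jakarta Persistence schema-generation property; the persistence-unit name is an assumption, and the sample sets the equivalent in *persistence.xml*:
+
+```java
+import java.util.Map;
+import jakarta.persistence.EntityManagerFactory;
+import jakarta.persistence.Persistence;
+
+public class PersistenceBootstrap {
+    public static EntityManagerFactory create() {
+        // "create" runs CREATE TABLE on every startup. The first run succeeds;
+        // later runs log "Table 'Task' already exists" because the tables are
+        // already there, which is why the error is safe to ignore.
+        Map<String, String> props = Map.of(
+                "jakarta.persistence.schema-generation.database.action", "create");
+        return Persistence.createEntityManagerFactory("tasks", props);
+    }
+}
+```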
+
+## Frequently asked questions
+
+- [How much does this setup cost?](#how-much-does-this-setup-cost)
+- [How do I connect to the MySQL server behind the virtual network with other tools?](#how-do-i-connect-to-the-mysql-server-behind-the-virtual-network-with-other-tools)
+- [How does local app development work with GitHub Actions?](#how-does-local-app-development-work-with-github-actions)
+- [I don't have permissions to create a user-assigned identity](#i-dont-have-permissions-to-create-a-user-assigned-identity)
+- [What can I do with GitHub Copilot in my codespace?](#what-can-i-do-with-github-copilot-in-my-codespace)
+
+#### How much does this setup cost?
+
+Pricing for the created resources is as follows:
+
+- The App Service plan is created in **P0v3** tier and can be scaled up or down. See [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/linux/).
+- The MySQL flexible server is created in **D2ds** tier and can be scaled up or down. See [Azure Database for MySQL pricing](https://azure.microsoft.com/pricing/details/mysql/flexible-server/).
+- The virtual network doesn't incur a charge unless you configure extra functionality, such as peering. See [Azure Virtual Network pricing](https://azure.microsoft.com/pricing/details/virtual-network/).
+- The private DNS zone incurs a small charge. See [Azure DNS pricing](https://azure.microsoft.com/pricing/details/dns/).
+
+#### How do I connect to the MySQL server behind the virtual network with other tools?
+
+- The JBoss container currently doesn't include the `mysql-client` terminal tool. If you want to use it, you must install it manually. Remember that anything you install doesn't persist across app restarts.
+- To connect from a desktop tool like MySQL Workbench, your machine must be within the virtual network. For example, it could be an Azure VM in one of the subnets, or a machine in an on-premises network that has a [site-to-site VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md) connection with the Azure virtual network.
+- You can also [integrate Azure Cloud Shell](../cloud-shell/private-vnet.md) with the virtual network.
+
+#### How does local app development work with GitHub Actions?
+
+Using the autogenerated workflow file from App Service as an example, each `git push` kicks off a new build and deployment run. From a local clone of the GitHub repository, you make the desired updates and push to GitHub. For example:
+
+```terminal
+git add .
+git commit -m "<some-message>"
+git push origin main
+```
+
+#### I don't have permissions to create a user-assigned identity
+
+See [Set up GitHub Actions deployment from the Deployment Center](deploy-github-actions.md#set-up-github-actions-deployment-from-the-deployment-center).
+
+#### What can I do with GitHub Copilot in my codespace?
+
+You might notice that the GitHub Copilot chat view was already there for you when you created the codespace. For your convenience, we include the GitHub Copilot chat extension in the container definition (see *.devcontainer/devcontainer.json*). However, you need a [GitHub Copilot account](https://docs.github.com/copilot/using-github-copilot/using-github-copilot-code-suggestions-in-your-editor) (30-day free trial available).
+
+A few tips for you when you talk to GitHub Copilot:
+
+- In a single chat session, the questions and answers build on each other and you can adjust your questions to fine-tune the answer you get.
+- By default, GitHub Copilot doesn't have access to any file in your repository. To ask questions about a file, open the file in the editor first.
+- To let GitHub Copilot have access to all of the files in the repository when preparing its answers, begin your question with `@workspace`. For more information, see [Use the @workspace agent](https://github.blog/2024-03-25-how-to-use-github-copilot-in-your-ide-tips-tricks-and-best-practices/#10-use-the-workspace-agent).
+- In the chat session, GitHub Copilot can suggest changes and (with `@workspace`) even where to make the changes, but it's not allowed to make the changes for you. It's up to you to add the suggested changes and test them.
+
+Here are some other things you can say to fine-tune the answer you get:
+
+* Change this code to use the data source jdbc/AZURE_MYSQL_CONNECTIONSTRING_DS.
+* Some imports in your code are using javax but I have a Jakarta app.
+* I want this code to run only if the environment variable AZURE_MYSQL_CONNECTIONSTRING is set.
+* I want this code to run only in Azure App Service and not locally.
+
+## Next steps
+
+- [Azure for Java Developers](/java/azure/)
+
+Learn more about running Java apps on App Service in the developer guide.
+
+> [!div class="nextstepaction"]
+> [Configure a Java app in Azure App Service](configure-language-java-deploy-run.md?pivots=platform-linux)
+
+Learn how to secure your app with a custom domain and certificate.
+
+> [!div class="nextstepaction"]
+> [Secure with custom domain and certificate](tutorial-secure-domain-certificate.md)
app-service Tutorial Java Tomcat Mysql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-tomcat-mysql-app.md
This tutorial shows how to build, configure, and deploy a secure Tomcat applicat
In this tutorial, you learn how to: > [!div class="checklist"]
-> * Create a secure-by-default architecture for Azure App Service and Azure Cosmos DB with MongoDB API.
+> * Create a secure-by-default architecture for Azure App Service and Azure Database for MySQL.
> * Secure connection secrets using a managed identity and Key Vault references. > * Deploy a Tomcat sample app to App Service from a GitHub repository. > * Acces App Service app settings in the application code.
application-gateway Ssl Certificate Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ssl-certificate-management.md
There are two primary scenarios when deleting a certificate from portal:
| Port | The port associated with the listener gets updated to reflect the new state. | | Frontend IP | The frontend IP of the gateway gets updated to reflect the new state. |
+### Deletion of a listener with an SSL certificate
+
+When a listener with an associated SSL certificate is deleted, the SSL certificate itself isn't deleted. The certificate remains in the application gateway configuration and can be assigned to another listener.
+
+### Deletion of a key vault certificate
+
+When you delete a key vault certificate that's associated with an application gateway, delete the certificate reference on the application gateway first, and then delete the certificate in the key vault.
+ ### Bulk update The bulk operation feature is helpful for large gateways having multiple SSL certificates for separate listeners. Similar to individual certificate management, this option also allows you to change the type from "Uploaded" to "Key Vault" or vice-versa (if required). This utility is also helpful in recovering a gateway when facing misconfigurations for multiple certificate objects simultaneously.
To use the Bulk update option,
1. You can't delete a certificate object if its associated listener is a redirection target for another listener. Any attempt to do so will return the following error. You can either remove the redirection or delete the dependent listener first to resolve this problem. `The listener associated with this certificate is configured as the redirection target for another listener. You will need to either remove this redirection or delete the redirected listener first to allow deletion of this certificate.`
-1. The Application Gateway requires at least one active Listener and Rule combination. You thus cannot delete the certificate of a HTTPS listener, if no other active listener exists. This is also true if there are only HTTPS listeners on your gateway, and all of them are referencing the same certificate. Such operations are prevented because deletion of a certificate leads to deletion of all dependent sub resources.
+1. The Application Gateway requires at least one active Listener and Rule combination. You thus cannot delete the certificate of an HTTPS listener, if no other active listener exists. This is also true if there are only HTTPS listeners on your gateway, and all of them are referencing the same certificate. Such operations are prevented because deletion of a certificate leads to deletion of all dependent sub resources.
+
+1. If a certificate is deleted in the key vault but the reference to it in Application Gateway isn't deleted, any update to the Application Gateway causes it to appear in a failed state. To fix this issue, delete the certificates without an associated listener one by one.
## Next steps
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/overview.md
Title: Azure Automation overview
description: This article tells what Azure Automation is and how to use it to automate the lifecycle of infrastructure and applications. keywords: azure automation, DSC, powershell, state configuration, update management, change tracking, DSC, inventory, runbooks, python, graphical Previously updated : 10/25/2021 Last updated : 11/05/2024
Automation is needed in three broad areas of cloud operations:
Azure Automation delivers a cloud-based automation, operating system updates, and configuration service that supports consistent management across your Azure and non-Azure environments. It includes process automation, configuration management, update management, shared capabilities, and heterogeneous features. There are several Azure services that can deliver the above requirements, where each service includes a set of capabilities and serves a role as a programmable platform to build cloud solutions. For example, Azure Bicep and Resource Manager provide a language to develop repeatable and consistent deployment templates for Azure resources. Azure Automation can process that template to deploy an Azure resource and then process a set of post-deployment configuration tasks.
Process automation supports the integration of Azure services and other third pa
## Configuration Management
-Configuration Management in Azure Automation is supported by two capabilities:
-
-* Change Tracking and Inventory
-* Azure Automation State Configuration
-
-### Change Tracking and Inventory
-
-[Change Tracking and Inventory](change-tracking/overview.md) combines functions to allow you to track Linux and Windows virtual machine and server infrastructure changes. The service supports change tracking across services, daemons, software, registry, and files in your environment to help you diagnose unwanted changes and raise alerts. Inventory support allows you to query in-guest resources for visibility into installed applications and other configuration items. Change Tracking & Inventory is now supported with the Azure Monitoring Agent version. [Learn more](change-tracking/overview-monitoring-agent.md).
+Configuration Management in Azure Automation is supported by the Azure Automation State Configuration capability.
### Azure Automation State Configuration [Azure Automation State Configuration](automation-dsc-overview.md) is a cloud-based feature for PowerShell desired state configuration (DSC) that provides services for enterprise environments. Using this feature, you can manage your DSC resources in Azure Automation and apply configurations to virtual or physical machines from a DSC pull server in the Azure cloud.
-## Update Management
-
-Azure Automation includes the [Update Management](./update-management/overview.md) feature for Windows and Linux systems across hybrid environments. Update Management gives you visibility into update compliance across Azure and other clouds, and on-premises. The feature allows you to create scheduled deployments that orchestrate the installation of updates within a defined maintenance window. If an update shouldn't be installed on a machine, you can use Update Management functionality to exclude it from a deployment.
- ## Shared capabilities Azure Automation provides a number of shared capabilities, including shared resources, role-based access control, flexible scheduling, source control integration, auditing, and tagging.
Azure Automation supports management throughout the lifecycle of your infrastruc
* **Schedule tasks** - stop VMs or services at night and turn on during the day, weekly or monthly recurring maintenance workflows. * **Build and deploy resources** - Deploy virtual machines across a hybrid environment using runbooks and Azure Resource Manager templates. Integrate into development tools, such as Jenkins and Azure DevOps.
-* **Periodic maintenance** - to execute tasks that need to be performed at set timed intervals like purging stale or old data, or reindex a SQL database.
+* **Periodic maintenance** - to execute tasks that need to be performed at set timed intervals like purging stale or old data, or reindexing a SQL database.
* **Respond to alerts** - Orchestrate a response when cost-based, system-based, service-based, and/or resource utilization alerts are generated. * **Hybrid automation** - Manage or automate on-premises servers and services like SQL Server, Active Directory, SharePoint Server, etc. * **Azure resource lifecycle management** - for IaaS and PaaS services.
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-support-resources.md
Before starting your move operation, review the [checklist](./move-resource-grou
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | databaseaccounts | **Yes** | No | No |
+> | databaseaccounts | **Yes** | Partial | No |
+
+Moves between subscriptions are supported for APIs that use the RU architecture (Microsoft.DocumentDB/databaseAccounts), but not for those based on the vCore architecture, such as:
+
+- MongoDB vCore (Microsoft.DocumentDB/mongoClusters)
+- Azure Managed Instance for Apache Cassandra (Microsoft.DocumentDB/cassandraClusters)
## Microsoft.DomainRegistration
azure-resource-manager Template Functions Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-scope.md
It returns:
"version": "1" }, "displayName": "Example MG 1",
- "tenantId": "00000000-0000-0000-0000-000000000000"
+ "tenantId": "aaaabbbb-0000-cccc-1111-dddd2222eeee"
}, "type": "/providers/Microsoft.Management/managementGroups" }
It returns:
"countryCode": "US", "displayName": "Contoso", "id": "/tenants/00000000-0000-0000-0000-000000000000",
- "tenantId": "00000000-0000-0000-0000-000000000000"
+ "tenantId": "aaaabbbb-0000-cccc-1111-dddd2222eeee"
} } ```
azure-resource-manager Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-specs.md
For example, you deploy a template spec with the following command.
# [PowerShell](#tab/azure-powershell) ```azurepowershell
-$id = "/subscriptions/11111111-1111-1111-1111-111111111111/resourceGroups/templateSpecsRG/providers/Microsoft.Resources/templateSpecs/storageSpec/versions/1.0a"
+$id = "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/templateSpecsRG/providers/Microsoft.Resources/templateSpecs/storageSpec/versions/1.0a"
New-AzResourceGroupDeployment ` -TemplateSpecId $id `
New-AzResourceGroupDeployment `
# [CLI](#tab/azure-cli) ```azurecli
-id = "/subscriptions/11111111-1111-1111-1111-111111111111/resourceGroups/templateSpecsRG/providers/Microsoft.Resources/templateSpecs/storageSpec/versions/1.0a"
+id = "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/templateSpecsRG/providers/Microsoft.Resources/templateSpecs/storageSpec/versions/1.0a"
az deployment group create \ --resource-group demoRG \
azure-vmware Azure Vmware Solution Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-known-issues.md
Refer to the table to find details about resolution dates or possible workaround
| [VMSA-2024-0012](https://support.broadcom.com/web/ecx/support-content-notification/-/external/content/SecurityAdvisories/0/24453) Multiple Vulnerabilities in the DCERPC Protocol and Local Privilege Escalations | June 2024 | Microsoft, working with Broadcom, adjudicated the risk of these vulnerabilities at an adjusted Environmental Score of [6.8](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H/MAC:L/MPR:H/MUI:R) or lower. Adjustments from the base score were possible due to the network isolation of the Azure VMware Solution vCenter Server (ports 2012, 2014, and 2020 aren't exposed via any interactive network path) and multiple levels of authentication and authorization necessary to gain interactive access to the vCenter Server network segment. A plan is being put in place to address these vulnerabilities at a future date TBD. | N/A | | Zerto DR isn't currently supported with the AV64 SKU. The AV64 SKU uses ESXi host secure boot and Zerto DR hasn't implemented a signed VIB for the ESXi install. | 2024 | Continue using the AV36, AV36P, and AV52 SKUs for Zerto DR. | N/A | | [VMSA-2024-0013 (CVE-2024-37085)](https://support.broadcom.com/web/ecx/support-content-notification/-/external/content/SecurityAdvisories/0/24505) VMware ESXi Active Directory Integration Authentication Bypass | July 2024 | Azure VMware Solution does not provide Active Directory integration and isn't vulnerable to this attack. | N/A |
-| AV36P SKU new private cloud deploys with vSphere 7, not vSphere 8. | September 2024 | The AV36P SKU is waiting for a Hotfix to be deployed, which will resolve this issue. | N/A |
+| AV36P SKU new private cloud deploys with vSphere 7, not vSphere 8. | September 2024 | AV36P SKU Hotfix deployed, issue resolved. | September 2024 |
| [VMSA-2024-0019](https://support.broadcom.com/web/ecx/support-content-notification/-/external/content/SecurityAdvisories/0/24968) Vulnerability in the DCERPC Protocol and Local Privilege Escalations | September 2024 | Microsoft, working with Broadcom, adjudicated the risk of CVE-2024-38812 at an adjusted Environmental Score of [6.8](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H/MAC:L/MPR:H/MUI:R) and CVE-2024-38813 with an adjusted Environmental Score of [6.8](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:H/PR:L/UI:N/S:U/C:H/I:H/A:H/MAV:A/MAC:H/MPR:L/MUI:R). Adjustments from the base scores were possible due to the network isolation of the Azure VMware Solution vCenter Server DCERPC protocol access (ports 2012, 2014, and 2020 aren't exposed via any interactive network path) and multiple levels of authentication and authorization necessary to gain interactive access to the Azure VMware Solution vCenter Server. A plan is being put in place to address these vulnerabilities at a future date TBD. | N/A | [VMSA-2024-0020](https://support.broadcom.com/web/ecx/support-content-notification/-/external/content/SecurityAdvisories/0/25047) VMware NSX command injection, local privilege escalation & content spoofing vulnerability | October 2024 | The vulnerability mentioned in the Broadcom document is not applicable to Azure VMware Solution, as the attack vector mentioned doesn't apply. | N/A |
+| New Stretched Clusters private cloud deploys with vSphere 7, not vSphere 8. | September 2024 | Stretched Clusters is waiting for a Hotfix to be deployed, which will resolve this issue. | Planned November 2024 |
+| New Standard private cloud deploys with vSphere 7, not vSphere 8 in Australia East region (Pods 4 and 5). | October 2024 | Pods 4 and 5 in Australia East are waiting for a Hotfix to be deployed, which will resolve this issue. | Planned November 2024 |
In this article, you learned about the current known issues with the Azure VMware Solution.
communication-services Get Started Calling With Chat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/ui-library/get-started-calling-with-chat.md
Title: Add calling and chat functionality
-description: Add calling and chat functionality using the Azure Communication Services UI Library.
+description: Add calling and chat functionality by using the Azure Communication Services UI Library.
Last updated 10/28/2024
zone_pivot_groups: acs-plat-ios-android
-#Customer intent: As a developer, I want to add calling and chat functionality to my App.
+#Customer intent: As a developer, I want to add calling and chat functionality to my app.
-# Integrate Calling and Chat UI Libraries
+# Integrate calling and chat by using the UI Library
-## Set up the feature
+In this article, you learn how to integrate calling and chat functionality in your Android or iOS app by using the Azure Communication Services UI Library.
::: zone pivot="platform-android" [!INCLUDE [Integrate Calling with Chat in the Android UI Library](./includes/get-started-calling-with-chat/android.md)]
zone_pivot_groups: acs-plat-ios-android
[!INCLUDE [Integrate Calling with Chat in the iOS UI Library](./includes/get-started-calling-with-chat/ios.md)] ::: zone-end - ## Run the code
-To build and run your app on the device.
+Build and run your app on the device.
### More features
-The list of [use cases](../../concepts/ui-library/ui-library-use-cases.md?branch=main&pivots=platform-mobile) has detailed information about more features.
+The [list of use cases](../../concepts/ui-library/ui-library-use-cases.md?branch=main&pivots=platform-mobile) has detailed information about more features.
## Add notifications to your mobile app Azure Communication Services integrates with [Azure Event Grid](../../../event-grid/overview.md) and [Azure Notification Hubs](../../../notification-hubs/notification-hubs-push-notification-overview.md), so you can [add push notifications](../../concepts/notifications.md) to your apps in Azure. You can use push notifications to send information from your application to users' mobile devices. A push notification can show a dialog, play a sound, or display an incoming call UI. -
-## Next steps
+## Related content
- [Learn more about the UI Library](../../concepts/ui-library/ui-library-overview.md)
communication-services Get Started Teams Interop Group Calls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-teams-interop-group-calls.md
Title: Quickstart - Teams interop group calls on Azure Communication Services
+ Title: Quickstart - Teams interop calls on Azure Communication Services
-description: In this quickstart, you learn how to place Microsoft Teams interop group calls with Azure Communication Calling SDK.
+description: In this quickstart, you learn how to place Microsoft Teams interop calls with Azure Communication Calling SDK.
Last updated 04/04/2024
-# Quickstart: Place interop group calls between Azure Communication Services and Microsoft Teams
+# Quickstart: Place interop calls between Azure Communication Services and Microsoft Teams
-In this quickstart, you're going to learn how to start a group call from Azure Communication Services user to Teams users. You're going to achieve it with the following steps:
+In this quickstart, you're going to learn how to start a call from an Azure Communication Services user to Teams users. You're going to achieve it with the following steps:
1. Enable federation of Azure Communication Services resource with Teams Tenant. 2. Get identifiers of the Teams users.
Find the finalized code for this quickstart on [GitHub](https://github.com/Azure
## Add the Call UI controls
-Replace code in https://docsupdatetracker.net/index.html with following snippet.
-Place a group call to the Teams users by specifying their IDs.
-The text boxes are used to enter the Teams user IDs planning to call and add in a group:
+Replace code in *index.html* with the following snippet. Place a call to the Teams users by specifying their <a href="#userIds">ID(s)</a>.
+- The text box is used to enter the Teams user IDs you're planning to call. Enter one ID for a 1:1 call, or multiple IDs separated by commas for a group call.
```html <!DOCTYPE html>
The text boxes are used to enter the Teams user IDs planning to call and add in
</head> <body> <h4>Azure Communication Services</h4>
- <h1>Teams interop group call quickstart</h1>
- <input id="teams-ids-input" type="text" placeholder="Teams IDs split by comma"
+ <h1>Teams interop calling quickstart</h1>
+ <input id="teams-id-input" type="text" placeholder="Teams ID(s)"
style="margin-bottom:1em; width: 300px;" /> <p>Call state <span style="font-weight: bold" id="call-state">-</span></p> <p><span style="font-weight: bold" id="recording-state"></span></p> <div>
- <button id="place-group-call-button" type="button" disabled="false">
- Place group call
+ <button id="start-call-button" type="button" disabled="false">
+ Start Call
</button> <button id="hang-up-button" type="button" disabled="true"> Hang Up </button> </div>
+ <br>
+ <div>
+ <button id="mute-button" type="button" disabled="true"> Mute </button>
+ <button id="unmute-button" type="button" disabled="true"> Unmute </button>
+ </div>
+ <br>
+ <div>
+ <button id="start-video-button" type="button" disabled="true">Start Video</button>
+ <button id="stop-video-button" type="button" disabled="true">Stop Video</button>
+ </div>
+ <br>
+ <br>
+ <div id="remoteVideoContainer" style="width: 40%;" hidden>Remote participants' video streams:</div>
+ <br>
+ <div id="localVideoContainer" style="width: 30%;" hidden>Local video stream:</div>
+ <!-- points to the bundle generated from client.js -->
<script src="./main.js"></script> </body> </html>
The text boxes are used to enter the Teams user IDs planning to call and add in
Replace content of client.js file with following snippet. ```javascript
-const { CallClient, Features } = require('@azure/communication-calling');
+const { CallClient, Features, VideoStreamRenderer, LocalVideoStream } = require('@azure/communication-calling');
const { AzureCommunicationTokenCredential } = require('@azure/communication-common'); const { AzureLogger, setLogLevel } = require("@azure/logger");
+// Set the log level and output
+setLogLevel('verbose');
+AzureLogger.log = (...args) => {
+ console.log(...args);
+};
+// Calling web sdk objects
let call; let callAgent;
-const teamsIdsInput = document.getElementById('teams-ids-input');
+let localVideoStream;
+let localVideoStreamRenderer;
+let deviceManager;
+// UI widgets
+const teamsIdInput = document.getElementById('teams-id-input');
const hangUpButton = document.getElementById('hang-up-button');
-const placeInteropGroupCallButton = document.getElementById('place-group-call-button');
+const startInteropCallButton = document.getElementById('start-call-button');
+const muteButton = document.getElementById('mute-button')
+const unmuteButton = document.getElementById('unmute-button')
const callStateElement = document.getElementById('call-state'); const recordingStateElement = document.getElementById('recording-state');-
+const startVideoButton = document.getElementById('start-video-button');
+const stopVideoButton = document.getElementById('stop-video-button');
+const remoteVideoContainer = document.getElementById('remoteVideoContainer');
+const localVideoContainer = document.getElementById('localVideoContainer');
+/**
+ * Create an instance of CallClient. Initialize a CallAgent instance with a CommunicationUserCredential via created CallClient. CallAgent enables us to make outgoing calls.
+ * You can then use the CallClient.getDeviceManager() API instance to get the DeviceManager.
+ */
async function init() {
- const callClient = new CallClient();
- const tokenCredential = new AzureCommunicationTokenCredential("<USER ACCESS TOKEN>");
- callAgent = await callClient.createCallAgent(tokenCredential, { displayName: 'ACS user' });
- placeInteropGroupCallButton.disabled = false;
+ try {
+ const callClient = new CallClient();
+ const tokenCredential = new AzureCommunicationTokenCredential("<USER ACCESS TOKEN>");
+ callAgent = await callClient.createCallAgent(tokenCredential, { displayName: 'ACS user' });
+ // Set up a camera device to use.
+ deviceManager = await callClient.getDeviceManager();
+ await deviceManager.askDevicePermission({ video: true });
+ await deviceManager.askDevicePermission({ audio: true });
+ startInteropCallButton.disabled = false;
+ } catch(error) {
+ console.error(error);
+ }
} init();
-hangUpButton.addEventListener("click", async () => {
- await call.hangUp();
- hangUpButton.disabled = true;
- placeInteropGroupCallButton.disabled = false;
- callStateElement.innerText = '-';
-});
-
-placeInteropGroupCallButton.addEventListener("click", () => {
- if (!teamsIdsInput.value) {
- return;
+muteButton.addEventListener("click", async () => {
+ try {
+ await call.mute();
+ } catch (error) {
+ console.error(error)
}
+})
+unmuteButton.onclick = async () => {
+ try {
+ await call.unmute();
+ } catch (error) {
+ console.error(error)
+ }
+}
- const participants = teamsIdsInput.value.split(',').map(id => {
- const participantId = id.replace(' ', '');
- return {
- microsoftTeamsUserId: `${participantId}`
- };
- })
-
- call = callAgent.startCall(participants);
-
- call.on('stateChanged', () => {
- callStateElement.innerText = call.state;
- })
+startInteropCallButton.addEventListener("click", async () => {
+ if (!teamsIdInput.value) {
+ return;
+ }
+ try {
+ const localVideoStream = await createLocalVideoStream();
+ const videoOptions = localVideoStream ? { localVideoStreams: [localVideoStream] } : undefined;
+ const participants = teamsIdInput.value.split(',').map(id => {
+ const participantId = id.replace(' ', '');
+ return {
+ microsoftTeamsUserId: `${participantId}`
+ };
+ })
+ call = callAgent.startCall(participants, {videoOptions: videoOptions})
+ // Subscribe to the call's properties and events.
+ subscribeToCall(call);
+ } catch (error) {
+ console.error(error);
+ }
call.feature(Features.Recording).on('isRecordingActiveChanged', () => { if (call.feature(Features.Recording).isRecordingActive) {
placeInteropGroupCallButton.addEventListener("click", () => {
recordingStateElement.innerText = ""; } });
- hangUpButton.disabled = false;
- placeInteropGroupCallButton.disabled = true;
+});
+
+// Subscribe to a call obj.
+// Listen for property changes and collection updates.
+subscribeToCall = (call) => {
+ try {
+ // Inspect the initial call.id value.
+ console.log(`Call Id: ${call.id}`);
+    // Subscribe to the call's 'idChanged' event for value changes.
+ call.on('idChanged', () => {
+ console.log(`Call ID changed: ${call.id}`);
+ });
+ // Inspect the initial call.state value.
+ console.log(`Call state: ${call.state}`);
+ // Subscribe to call's 'stateChanged' event for value changes.
+ call.on('stateChanged', async () => {
+ console.log(`Call state changed: ${call.state}`);
+ callStateElement.innerText = call.state;
+ if(call.state === 'Connected') {
+ startInteropCallButton.disabled = true;
+ hangUpButton.disabled = false;
+ startVideoButton.disabled = false;
+ stopVideoButton.disabled = false;
+ muteButton.disabled = false;
+ unmuteButton.disabled = false;
+ } else if (call.state === 'Disconnected') {
+ startInteropCallButton.disabled = false;
+ hangUpButton.disabled = true;
+ startVideoButton.disabled = true;
+ stopVideoButton.disabled = true;
+ muteButton.disabled = true;
+ unmuteButton.disabled = true;
+ console.log(`Call ended, call end reason={code=${call.callEndReason.code}, subCode=${call.callEndReason.subCode}}`);
+ }
+ });
+ call.on('isLocalVideoStartedChanged', () => {
+ console.log(`isLocalVideoStarted changed: ${call.isLocalVideoStarted}`);
+ });
+ console.log(`isLocalVideoStarted: ${call.isLocalVideoStarted}`);
+ call.localVideoStreams.forEach(async (lvs) => {
+ localVideoStream = lvs;
+ await displayLocalVideoStream();
+ });
+ call.on('localVideoStreamsUpdated', e => {
+ e.added.forEach(async (lvs) => {
+ localVideoStream = lvs;
+ await displayLocalVideoStream();
+ });
+ e.removed.forEach(lvs => {
+ removeLocalVideoStream();
+ });
+ });
+
+ // Inspect the call's current remote participants and subscribe to them.
+ call.remoteParticipants.forEach(remoteParticipant => {
+ subscribeToRemoteParticipant(remoteParticipant);
+ });
+ // Subscribe to the call's 'remoteParticipantsUpdated' event to be
+ // notified when new participants are added to the call or removed from the call.
+ call.on('remoteParticipantsUpdated', e => {
+ // Subscribe to new remote participants that are added to the call.
+ e.added.forEach(remoteParticipant => {
+ subscribeToRemoteParticipant(remoteParticipant)
+ });
+ // Unsubscribe from participants that are removed from the call
+ e.removed.forEach(remoteParticipant => {
+ console.log('Remote participant removed from the call.');
+ });
+ });
+ } catch (error) {
+ console.error(error);
+ }
+}
+
+// Subscribe to a remote participant obj.
+// Listen for property changes and collection updates.
+subscribeToRemoteParticipant = (remoteParticipant) => {
+ try {
+ // Inspect the initial remoteParticipant.state value.
+ console.log(`Remote participant state: ${remoteParticipant.state}`);
+ // Subscribe to remoteParticipant's 'stateChanged' event for value changes.
+ remoteParticipant.on('stateChanged', () => {
+ console.log(`Remote participant state changed: ${remoteParticipant.state}`);
+ });
+ // Inspect the remoteParticipants's current videoStreams and subscribe to them.
+ remoteParticipant.videoStreams.forEach(remoteVideoStream => {
+ subscribeToRemoteVideoStream(remoteVideoStream)
+ });
+ // Subscribe to the remoteParticipant's 'videoStreamsUpdated' event to be
+    // notified when the remoteParticipant adds new videoStreams and removes video streams.
+ remoteParticipant.on('videoStreamsUpdated', e => {
+ // Subscribe to newly added remote participant's video streams.
+ e.added.forEach(remoteVideoStream => {
+ subscribeToRemoteVideoStream(remoteVideoStream)
+ });
+ // Unsubscribe from newly removed remote participants' video streams.
+ e.removed.forEach(remoteVideoStream => {
+ console.log('Remote participant video stream was removed.');
+ })
+ });
+ } catch (error) {
+ console.error(error);
+ }
+}
+/**
+ * Subscribe to a remote participant's remote video stream obj.
+ * You have to subscribe to the 'isAvailableChanged' event to render the remoteVideoStream. If the 'isAvailable' property
+ * changes to 'true' a remote participant is sending a stream. Whenever the availability of a remote stream changes
+ * you can choose to destroy the whole 'Renderer' a specific 'RendererView' or keep them. Displaying RendererView without a video stream will result in a blank video frame.
+ */
+subscribeToRemoteVideoStream = async (remoteVideoStream) => {
+ // Create a video stream renderer for the remote video stream.
+ let videoStreamRenderer = new VideoStreamRenderer(remoteVideoStream);
+ let view;
+ const renderVideo = async () => {
+ try {
+ // Create a renderer view for the remote video stream.
+ view = await videoStreamRenderer.createView();
+ // Attach the renderer view to the UI.
+ remoteVideoContainer.hidden = false;
+ remoteVideoContainer.appendChild(view.target);
+ } catch (e) {
+ console.warn(`Failed to createView, reason=${e.message}, code=${e.code}`);
+ }
+ }
+
+ remoteVideoStream.on('isAvailableChanged', async () => {
+ // Participant has switched video on.
+ if (remoteVideoStream.isAvailable) {
+ await renderVideo();
+ // Participant has switched video off.
+ } else {
+ if (view) {
+ view.dispose();
+ view = undefined;
+ remoteVideoContainer.hidden = true;
+ }
+ }
+ });
+ // Participant has video on initially.
+ if (remoteVideoStream.isAvailable) {
+ await renderVideo();
+ }
+}
+
+// Start your local video stream.
+// This will send your local video stream to remote participants so they can view it.
+startVideoButton.onclick = async () => {
+ try {
+        // Assign to the shared localVideoStream variable (not a local const) so that
+        // stopVideoButton.onclick and displayLocalVideoStream reference the same stream.
+        localVideoStream = await createLocalVideoStream();
+ await call.startVideo(localVideoStream);
+ } catch (error) {
+ console.error(error);
+ }
+}
+// Stop your local video stream.
+// This will stop your local video stream from being sent to remote participants.
+stopVideoButton.onclick = async () => {
+ try {
+ await call.stopVideo(localVideoStream);
+ } catch (error) {
+ console.error(error);
+ }
+}
+
+/**
+ * To render a LocalVideoStream, you need to create a new instance of VideoStreamRenderer, and then
+ * create a new VideoStreamRendererView instance using the asynchronous createView() method.
+ * You may then attach view.target to any UI element.
+ */
+// Create a local video stream for your camera device
+createLocalVideoStream = async () => {
+ const camera = (await deviceManager.getCameras())[0];
+ if (camera) {
+ return new LocalVideoStream(camera);
+ } else {
+ console.error(`No camera device found on the system`);
+ }
+}
+// Display your local video stream preview in your UI
+displayLocalVideoStream = async () => {
+ try {
+ localVideoStreamRenderer = new VideoStreamRenderer(localVideoStream);
+ const view = await localVideoStreamRenderer.createView();
+ localVideoContainer.hidden = false;
+ localVideoContainer.appendChild(view.target);
+ } catch (error) {
+ console.error(error);
+ }
+}
+// Remove your local video stream preview from your UI
+removeLocalVideoStream = async() => {
+ try {
+ localVideoStreamRenderer.dispose();
+ localVideoContainer.hidden = true;
+ } catch (error) {
+ console.error(error);
+ }
+}
+
+// End the current call
+hangUpButton.addEventListener("click", async () => {
+ // end call
+ await call.hangUp();
});
```
-## Get the Teams user IDs
+ <h2 id="userIds">Get the Teams user IDs</h2>
The Teams user IDs can be retrieved using the Graph API, as detailed in the [Graph documentation](/graph/api/user-get?tabs=http).
In the results, get the `id` field:
```json
"id": "31a011c2-2672-4dd0-b6f9-9334ef4999db"
```
+Alternatively, you can find the same ID in the [Azure portal](https://aka.ms/portal) on the **Users** tab:
+![Screenshot of User Object ID in Azure portal.](./includes/teams-user/portal-user-id.png)
+
## Run the code

Run the following command to bundle your application host on a local webserver:
npx webpack serve --config webpack.config.js
Open your browser and navigate to http://localhost:8080/. You should see the following screen:
-Insert the Teams IDs into the text box split by comma and press *Place Group Call* to start the group call from within your Communication Services application.
+Insert one or more Teams IDs into the text box, separated by commas, and press *Start Call* to start the call from within your Communication Services application.
## Clean up resources
container-apps Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/metrics.md
Previously updated : 09/23/2024 Last updated : 11/01/2024
Container Apps provides these basic metrics.
| Title | Dimensions | Description | Metric ID | Unit |
|--|--|--|--|--|
-| CPU Usage | Replica, Revision | CPU consumed by the container app, in nano cores (1,000,000,000 nanocores = 1 core) | `UsageNanoCores` | nanocores |
-| Memory Working Set Bytes | Replica, Revision | Container app working set memory used in bytes | `WorkingSetBytes` | bytes |
-| Network In Bytes | Replica, Revision | Network received bytes | `RxBytes` | bytes |
-| Network Out Bytes | Replica, Revision | Network transmitted bytes | `TxBytes` | bytes |
-| Replica count | Revision | Number of active replicas | `Replicas` | n/a |
-| Replica Restart Count | Replica, Revision | Restarts count of container app replicas | `RestartCount` | n/a |
-| Requests | Replica, Revision, Status Code, Status Code Category | Requests processed | `Requests` | n/a |
-| Reserved Cores | Revision | Number of reserved cores for container app revisions | `CoresQuotaUsed` | n/a |
-| Resiliency Connection Timeouts | Revision | Total connection timeouts | `ResiliencyConnectTimeouts` | n/a |
-| Resiliency Ejected Hosts | Revision | Number of currently ejected hosts | `ResiliencyEjectedHosts` | n/a |
-| Resiliency Ejections Aborted | Revision | Number of ejections aborted due to the max ejection % | `ResiliencyEjectionsAborted` | n/a |
-| Resiliency Request Retries | Revision | Total request retries | `ResiliencyRequestRetries` | n/a |
-| Resiliency Request Timeouts | Revision | Total requests that timed out waiting for a response | `ResiliencyRequestTimeouts` | n/a |
-| Resiliency Requests Pending Connection Pool | Replica | Total requests pending a connection pool connection | `ResiliencyRequestsPendingConnectionPool` | n/a |
-| Total Reserved Cores | None | Total cores reserved for the container app | `TotalCoresQuotaUsed` | n/a |
-
-The metrics namespace is `microsoft.app/containerapps`.
+| CPU Usage | Replica, Revision | CPU consumed by the container app, in nano cores (1,000,000,000 nanocores = 1 core) | `UsageNanoCores` | Nanocores |
+| Memory Working Set Bytes | Replica, Revision | Container app working set memory used in bytes | `WorkingSetBytes` | Bytes |
+| Network In Bytes | Replica, Revision | Network received bytes | `RxBytes` | Bytes |
+| Network Out Bytes | Replica, Revision | Network transmitted bytes | `TxBytes` | Bytes |
+| Replica count | Revision | Number of active replicas | `Replicas` | Count |
+| Replica Restart Count | Replica, Revision | Restarts count of container app replicas | `RestartCount` | Count |
+| Requests | Replica, Revision, Status Code, Status Code Category | Requests processed | `Requests` | Count |
+| Reserved Cores | Revision | Number of reserved cores for container app revisions | `CoresQuotaUsed` | Count |
+| Resiliency Connection Timeouts | Revision | Total connection timeouts | `ResiliencyConnectTimeouts` | Count |
+| Resiliency Ejected Hosts | Revision | Number of currently ejected hosts | `ResiliencyEjectedHosts` | Count |
+| Resiliency Ejections Aborted | Revision | Number of ejections aborted due to the max ejection % | `ResiliencyEjectionsAborted` | Count |
+| Resiliency Request Retries | Revision | Total request retries | `ResiliencyRequestRetries` | Count |
+| Resiliency Request Timeouts | Revision | Total requests that timed out waiting for a response | `ResiliencyRequestTimeouts` | Count |
+| Resiliency Requests Pending Connection Pool | Replica | Total requests pending a connection pool connection | `ResiliencyRequestsPendingConnectionPool` | Count |
+| Total Reserved Cores | None | Total cores reserved for the container app | `TotalCoresQuotaUsed` | Count |
+| Average Response Time (Preview) | Status Code, Status Code Category | Average response time per status code | `ResponseTime` | Milliseconds |
+| CPU Usage Percentage (Preview) | Replica | Percentage of CPU limit used, in percentage points | `CpuPercentage` | Percent |
+| Memory Percentage (Preview) | Replica | Percentage of memory limit used, in percentage points | `MemoryPercentage` | Percent |
+
+The metrics namespace is `Microsoft.App/containerapps`.
> [!NOTE]
> Replica restart count is the aggregate restart count over the specified time range, not the number of restarts that occurred at a point in time.
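For example, you can query one of these metrics with the Azure CLI. The following is a sketch with placeholder subscription, resource group, and app names:

```azurecli
az monitor metrics list \
  --resource "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.App/containerapps/<APP_NAME>" \
  --metric Requests \
  --aggregation Total \
  --interval PT1H
```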
+Container Apps environments provide this basic metric. You can only view this metric in [Azure Monitor metrics](https://ms.portal.azure.com/?feature.allrts=true#view/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/~/metrics).
+
+| Title | Dimensions | Description | Metric ID | Unit |
+|--|--|--|--|--|
+| Workload Profile Node Count (Preview) | Workload Profile Name | The node count per workload profile | `NodeCount` | Count |
+
+The metrics namespace is `Microsoft.App/managedEnvironments`.
+ For more runtime-specific metrics, see [Java metrics](./java-metrics.md).

## Metrics snapshots
cost-management-billing Subscription Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/subscription-transfer.md
Previously updated : 09/12/2024 Last updated : 11/05/2024 # customer intent: As a billing administrator, I want to learn about transferring subscriptions so that I can transfer one.
Dev/Test products aren't shown in the following table. Transfers for Dev/Test pr
| EA | MOSP (pay-as-you-go) | • Transfer from an EA enrollment to a MOSP subscription requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/).<br><br> • Reservations and savings plans don't automatically transfer and transferring them isn't supported. |
| EA | MCA-online | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation and savings plan transfers with no currency change are supported.<br><br> • You can't transfer a savings plan purchased under an Enterprise Agreement enrollment that was bought in a non-USD currency. However, you can [change the savings plan scope](../savings-plan/manage-savings-plan.md#change-the-savings-plan-scope) so that it applies to other subscriptions. |
| EA | EA | • Transferring between EA enrollments requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/).<br><br> • Reservations and savings plans automatically get transferred during EA to EA transfers, except in transfers with a currency change.<br><br> • Transfer within the same enrollment is the same action as changing the account owner. For details, see [Change Azure subscription or account ownership](direct-ea-administration.md#change-azure-subscription-or-account-ownership). |
-| EA | MCA-E | • Transferring all enrollment products is completed as part of the MCA transition process from an EA. For more information, see [Complete Enterprise Agreement tasks in your billing account for a Microsoft Customer Agreement](mca-enterprise-operations.md).<br><br> • If you want to transfer specific products but not all of the products in an enrollment, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation transfers with no currency change are supported. When there's a currency change during or after an enrollment transfer, reservations paid for monthly are canceled for the source enrollment. Cancellation happens at the time of the next monthly payment for an individual reservation. The cancellation is intentional and only affects monthly reservation purchases. For more information, see [Transfer Azure Enterprise enrollment accounts and subscriptions](../manage/ea-transfers.md#prerequisites-1).<br><br> • You can't transfer a savings plan purchased under an Enterprise Agreement enrollment that was bought in a non-USD currency. You can [change the savings plan scope](../savings-plan/manage-savings-plan.md#change-the-savings-plan-scope) so that it applies to other subscriptions. |
+| EA | MCA-E | • Transferring all enrollment products is completed as part of the MCA transition process from an EA. For more information, see [Complete Enterprise Agreement tasks in your billing account for a Microsoft Customer Agreement](mca-enterprise-operations.md).<br><br> • For details about how to transfer an EA enrollment to a Microsoft Customer Agreement (enterprise), see [Set up your billing account for a Microsoft Customer Agreement](mca-setup-account.md).<br><br> • If you want to transfer specific products but not all of the products in an enrollment, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation transfers with no currency change are supported. When there's a currency change during or after an enrollment transfer, reservations paid for monthly are canceled for the source enrollment. Cancellation happens at the time of the next monthly payment for an individual reservation. The cancellation is intentional and only affects monthly reservation purchases. For more information, see [Transfer Azure Enterprise enrollment accounts and subscriptions](../manage/ea-transfers.md#prerequisites-1).<br><br> • You can't transfer a savings plan purchased under an Enterprise Agreement enrollment that was bought in a non-USD currency. You can [change the savings plan scope](../savings-plan/manage-savings-plan.md#change-the-savings-plan-scope) so that it applies to other subscriptions. |
| EA | CSP (MCA managed by partner) | • Transfer is only allowed for direct EA to CSP (MCA managed by partner). A direct EA is signed between Microsoft and an EA customer.<br><br> • Only CSP direct bill partners certified as an [Azure Expert Managed Services Provider (MSP)](https://partner.microsoft.com/membership/azure-expert-msp) can request to transfer Azure products for their customers that have a Direct Enterprise Agreement (EA). For more information, see [Get billing ownership of Azure subscriptions to your MPA account](mpa-request-ownership.md). Product transfers are allowed only for customers that accepted a Microsoft Customer Agreement (MCA) and purchased an Azure plan with the CSP Program.<br><br> • Transfer from EA Government to CSP (MCA managed by partner) isn't supported.<br><br> • There are limitations and restrictions. For more information, see [Transfer EA subscriptions to a CSP partner](transfer-subscriptions-subscribers-csp.yml). |
| MCA-online | MOSP (pay-as-you-go) | • Microsoft doesn't support the transfer, so you must move resources yourself. For more information, see [Move resources to a new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md).<br><br> • Reservations and savings plans don't automatically transfer and transferring them isn't supported. |
| MCA-online | MCA-online | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation and savings plan transfers are supported. |
defender-for-iot On Premises Sentinel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrations/on-premises-sentinel.md
This article describes the legacy method for connecting your OT sensor or on-premises management console to Microsoft Sentinel. Stream data into Microsoft Sentinel whenever you want to use Microsoft Sentinel's advanced threat hunting, security analytics, and automation features when responding to security incidents and threats across your network. > [!IMPORTANT]
+> This feature will be deprecated in **January 2025**.
+>
> If you're using a cloud connected sensor, we recommend that you connect Defender for IoT data using the Microsoft Sentinel solution instead of the legacy integration method. For more information, see: > > - [OT threat monitoring in enterprise SOCs](../concept-sentinel-integration.md)
defender-for-iot Air Gapped Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-deploy/air-gapped-deploy.md
If you're an existing customer using an on-premises management console to manage
1. After your transition is complete, decommission the on-premises management console.
+### Retirement timeline of the Central Manager
-### Retirement timeline
+The on-premises management console will be retired on **January 1, 2025**, with the following updates and changes:
-The on-premises management console retirement includes the following details:
-
-- Sensor versions released after **January 1, 2025** won't be able to be managed by an on-premises management console.
-- Sensor software versions released between **January 1st, 2024 – January 1st, 2025** will continue to support an on-premises management console release.
-- Air-gapped sensors that cannot connect to the cloud can be managed directly via the sensor console, CLI, or API.
+- Sensor versions released after **January 1, 2025** won't be managed by an on-premises management console.
+- Air-gapped sensor support isn't affected by these changes to the on-premises management console support. We continue to support air-gapped deployments and assist with the transition to the cloud. The sensors retain a full user interface so that they can be used in "lights out" scenarios and continue to analyze and secure the network in the event of an outage.
+- Air-gapped sensors that can't connect to the cloud can be managed directly via the sensor console GUI, CLI, or API.
+- Sensor software versions released between **January 1st, 2024 – January 1st, 2025** still support the on-premises management console.
For more information, see [OT monitoring software versions](../release-notes.md).
dns Dns Get Started Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-get-started-template.md
description: Learn how to create a DNS zone and record in Azure DNS. This articl
Previously updated : 11/30/2023 Last updated : 11/05/2024
Azure PowerShell is used to deploy the template. In addition to Azure PowerShell
:::image type="content" source="./media/dns-getstarted-template/resource-group-dns-zone.png" alt-text="DNS zone deployment resource group":::
-1. Select the DNS zone with the suffix of `azurequickstart.org` to verify that the zone is created properly with an `A` record referencing the value of `1.2.3.4` and `1.2.3.5`.
+1. Select the DNS zone with the suffix of `azurequickstart.org` to verify that the zone is created properly with an `A` record referencing the value of `203.0.113.1` and `203.0.113.2`.
:::image type="content" source="./media/dns-getstarted-template/dns-zone-overview.png" alt-text="DNS zone deployment":::
Azure PowerShell is used to deploy the template. In addition to Azure PowerShell
:::image type="content" source="./media/dns-getstarted-template/dns-zone-validation.png" alt-text="DNS zone nslookup":::
-The host name `www.2lwynbseszpam.azurequickstart.org` resolves to `1.2.3.4` and `1.2.3.5`, just as you configured it. This result verifies that name resolution is working correctly.
+The host name `www.2lwynbseszpam.azurequickstart.org` resolves to `203.0.113.1` and `203.0.113.2`, just as you configured it. This result verifies that name resolution is working correctly.
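For reference, a lookup like the following produces that result; `ns1-01.azure-dns.com` is a placeholder for one of the name servers assigned to your zone:

```console
nslookup www.2lwynbseszpam.azurequickstart.org ns1-01.azure-dns.com
```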
## Clean up resources
dns Dns Zones Records https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-zones-records.md
ms.assetid: be4580d7-aa1b-4b6b-89a3-0991c0cda897
Previously updated : 10/30/2024 Last updated : 11/04/2024
When calling the Azure DNS REST API, you need to specify each TXT string separat
The multiple strings in a DNS record shouldn't be confused with the multiple TXT records in a TXT record set. A TXT record set can contain multiple records, *each of which* can contain multiple strings. Azure DNS supports a total string length of up to 4096 characters in each TXT record set (across all records combined).
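For example, with the Azure CLI you can add a TXT record whose space-separated values become the record's individual strings. This is a sketch with placeholder resource group, zone, and record set names:

```azurecli
az network dns record-set txt add-record \
  --resource-group MyResourceGroup \
  --zone-name contoso.com \
  --record-set-name demo \
  --value "first string of the record" "second string of the record"
```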
+### DS records
+
+The delegation signer (DS) record is a [DNSSEC](dnssec.md) resource record type that is used to secure a delegation. To create a DS record in a zone, the zone must first be signed with DNSSEC.
+
+### TLSA records
+
+A TLSA (Transport Layer Security Authentication) record is used to associate a TLS server certificate or public key with the domain name where the record is found. A TLSA record links the public key (a TLS server certificate) to the domain name, providing an additional layer of security for TLS connections.
+
+To use TLSA records effectively, [DNSSEC](dnssec.md) must be enabled on your domain. This ensures that the TLSA records can be trusted and properly validated.
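For reference, a TLSA record carries four fields: certificate usage, selector, matching type, and the certificate association data. The following sketch, in standard zone-file notation with a placeholder domain and digest, uses `3 1 1` (DANE-EE usage, SPKI selector, SHA-256 matching), a common combination:

```
_443._tcp.www.contoso.com. IN TLSA 3 1 1 <SHA-256 digest of the server's public key>
```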
+
## Tags and metadata

### Tags
event-hubs Azure Event Hubs Kafka Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/azure-event-hubs-kafka-overview.md
Standalone and without ksqlDB, Kafka Streams has fewer capabilities than many al
- [Apache Flink](event-hubs-kafka-flink-tutorial.md) - [Apache Flink on HDInsight on AKS](../hdinsight-aks/flink/flink-overview.md) - [Akka Streams](event-hubs-kafka-akka-streams-tutorial.md)
-
-The listed services and frameworks can generally acquire event streams and reference data directly from a diverse set of sources through adapters. Kafka Streams can only acquire data from Apache Kafka and your analytics projects are therefore locked into Apache Kafka. To use data from other sources, you're required to first import data into Apache Kafka with the Kafka Connect framework.
-
-If you must use the Kafka Streams framework on Azure, [Apache Kafka on HDInsight](../hdinsight/kafk) provides you with that option. Apache Kafka on HDInsight provides full control over all configuration aspects of Apache Kafka, while being fully integrated with various aspects of the Azure platform, from fault/update domain placement to network isolation to monitoring integration.
### Kafka Transactions
governance 2 Create Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/how-to/develop-custom-package/2-create-package.md
Parameters of the `New-GuestConfigurationPackage` cmdlet when creating Windows c
- **Type**: (`Audit`, `AuditAndSet`) Determines whether the configuration should only audit or if the configuration should change the state of the machine if it's out of the desired state. The default is `Audit`.
+- **FrequencyMinutes**: The frequency of evaluation of the package on the machine in minutes.
+- **FilesToInclude**: An array of paths to additional files to include in the generated package.
This step doesn't require elevation. The **Force** parameter is used to overwrite existing packages if you run the command more than once.
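For instance, here's a sketch that uses these parameters; the configuration path, file list, and name are placeholders:

```powershell
# Sketch: package a compiled DSC configuration (placeholder paths and names).
New-GuestConfigurationPackage `
  -Name 'MyConfig' `
  -Configuration './Config/MyConfig.mof' `
  -Type 'AuditAndSet' `
  -FrequencyMinutes 30 `
  -FilesToInclude @('./Modules/Helper.psm1') `
  -Force
```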
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/overview.md
You can view the per-setting results from configurations in the [Guest assignmen
If an Azure Policy assignment orchestrated the configuration, you can select the "Last evaluated resource" link on the ["Compliance details" page][07].
+## Enforcement Modes for Custom Policies
+
+To provide greater flexibility in enforcing and monitoring server settings, applications, and workloads, Machine Configuration offers three enforcement modes for each policy assignment, as described in the following table.
+
+| Mode | Description |
+|:--|:--|
+| Audit | Only report on the state of the machine |
+| Apply and Monitor | Configuration applied to the machine and then monitored for changes |
+| Apply and Autocorrect | Configuration applied to the machine and brought back into conformance in the event of drift |
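When you generate a custom policy definition with the GuestConfiguration PowerShell module, the enforcement mode maps to the `-Mode` parameter of `New-GuestConfigurationPolicy` (`Audit`, `ApplyAndMonitor`, or `ApplyAndAutoCorrect`). The following is a sketch; the content URI, GUID, names, and paths are placeholders:

```powershell
# Sketch: create a custom policy definition that applies the configuration
# and autocorrects drift (placeholder URI, GUID, and paths).
New-GuestConfigurationPolicy `
  -PolicyId '00000000-0000-0000-0000-000000000000' `
  -ContentUri 'https://<storage-account>.blob.core.windows.net/packages/MyConfig.zip' `
  -DisplayName 'My custom configuration' `
  -Description 'Applies MyConfig and autocorrects drift.' `
  -Path './policies' `
  -Platform 'Windows' `
  -PolicyVersion '1.0.0' `
  -Mode 'ApplyAndAutoCorrect'
```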
+
[A video walk-through of this document is available][08]. (Update coming soon)

## Enable machine configuration
If you prefer to deploy the extension and managed identity to a single machine,
To use machine configuration packages that apply configurations, Azure VM guest configuration extension version 1.26.24 or later is required.
+> [!IMPORTANT]
+> Creating a managed identity or assigning a policy with the "Guest Configuration
+> Resource Contributor" role requires appropriate Azure RBAC permissions.
+> To learn more about Azure Policy and Azure RBAC, see [role-based access control in Azure Policy][45].
### Limits set on the extension

To limit the extension from impacting applications running inside the machine, the machine
symbolic to represent new minor versions of Linux distributions.
\* Red Hat CoreOS isn't supported.

Machine configuration policy definitions support custom virtual machine images as long as they're
-one of the operating systems in the previous table.
+one of the operating systems in the previous table. Machine Configuration doesn't support
+virtual machine scale sets with uniform orchestration, but does support [VMSS Flex][46].
## Network requirements
Machine configuration built-in policy samples are available in the following loc
[42]: ./how-to/develop-custom-package/overview.md
[43]: ./how-to/create-policy-definition.md
[44]: ../policy/how-to/determine-non-compliance.md#compliance-details-for-guest-configuration
+[45]: ../policy/overview.md
+[46]: /azure/virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes#scale-sets-with-flexible-orchestration
iot-operations Howto Deploy Iot Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/deploy-iot-ops/howto-deploy-iot-operations.md
Use these steps if you chose the **Secure settings** option on the **Dependency
| Parameter | Value |
| -- | -- |
| **Subscription** | Select the subscription that contains your Azure key vault. |
- | **Azure Key Vault** | Select an Azure key vault select **Create new**.<br><br>Ensure that your key vault has **Vault access policy** as its permission model. To check this setting, select **Manage selected vault** > **Settings** > **Access configuration**. |
+ | **Azure Key Vault** | Select an Azure key vault or select **Create new**.<br><br>Ensure that your key vault has **Azure role-based access control** as its permission model. To check this setting, select **Manage selected vault** > **Settings** > **Access configuration**. |
| **User assigned managed identity for secrets** | Select an identity or select **Create new**. |
| **User assigned managed identity for AIO components** | Select an identity or select **Create new**. Don't use the same managed identity as the one you selected for secrets. |
iot-operations Overview Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/deploy-iot-ops/overview-deploy.md
At any point, you can upgrade an Azure IoT Operations instance to use secure set
A deployment with secure settings:
-* Enables secrets and user-assignment managed identity, which are important capabilities for developing a production-ready scenario. Secrets are used whenever Azure IoT Operations components connect to a resource outside of the cluster; for example, an OPC UA server or a dataflow endpoint.
+* Enables secrets and user-assigned managed identity, both of which are important capabilities for developing a production-ready scenario. Secrets are used whenever Azure IoT Operations components connect to a resource outside of the cluster; for example, an OPC UA server or a dataflow endpoint.
To deploy Azure IoT Operations with secure settings, follow these articles:
Azure IoT Operations is a suite of data services that run on Azure Arc-enabled e
* Installed dependencies
  * [Azure Device Registry](../discover-manage-assets/overview-manage-assets.md#store-assets-as-azure-resources-in-a-centralized-registry)
  * [Azure Container Storage enabled by Azure Arc](/azure/azure-arc/container-storage/overview)
- * Secret Sync Controller
+ * [Azure Key Vault Secret Store extension](/azure/azure-arc/kubernetes/secret-store-extension)
## Organize instances by using sites
iot-operations Howto Manage Assets Remotely https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/discover-manage-assets/howto-manage-assets-remotely.md
To sign in to the operations experience, go to the [operations experience](https
## Select your site
-After you sign in, the web UI displays a list of sites. Each site is a collection of Azure IoT Operations instances where you can configure and manage your assets. A site typically represents a physical location where you have physical assets deployed. Sites make it easier for you to locate and manage assets. Your [IT administrator is responsible for grouping instances in to sites](/azure/azure-arc/site-manager/overview). Any Azure IoT Operations instances that aren't assigned to a site appear in the **Unassigned instances** node. Select the site that you want to use:
+After you sign in, the operations experience displays a list of sites. Each site is a collection of Azure IoT Operations instances where you can configure and manage your assets. A site typically represents a physical location where you have physical assets deployed. Sites make it easier for you to locate and manage assets. Your [IT administrator is responsible for grouping instances into sites](/azure/azure-arc/site-manager/overview). Any Azure IoT Operations instances that aren't assigned to a site appear in the **Unassigned instances** node. Select the site that you want to use:
:::image type="content" source="media/howto-manage-assets-remotely/site-list.png" alt-text="Screenshot that shows a list of sites in the operations experience.":::
An Azure IoT Operations deployment can include an optional built-in OPC PLC simu
Run the following command: ```azurecli
-az iot ops asset endpoint create --name opc-ua-connector-0 --target-address opc.tcp://opcplc-000000:50000 -g {your resource group name} --cluster {your cluster name}
+az iot ops asset endpoint opcua create --name opc-ua-connector-0 --target-address opc.tcp://opcplc-000000:50000 -g {your resource group name} --instance {your instance name}
```

> [!TIP]
To use the `UsernamePassword` authentication mode, complete the following steps:
1. Use a command like the following example to create your asset endpoint:

    ```azurecli
- az iot ops asset endpoint create --name opc-ua-connector-0 --target-address opc.tcp://opcplc-000000:50000 -g {your resource group name} --cluster {your cluster name} --username-ref "aio-opc-ua-broker-user-authentication/username" --password-ref "aio-opc-ua-broker-user-authentication/password"
+ az iot ops asset endpoint opcua create --name opc-ua-connector-0 --target-address opc.tcp://opcplc-000000:50000 -g {your resource group name} --instance {your instance name} --username-ref "aio-opc-ua-broker-user-authentication/username" --password-ref "aio-opc-ua-broker-user-authentication/password"
```
You can import up to 1000 OPC UA tags at a time from a CSV file:
# [Azure CLI](#tab/cli)
-Use the following command to add a "thermostat" asset by using the Azure CLI. The command adds two tags to the asset by using the `--data` parameter:
+Use the following commands to add a "thermostat" asset by using the Azure CLI. The commands add two tags/datapoints to the asset by using the `point add` command:
```azurecli
-az iot ops asset create --name thermostat -g {your resource group name} --cluster {your cluster name} --endpoint opc-ua-connector-0 --description 'A simulated thermostat asset' --data data_source='ns=3;s=FastUInt10', name=temperature --data data_source='ns=3;s=FastUInt100', name='Tag 10'
+# Create the asset
+az iot ops asset create --name thermostat -g {your resource group name} --instance {your instance name} --endpoint opc-ua-connector-0 --description 'A simulated thermostat asset'
+
+# Add the datapoints
+az iot ops asset dataset point add --asset thermostat -g {your resource group name} --dataset default --data-source 'ns=3;s=FastUInt10' --name temperature
+az iot ops asset dataset point add --asset thermostat -g {your resource group name} --dataset default --data-source 'ns=3;s=FastUInt100' --name 'Tag 10'
+
+# Show the datapoints
+az iot ops asset dataset show --asset thermostat -n default -g {your resource group name}
```

When you create an asset by using the Azure CLI, you can define:

-- Multiple tags by using the `--data` parameter multiple times.
+- Multiple datapoints/tags by using the `point add` command multiple times.
- Multiple events by using the `--event` parameter multiple times.
- Optional information for the asset such as:
  - Manufacturer
When you create an asset by using the Azure CLI, you can define:
- Serial number
- Documentation URI
- Default values for sampling interval, publishing interval, and queue size.
-- Tag specific values for sampling interval, publishing interval, and queue size.
+- Datapoint specific values for sampling interval, publishing interval, and queue size.
- Event specific values for sampling interval, publishing interval, and queue size.
- The observability mode for each tag and event
Review your asset and OPC UA tag and event details and make any adjustments you
# [Azure CLI](#tab/cli)
-When you create an asset by using the Azure CLI, you can define multiple events by using the `--event` parameter multiple times. The syntax for the `--event` parameter is similar to the `--data` parameter:
+When you create an asset by using the Azure CLI, you can define multiple events by using the `--event` parameter multiple times:
```azurecli
az iot ops asset create --name thermostat -g {your resource group name} --instance {your instance name} --endpoint opc-ua-connector-0 --description 'A simulated thermostat asset' --event event_notifier='ns=3;s=FastUInt12', name=warning
```
For each event that you define, you can specify the:
- Observability mode.
- Queue size.
+You can also use the [az iot ops asset event](/cli/azure/iot/ops/asset/event) commands to add and remove events from an asset, as sketched below.
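For illustration, a hypothetical invocation following the same pattern as the dataset point commands; check the [az iot ops asset event](/cli/azure/iot/ops/asset/event) reference for the exact flags, which are assumptions here:

```azurecli
az iot ops asset event add --asset thermostat -g {your resource group name} --event-notifier 'ns=3;s=FastUInt12' --name warning
```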
+ ## Update an asset
az iot ops asset update --name thermostat --description 'A simulated thermostat
To list the thermostat asset's tags, use the following command: ```azurecli
-az iot ops asset data-point list --asset thermostat -g {your resource group}
+az iot ops asset dataset show --asset thermostat --name default -g {your resource group}
```

To list the thermostat asset's events, use the following command:
az iot ops asset event list --asset thermostat -g {your resource group}

To add a new tag to the thermostat asset, use a command like the following example:

```azurecli
To add a new tag to the thermostat asset, use a command like the following example: ```azurecli
-az iot ops asset data-point add --asset thermostat -g {your resource group} --data-source 'ns=3;s=FastUInt1002' --name 'humidity'
+az iot ops asset dataset point add --asset thermostat -g {your resource group name} --dataset default --data-source 'ns=3;s=FastUInt1002' --name 'humidity'
```
-To delete a tag, use the `az iot ops asset data-point remove` command.
+To delete a tag, use the `az iot ops asset dataset point remove` command.
You can manage an asset's events by using the `az iot ops asset event` commands.
iot-operations Tutorial Get Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/end-to-end-tutorials/tutorial-get-insights.md
- ignite-2023 Previously updated : 10/01/2024 Last updated : 11/04/2024 #CustomerIntent: As an OT user, I want to create a visual report for my processed OPC UA data that I can use to analyze and derive insights from it.
Follow these steps to check your work so far, and make sure data is flowing into
:::image type="content" source="media/tutorial-get-insights/source-added-data.png" alt-text="Screenshot of the eventstream with data from the AzureEventHub source."::: >[!TIP]
->If data has not arrived in your eventstream, you may want to check your event hub activity to verify that it's receiving messages. This will help you isolate which section of the flow to debug.
+>If data has not arrived in your eventstream, you may want to check your event hub activity to [verify that it's receiving messages](tutorial-upload-telemetry-to-cloud.md#verify-data-is-flowing). This will help you isolate which section of the flow to debug.
### Prepare KQL resources
If you're continuing on to the next tutorial, keep all of your resources.
> [!NOTE]
> The resource group contains the Event Hubs namespace you created in this tutorial.
-You can also delete your Microsoft Fabric workspace and/or all the resources within it associated with this tutorial, including the eventstream, Eventhouse, and Real-Time Dashboard.
+You can also delete your Microsoft Fabric workspace and/or all the resources within it associated with this tutorial, including the eventstream, eventhouse, and Real-Time Dashboard.
iot-operations Quickstart Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/get-started-end-to-end-sample/quickstart-configure.md
If there's no data flowing, restart the `aio-opc-opc.tcp-1` pod:
kubectl delete pod aio-opc-opc.tcp-1-849dd78866-vhmz6 -n azure-iot-operations
```
-The sample tags you added in the previous quickstart generate messages from your asset that look like the following example:
+The sample asset you added earlier in this quickstart generates messages that look like the following example:
```json
{
- "temperature": {
- "SourceTimestamp": "2024-08-02T13:52:15.1969959Z",
- "Value": 2696
- },
- "Tag 10": {
- "SourceTimestamp": "2024-08-02T13:52:15.1970198Z",
- "Value": 2696
- }
+ "Temperature":{
+ "SourceTimestamp":"2024-11-04T21:30:31.9454188Z",
+ "Value":357
+ },
+ "FillWeight":{
+ "SourceTimestamp":"2024-11-04T21:30:31.9455619Z",
+ "Value":357
+ },
+ "EnergyUse":{
+ "SourceTimestamp":"2024-11-04T21:30:31.9455641Z",
+ "Value":357
+ }
}
```
iot-operations Quickstart Get Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/get-started-end-to-end-sample/quickstart-get-insights.md
- ignite-2023 Previously updated : 10/30/2024 Last updated : 11/04/2024 #CustomerIntent: As an OT user, I want to create a visual report for my processed OPC UA data that I can use to analyze and derive insights from it.
Next, add your event hub from the previous quickstart as a data source for the e
Follow the steps in [Add Azure Event Hubs source to an eventstream](/fabric/real-time-intelligence/event-streams/add-source-azure-event-hubs?pivots=standard-capabilities#add-an-azure-event-hub-as-a-source) to add the event source. Keep the following notes in mind:

- You'll be creating a new cloud connection with Shared Access Key authentication.
- - Make sure local authentication is enabled on your event hub. You can set this from its Overview page in the Azure portal.
+ - Make sure local authentication is enabled on your Event Hubs namespace. You can set this from its Overview page in the Azure portal.
- For **Consumer group**, use the default selection (*$Default*).
- For **Data format**, choose *Json* (it might be selected already by default).
Follow these steps to check your work so far, and make sure data is flowing into
:::image type="content" source="media/quickstart-get-insights/source-added-data.png" alt-text="Screenshot of the eventstream with data from the AzureEventHub source."::: >[!TIP]
->If data has not arrived in your eventstream, you may want to check your event hub activity to verify that it's receiving messages. This will help you isolate which section of the flow to debug.
+>If data has not arrived in your eventstream, you may want to check your event hub activity to [verify that it's receiving messages](quickstart-configure.md#verify-data-is-flowing-to-event-hubs). This will help you isolate which section of the flow to debug.
### Prepare KQL resources
In this section, you create a KQL database in your Microsoft Fabric workspace to
| Column name | Data type |
| -- | -- |
| AssetId | string |
- | Spike | boolean |
+ | Spike | bool |
| Temperature | decimal |
| FillWeight | decimal |
| EnergyUse | decimal |
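For reference, together with the `Timestamp` column that the later queries rely on, this schema corresponds to a KQL management command along these lines (a sketch, assuming you create the table by command rather than through the eventstream UI):

```kql
.create table OPCUA (
    AssetId: string,
    Spike: bool,
    Temperature: decimal,
    FillWeight: decimal,
    EnergyUse: decimal,
    Timestamp: datetime
)
```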
Next, configure some parameters for your dashboard so that the visuals can be fi
OPCUA | summarize by AssetId ```
- * **Value column**: *AssetId*
+ * **Value column**: *AssetId (string)*
* **Default value**: *Select first value of query*

1. Select **Done** to save your parameter.
Next, add a tile to your dashboard to show a line chart of temperature and its s
```kql
OPCUA
| where Timestamp between (_startTime .. _endTime)
+ | where AssetId == _asset
| project Timestamp, Temperature, Spike
| extend SpikeMarker = iff(Spike == true, Temperature, decimal(null))
```
Next, add a tile to your dashboard to show a line chart of temperature and its s
- **X columns**: *Timestamp (datetime)* (already inferred by default)
- **Series columns**: Leave the default inferred value.
- **Y Axis**:
- - **Label**: *Units*
+ - **Label**: *Temperature (°F)*
- **X Axis**:
  - **Label**: *Timestamp*
Next, add a tile to your dashboard to show a line chart of temperature and its s
View the finished tile on your dashboard.

### Create max value tile

Next, create a tile to display a real-time spike indicator for temperature.
Next, create a tile to display a real-time spike indicator for temperature.
| project Timestamp, Temperature, Spike
```
- **Run** the query to verify that a maximum temperature can be found.
+ **Run** the query to verify that data can be found.
1. Select **+ Add visual** to add a visual for this data. Create a visual with the following characteristics:
   - **Tile name**: *Spike indicator*
Next, create a tile to display a real-time spike indicator for temperature.
Select **Apply changes** to create the tile.
-1. View the finished tile on your dashboard (you may want to resize the tile so the full text is visible).
+1. View the finished tile on your dashboard (you may want to resize the tile so the full text is visible). The tile will always display the most recent temperature value, but the conditional formatting will only be triggered if that value is a spike.
+
+ :::image type="content" source="media/quickstart-get-insights/dashboard-2.png" alt-text="Screenshot of the dashboard with the stat tile.":::
1. **Save** your completed dashboard.
Below are some more queries that you can use to add additional tiles to your das
| where AssetId == _asset
| project Timestamp, Temperature, FillWeight
```
+* Query for a line chart tile, *Temperature (F) vs. Energy Use*:
+ ```kql
+ OPCUA
+ | where Timestamp between (_startTime.._endTime)
+ | where AssetId == _asset
+ | project Timestamp, Temperature, EnergyUse
+ ```
* Query for a stat tile, *Max temperature*:
  ```kql
  OPCUA
If you're continuing on to the next quickstart, keep all of your resources.
> [!NOTE]
> The resource group contains the Event Hubs namespace you created in this quickstart.
-You can also delete your Microsoft Fabric workspace and/or all the resources within it associated with this quickstart, including the eventstream, Eventhouse, and Real-Time Dashboard.
+You can also delete your Microsoft Fabric workspace and/or all the resources within it associated with this quickstart, including the eventstream, eventhouse, and Real-Time Dashboard.
iot-operations Howto Configure Brokerlistener https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-mqtt-broker/howto-configure-brokerlistener.md
The following is an example of a BrokerListener resource that enables TLS on por
# [Portal](#tab/portal)
-1. In the Azure portal, navigate to your IoT Operations instance.
+1. In the Azure portal, go to your IoT Operations instance.
1. Under **Azure IoT Operations resources**, select **MQTT Broker**.
1. Select or create a listener. You can only create one listener per service type. If you already have a listener of the same service type, you can add more ports to the existing listener.
1. You can add TLS settings to the listener by selecting **TLS** on an existing port or by adding a new port.
iot-operations Howto Test Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-mqtt-broker/howto-test-connection.md
To remove the pod, run `kubectl delete pod mqtt-client -n azure-iot-operations`.
## Connect clients from outside the cluster
-Since the [default broker listener](howto-configure-brokerlistener.md#default-brokerlistener) is set to *ClusterIp* service type, you can't connect to the broker from outside the cluster directly. To prevent unintentional disruption to communication between internal AIO components, we recommend keeping the default listener unmodified and dedicated for AIO internal communication. While it's possible to create a separate Kubernetes *LoadBalancer* service to expose the cluster IP service, it's better to create a separate listener with different settings, like more common MQTT port 1883 and 8883, to avoid confusion and potential security risks.
+Since the [default broker listener](howto-configure-brokerlistener.md#default-brokerlistener) is set to *ClusterIp* service type, you can't connect to the broker from outside the cluster directly. To prevent unintentional disruption to communication between internal Azure IoT Operations components, we recommend keeping the default listener unmodified and dedicated for AIO internal communication. While it's possible to create a separate Kubernetes *LoadBalancer* service to expose the cluster IP service, it's better to create a separate listener with different settings, like more common MQTT port 1883 and 8883, to avoid confusion and potential security risks.
### Node port

The easiest way to test connectivity is to use the *NodePort* service type in the listener. With that, you can use `<nodeExternalIP>:<NodePort>` to connect, as shown in the [Kubernetes documentation](https://kubernetes.io/docs/tutorials/services/connect-applications-service/#exposing-the-service).
-For example, to create a new BrokerListener with *NodePort* service type and port 1883, create a file named `broker-nodeport.yaml` with configuration like the following, replacing placeholders with your own values, including your own authentication and TLS settings.
+For example, create a new broker listener with node port service type listening on port 1883:
+
+# [Portal](#tab/portal)
+
+1. In the Azure portal, go to your IoT Operations instance.
+1. Under **Azure IoT Operations resources**, select **MQTT Broker**.
+1. Select **MQTT broker listener for NodePort** > **Create**. You can only create one listener per service type. If you already have a listener of the same service type, you can add more ports to the existing listener.
+
+ > [!CAUTION]
+ > Setting authentication to **None** and not configuring TLS [turns off authentication and TLS for testing purposes only.](#only-turn-off-tls-and-authentication-for-testing)
+
+ Enter the following settings:
+
+ | Setting | Value |
+ | -- | |
+ | Name | nodeport |
+ | Service name | aio-broker-nodeport |
+ | Port | 1883 |
+ | Authentication | Choose **default** or **None** |
+ | Authorization | Choose **default** |
+ | Protocol | Choose **MQTT** |
+ | Node port | 31883 |
+
+1. Add TLS settings to the listener by selecting **TLS** on the port.
+
+ | Setting | Description |
+ | -- | |
+ | TLS | Select the *Add* button. |
+ | TLS mode | Choose **Manual** or **Automatic**. |
+ | Issuer name | Name of the cert-manager issuer. Required. |
+ | Issuer kind | Kind of the cert-manager issuer. Required. |
+ | Issuer group | Group of the cert-manager issuer. Required. |
+ | Private key algorithm | Algorithm for the private key. |
+ | Private key rotation policy | Policy for rotating the private key. |
+ | DNS names | DNS subject alternate names for the certificate. |
+ | IP addresses | IP addresses of the subject alternate names for the certificate. |
+ | Secret name | Kubernetes secret containing an X.509 client certificate. |
+ | Duration | Total lifetime of the TLS server certificate. Defaults to 90 days. |
+ | Renew before | When to begin renewing the certificate. |
+
+1. Select **Apply** to save the TLS settings.
+1. Select **Create** to create the listener.
+
+# [Bicep](#tab/bicep)
> [!CAUTION]
-> Removing `authenticationRef` and `tls` settings from the configuration [will turn off authentication and TLS for testing purposes only.](#only-turn-off-tls-and-authentication-for-testing)
+> Removing `authenticationRef` and `tls` settings from the configuration [turns off authentication and TLS for testing purposes only.](#only-turn-off-tls-and-authentication-for-testing)
+
+```bicep
+param aioInstanceName string = '<AIO_INSTANCE_NAME>'
+param customLocationName string = '<CUSTOM_LOCATION_NAME>'
+param listenerServiceName string = 'aio-broker-nodeport'
+param listenerName string = 'nodeport'
+
+resource aioInstance 'Microsoft.IoTOperations/instances@2024-09-15-preview' existing = {
+ name: aioInstanceName
+}
+
+resource customLocation 'Microsoft.ExtendedLocation/customLocations@2021-08-31-preview' existing = {
+ name: customLocationName
+}
+
+resource defaultBroker 'Microsoft.IoTOperations/instances/brokers@2024-09-15-preview' existing = {
+ parent: aioInstance
+ name: 'default'
+}
+
+resource nodePortListener 'Microsoft.IoTOperations/instances/brokers/listeners@2024-09-15-preview' = {
+ parent: defaultBroker
+ name: listenerName
+ extendedLocation: {
+ name: customLocation.id
+ type: 'CustomLocation'
+ }
+
+ properties: {
+ serviceName: listenerServiceName
+ serviceType: 'NodePort'
+ ports: [
+ {
+ authenticationRef: 'default'
+ port: 1883
+ nodePort: 31883
+ tls: {
+ mode: 'Manual'
+ manual: {
+ secretRef: 'server-cert-secret'
+ }
+ }
+ }
+ ]
+ }
+}
+
+```
+
+Deploy the Bicep file using Azure CLI.
+
+```azurecli
+az deployment group create --resource-group <RESOURCE_GROUP> --template-file <FILE>.bicep
+```
+
+# [Kubernetes](#tab/kubernetes)
-<!-- TODO: Bicep and portal -->
+Create a file named `broker-nodeport.yaml` with the following configuration. Replace placeholders with your own values, including your own authentication and TLS settings.
+
+> [!CAUTION]
+> Removing `authenticationRef` and `tls` settings from the configuration [turns off authentication and TLS for testing purposes only.](#only-turn-off-tls-and-authentication-for-testing)
```yaml
apiVersion: mqttbroker.iotoperations.azure.com/v1beta1
kind: BrokerListener
metadata:
- name: broker-nodeport
+ name: nodeport
  namespace: azure-iot-operations
spec:
  brokerRef: default
  serviceType: NodePort
- serviceName: broker-nodeport
+ serviceName: aio-broker-nodeport
  ports:
  - port: 1883
    nodePort: 31883 # Must be in the range 30000-32767
- authenticationRef: # Add BrokerAuthentication reference
- tls:
- # Add TLS settings
+ authenticationRef: default # Add BrokerAuthentication reference
+ tls: # Add TLS settings
```

Then, use `kubectl` to deploy the configuration:
Then, use `kubectl` to deploy the configuration:
kubectl apply -f broker-nodeport.yaml
```
-Next, get the node's external IP address:
++
+Get the node's external IP address:
```bash
kubectl get nodes -o yaml | grep ExternalIP -C 1
Then, use the internal IP address and the node port to connect to the broker fro
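For example, a sketch that publishes a test message through the node port configured earlier; substitute the external IP from the previous step and add TLS and authentication options to match your listener:

```bash
mosquitto_pub --host <nodeExternalIP> --port 31883 --message "hello" --topic "world" --debug
```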
### Load balancer
-Another way to expose the broker to the internet is to use the *LoadBalancer* service type. This method is more complex and might require additional configuration, like setting up port forwarding. For example, to create a new BrokerListener with *LoadBalancer* service type and port 1883, create a file named `broker-loadbalancer.yaml` with configuration like the following, replacing placeholders with your own values, including your own authentication and TLS settings.
+Another way to expose the broker to the internet is to use the *LoadBalancer* service type. This method is more complex and might require additional configuration, like setting up port forwarding.
+
+For example, to create a new broker listener with load balancer service type listening on port 1883:
+
+# [Portal](#tab/portal)
+
+1. In the Azure portal, go to your IoT Operations instance.
+1. Under **Azure IoT Operations resources**, select **MQTT Broker**.
+1. Select **MQTT broker listener for LoadBalancer** > **Create**. You can only create one listener per service type. If you already have a listener of the same service type, you can add more ports to the existing listener.
+
+ > [!CAUTION]
+ > Setting authentication to **None** and not configuring TLS [turns off authentication and TLS for testing purposes only.](#only-turn-off-tls-and-authentication-for-testing)
+
+ Enter the following settings:
+
+ | Setting | Value |
+ | -- | |
+ | Name | loadbalancer |
+ | Service name | aio-broker-loadbalancer |
+ | Port | 1883 |
+ | Authentication | Choose **default** |
+ | Authorization | Choose **default** or **None** |
+ | Protocol | Choose **MQTT** |
+
+1. You can add TLS settings to the listener by selecting **TLS** on the port.
+
+ | Setting | Description |
+ | -- | |
+ | TLS | Select the *Add* button. |
+ | TLS mode | Choose **Manual** or **Automatic**. |
+ | Issuer name | Name of the cert-manager issuer. Required. |
+ | Issuer kind | Kind of the cert-manager issuer. Required. |
+ | Issuer group | Group of the cert-manager issuer. Required. |
+ | Private key algorithm | Algorithm for the private key. |
+ | Private key rotation policy | Policy for rotating the private key. |
+ | DNS names | DNS subject alternate names for the certificate. |
+ | IP addresses | IP addresses of the subject alternate names for the certificate. |
+ | Secret name | Kubernetes secret containing an X.509 client certificate. |
+ | Duration | Total lifetime of the TLS server certificate. Defaults to 90 days. |
+ | Renew before | When to begin renewing the certificate. |
+
+1. Select **Apply** to save the TLS settings.
+1. Select **Create** to create the listener.
+
+# [Bicep](#tab/bicep)
> [!CAUTION]
-> Removing `authenticationRef` and `tls` settings from the configuration [will turn off authentication and TLS for testing purposes only.](#only-turn-off-tls-and-authentication-for-testing)
+> Removing `authenticationRef` and `tls` settings from the configuration [turns off authentication and TLS for testing purposes only.](#only-turn-off-tls-and-authentication-for-testing)
+
+```bicep
+param aioInstanceName string = '<AIO_INSTANCE_NAME>'
+param customLocationName string = '<CUSTOM_LOCATION_NAME>'
+param listenerServiceName string = 'aio-broker-loadbalancer'
+param listenerName string = 'loadbalancer'
+
+resource aioInstance 'Microsoft.IoTOperations/instances@2024-09-15-preview' existing = {
+ name: aioInstanceName
+}
+
+resource customLocation 'Microsoft.ExtendedLocation/customLocations@2021-08-31-preview' existing = {
+ name: customLocationName
+}
+
+resource defaultBroker 'Microsoft.IoTOperations/instances/brokers@2024-09-15-preview' existing = {
+ parent: aioInstance
+ name: 'default'
+}
+
+resource loadBalancerListener 'Microsoft.IoTOperations/instances/brokers/listeners@2024-09-15-preview' = {
+ parent: defaultBroker
+ name: listenerName
+ extendedLocation: {
+ name: customLocation.id
+ type: 'CustomLocation'
+ }
+
+ properties: {
+ serviceName: listenerServiceName
+ serviceType: 'LoadBalancer'
+ ports: [
+ {
+ authenticationRef: 'default'
+ port: 1883
+ tls: {
+ mode: 'Manual'
+ manual: {
+ secretRef: 'server-cert-secret'
+ }
+ }
+ }
+ ]
+ }
+}
+
+```
+
+Deploy the Bicep file using Azure CLI.
+
+```azurecli
+az deployment group create --resource-group <RESOURCE_GROUP> --template-file <FILE>.bicep
+```
+
+# [Kubernetes](#tab/kubernetes)
+
+> [!CAUTION]
+> Removing `authenticationRef` and `tls` settings from the configuration [turns off authentication and TLS for testing purposes only.](#only-turn-off-tls-and-authentication-for-testing)
+
+Create a file named `broker-loadbalancer.yaml` with configuration like the following, replacing placeholders with your own values, including your own authentication and TLS settings.
```yaml
apiVersion: mqttbroker.iotoperations.azure.com/v1beta1
kind: BrokerListener
metadata:
- name: broker-loadbalancer
+ name: loadbalancer
  namespace: azure-iot-operations
spec:
  brokerRef: default
  serviceType: LoadBalancer
- serviceName: broker-loadbalancer
+ serviceName: aio-broker-loadbalancer
  ports:
  - port: 1883
- authenticationRef: # Add BrokerAuthentication reference
- tls:
- # Add TLS settings
+ authenticationRef: default # Add BrokerAuthentication reference
+ tls: # Add TLS settings
```
-Then, use `kubectl` to deploy the configuration:
+Use `kubectl` to deploy the configuration:
```bash
kubectl apply -f broker-loadbalancer.yaml
```
-Next, get the external IP address for the broker's service:
++
+Get the external IP address for the broker's service:
```bash
-kubectl get service broker-loadbalancer --namespace azure-iot-operations
+kubectl get service aio-broker-loadbalancer --namespace azure-iot-operations
```

If the output looks similar to the following:

```output
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-broker-loadbalancer LoadBalancer 10.43.213.246 172.19.0.2 1883:30382/TCP 83s
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+aio-broker-loadbalancer LoadBalancer 10.x.x.x x.x.x.x 1883:30382/TCP 83s
```

This means that an external IP has been assigned to the load balancer service, and you can use the external IP address and the port to connect to the broker. For example, to publish a message to the broker:
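The following sketch assumes the external IP from the previous output and an insecure test listener; add TLS and authentication options to match your configuration:

```bash
mosquitto_pub --host <EXTERNAL_IP> --port 1883 --message "hello" --topic "world" --debug
```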
For Azure Kubernetes Services Edge Essentials, you need to perform a few additio
```Output
NAME                  TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
- broker-loadbalancer LoadBalancer 10.43.107.11 192.168.0.4 1883:30366/TCP 14h
+ broker-loadbalancer LoadBalancer 10.x.x.x 192.168.0.4 1883:30366/TCP 14h
```

1. Set up port forwarding to the `broker-loadbalancer` service on the external IP address `192.168.0.4` and port `1883`:
mosquitto_pub --host localhost --port 1883 --message "hello" --topic "world" --d
The reason that MQTT broker uses TLS and service account authentication by default is to provide a secure-by-default experience that minimizes inadvertent exposure of your IoT solution to attackers. You shouldn't turn off TLS and authentication in production. Exposing MQTT broker to the internet without authentication and TLS can lead to unauthorized access and even DDoS attacks.
-If you understand the risks and need to use an insecure port in a well-controlled environment, you can turn off TLS and authentication for testing purposes by removing the `tls` and `authenticationRef` settings from the listener configuration.
+> [!WARNING]
+> If you understand the risks and need to use an insecure port in a well-controlled environment, you can turn off TLS and authentication for testing purposes by removing the `tls` and `authenticationRef` settings from the listener configuration.
+
+# [Portal](#tab/portal)
+
+1. In the Azure portal, go to your IoT Operations instance.
+1. Under **Azure IoT Operations resources**, select **MQTT Broker**.
+1. Select **MQTT broker listener for NodePort** > **Create**. You can only create one listener per service type. If you already have a listener of the same service type, you can add more ports to the existing listener.
+
+ > [!CAUTION]
+ > Setting authentication to **None** and not configuring TLS [turns off authentication and TLS for testing purposes only.](#only-turn-off-tls-and-authentication-for-testing)
+
+ Enter the following settings:
+
+ | Setting | Value |
+ | -- | -- |
+ | Name | Enter a name for the listener |
+ | Service name | Enter a service name |
+ | Port | 1883 |
+ | Authentication | Choose **None** |
+ | Authorization | Choose **None** |
+ | Protocol | Choose **MQTT** |
+ | Node port | 31883 if using node port |
+
+1. Select **Create** to create the listener.
+
+# [Bicep](#tab/bicep)
+
+> [!CAUTION]
+> Removing `authenticationRef` and `tls` settings from the configuration [turns off authentication and TLS for testing purposes only.](#only-turn-off-tls-and-authentication-for-testing)
+
+```bicep
+param aioInstanceName string = '<AIO_INSTANCE_NAME>'
+param customLocationName string = '<CUSTOM_LOCATION_NAME>'
+param listenerServiceName string = '<SERVICE_NAME>'
+param listenerName string = '<LISTENER_NAME>'
+
+resource aioInstance 'Microsoft.IoTOperations/instances@2024-09-15-preview' existing = {
+ name: aioInstanceName
+}
+
+resource customLocation 'Microsoft.ExtendedLocation/customLocations@2021-08-31-preview' existing = {
+ name: customLocationName
+}
+
+resource defaultBroker 'Microsoft.IoTOperations/instances/brokers@2024-09-15-preview' existing = {
+ parent: aioInstance
+ name: 'default'
+}
+
+resource nodePortListener 'Microsoft.IoTOperations/instances/brokers/listeners@2024-09-15-preview' = {
+ parent: defaultBroker
+ name: listenerName
+ extendedLocation: {
+ name: customLocation.id
+ type: 'CustomLocation'
+ }
+
+ properties: {
+ serviceName: listenerServiceName
+ serviceType: <SERVICE_TYPE> // 'LoadBalancer' or 'NodePort'
+ ports: [
+ {
+ port: 1883
+ nodePort: 31883 //If using NodePort
+ // Omitting authenticationRef and tls for testing only
+ }
+ ]
+ }
+}
+
+```
+
+Deploy the Bicep file using Azure CLI.
+
+```azurecli
+az deployment group create --resource-group <RESOURCE_GROUP> --template-file <FILE>.bicep
+```
+
+# [Kubernetes](#tab/kubernetes)
```yaml
apiVersion: mqttbroker.iotoperations.azure.com/v1beta1
kind: BrokerListener
metadata:
-  name: <NAME>
+  name: <LISTENER_NAME>
  namespace: azure-iot-operations
spec:
  brokerRef: default
  serviceType: <SERVICE_TYPE> # LoadBalancer or NodePort
-  serviceName: <NAME>
+  serviceName: <SERVICE_NAME>
  ports:
  - port: 1883
    nodePort: 31883 # If using NodePort
    # Omitting authenticationRef and tls for testing only
```

## Related content

- [Configure TLS with manual certificate management to secure MQTT communication](howto-configure-tls-manual.md)
migrate Add Server Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/add-server-credentials.md
ms. Previously updated : 05/15/2024- Last updated : 11/05/2024+ # Provide server credentials to discover software inventory, dependencies, web apps, and SQL Server instances and databases
Feature | Windows credentials | Linux credentials
**Software inventory** | Guest user account | Regular/normal user account (nonsudo access permissions)
**Discovery of SQL Server instances and databases** | User account that is a member of the sysadmin server role or has [these permissions](migrate-support-matrix-vmware.md?tabs=businesscase&pivots=sql-server-instance-database-discovery-requirements#configure-the-custom-login-for-sql-server-discovery) for each SQL Server instance. | _Not supported currently_
**Discovery of ASP.NET web apps** | Domain or nondomain (local) account with administrative permissions | _Not supported currently_
-**Agentless dependency analysis** | Domain or nondomain (local) account with administrative permissions | Sudo user account with permissions to execute ls and netstat commands. When providing a sudo user account, ensure that you have enabled **NOPASSWD** for the account to run the required commands without prompting for a password every time the sudo command is invoked. <br /><br /> Alternatively, you can create a user account that has the CAP_DAC_READ_SEARCH and CAP_SYS_PTRACE permissions on /bin/netstat and /bin/ls files, set using the following commands:<br /><code>sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/ls<br /> sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/netstat</code>
+**Agentless dependency analysis** | Local or domain guest user account | Sudo user account with permissions to execute ls and netstat commands. When providing a sudo user account, ensure that you have enabled **NOPASSWD** for the account to run the required commands without prompting for a password every time the sudo command is invoked. <br /><br /> Alternatively, you can create a user account that has the CAP_DAC_READ_SEARCH and CAP_SYS_PTRACE permissions on /bin/netstat and /bin/ls files, set using the following commands:<br /><code>sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/ls<br /> sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/netstat</code>
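
For the Linux capability-based alternative in the table above, a short sketch that applies the commands and then verifies the result (assumes root access on the target server):

```bash
# Grant the required capabilities to the binaries used by dependency analysis.
sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/ls
sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/netstat

# Verify that both binaries now carry the capabilities.
getcap /bin/ls /bin/netstat
```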
### Recommended practices to provide credentials
operator-nexus Howto Disable Cgroupsv2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-disable-cgroupsv2.md
spec:
CGROUP_VERSION=`stat -fc %T /sys/fs/cgroup/`
if [ "$CGROUP_VERSION" == "cgroup2fs" ]; then
  echo "Using v2, reverting..."
- sed -i 's/systemd.unified_cgroup_hierarchy=1 cgroup_no_v1=all/systemd.unified_cgroup_hierarchy=0/' /boot/grub2/grub.cfg
+ if uname -r | grep -q "cm2"; then
+ echo "Detected Azure Linux OS version older than v3"
+ sed -i 's/systemd.unified_cgroup_hierarchy=1 cgroup_no_v1=all/systemd.unified_cgroup_hierarchy=0/' /boot/grub2/grub.cfg
+ else
+ sed -i 's/systemd.unified_cgroup_hierarchy=1 cgroup_no_v1=all/systemd.unified_cgroup_hierarchy=0/' /etc/default/grub
+ grub2-mkconfig -o /boot/grub2/grub.cfg
+ if ! grep -q systemd.unified_cgroup_hierarchy=0 /boot/grub2/grub.cfg; then
+ echo "failed to update grub2 config"
+ exit 1
+ fi
+ fi
  reboot
fi
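
After the node reboots, you can confirm the change with the same check the script uses:

```bash
# Prints "cgroup2fs" when cgroup v2 is active; any other value indicates
# that the node reverted to cgroup v1.
stat -fc %T /sys/fs/cgroup/
```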
operator-service-manager Get Started With Cluster Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/get-started-with-cluster-registry.md
az k8s-extension create --cluster-name
[--config global.networkfunctionextension.clusterRegistry.storageClassName=]
[--config global.networkfunctionextension.clusterRegistry.storageSize=]
[--config global.networkfunctionextension.webhook.pod.mutation.matchConditionExpression=]
+ [--config global.networkfunctionextension.clusterRegistry.clusterRegistryGCCadence=]
+ [--config global.networkfunctionextension.clusterRegistry.clusterRegistryGCThreshold=]
[--version]
```

When the cluster registry feature is enabled in the Network Function Operator Arc K8s extension, any container images deployed from AOSM artifact store are accessible locally in the Nexus K8s cluster. The user can choose the persistent storage size for the cluster registry.
The cluster registry feature deploys helper pods on the target edge cluster to a
#### Registry

* This pod stores and retrieves container images for CNF.
+### Cluster registry garbage collection
+AOSM cluster extension runs a background job to regularly clean up container images. The job schedule and conditions are configured by the end user, but by default the job runs once per day at a 0% utilization threshold. The job checks whether cluster registry usage has reached the specified threshold and, if so, initiates the garbage collection process.
+
+#### Clean up garbage image manifests
+AOSM maintains references between each pod's owner resource and the images it consumes in the cluster registry. When the image cleanup process starts, AOSM identifies images that aren't linked to any pods and issues a soft delete to remove them from the cluster registry. A soft delete doesn't immediately free cluster registry storage space; the actual removal of image files depends on the CNCF distribution registry garbage collection outlined below.
+
+> [!NOTE]
+> The reference between a pod's owner and its container images ensures that AOSM does not mistakenly delete images. For example, if a replicaset pod goes down, AOSM will not dereference the container images. AOSM only dereferences container images when the replicaset is deleted. The same principle applies to pods managed by Kubernetes jobs and daemonsets.
+
+#### CNCF garbage collection distribution
+AOSM sets up the cluster registry using the open source [CNCF distribution registry](https://distribution.github.io/distribution/). Therefore, AOSM relies on the garbage collection capabilities provided by [Garbage collection | CNCF Distribution](https://distribution.github.io/distribution/about/garbage-collection/#:~:text=About%20garbage%20collection,considerable%20amounts%20of%20disk%20space.). Overall, it follows the standard two-phase "mark and sweep" process to delete image files and free registry storage space.
+
+> [!NOTE]
+> This process requires the cluster registry to be in read-only mode. If images are uploaded while the registry isn't in read-only mode, there's a risk that image layers are mistakenly deleted, leading to corrupted images. The registry is locked in read-only mode for up to one minute. Consequently, AOSM defers other NF deployments while the cluster registry is in read-only mode.
+
+#### Garbage collection configuration parameters
+Customers can adjust the following settings to configure the schedule and conditions for the garbage collection job, as shown in the example after this list.
+
+* global.networkfunctionextension.clusterRegistry.clusterRegistryGCCadence
+* global.networkfunctionextension.clusterRegistry.clusterRegistryGCThreshold
+* For more configuration details, refer to the [Network function extension installation instructions](manage-network-function-operator.md).
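
For example, a hedged sketch of setting these values at extension creation time; the cluster, extension name, and extension type are placeholders, and the cadence and threshold values here (every six hours, 80% usage) are illustrative assumptions:

```bash
az k8s-extension create \
  --cluster-name <CLUSTER_NAME> \
  --resource-group <RESOURCE_GROUP> \
  --cluster-type connectedClusters \
  --name <EXTENSION_NAME> \
  --extension-type <EXTENSION_TYPE> \
  --config global.networkfunctionextension.clusterRegistry.clusterRegistryGCCadence="0 */6 * * *" \
  --config global.networkfunctionextension.clusterRegistry.clusterRegistryGCThreshold=80
```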
+
## High availability and resiliency considerations

The AOSM NF extension uses a mutating webhook and edge registry to support key features.

* Onboarding helm charts without requiring customization of image path.
operator-service-manager Manage Network Function Operator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/manage-network-function-operator.md
The referenced matchCondition implies that the pods getting accepted in kube-sys
* This configuration, specified as an empty string, disables the scheduled job, allowing customers to opt out of running garbage collection.
* Default value: "0 0 * * *" -- runs the job once every day.
-`--config global.networkfunctionextension.backgroundJobThreshold=`
+`--config global.networkfunctionextension.clusterRegistry.clusterRegistryGCThreshold=`
* This configuration specifies the percent threshold value that triggers the cluster registry garbage collection process.
* This configuration triggers the garbage collection process when cluster registry usage exceeds this value.
* Default value: 0.
reliability Availability Service By Category https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-service-by-category.md
Azure services are presented in the following tables by category. Note that some
> [!div class="mx-tableFixed"]
> | ![An icon that signifies this service is foundational.](media/icon-foundational.svg) Foundational | ![An icon that signifies this service is mainstream.](media/icon-mainstream.svg) Mainstream |
-> |-||
-> | Azure Application Gateway | Azure API Management |
-> | Azure Backup | Azure App Configuration |
-> | Azure Cosmos DB for NoSQL | Azure App Service |
-> | Azure Event Hubs | Microsoft Entra Domain Services |
-> | Azure ExpressRoute | Azure Bastion |
-> | Azure Key Vault | Azure Batch |
-> | Azure Load Balancer | Azure Cache for Redis |
-> | Azure Public IP | Azure AI Search |
-> | Azure Service Bus | Azure Container Registry |
-> | Azure Service Fabric | Azure Container Instances |
-> | Azure Site Recovery | Azure Data Explorer |
-> | Azure SQL | Azure Data Factory |
-> | Azure Storage: Disk Storage | Azure Database for MySQL |
-> | Azure Storage Accounts | Azure Database for PostgreSQL |
-> | Azure Storage: Blob Storage | Azure DDoS Protection |
-> | Azure Storage Data Lake Storage | Azure Event Grid |
-> | Azure Virtual Machines | Azure Firewall |
-> | Azure Virtual Machine Scale Sets | Azure Firewall Manager |
-> | Virtual Machines: Av2-series | Azure Functions |
-> | Virtual Machines: Bs-series | Azure HDInsight |
-> | Virtual Machines: Dv2 and DSv2-series | Azure IoT Hub |
-> | Virtual Machines: Dv3 and DSv3-series | Azure Kubernetes Service (AKS) |
-> | Virtual Machines: ESv3 abd ESv3-series | Azure Logic Apps |
-> | Azure Virtual Network | Azure Media Services |
-> | Azure VPN Gateway | Azure Monitor: Application Insights |
-> | | Azure Monitor: Log Analytics |
-> | | Azure Network Watcher |
-> | | Azure Private Link |
-> | | Azure Storage: Files Storage |
-> | | Azure Virtual WAN |
-> | | Premium Blob Storage |
-> | | Virtual Machines: Ddsv4-series |
-> | | Virtual Machines: Ddv4-series |
-> | | Virtual Machines: Dsv4-series |
-> | | Virtual Machines: Dv4-series |
-> | | Virtual Machines: Edsv4-series |
-> | | Virtual Machines: Edv4-series |
-> | | Virtual Machines: Esv4-series |
-> | | Virtual Machines: Ev4-series |
-> | | Virtual Machines: Fsv2-series |
-> | | Virtual Machines: M-series |
+> |-|-|
+> | Azure Application Gateway | Azure AI Search |
+> | Azure Backup | Azure API Management |
+> | Azure Cosmos DB for NoSQL | Azure App Configuration |
+> | Azure Event Hubs | Azure App Service |
+> | Azure ExpressRoute | Azure Bastion |
+> | Azure Key Vault | Azure Batch |
+> | Azure Load Balancer | Azure Cache for Redis |
+> | Azure NAT Gateway | Azure Container Instances |
+> | Azure Public IP | Azure Container Registry |
+> | Azure Service Bus | Azure Data Explorer |
+> | Azure Service Fabric | Azure Data Factory |
+> | Azure Site Recovery | Azure Database for MySQL |
+> | Azure SQL | Azure Database for PostgreSQL |
+> | Azure Storage Accounts | Azure DDoS Protection |
+> | Azure Storage Data Lake Storage | Azure Event Grid |
+> | Azure Storage: Blob Storage | Azure Firewall |
+> | Azure Storage: Disk Storage | Azure Firewall Manager |
+> | Azure Virtual Machine Scale Sets | Azure Functions |
+> | Azure Virtual Machines | Azure HDInsight |
+> | Azure Virtual Network | Azure IoT Hub |
+> | Azure VPN Gateway | Azure Kubernetes Service (AKS) |
+> | Virtual Machines: Av2-series | Azure Logic Apps |
+> | Virtual Machines: Bs-series | Azure Media Services |
+> | Virtual Machines: Dv2 and DSv2-series | Azure Monitor: Application Insights |
+> | Virtual Machines: Dv3 and DSv3-series | Azure Monitor: Log Analytics |
+> | Virtual Machines: Ev3 and ESv3-series | Azure Network Watcher |
+> | | Azure Private Link |
+> | | Azure Storage: Files Storage |
+> | | Azure Storage: Premium Blob Storage |
+> | | Azure Virtual WAN |
+> | | Microsoft Entra Domain Services |
+> | | Virtual Machines: Ddsv4-series |
+> | | Virtual Machines: Ddv4-series |
+> | | Virtual Machines: Dsv4-series |
+> | | Virtual Machines: Dv4-series |
+> | | Virtual Machines: Edsv4-series |
+> | | Virtual Machines: Edv4-series |
+> | | Virtual Machines: Esv4-series |
+> | | Virtual Machines: Ev4-series |
+> | | Virtual Machines: Fsv2-series |
+> | | Virtual Machines: M-series |
### Strategic services

As mentioned previously, Azure classifies services into three categories: foundational, mainstream, and strategic. Service categories are assigned at general availability. Often, services start their lifecycle as a strategic service, and as demand and utilization increase, they may be promoted to mainstream or foundational. The following table lists strategic services.
As mentioned previously, Azure classifies services into three categories: founda
> [!div class="mx-tableFixed"]
> | ![An icon that signifies this service is strategic.](media/icon-strategic.svg) Strategic |
> |-|
-> | Azure API for FHIR |
+> | Azure AI services |
> | Azure Analysis Services |
-> | Azure AI services |
+> | Azure API for FHIR |
> | Azure Automation |
> | Azure Container Apps |
> | Azure Data Share |
-> | Azure Databricks |
> | Azure Database for MariaDB |
> | Azure Database Migration Service |
+> | Azure Databricks |
> | Azure Dedicated HSM |
> | Azure Digital Twins |
> | Azure HPC Cache |
As mentioned previously, Azure classifies services into three categories: founda
> | Azure Managed HSM |
> | Azure Managed Instance for Apache Cassandra |
> | Azure NetApp Files |
-> | Microsoft Purview |
> | Azure Red Hat OpenShift |
> | Azure Remote Rendering |
> | Azure SignalR Service |
> | Azure Spatial Anchors |
-> | Azure Spring Apps |
+> | Azure Spring Apps |
> | Azure Storage: Archive Storage |
> | Azure Synapse Analytics |
> | Azure Ultra Disk Storage |
> | Azure VMware Solution |
> | Microsoft Azure Attestation |
+> | Microsoft Purview |
> | SQL Server Stretch Database |
-> | Virtual Machines: DAv4 and DASv4-series |
> | Virtual Machines: Dasv5 and Dadsv5-series |
+> | Virtual Machines: DAv4 and DASv4-series |
> | Virtual Machines: DCsv2-series |
> | Virtual Machines: Ddv5 and Ddsv5-series |
> | Virtual Machines: Dv5 and Dsv5-series |
-> | Virtual Machines: Eav4 and Easv4-series |
> | Virtual Machines: Easv5 and Eadsv5-series |
+> | Virtual Machines: Eav4 and Easv4-series |
> | Virtual Machines: Edv5 and Edsv5-series |
> | Virtual Machines: Ev5 and Esv5-series |
> | Virtual Machines: FX-series |
As mentioned previously, Azure classifies services into three categories: founda
> | Virtual Machines: LSv2-series |
> | Virtual Machines: LSv3-series |
> | Virtual Machines: Mv2-series |
-> | Virtual Machines: NCv3-series |
> | Virtual Machines: NCasT4 v3-series |
+> | Virtual Machines: NCv3-series |
> | Virtual Machines: NDasr A100 v4-Series |
> | Virtual Machines: NDm A100 v4-Series |
> | Virtual Machines: NDv2-series |
reliability Reliability Postgresql Flexible Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-postgresql-flexible-server.md
description: Find out about reliability and high availability in Azure Database
Previously updated : 12/21/2023 Last updated : 11/5/2024
Azure Database for PostgreSQL - Flexible Server supports both [zone-redundant an
- Periodic maintenance activities such as minor version upgrades happen at the standby first and, to reduce downtime, the standby is promoted to primary so that workloads can continue while the maintenance tasks are applied on the remaining node.
+
+### Monitor high-availability health
+
+High availability (HA) health status monitoring in Azure Database for PostgreSQL - Flexible Server provides a continuous overview of the health and readiness of HA-enabled instances. This monitoring feature uses [Azure's Resource Health Check (RHC)](/azure/service-health/resource-health-overview) framework to detect and alert on any issues that might impact your database's failover readiness or overall availability. By assessing key metrics like connection status, failover state, and data replication health, HA health status monitoring enables proactive troubleshooting and helps maintain your database's uptime and performance.
+
+Customers can use HA health status monitoring to:
+
+- Gain real-time insights into the health of both primary and standby replicas, with status indicators that reveal potential issues, such as degraded performance or network blocking.
+- Configure alerts for timely notifications on any changes in HA status, ensuring immediate action to address potential disruptions.
+- Optimize failover readiness by identifying and addressing issues before they impact database operations.
+
+For a detailed guide on configuring and interpreting HA health statuses, refer to the main article [High Availability (HA) health status monitoring for Azure Database for PostgreSQL - Flexible Server](/azure/postgresql/flexible-server/how-to-monitor-high-availability).
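
As a hedged sketch of how these health signals can also be retrieved programmatically, the following call reads the current Resource Health availability status of a server through the ARM REST API; the resource names are placeholders, and the api-version is an assumption to verify for your environment:

```bash
# Query the current Resource Health availability status for a flexible server.
az rest --method get --url "https://management.azure.com/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.DBforPostgreSQL/flexibleServers/<SERVER_NAME>/providers/Microsoft.ResourceHealth/availabilityStatuses/current?api-version=2022-10-01"
```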
+
### High availability limitations

- Due to synchronous replication to the standby server, especially with a zone-redundant configuration, applications can experience elevated write and commit latency.
route-server Route Server Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/route-server-faq.md
No. Azure Route Server is a service designed with high availability. Your route
### Do I need to peer each NVA with both Azure Route Server instances?
-Yes, to ensure that virtual network routes are successfully advertised over the target NVA connections, and to configure High Availability, we recommend peering each NVA instance with both instances of Route Server.
+Yes. To ensure that routes are successfully advertised to Route Server and to configure high availability, you must peer each NVA instance with both instances of Route Server. We also recommend peering at least two NVA instances with both instances of Route Server.
### Does Azure Route Server store customer data?
sap Sap Hana High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability.md
[2235581]:https://launchpad.support.sap.com/#/notes/2235581 [2684254]:https://launchpad.support.sap.com/#/notes/2684254
-[sles-for-sap-bp]:https://documentation.suse.com/en-us/?tab=sbp
+[sles-for-sap-bp]:https://documentation.suse.com/?tab=sbp
[sles-for-sap-bp12]:https://documentation.suse.com/sbp/sap-12/ [sles-for-sap-bp15]:https://documentation.suse.com/sbp/sap-15/
sentinel Audit Sentinel Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/audit-sentinel-data.md
Title: Audit Microsoft Sentinel queries and activities | Microsoft Docs
description: This article describes how to audit queries and activities performed in Microsoft Sentinel. Previously updated : 01/09/2023 Last updated : 09/26/2024 - #Customer intent: As a security analyst, I want to audit queries and activities in my SOC environment so that I can ensure compliance and monitor security operations effectively.- # Audit Microsoft Sentinel queries and activities
Microsoft Sentinel provides access to:
> > In the Microsoft Sentinel **Workbooks** area, search for the **Workspace audit** workbook.
+## Prerequisites
+
+- Before you can successfully run the sample queries in this article, you need relevant data in your Microsoft Sentinel workspace to query, and access to Microsoft Sentinel.
+
+ For more information, see [Configure Microsoft Sentinel content](configure-content.md) and [Roles and permissions in Microsoft Sentinel](roles.md).
+ ## Auditing with Azure Activity logs Microsoft Sentinel's audit logs are maintained in the [Azure Activity Logs](/azure/azure-monitor/essentials/platform-logs-overview), where the **AzureActivity** table includes all actions taken in your Microsoft Sentinel workspace.
You can use the **AzureActivity** table when auditing activity in your SOC envir
**To query the AzureActivity table**:
-1. Connect the [Azure Activity](./data-connectors/azure-activity.md) data source to start streaming audit events into a new table in the **Logs** screen called AzureActivity.
+1. Connect the [Azure Activity](./data-connectors/azure-activity.md) data source to start streaming audit events into a new table called `AzureActivity`. In the Azure portal, query this table in the **[Logs](hunts-custom-queries.md)** page. In the Defender portal, query this table in the **Investigation & response > Hunting > [Advanced hunting](/defender-xdr/advanced-hunting-overview)** page.
-1. Then, query the data using KQL, like you would any other table.
+1. Query the data using KQL, like you would any other table.
The **AzureActivity** table includes data from many services, including Microsoft Sentinel. To filter in only data from Microsoft Sentinel, start your query with the following code:
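
For example, a minimal sketch run through the Azure CLI (assuming the `az monitor log-analytics` command is available in your environment); filtering on the `Microsoft.SecurityInsights` resource provider that backs Microsoft Sentinel operations is an assumption to adjust as needed:

```bash
# <WORKSPACE_GUID> is the Log Analytics workspace (customer) ID.
az monitor log-analytics query \
  --workspace <WORKSPACE_GUID> \
  --analytics-query 'AzureActivity | where OperationNameValue contains "SecurityInsights" | take 10'
```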
LAQueryLogs data includes information such as:
- Performance data on each query run

> [!NOTE]
-> - The **LAQueryLogs** table only includes queries that have been run in the Logs blade of Microsoft Sentinel. It does not include the queries run by scheduled analytics rules, using the **Investigation Graph** or in the Microsoft Sentinel **Hunting** page.
+> - The **LAQueryLogs** table only includes queries that have been run in the Logs blade of Microsoft Sentinel. It does not include the queries run by scheduled analytics rules, using the **Investigation Graph**, in the Microsoft Sentinel **Hunting** page, or in the Defender portal's **Advanced hunting** page.
+>
> - There may be a short delay between the time a query is run and the data is populated in the **LAQueryLogs** table. We recommend waiting about 5 minutes to query the **LAQueryLogs** table for audit data.

**To query the LAQueryLogs table**:
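
A minimal sketch via the Azure CLI; the projected columns are standard **LAQueryLogs** fields, and the time range is an illustrative assumption:

```bash
# List queries run over the past day, with who ran them and the query text.
az monitor log-analytics query \
  --workspace <WORKSPACE_GUID> \
  --analytics-query 'LAQueryLogs | where TimeGenerated > ago(1d) | project TimeGenerated, AADEmail, QueryText, ResponseCode'
```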
LAQueryLogs
Use Microsoft Sentinel's own features to monitor events and actions that occur within Microsoft Sentinel.

-- **Monitor with workbooks**. The following workbooks were built to monitor workspace activity:
-
- - **Workspace Auditing**. Includes information about which users in the environment are performing actions, which actions they have performed, and more.
- - **Analytics Efficiency**. Provides insight into which analytic rules are being used, which MITRE tactics are most covered, and incidents generated from the rules.
- - **Security Operations Efficiency**. Presents metrics on SOC team performance, incidents opened, incidents closed, and more. This workbook can be used to show team performance and highlight any areas that might be lacking that require attention.
- - **Data collection health monitoring**. Helps watch for stalled or stopped ingestions.
+- **Monitor with workbooks**. Several built-in Microsoft Sentinel workbooks can help you monitor workspace activity, including information about the users working in your workspace, the analytics rules being used, the MITRE tactics most covered, stalled or stopped ingestions, and SOC team performance.
- For more information, see [Commonly used Microsoft Sentinel workbooks](top-workbooks.md).
+ For more information, see [Visualize and monitor your data by using workbooks in Microsoft Sentinel](monitor-your-data.md) and [Commonly used Microsoft Sentinel workbooks](top-workbooks.md).
- **Watch for ingestion delay**. If you have concerns about ingestion delay, [set a variable in an analytics rule](ingestion-delay.md) to represent the delay.
sentinel Configure Data Retention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/configure-data-retention.md
Previously updated : 01/05/2023 Last updated : 09/26/2024
In your Log Analytics workspace, change the interactive retention policy of the
## Review interactive and total retention policies
-On the **Tables** page for the table you updated, review the field values for **Interactive retention** and **Total retention**.
+On the **Tables** page, for the table you updated, review the field values for **Interactive retention** and **Total retention**.
:::image type="content" source="media/configure-data-retention/data-retention-archive-period.png" alt-text="Screenshot of the table view that shows the interactive retention and archive period columns.":::
On the **Tables** page for the table you updated, review the field values for **
No resources were created but you might want to restore the data retention settings you changed.
+Depending on the configuration of your entire workspace, the settings updated in this tutorial might incur additional charges. To avoid these charges, restore the settings to their original values.
+ ## Next steps > [!div class="nextstepaction"]
sentinel Connect Defender For Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-defender-for-cloud.md
Title: Ingest Microsoft Defender for Cloud subscription-based alerts to Microsof
description: Learn how to connect security alerts from Microsoft Defender for Cloud and stream them into Microsoft Sentinel. Previously updated : 11/09/2021 Last updated : 09/26/2024
# Ingest Microsoft Defender for Cloud alerts to Microsoft Sentinel
-[Microsoft Defender for Cloud](/azure/defender-for-cloud/)'s integrated cloud workload protections allow you to detect and quickly respond to threats across hybrid and multicloud workloads.
+[Microsoft Defender for Cloud](/azure/defender-for-cloud/)'s integrated cloud workload protections allow you to detect and quickly respond to threats across hybrid and multicloud workloads. The **Microsoft Defender for Cloud** connector allows you to ingest [security alerts from Defender for Cloud](/azure/defender-for-cloud/alerts-reference) into Microsoft Sentinel, so you can view, analyze, and respond to Defender alerts, and the incidents they generate, in a broader organizational threat context.
-This connector allows you to ingest [security alerts from Defender for Cloud](/azure/defender-for-cloud/alerts-reference) into Microsoft Sentinel, so you can view, analyze, and respond to Defender alerts, and the incidents they generate, in a broader organizational threat context.
-
-As [Microsoft Defender for Cloud Defender plans](/azure/defender-for-cloud/defender-for-cloud-introduction#protect-cloud-workloads) are enabled per subscription, this data connector is also enabled or disabled separately for each subscription.
-
-The new **Tenant-based Microsoft Defender for Cloud connector**, in PREVIEW, allows you to collect Defender for Cloud alerts over your entire tenant, without having to enable each subscription separately. It also leverages [Defender for Cloud's integration with Microsoft Defender XDR](ingest-defender-for-cloud-incidents.md) (formerly Microsoft 365 Defender) to ensure that all of your Defender for Cloud alerts are fully included in any incidents you receive through [Microsoft Defender XDR incident integration](microsoft-365-defender-sentinel-integration.md).
+[Microsoft Defender for Cloud Defender plans](/azure/defender-for-cloud/defender-for-cloud-introduction#protect-cloud-workloads) are enabled per subscription. While Microsoft Sentinel's legacy connector for Defender for Cloud is also configured per subscription, the **Tenant-based Microsoft Defender for Cloud** connector, in preview, allows you to collect Defender for Cloud alerts over your entire tenant without having to enable each subscription separately. The tenant-based connector also works with [Defender for Cloud's integration with Microsoft Defender XDR](ingest-defender-for-cloud-incidents.md) to ensure that all of your Defender for Cloud alerts are fully included in any incidents you receive through [Microsoft Defender XDR incident integration](microsoft-365-defender-sentinel-integration.md).
[!INCLUDE [reference-to-feature-availability](includes/reference-to-feature-availability.md)]
Enabling **bi-directional sync** will automatically sync the status of original
## Connect to Microsoft Defender for Cloud
-1. In Microsoft Sentinel, select **Data connectors** from the navigation menu.
+1. After installing the solution, in Microsoft Sentinel, select **Configuration > Data connectors**.
-1. From the data connectors gallery, select **Microsoft Defender for Cloud**, and select **Open connector page** in the details pane.
+1. From the **Data connectors** page, select either the **Subscription-based Microsoft Defender for Cloud (Legacy)** or the **Tenant-based Microsoft Defender for Cloud (Preview)** connector, and then select **Open connector page**.
1. Under **Configuration**, you will see a list of the subscriptions in your tenant, and the status of their connection to Microsoft Defender for Cloud. Select the **Status** toggle next to each subscription whose alerts you want to stream into Microsoft Sentinel. If you want to connect several subscriptions at once, you can do this by marking the check boxes next to the relevant subscriptions and then selecting the **Connect** button on the bar above the list.
- > [!NOTE]
- > - The check boxes and **Connect** toggles will be active only on the subscriptions for which you have the required permissions.
- > - The **Connect** button will be active only if at least one subscription's check box has been marked.
+ - The check boxes and **Connect** toggles are active only on the subscriptions for which you have the [required permissions](#prerequisites).
+ - The **Connect** button is active only if at least one subscription's check box has been marked.
1. To enable bi-directional sync on a subscription, locate the subscription in the list, and choose **Enabled** from the drop-down list in the **Bi-directional sync** column. To enable bi-directional sync on several subscriptions at once, mark their check boxes and select the **Enable bi-directional sync** button on the bar above the list.
- > [!NOTE]
- > - The check boxes and drop-down lists will be active only on the subscriptions for which you have the [required permissions](#prerequisites).
- > - The **Enable bi-directional sync** button will be active only if at least one subscription's check box has been marked.
+ - The check boxes and drop-down lists are active only on the subscriptions for which you have the [required permissions](#prerequisites).
+ - The **Enable bi-directional sync** button is active only if at least one subscription's check box has been marked.
+
+1. In the **Microsoft Defender plans** column of the list, you can see if Microsoft Defender plans are enabled on your subscription (a prerequisite for enabling the connector).
-1. In the **Microsoft Defender plans** column of the list, you can see if Microsoft Defender plans are enabled on your subscription (a prerequisite for enabling the connector). The value for each subscription in this column will either be blank (meaning no Defender plans are enabled), "All enabled," or "Some enabled." Those that say "Some enabled" will also have an **Enable all** link you can select, that will take you to your Microsoft Defender for Cloud configuration dashboard for that subscription, where you can choose Defender plans to enable. The **Enable Microsoft Defender for all subscriptions** link button on the bar above the list will take you to your Microsoft Defender for Cloud Getting Started page, where you can choose on which subscriptions to enable Microsoft Defender for Cloud altogether.
+ The value for each subscription in this column is either blank (meaning no Defender plans are enabled), **All enabled**, or **Some enabled**. Subscriptions that say **Some enabled** also have an **Enable all** link that takes you to your Microsoft Defender for Cloud configuration dashboard for that subscription, where you can choose Defender plans to enable.
- :::image type="content" source="./media/connect-defender-for-cloud/azure-defender-config.png" alt-text="Screenshot of Microsoft Defender for Cloud connector configuration":::
+ The **Enable Microsoft Defender for all subscriptions** link button on the bar above the list takes you to your Microsoft Defender for Cloud Getting Started page, where you can choose on which subscriptions to enable Microsoft Defender for Cloud altogether. For example:
+
+ :::image type="content" source="./media/connect-defender-for-cloud/azure-defender-config.png" alt-text="Screenshot of Microsoft Defender for Cloud connector configuration.":::
1. You can select whether you want the alerts from Microsoft Defender for Cloud to automatically generate incidents in Microsoft Sentinel. Under **Create incidents**, select **Enabled** to turn on the default analytics rule that automatically [creates incidents from alerts](create-incidents-from-alerts.md). You can then edit this rule under **Analytics**, in the **Active rules** tab.
Enabling **bi-directional sync** will automatically sync the status of original
> When configuring [custom analytics rules](detect-threats-custom.md) for alerts from Microsoft Defender for Cloud, consider the alert severity to avoid opening incidents for informational alerts. > > Informational alerts in Microsoft Defender for Cloud don't represent a security risk on their own, and are relevant only in the context of an existing, open incident. For more information, see [Security alerts and incidents in Microsoft Defender for Cloud](../security-center/security-center-alerts-overview.md).
- >
-
+ >
## Find and analyze your data
sentinel Data Transformation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-transformation.md
description: Learn about how Azure Monitor's custom log ingestion and data trans
Previously updated : 02/27/2022- Last updated : 09/25/2024 #Customer intent: As a security engineer, I want to customize data ingestion and transformation in Microsoft Sentinel so that analysts can filter, enrich, and secure log data efficiently.
Log Analytics' custom data ingestion process gives you a high level of control o
Microsoft Sentinel gives you two tools to control this process:

-- The [**Logs ingestion API**](/azure/azure-monitor/logs/logs-ingestion-api-overview) allows you to send custom-format logs from any data source to your Log Analytics workspace, and store those logs either in certain specific standard tables, or in custom-formatted tables that you create. You have full control over the creation of these custom tables, down to specifying the column names and types. You create [**Data collection rules (DCRs)**](/azure/azure-monitor/essentials/data-collection-rule-overview) to define, configure, and apply transformations to these data flows.
+- The [**Logs ingestion API**](/azure/azure-monitor/logs/logs-ingestion-api-overview) allows you to send custom-format logs from any data source to your Log Analytics workspace, and store those logs either in certain specific standard tables, or in custom-formatted tables that you create. You have full control over the creation of these custom tables, down to specifying the column names and types. You create [**DCRs**](/azure/azure-monitor/essentials/data-collection-rule-overview) to define, configure, and apply transformations to these data flows. A minimal sketch of the API call shape appears after this list.
- [**Data collection transformation**](/azure/azure-monitor/essentials/data-collection-transformations) uses DCRs to apply basic KQL queries to incoming standard logs (and certain types of custom logs) before they're stored in your workspace. These transformations can filter out irrelevant data, enrich existing data with analytics or external data, or mask sensitive or personal information.
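
A minimal sketch of a Logs ingestion API call, under stated assumptions: the data collection endpoint (DCE) URL, DCR immutable ID, and stream name below are placeholders for values from your own deployment, and the JSON payload must match your stream's declared schema:

```bash
# Acquire a token for the Azure Monitor ingestion audience, then POST a batch
# of records to the DCR stream. All bracketed values are placeholders.
TOKEN=$(az account get-access-token --resource "https://monitor.azure.com" --query accessToken -o tsv)

curl -X POST "https://<DCE_NAME>.<REGION>.ingest.monitor.azure.com/dataCollectionRules/<DCR_IMMUTABLE_ID>/streams/<STREAM_NAME>?api-version=2023-01-01" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '[{"TimeGenerated": "2024-11-06T00:00:00Z", "RawData": "sample event"}]'
```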
Ingestion-time data transformation supports [multiple-workspace scenarios](exten
### Normalization
-Ingest-time transformation also allows you to normalize logs when ingested into built-in or customer ASIM normalized tables. Using ingest-time normalization improves normalized queries performance.
+Ingest-time transformation also allows you to normalize logs when they're ingested into built-in or customer-normalized tables with [Advanced Security Information Model (ASIM)](normalization.md). Using ingest-time normalization improves the performance of normalized queries.
-For more information on ingest-time normalization using transformations, refer to [Ingest-time normalization](normalization-ingest-time.md).
+For more information, see [Ingest-time normalization](normalization-ingest-time.md).
### Enrichment and tagging
Microsoft Sentinel collects data into the Log Analytics workspace from multiple
- Data from built-in data connectors is processed in Log Analytics using some combination of hardcoded workflows and ingestion-time transformations in the workspace DCR. This data can be stored in standard tables or in a specific set of custom tables.
- Data ingested directly into the Logs ingestion API endpoint is processed by a standard DCR that may include an ingestion-time transformation. This data can then be stored in either standard or custom tables of any kind.

## DCR support in Microsoft Sentinel
In Log Analytics, data collection rules (DCRs) determine the data flow for diffe
Support for DCRs in Microsoft Sentinel includes: -- *Standard DCRs*, currently supported only for AMA-based connectors and workflows using the new [Logs ingestion API](/azure/azure-monitor/logs/logs-ingestion-api-overview).
+- *Standard DCRs*, currently supported only for AMA-based connectors and workflows using the [Logs ingestion API](/azure/azure-monitor/logs/logs-ingestion-api-overview).
Each connector or log source workflow can have its own dedicated *standard DCR*, though multiple connectors or sources can share a common *standard DCR* as well.
Support for DCRs in Microsoft Sentinel includes:
A single *workspace transformation DCR* serves all the supported workflows in a workspace that aren't served by standard DCRs. A workspace can have only one *workspace transformation DCR*, but that DCR contains separate transformations for each input stream. Also, *workspace transformation DCR*s are supported only for a [specific set of tables](/azure/azure-monitor/logs/tables-feature-support).
-Microsoft Sentinel's support for ingestion-time transformation depends on the type of data connector you're using. For more in-depth information on custom logs, ingestion-time transformation, and data collection rules, see the articles linked in the [Next steps](#next-steps) section at the end of this article.
+Microsoft Sentinel's support for ingestion-time transformation depends on the type of data connector you're using. For more in-depth information on custom logs, ingestion-time transformation, and data collection rules, see the articles linked in the [Related content](#related-content) section at the end of this article.
### DCR support for Microsoft Sentinel data connectors
Ingestion-time data transformation currently has the following known issues for
- You can only send logs from one specific data source to one workspace. To send data from a single data source to multiple workspaces (destinations) with a standard DCR, create one DCR per workspace.
-## Next steps
-
-[Get started configuring ingestion-time data transformation in Microsoft Sentinel](configure-data-transformation.md).
+## Related content
-Learn more about Microsoft Sentinel data connector types. For more information, see:
+For more information, see:
+- [Transform or customize data at ingestion time in Microsoft Sentinel (preview)](configure-data-transformation.md)
- [Microsoft Sentinel data connectors](connect-data-sources.md)
- [Find your Microsoft Sentinel data connector](data-connectors-reference.md)
sentinel Mitre Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/mitre-coverage.md
Title: View MITRE coverage for your organization from Microsoft Sentinel | Micro
description: Learn how to view coverage indicator in Microsoft Sentinel for MITRE tactics that are currently covered, and available to configure, for your organization. Previously updated : 12/21/2021 Last updated : 09/26/2024
# Understand security coverage by the MITRE ATT&CK® framework
-> [!IMPORTANT]
-> The MITRE page in Microsoft Sentinel is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
->
-
[MITRE ATT&CK](https://attack.mitre.org/#) is a publicly accessible knowledge base of tactics and techniques that are commonly used by attackers, created and maintained based on real-world observations. Many organizations use the MITRE ATT&CK knowledge base to develop specific threat models and methodologies that are used to verify security status in their environments.

Microsoft Sentinel analyzes ingested data, not only to [detect threats](detect-threats-built-in.md) and help you [investigate](investigate-cases.md), but also to visualize the nature and coverage of your organization's security status.

This article describes how to use the **MITRE** page in Microsoft Sentinel to view the detections already active in your workspace, and those available for you to configure, to understand your organization's security coverage, based on the tactics and techniques from the MITRE ATT&CK® framework.
+> [!IMPORTANT]
+> The MITRE page in Microsoft Sentinel is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## MITRE ATT&CK framework version
Microsoft Sentinel is currently aligned to the MITRE ATT&CK framework, version 13.

## View current MITRE coverage
-In Microsoft Sentinel, in the **Threat management** menu on the left, select **MITRE**. By default, both currently active scheduled query and near real-time (NRT) rules are indicated in the coverage matrix.
+In Microsoft Sentinel, under **Threat management**, select **MITRE ATT&CK (Preview)**. By default, both currently active scheduled query and near real-time (NRT) rules are indicated in the coverage matrix.
+ - **Use the legend at the top-right** to understand how many detections are currently active in your workspace for a specific technique.
In Microsoft Sentinel, in the **Threat management** menu on the left, select **M
- **Select a specific technique** in the matrix to view more details on the right. There, use the links to jump to any of the following locations:
- - Select **View technique details** for more information about the selected technique in the MITRE ATT&CK framework knowledge base.
+ - In the **Description** area, select **View full technique details ...** for more information about the selected technique in the MITRE ATT&CK framework knowledge base.
- - Select links to any of the active items to jump to the relevant area in Microsoft Sentinel.
+ - Scroll down in the pane and select links to any of the active items to jump to the relevant area in Microsoft Sentinel.
-## Simulate possible coverage with available detections
-
-In the MITRE coverage matrix, *simulated* coverage refers to detections that are available, but not currently configured, in your Microsoft Sentinel workspace. View your simulated coverage to understand your organization's possible security status, were you to configure all detections available to you.
-
-In Microsoft Sentinel, in the **General** menu on the left, select **MITRE**.
-
-Select items in the **Simulate** menu to simulate your organization's possible security status.
-- **Use the legend at the top-right** to understand how many detections, including analytics rule templates or hunting queries, are available for you to configure.
-
-- **Use the search bar at the top-left** to search for a specific technique in the matrix, using the technique name or ID, to view your organization's simulated security status for the selected technique.
+ For example, select **Hunting queries** to jump to the **Hunting** page. There, you'll see a filtered list of the hunting queries that are associated with the selected technique, and available for you to configure in your workspace.
-- **Select a specific technique** in the matrix to view more details on the right. There, use the links to jump to any of the following locations:
+## Simulate possible coverage with available detections
- - Select **View technique details** for more information about the selected technique in the MITRE ATT&CK framework knowledge base.
+In the MITRE coverage matrix, *simulated* coverage refers to detections that are available, but not currently configured in your Microsoft Sentinel workspace. View your simulated coverage to understand your organization's possible security status, were you to configure all detections available to you.
- - Select links to any of the simulation items to jump to the relevant area in Microsoft Sentinel.
+In Microsoft Sentinel, under **Threat management**, select **MITRE ATT&CK (Preview)**, and then select items in the **Simulated** menu to simulate your organization's possible security status.
- For example, select **Hunting queries** to jump to the **Hunting** page. There, you'll see a filtered list of the hunting queries that are associated with the selected technique, and available for you to configure in your workspace.
+From there, use the page's elements as you would otherwise to view the simulated coverage for a specific technique.
## Use the MITRE ATT&CK framework in analytics rules and incidents
Having a scheduled rule with MITRE techniques applied running regularly in your
For more information, see [Hunt for threats with Microsoft Sentinel](hunting.md) and [Keep track of data during hunting with Microsoft Sentinel](bookmarks.md).
-## Next steps
+## Related content
For more information, see:
sentinel Normalization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization.md
Title: Normalization and the Advanced Security Information Model (ASIM) | Microsoft Docs description: This article explains how Microsoft Sentinel normalizes data from many different sources using the Advanced Security Information Model (ASIM) - Previously updated : 11/09/2021+ Last updated : 09/26/2024
Microsoft Sentinel ingests data from many sources. Working with various data types and tables together requires you to understand each of them, and write and use unique sets of data for analytics rules, workbooks, and hunting queries for each type or schema.
-
Sometimes, you'll need separate rules, workbooks, and queries, even when data types share common elements, such as firewall devices. Correlating between different types of data during an investigation and hunting can also be challenging.

The Advanced Security Information Model (ASIM) is a layer that is located between these diverse sources and the user. ASIM follows the [robustness principle](https://en.wikipedia.org/wiki/Robustness_principle): **"Be strict in what you send, be flexible in what you accept"**. Using the robustness principle as a design pattern, ASIM transforms the proprietary source telemetry collected by Microsoft Sentinel into user-friendly data to facilitate exchange and integration.
-This article provides an overview of the Advanced Security Information Model (ASIM), its use cases and major components. Refer to the [next steps](#next-steps) section for more details.
+This article provides an overview of the Advanced Security Information Model (ASIM), its use cases, and major components.
> [!TIP] > Also watch the [ASIM Webinar](https://www.youtube.com/watch?v=WoGD-JeC7ng) or review the [webinar slides](https://1drv.ms/b/s!AnEPjr8tHcNmjDY1cro08Fk3KUj-?e=murYHG).
For more information, see the [OSSEM reference documentation](https://ossemproje
The following image shows how non-normalized data can be translated into normalized content and used in Microsoft Sentinel. For example, you can start with a custom, product-specific, non-normalized table, and use a parser and a normalization schema to convert that table to normalized data. Use your normalized data in both Microsoft and custom analytics, rules, workbooks, queries, and more.
- :::image type="content" source="media/normalization/asim-architecture.png" alt-text="Non-normalized to normalized data conversion flow and usage in Microsoft Sentinel":::
ASIM includes the following components:
-### Normalized schemas
+### Normalized schemas
Normalized schemas cover standard sets of predictable event types that you can use when building unified capabilities. Each schema defines the fields that represent an event, a normalized column naming convention, and a standard format for the field values.
For more information, see [ASIM parsers](normalization-parsers-overview.md).
### Ingest time normalization

Query time parsers have many advantages:
-
- They do not require the data to be modified, thus preserving the source format.
- Since they do not modify the data, but rather present a view of the data, they are easy to develop. Developing, testing, and fixing a parser can all be done on existing data. Moreover, parsers can be fixed when an issue is discovered, and the fix applies to existing data.

On the other hand, while ASIM parsers are optimized, query time parsing can slow down queries, especially on large data sets. To resolve this, Microsoft Sentinel complements query time parsing with ingest time parsing. Using ingest transformation, the events are normalized to a normalized table, accelerating queries that use normalized data. Currently, ASIM supports the following native normalized tables as a destination for ingest time normalization:
+
- [**ASimAuditEventLogs**](/azure/azure-monitor/reference/tables/asimauditeventlogs) for the [Audit Event](normalization-schema-audit.md) schema.
- **ASimAuthenticationEventLogs** for the [Authentication](normalization-schema-authentication.md) schema.
- [**ASimDnsActivityLogs**](/azure/azure-monitor/reference/tables/asimdnsactivitylogs) for the [DNS](normalization-schema-dns.md) schema.
- [**ASimNetworkSessionLogs**](/azure/azure-monitor/reference/tables/asimnetworksessionlogs) for the [Network Session](normalization-schema-network.md) schema.
- [**ASimWebSessionLogs**](/azure/azure-monitor/reference/tables/asimwebsessionlogs) for the [Web Session](normalization-schema-web.md) schema.
-
+
For more information, see [Ingest Time Normalization](normalization-ingest-time.md).

### Content for each normalized schema
-Content which uses ASIM includes solutions, analytics rules, workbooks, hunting queries, and more. Content for each normalized schema works on any normalized data without the need to create source-specific content.
+Content which uses ASIM includes solutions, analytics rules, workbooks, hunting queries, and more. Content for each normalized schema works on any normalized data without the need to create source-specific content.
For more information, see [ASIM content](normalization-content.md).
To start using ASIM:
- Enable your custom data to use built-in analytics by [writing parsers](normalization-develop-parsers.md) for your custom sources and [adding](normalization-manage-parsers.md) them to the relevant source agnostic parser.
-## <a name="next-steps"></a>Next steps
+## <a name="next-steps"></a>Related content
This article provides an overview of normalization in Microsoft Sentinel and ASIM.
sentinel Deploy Command Line https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deploy-command-line.md
Previously updated : 09/15/2024 Last updated : 10/31/2024 #Customer intent: As a security, infrastructure, or SAP BASIS team member, I want to deploy and configure a containerized SAP data connector agent from the command line so that I can ingest SAP data into Microsoft Sentinel for enhanced monitoring and threat detection.
This procedure describes how to prepare the deployment script to configure setti
For more information, see [Kickstart deployment script reference for the Microsoft Sentinel for SAP applications data connector agent](reference-kickstart.md).
+## Optimize SAP PAHI table monitoring (recommended)
+
+For optimal results in monitoring the SAP PAHI table, open the **systemconfig.json** file for editing, and under the [ABAP Table Selector](reference-systemconfig-json.md#abap-table-selector) section, enable both the `PAHI_FULL` and the `PAHI_INCREMENTAL` parameters.
+
+For more information, see [Systemconfig.json file reference](reference-systemconfig-json.md#abap-table-selector) and [Verify that the PAHI table is updated at regular intervals](preparing-sap.md#verify-that-the-pahi-table-is-updated-at-regular-intervals).
+ ## Check connectivity and health

After you deploy the SAP data connector agent, check your agent's health and connectivity. For more information, see [Monitor the health and role of your SAP systems](../monitor-sap-system-health.md).
sentinel Preparing Sap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/preparing-sap.md
The SAP PAHI table includes data on the history of the SAP system, the database,
- [SAP note 12103](https://launchpad.support.sap.com/#/notes/12103) - [Monitoring the configuration of static SAP security parameters (Preview)](sap-solution-security-content.md#monitor-the-configuration-of-static-sap-security-parameters-preview)
-> [!TIP]
-> For optimal results, in the *systemconfig.json* file on your data connector agent machine, under the `[ABAP Table Selector](reference-systemconfig-json.md#abap-table-selector)` section, enable both the `PAHI_FULL` and the `PAHI_INCREMENTAL` parameters. For more information, see [Systemconfig.json file reference](reference-systemconfig-json.md#abap-table-selector).
+If the PAHI table is updated regularly, the `SAP_COLLECTOR_FOR_PERFMONITOR` job is scheduled and runs hourly. If the `SAP_COLLECTOR_FOR_PERFMONITOR` job doesn't exist, make sure to configure it as needed.
-If the PAHI table is updated regularly, the `SAP_COLLECTOR_FOR_PERFMONITOR` job is scheduled and runs hourly. If the `SAP_COLLECTOR_FOR_PERFMONITOR` job doesn't exist, make sure to configure it as needed. For more information, see the SAP documentation: [Database Collector in Background Processing](https://help.sap.com/doc/saphelp_nw75/7.5.5/en-US/c4/3a735b505211d189550000e829fbbd/frameset.htm) and [Configuring the Data Collector](https://help.sap.com/docs/SAP_NETWEAVER_AS_ABAP_752/3364beced9d145a5ad185c89a1e04658/c43a818c505211d189550000e829fbbd.html)
+For more information, see:
+
+- SAP documentation: [Database Collector in Background Processing](https://help.sap.com/doc/saphelp_nw75/7.5.5/c4/3a735b505211d189550000e829fbbd/frameset.htm) and [Configuring the Data Collector](https://help.sap.com/docs/SAP_NETWEAVER_AS_ABAP_752/3364beced9d145a5ad185c89a1e04658/c43a818c505211d189550000e829fbbd.html)
+- [Optimize SAP PAHI table monitoring (recommended)](deploy-command-line.md#optimize-sap-pahi-table-monitoring-recommended)
## Configure your system to use SNC for secure connections
sentinel Sap Solution Deploy Alternate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-solution-deploy-alternate.md
For more information, see [Systemconfig.json file reference](reference-systemcon
### Define the SAP logs that are sent to Microsoft Sentinel
-The default **systemconfig** file is configured to cover built-in analytics, the SAP user authorization master data tables, with users and privilege information, and the ability to track changes and activities on the SAP landscape. The default configuration provides more logging information to allow for post-breach investigations and extended hunting abilities.
+The default **systemconfig.json** file is configured to cover built-in analytics, the SAP user authorization master data tables (with user and privilege information), and the ability to track changes and activities on the SAP landscape.
-However you might want to customize your configuration over time, especially as business processes tend to be seasonal.
+The default configuration provides more logging information to allow for post-breach investigations and extended hunting abilities. However, you might want to customize your configuration over time, especially as business processes tend to be seasonal.
Use the following sets of code to configure the **systemconfig.json** file to define the logs that are sent to Microsoft Sentinel.
For more information, see [Microsoft Sentinel solution for SAP applications solu
The following code configures a default configuration:
-```python
-##############################################################
-# Enter True OR False for each log to send those logs to Microsoft Sentinel
-[Logs Activation Status]
-ABAPAuditLog = True
-ABAPJobLog = True
-ABAPSpoolLog = True
-ABAPSpoolOutputLog = True
-ABAPChangeDocsLog = True
-ABAPAppLog = True
-ABAPWorkflowLog = True
-ABAPCRLog = True
-ABAPTableDataLog = False
-# ABAP SAP Control Logs - Retrieved by using SAP Conntrol interface and OS Login
-ABAPFilesLogs = False
-SysLog = False
-ICM = False
-WP = False
-GW = False
-# Java SAP Control Logs - Retrieved by using SAP Conntrol interface and OS Login
-JAVAFilesLogs = False
-##############################################################
+```json
+"logs_activation_status": {
+ "abapauditlog": "True",
+ "abapjoblog": "True",
+ "abapspoollog": "True",
+ "abapspooloutputlog": "True",
+ "abapchangedocslog": "True",
+ "abapapplog": "True",
+ "abapworkflowlog": "True",
+ "abapcrlog": "True",
+ "abaptabledatalog": "False",
+ "abapfileslogs": "False",
+ "syslog": "False",
+ "icm": "False",
+ "wp": "False",
+ "gw": "False",
+ "javafileslogs": "False"
``` #### Configure a detection-focused profile Use the following code to configure a detection-focused profile, which includes the core security logs of the SAP landscape required for the most of the analytics rules to perform well. Post-breach investigations and hunting capabilities are limited.
-```python
-##############################################################
-[Logs Activation Status]
-# ABAP RFC Logs - Retrieved by using RFC interface
-ABAPAuditLog = True
-ABAPJobLog = False
-ABAPSpoolLog = False
-ABAPSpoolOutputLog = False
-ABAPChangeDocsLog = True
-ABAPAppLog = False
-ABAPWorkflowLog = False
-ABAPCRLog = True
-ABAPTableDataLog = False
-# ABAP SAP Control Logs - Retrieved by using SAP Conntrol interface and OS Login
-ABAPFilesLogs = False
-SysLog = False
-ICM = False
-WP = False
-GW = False
-# Java SAP Control Logs - Retrieved by using SAP Conntrol interface and OS Login
-JAVAFilesLogs = False
-[ABAP Table Selector]
-AGR_TCODES_FULL = True
-USR01_FULL = True
-USR02_FULL = True
-USR02_INCREMENTAL = True
-AGR_1251_FULL = True
-AGR_USERS_FULL = True
-AGR_USERS_INCREMENTAL = True
-AGR_PROF_FULL = True
-UST04_FULL = True
-USR21_FULL = True
-ADR6_FULL = True
-ADCP_FULL = True
-USR05_FULL = True
-USGRP_USER_FULL = True
-USER_ADDR_FULL = True
-DEVACCESS_FULL = True
-AGR_DEFINE_FULL = True
-AGR_DEFINE_INCREMENTAL = True
-PAHI_FULL = False
-AGR_AGRS_FULL = True
-USRSTAMP_FULL = True
-USRSTAMP_INCREMENTAL = True
-AGR_FLAGS_FULL = True
-AGR_FLAGS_INCREMENTAL = True
-SNCSYSACL_FULL = False
-USRACL_FULL = False
+```json
+"logs_activation_status": {
+ "abapauditlog": "True",
+ "abapjoblog": "False",
+ "abapspoollog": "False",
+ "abapspooloutputlog": "False",
+ "abapchangedocslog": "True",
+ "abapapplog": "False",
+ "abapworkflowlog": "False",
+ "abapcrlog": "True",
+ "abaptabledatalog": "False",
+ "abapfileslogs": "False",
+ "syslog": "False",
+ "icm": "False",
+ "wp": "False",
+ "gw": "False",
+ "javafileslogs": "False"
+ },
+....
+ "abap_table_selector": {
+ "agr_tcodes_full": "True",
+ "usr01_full": "True",
+ "usr02_full": "True",
+ "usr02_incremental": "True",
+ "agr_1251_full": "True",
+ "agr_users_full": "True",
+ "agr_users_incremental": "True",
+ "agr_prof_full": "True",
+ "ust04_full": "True",
+ "usr21_full": "True",
+ "adr6_full": "True",
+ "adcp_full": "True",
+ "usr05_full": "True",
+ "usgrp_user_full": "True",
+ "user_addr_full": "True",
+ "devaccess_full": "True",
+ "agr_define_full": "True",
+ "agr_define_incremental": "True",
+ "pahi_full": "True",
+ "pahi_incremental": "True",
+ "agr_agrs_full": "True",
+ "usrstamp_full": "True",
+ "usrstamp_incremental": "True",
+ "agr_flags_full": "True",
+ "agr_flags_incremental": "True",
+ "sncsysacl_full": "False",
+ "usracl_full": "False",
``` Use the following code to configure a minimal profile, which includes the SAP Security Audit Log, which is the most important source of data that the Microsoft Sentinel solution for SAP applications uses to analyze activities on the SAP landscape. Enabling this log is the minimal requirement to provide any security coverage.
-```python
-[Logs Activation Status]
-# ABAP RFC Logs - Retrieved by using RFC interface
-ABAPAuditLog = True
-ABAPJobLog = False
-ABAPSpoolLog = False
-ABAPSpoolOutputLog = False
-ABAPChangeDocsLog = False
-ABAPAppLog = False
-ABAPWorkflowLog = False
-ABAPCRLog = False
-ABAPTableDataLog = False
-# ABAP SAP Control Logs - Retrieved by using SAP Conntrol interface and OS Login
-ABAPFilesLogs = False
-SysLog = False
-ICM = False
-WP = False
-GW = False
-# Java SAP Control Logs - Retrieved by using SAP Conntrol interface and OS Login
-JAVAFilesLogs = False
-[ABAP Table Selector]
-AGR_TCODES_FULL = False
-USR01_FULL = False
-USR02_FULL = False
-USR02_INCREMENTAL = False
-AGR_1251_FULL = False
-AGR_USERS_FULL = False
-AGR_USERS_INCREMENTAL = False
-AGR_PROF_FULL = False
-UST04_FULL = False
-USR21_FULL = False
-ADR6_FULL = False
-ADCP_FULL = False
-USR05_FULL = False
-USGRP_USER_FULL = False
-USER_ADDR_FULL = False
-DEVACCESS_FULL = False
-AGR_DEFINE_FULL = False
-AGR_DEFINE_INCREMENTAL = False
-PAHI_FULL = False
-AGR_AGRS_FULL = False
-USRSTAMP_FULL = False
-USRSTAMP_INCREMENTAL = False
-AGR_FLAGS_FULL = False
-AGR_FLAGS_INCREMENTAL = False
-SNCSYSACL_FULL = False
-USRACL_FULL = False
+```json
+"logs_activation_status": {
+ "abapauditlog": "True",
+ "abapjoblog": "False",
+ "abapspoollog": "False",
+ "abapspooloutputlog": "False",
+ "abapchangedocslog": "True",
+ "abapapplog": "False",
+ "abapworkflowlog": "False",
+ "abapcrlog": "True",
+ "abaptabledatalog": "False",
+ "abapfileslogs": "False",
+ "syslog": "False",
+ "icm": "False",
+ "wp": "False",
+ "gw": "False",
+ "javafileslogs": "False"
+ },
+....
+ "abap_table_selector": {
+ "agr_tcodes_full": "False",
+ "usr01_full": "False",
+ "usr02_full": "False",
+ "usr02_incremental": "False",
+ "agr_1251_full": "False",
+ "agr_users_full": "False",
+ "agr_users_incremental": "False",
+ "agr_prof_full": "False",
+ "ust04_full": "False",
+ "usr21_full": "False",
+ "adr6_full": "False",
+ "adcp_full": "False",
+ "usr05_full": "False",
+ "usgrp_user_full": "False",
+ "user_addr_full": "False",
+ "devaccess_full": "False",
+ "agr_define_full": "False",
+ "agr_define_incremental": "False",
+ "pahi_full": "False",
+ "pahi_incremental": "False",
+ "agr_agrs_full": "False",
+ "usrstamp_full": "False",
+ "usrstamp_incremental": "False",
+ "agr_flags_full": "False",
+ "agr_flags_incremental": "False",
+ "sncsysacl_full": "False",
+ "usracl_full": "False",
``` ### SAL logs connector settings
Add the following code to the Microsoft Sentinel for SAP data connector **system
For more information, see [Perform an expert / custom SAP data connector installation](#perform-an-expert--custom-installation).
-```python
-##############################################################
-[Connector Configuration]
-extractuseremail = True
-apiretry = True
-auditlogforcexal = False
-auditlogforcelegacyfiles = False
-timechunk = 60
-##############################################################
+```json
+ "connector_configuration": {
+ "extractuseremail": "True",
+ "apiretry": "True",
+ "auditlogforcexal": "False",
+ "auditlogforcelegacyfiles": "False",
+ "timechunk": "60"
``` This section enables you to configure the following parameters:
To ingest tables directly from your SAP system with details about your users and
For example:
-```python
-[ABAP Table Selector]
-USR01_FULL = True
-USR02_FULL = True
-USR02_INCREMENTAL = True
-UST04_FULL = True
-AGR_USERS_FULL = True
-AGR_USERS_INCREMENTAL = True
-USR21_FULL = True
-AGR_1251_FULL = True
-ADR6_FULL = True
-AGR_TCODES_FULL = True
-DEVACCESS_FULL = True
-AGR_DEFINE_FULL = True
-AGR_DEFINE_INCREMENTAL = True
-AGR_PROF_FULL = True
-PAHI_FULL = True
+```json
+ "abap_table_selector": {
+ "agr_tcodes_full": "True",
+ "usr01_full": "True",
+ "usr02_full": "True",
+ "usr02_incremental": "True",
+ "agr_1251_full": "True",
+ "agr_users_full": "True",
+ "agr_users_incremental": "True",
+ "agr_prof_full": "True",
+ "ust04_full": "True",
+ "usr21_full": "True",
+ "adr6_full": "True",
+ "adcp_full": "True",
+ "usr05_full": "True",
+ "usgrp_user_full": "True",
+ "user_addr_full": "True",
+ "devaccess_full": "True",
+ "agr_define_full": "True",
+ "agr_define_incremental": "True",
+ "pahi_full": "True",
+ "pahi_incremental": "True",
+ "agr_agrs_full": "True",
+ "usrstamp_full": "True",
+ "usrstamp_incremental": "True",
+ "agr_flags_full": "True",
+ "agr_flags_incremental": "True",
+ "sncsysacl_full": "False",
+ "usracl_full": "False",
``` For more information, see [Reference of tables retrieved directly from SAP systems](sap-solution-log-reference.md#reference-of-tables-retrieved-directly-from-sap-systems).
sentinel Sentinel Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-service-limits.md
The following limits apply to incidents in Microsoft Sentinel.
| Description | Limit | Dependency | | | | - | | Investigation experience availability | 90 days from the incident last update time | None |
+| Retention period for incident entities | 180 days | Entities database retention |
| Number of alerts | 150 alerts | None | | Number of automation rules | 512 rules | None | | Number of automation rule actions | 20 actions | None |
storage Storage Blob Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-dotnet-get-started.md
[!INCLUDE [storage-dev-guide-selector-getting-started](../../../includes/storage-dev-guides/storage-dev-guide-selector-getting-started.md)]
-This article shows you how to connect to Azure Blob Storage by using the Azure Blob Storage client library for .NET. Once connected, your code can operate on containers, blobs, and features of the Blob Storage service.
+This article shows you how to connect to Azure Blob Storage by using the Azure Blob Storage client library for .NET. Once connected, use the [developer guides](#build-your-app) to learn how your code can operate on containers, blobs, and features of the Blob Storage service.
+
+If you're looking to start with a complete example, see [Quickstart: Azure Blob Storage client library for .NET](storage-quickstart-blobs-dotnet.md).
[API reference](/dotnet/api/azure.storage.blobs) | [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Azure.Storage.Blobs) | [Package (NuGet)](https://www.nuget.org/packages/Azure.Storage.Blobs) | [Samples](../common/storage-samples-dotnet.md?toc=/azure/storage/blobs/toc.json#blob-samples) | [Give feedback](https://github.com/Azure/azure-sdk-for-net/issues)
storage Storage Blob Go Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-go-get-started.md
[!INCLUDE [storage-dev-guide-selector-getting-started](../../../includes/storage-dev-guides/storage-dev-guide-selector-getting-started.md)]
-This article shows you how to connect to Azure Blob Storage by using the Azure Blob Storage client module for Go. Once connected, your code can operate on containers, blobs, and features of the Blob Storage service.
+This article shows you how to connect to Azure Blob Storage by using the Azure Blob Storage client module for Go. Once connected, use the [developer guides](#build-your-app) to learn how your code can operate on containers, blobs, and features of the Blob Storage service.
+
+If you're looking to start with a complete example, see [Quickstart: Azure Blob Storage client library for Go](storage-quickstart-blobs-go.md).
[API reference documentation](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob#section-readme) | [Library source code](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/storage/azblob) | [Package (pkg.go.dev)](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob)
storage Storage Blob Java Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-java-get-started.md
[!INCLUDE [storage-dev-guide-selector-getting-started](../../../includes/storage-dev-guides/storage-dev-guide-selector-getting-started.md)]
-This article shows you how to connect to Azure Blob Storage by using the Azure Blob Storage client library for Java. Once connected, your code can operate on containers, blobs, and features of the Blob Storage service.
+This article shows you how to connect to Azure Blob Storage by using the Azure Blob Storage client library for Java. Once connected, use the [developer guides](#build-your-app) to learn how your code can operate on containers, blobs, and features of the Blob Storage service.
+
+If you're looking to start with a complete example, see [Quickstart: Azure Blob Storage client library for Java](storage-quickstart-blobs-java.md).
[API reference](/jav?toc=/azure/storage/blobs/toc.json#blob-samples) | [Give feedback](https://github.com/Azure/azure-sdk-for-java/issues)
storage Storage Blob Javascript Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-javascript-get-started.md
[!INCLUDE [storage-dev-guide-selector-getting-started](../../../includes/storage-dev-guides/storage-dev-guide-selector-getting-started.md)]
-This article shows you how to connect to Azure Blob Storage by using the Azure Blob Storage client library for JavaScript. Once connected, your code can operate on containers, blobs, and features of the Blob Storage service.
+This article shows you how to connect to Azure Blob Storage by using the Azure Blob Storage client library for JavaScript. Once connected, use the [developer guides](#build-your-app) to learn how your code can operate on containers, blobs, and features of the Blob Storage service.
+
+If you're looking to start with a complete example, see the client library quickstart for [JavaScript](storage-quickstart-blobs-nodejs.md) or [TypeScript](storage-quickstart-blobs-nodejs-typescript.md).
[API reference](/javascript/api/preview-docs/@azure/storage-blob) | [Package (npm)](https://www.npmjs.com/package/@azure/storage-blob) | [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/storage/storage-blob) | [Samples](../common/storage-samples-javascript.md?toc=/azure/storage/blobs/toc.json#blob-samples) | [Give feedback](https://github.com/Azure/azure-sdk-for-js/issues)
storage Storage Blob Python Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-python-get-started.md
ai-usage: ai-assisted
[!INCLUDE [storage-dev-guide-selector-getting-started](../../../includes/storage-dev-guides/storage-dev-guide-selector-getting-started.md)]
-This article shows you how to connect to Azure Blob Storage by using the Azure Blob Storage client library for Python. Once connected, your code can operate on containers, blobs, and features of the Blob Storage service.
+This article shows you how to connect to Azure Blob Storage by using the Azure Blob Storage client library for Python. Once connected, use the [developer guides](#build-your-app) to learn how your code can operate on containers, blobs, and features of the Blob Storage service.
+
+If you're looking to start with a complete example, see [Quickstart: Azure Blob Storage client library for Python](storage-quickstart-blobs-python.md).
[API reference](/python/api/azure-storage-blob) | [Package (PyPi)](https://pypi.org/project/azure-storage-blob/) | [Library source code](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/storage/azure-storage-blob) | [Samples](../common/storage-samples-python.md?toc=/azure/storage/blobs/toc.json#blob-samples) | [Give feedback](https://github.com/Azure/azure-sdk-for-python/issues)
storage Storage Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-network-security.md
Resources of some services can access your storage account for selected operatio
| Azure File Sync | `Microsoft.StorageSync` | Transform your on-premises file server to a cache for Azure file shares. This capability allows multiple-site sync, fast disaster recovery, and cloud-side backup. [Learn more](../file-sync/file-sync-planning.md). | | Azure HDInsight | `Microsoft.HDInsight` | Provision the initial contents of the default file system for a new HDInsight cluster. [Learn more](../../hdinsight/hdinsight-hadoop-use-blob-storage.md). | | Azure Import/Export | `Microsoft.ImportExport` | Import data to Azure Storage or export data from Azure Storage. [Learn more](../../import-export/storage-import-export-service.md). |
-| Azure Monitor | `Microsoft.Insights` | Write monitoring data to a secured storage account, including resource logs, Microsoft Entra sign-in and audit logs, and Microsoft Intune logs. [Learn more](/azure/azure-monitor/roles-permissions-security). |
+| Azure Monitor | `Microsoft.Insights` | Write monitoring data to a secured storage account, including resource logs, Microsoft Defender for Endpoint data, Microsoft Entra sign-in and audit logs, and Microsoft Intune logs. [Learn more](/azure/azure-monitor/roles-permissions-security). |
| Azure networking services | `Microsoft.Network` | Store and analyze network traffic logs, including through the Azure Network Watcher and Azure Traffic Manager services. [Learn more](../../network-watcher/network-watcher-nsg-flow-logging-overview.md). | | Azure Site Recovery | `Microsoft.SiteRecovery` | Enable replication for disaster recovery of Azure IaaS virtual machines when you're using firewall-enabled cache, source, or target storage accounts. [Learn more](../../site-recovery/azure-to-azure-tutorial-enable-replication.md). |
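These services reach a storage account that's behind a firewall only when the account's network rules allow them. As a minimal, illustrative sketch of the relevant `networkAcls` fragment of a storage account template, with placeholder values; the exact `bypass` values and resource access rules you need depend on the service:

```json
"networkAcls": {
  "defaultAction": "Deny",
  "bypass": "AzureServices, Logging, Metrics",
  "resourceAccessRules": [
    {
      "tenantId": "<tenant-id>",
      "resourceId": "<resource-id-of-the-service-instance>"
    }
  ]
}
```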
synapse-analytics Synapse Workspace Synapse Rbac Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/synapse-workspace-synapse-rbac-roles.md
description: This article describes the built-in Synapse RBAC (role-based access
Previously updated : 06/16/2023 Last updated : 11/05/2024 -+ # Synapse RBAC Roles
-The article describes the built-in Synapse RBAC (role-based access control) roles, the permissions they grant, and the scopes at which they can be used.
+The article describes the built-in Synapse RBAC (role-based access control) roles, the permissions they grant, and the scopes at which they can be used.
For more information on reviewing and assigning Synapse role memberships, see [how to review Synapse RBAC role assignments](./how-to-review-synapse-rbac-role-assignments.md) and [how to assign Synapse RBAC roles](./how-to-manage-synapse-rbac-role-assignments.md).
For more information on reviewing and assigning Synapse role memberships, see [h
The following table describes the built-in roles and the scopes at which they can be used. > [!NOTE]
-> Users with any Synapse RBAC role at any scope automatically have the Synapse User role at workspace scope.
+> Users with any Synapse RBAC role at any scope automatically have the Synapse User role at workspace scope.
> [!IMPORTANT] > Synapse RBAC roles do not grant permissions to create or manage SQL pools, Apache Spark pools, and Integration runtimes in Azure Synapse workspaces. Azure Owner or Azure Contributor roles on the resource group are required for these actions.
The following table lists the built-in roles and the actions/permissions that ea
|Role|Actions| |--|--|
-|Synapse Administrator|workspaces/read</br>workspaces/roleAssignments/write</br>workspaces/roleAssignments/delete</br>workspaces/managedPrivateEndpoint/write</br>workspaces/managedPrivateEndpoint/delete</br>workspaces/bigDataPool/useCompute/action</br>workspaces/bigDataPool/viewLogs/action</br>workspaces/scopePool/useCompute/action</br>workspaces/scopePool/viewLogs/action</br>workspaces/integrationRuntime/useCompute/action</br>workspaces/integrationRuntime/viewLogs/action</br>workspaces/artifacts/read</br>workspaces/notebooks/write</br>workspaces/sparkJobDefinitions/write</br>workspaces/scopeJobDefinitions/write</br>workspaces/sqlScripts/write</br>workspaces/dataFlows/write</br>workspaces/dataMappers/write</br>workspaces/pipelines/write</br>workspaces/triggers/write</br>workspaces/datasets/write</br>workspaces/linkedServices/write</br>workspaces/credentials/write</br>workspaces/notebooks/delete</br>workspaces/sparkJobDefinitions/delete</br>workspaces/scopeJobDefinitions/delete</br>workspaces/sqlScripts/delete</br>workspaces/dataFlows/delete</br>workspaces/dataMappers/delete</br>workspaces/pipelines/delete</br>workspaces/triggers/delete</br>workspaces/datasets/delete</br>workspaces/linkedServices/delete</br>workspaces/credentials/delete</br>workspaces/cancelPipelineRun/action</br>workspaces/notebooksViewOutputs/action</br>workspaces/pipelinesViewOutputs/action</br>workspaces/linkedServicesUseSecret/action</br>workspaces/credentialsUseSecret/action</br>workspaces/libraries/delete</br>workspaces/libraries/write</br>workspaces/kQLScripts/write</br>workspaces/kQLScripts/delete</br>workspaces/sparkConfigurations/write</br>workspaces/sparkConfigurations/delete</br>workspaces/synapseLinkConnections/read</br>workspaces/synapseLinkConnections/write</br>workspaces/synapseLinkConnections/delete</br>workspaces/synapseLinkConnections/useCompute/action|
-|Synapse Apache Spark Administrator|workspaces/read</br>workspaces/bigDataPools/useCompute/action</br>workspaces/bigDataPools/viewLogs/action</br>workspaces/notebooks/viewOutputs/action</br>workspaces/artifacts/read</br>workspaces/notebooks/write, delete</br>workspaces/sparkJobDefinitions/write, delete</br>workspaces/libraries/write, delete</br>workspaces/linkedServices/write, delete</br>workspaces/credentials/write, delete|
+|Synapse Administrator|workspaces/read</br>workspaces/roleAssignments/write, delete</br>workspaces/managedPrivateEndpoint/write, delete</br>workspaces/bigDataPool/useCompute/action</br>workspaces/bigDataPool/viewLogs/action</br>workspaces/scopePool/useCompute/action</br>workspaces/scopePool/viewLogs/action</br>workspaces/integrationRuntime/useCompute/action</br>workspaces/integrationRuntime/viewLogs/action</br>workspaces/artifacts/read</br>workspaces/notebooks/write</br>workspaces/sparkJobDefinitions/write, delete</br>workspaces/scopeJobDefinitions/write, delete</br>workspaces/sqlScripts/write, delete</br>workspaces/dataFlows/write, delete</br>workspaces/dataMappers/write, delete</br>workspaces/pipelines/write, delete</br>workspaces/triggers/write, delete</br>workspaces/datasets/write, delete</br>workspaces/linkedServices/write, delete</br>workspaces/credentials/write, delete</br>workspaces/notebooks/delete</br>workspaces/cancelPipelineRun/action</br>workspaces/notebooksViewOutputs/action</br>workspaces/pipelinesViewOutputs/action</br>workspaces/linkedServicesUseSecret/action</br>workspaces/credentialsUseSecret/action</br>workspaces/libraries/write, delete</br>workspaces/kQLScripts/write, delete</br>workspaces/sparkConfigurations/write, delete</br>workspaces/synapseLinkConnections/read, write, delete</br>workspaces/synapseLinkConnections/useCompute/action|
+|Synapse Apache Spark Administrator|workspaces/read</br>workspaces/bigDataPoolUseCompute/action</br>workspaces/bigDataPoolViewLogs/action</br>workspaces/artifacts/read</br>workspaces/notebooks/write, delete</br>workspaces/sparkJobDefinitions/write, delete</br>workspaces/linkedServices/write, delete</br>workspaces/credentials/write, delete</br>workspaces/libraries/write, delete</br>workspaces/notebooksViewOutputs/action|
|Synapse SQL Administrator|workspaces/read</br>workspaces/artifacts/read</br>workspaces/sqlScripts/write, delete</br>workspaces/linkedServices/write, delete</br>workspaces/credentials/write, delete|
-|Synapse Contributor|workspaces/read</br>workspaces/bigDataPool/useCompute/action</br>workspaces/bigDataPool/viewLogs/action</br>workspaces/scopePool/useCompute/action</br>workspaces/scopePool/viewLogs/action</br>workspaces/integrationRuntime/useCompute/action</br>workspaces/integrationRuntime/viewLogs/action</br>workspaces/artifacts/read</br>workspaces/notebooks/write</br>workspaces/sparkJobDefinitions/write</br>workspaces/sqlScripts/write</br>workspaces/dataFlows/write</br>workspaces/dataMappers/write</br>workspaces/pipelines/write</br>workspaces/triggers/write</br>workspaces/datasets/write</br>workspaces/linkedServices/write</br>workspaces/credentials/write</br>workspaces/notebooks/delete</br>workspaces/sparkJobDefinitions/delete</br>workspaces/sqlScripts/delete</br>workspaces/dataFlows/delete</br>workspaces/dataMappers/delete</br>workspaces/pipelines/delete</br>workspaces/triggers/delete</br>workspaces/datasets/delete</br>workspaces/linkedServices/delete</br>workspaces/credentials/delete</br>workspaces/cancelPipelineRun/action</br>workspaces/notebooksViewOutputs/action</br>workspaces/pipelinesViewOutputs/action</br>workspaces/libraries/delete</br>workspaces/libraries/write</br>workspaces/kQLScripts/write</br>workspaces/kQLScripts/delete</br>workspaces/sparkConfigurations/write</br>workspaces/sparkConfigurations/delete</br>workspaces/synapseLinkConnections/read</br>workspaces/synapseLinkConnections/write</br>workspaces/synapseLinkConnections/delete</br>workspaces/synapseLinkConnections/useComputeAction|
-|Synapse Artifact Publisher|workspaces/read</br>workspaces/artifacts/read</br>workspaces/notebooks/write</br>workspaces/sparkJobDefinitions/write</br>workspaces/scopeJobDefinitions/write</br>workspaces/sqlScripts/write</br>workspaces/dataFlows/write</br>workspaces/dataMappers/write</br>workspaces/pipelines/write</br>workspaces/triggers/write</br>workspaces/datasets/write</br>workspaces/linkedServices/write</br>workspaces/credentials/write</br>workspaces/notebooks/delete</br>workspaces/sparkJobDefinitions/delete</br>workspaces/scopeJobDefinitions/delete</br>workspaces/sqlScripts/delete</br>workspaces/dataFlows/delete</br>workspaces/dataMappers/delete</br>workspaces/pipelines/delete</br>workspaces/triggers/delete</br>workspaces/datasets/delete</br>workspaces/linkedServices/delete</br>workspaces/credentials/delete</br>workspaces/notebooksViewOutputs/action</br>workspaces/pipelinesViewOutputs/action</br>workspaces/libraries/delete</br>workspaces/libraries/write</br>workspaces/kQLScripts/write</br>workspaces/kQLScripts/delete</br>workspaces/sparkConfigurations/write</br>workspaces/sparkConfigurationsDeleteAction|
+|Synapse Scope Administrator|workspaces/read</br>workspaces/scopePoolUseCompute/action</br>workspaces/scopePoolViewLogs/action</br>workspaces/linkedServices/write, delete</br>workspaces/credentials/write, delete</br>workspaces/scopeJobDefinitions/write, delete|
+|Synapse Private Endpoint Manager|workspaces/read</br>workspaces/managedPrivateEndpoint/write, delete</br>workspaces/linkedServices/write, delete</br>workspaces/credentials/write, delete|
+|Synapse Contributor|workspaces/read</br>workspaces/bigDataPool/useCompute/action</br>workspaces/bigDataPool/viewLogs/action</br>workspaces/scopePool/useCompute/action</br>workspaces/scopePool/viewLogs/action</br>workspaces/integrationRuntime/useCompute/action</br>workspaces/integrationRuntime/viewLogs/action</br>workspaces/artifacts/read</br>workspaces/notebooks/write, delete</br>workspaces/sparkJobDefinitions/write, delete</br>workspaces/sqlScripts/write, delete</br>workspaces/dataFlows/write, delete</br>workspaces/dataMappers/write, delete</br>workspaces/pipelines/write, delete</br>workspaces/triggers/write, delete</br>workspaces/datasets/write, delete</br>workspaces/linkedServices/write, delete</br>workspaces/credentials/write, delete</br>workspaces/cancelPipelineRun/action</br>workspaces/notebooksViewOutputs/action</br>workspaces/pipelinesViewOutputs/action</br>workspaces/libraries/write, delete</br>workspaces/kQLScripts/write, delete</br>workspaces/sparkConfigurations/write, delete</br>workspaces/synapseLinkConnections/read, write, delete</br>workspaces/synapseLinkConnections/useCompute/action|
+|Synapse Artifact Publisher|workspaces/read</br>workspaces/artifacts/read</br>workspaces/notebooks/write, delete</br>workspaces/sparkJobDefinitions/write, delete</br>workspaces/scopeJobDefinitions/write, delete</br>workspaces/sqlScripts/write, delete</br>workspaces/dataFlows/write, delete</br>workspaces/dataMappers/write, delete</br>workspaces/pipelines/write, delete</br>workspaces/triggers/write, delete</br>workspaces/datasets/write, delete</br>workspaces/linkedServices/write, delete</br>workspaces/credentials/write, delete</br>workspaces/notebooksViewOutputs/action</br>workspaces/pipelinesViewOutputs/action</br>workspaces/libraries/write, delete</br>workspaces/kQLScripts/write, delete</br>workspaces/sparkConfigurations/write, delete|
|Synapse Artifact User|workspaces/read</br>workspaces/artifacts/read</br>workspaces/notebooks/viewOutputs/action</br>workspaces/pipelines/viewOutputs/action|
-|Synapse Compute Operator |workspaces/read</br>workspaces/bigDataPools/useCompute/action</br>workspaces/bigDataPools/viewLogs/action</br>workspaces/integrationRuntimes/useCompute/action</br>workspaces/integrationRuntimes/viewLogs/action</br>workspaces/linkConnections/read</br>workspaces/linkConnections/useCompute/action|
+|Synapse Compute Operator |workspaces/read</br>workspaces/bigDataPools/useCompute/action</br>workspaces/bigDataPools/viewLogs/action</br>workspaces/scopePool/useCompute/action</br>workspaces/scopePool/viewLogs/action</br>workspaces/integrationRuntimes/useCompute/action</br>workspaces/integrationRuntimes/viewLogs/action</br>workspaces/cancelPipelineRun/action</br>workspaces/linkConnections/read</br>workspaces/linkConnections/useCompute/action|
|Synapse Monitoring Operator |workspaces/read</br>workspaces/artifacts/read</br>workspaces/notebooks/viewOutputs/action</br>workspaces/pipelines/viewOutputs/action</br>workspaces/integrationRuntimes/viewLogs/action</br>workspaces/bigDataPools/viewLogs/action| |Synapse Credential User|workspaces/read</br>workspaces/linkedServices/useSecret/action</br>workspaces/credentials/useSecret/action| |Synapse Linked Data Manager|workspaces/read</br>workspaces/managedPrivateEndpoint/write, delete</br>workspaces/linkedServices/write, delete</br>workspaces/credentials/write, delete|
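For reference, a role assignment pairs one of these roles with a Microsoft Entra principal at a given scope. The following is a rough sketch of a request body for the Synapse role assignments REST API, using placeholder values only; see [how to assign Synapse RBAC roles](./how-to-manage-synapse-rbac-role-assignments.md) for the supported assignment methods:

```json
{
  "roleId": "<synapse-role-definition-guid>",
  "principalId": "<principal-object-id>",
  "scope": "workspaces/<workspace-name>"
}
```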
virtual-desktop Client Device Redirection Intune https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/client-device-redirection-intune.md
Title: Configure client device redirection settings for Windows App and the Remote Desktop app using Microsoft Intune
-description: Learn how to configure redirection settings for Windows App and the Remote Desktop app for iOS/iPadOS and Android client devices using Microsoft Intune.
+description: Learn how to configure redirection settings for Windows App on iOS/iPadOS and the Remote Desktop app on Android by using Microsoft Intune.
Previously updated : 08/21/2024 Last updated : 10/31/2024 # Configure client device redirection settings for Windows App and the Remote Desktop app using Microsoft Intune
+> [!IMPORTANT]
+> Configuring redirection settings for the **Remote Desktop app on Android** using Microsoft Intune is currently in PREVIEW. Configuring redirection settings for **Windows App on iOS/iPadOS** using Microsoft Intune is generally available.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+ > [!TIP] > This article contains information for multiple products that use the Remote Desktop Protocol (RDP) to provide remote access to Windows desktops and applications.
These features enable you to achieve the following scenarios:
- Provide an extra layer of protection against misconfigured redirection on the host pool or session host. -- Apply extra security settings to Windows App and the Remote Desktop app, such as, require a PIN, block third-party keyboards, and restrict cut, copy and paste operations between other apps on the client device.
+- Apply extra security settings to Windows App and the Remote Desktop app, such as requiring a PIN, blocking third-party keyboards, and restricting cut, copy, and paste operations between other apps on the client device.
If the redirection settings on a client device conflict with the host pool RDP properties and session host for Azure Virtual Desktop, or Cloud PC for Windows 365, the more restrictive setting between the two takes effect. For example, if the session host disallows drive redirection and the client device allows drive redirection, drive redirection is disallowed. If the redirection settings on the session host and client device are the same, the redirection behavior is consistent.
For Windows App:
|--|:--:|:--:| | iOS and iPadOS | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | + For the Remote Desktop app: | Device platform | Managed devices | Unmanaged devices |
Before you can configure redirection settings on a client device using Microsoft
- At least one security group containing users to apply the policies to. -- To use the Remote Desktop app with enrolled devices on iOS and iPadOS, you need to add each app to Intune from the App Store. For more information, see [Add iOS store apps to Microsoft Intune](/mem/intune/apps/store-apps-ios).
+- To use Windows App with enrolled devices on iOS and iPadOS, you need to add each app to Intune from the App Store. For more information, see [Add iOS store apps to Microsoft Intune](/mem/intune/apps/store-apps-ios).
- A client device running one of the following versions of Windows App or the Remote Desktop app: - For Windows App:
- - iOS and iPadOS: 10.5.2 or later.
+ - iOS and iPadOS: 11.0.4 or later.
- Remote Desktop app:
- - iOS and iPadOS: 10.5.8 or later.
- Android: 10.0.19.1279 or later. - There are more Intune prerequisites for configuring app configuration policies, app protection policies, and Conditional Access policies. For more information, see:
To learn about filters and how to create them, see [Use filters when assigning y
## Create an app configuration policy for managed devices
-For iOS and iPadOS devices that are enrolled only, you need to create an [app configuration policy for managed devices](/mem/intune/apps/app-configuration-policies-overview#managed-devices) for the Remote Desktop app.
+For iOS and iPadOS devices that are enrolled only, you need to create an [app configuration policy for managed devices](/mem/intune/apps/app-configuration-policies-overview#managed-devices) for Windows App. This step isn't needed for Android.
To create and apply an app configuration policy for managed devices, follow the steps in [Add app configuration policies for managed iOS/iPadOS devices](/mem/intune/apps/app-configuration-policies-use-ios) and use the following settings: -- On the **Basics** tab, for **targeted app**, select the **Remote Desktop Mobile** app from the list. You need to have added the app to Intune from the App Store for it to show in this list.
+- On the **Basics** tab, for **targeted app**, select **Windows App** from the list. You need to have added the app to Intune from the App Store for it to show in this list.
- On the **Settings** tab, for the **Configuration settings format** drop-down list, select **Use configuration designer**, then enter the following settings exactly as shown:
To create and apply an app configuration policy for managed devices, follow the
## Create an app configuration policy for managed apps
-You need to create an [app configuration policy for managed apps](/mem/intune/apps/app-configuration-policies-overview#managed-devices) for Windows App and the Remote Desktop app, which enable you to provide configuration settings.
+You need to create a separate [app configuration policy for managed apps](/mem/intune/apps/app-configuration-policies-overview#managed-devices) for Windows App (iOS/iPadOS) and the Remote Desktop app (Android), which enables you to provide configuration settings. Don't configure both Android and iOS in the same configuration policy or you won't be able to configure policy targeting based on managed and unmanaged devices.
To create and apply an app configuration policy for managed apps, follow the steps in [App configuration policies for Intune App SDK managed apps](/mem/intune/apps/app-configuration-policies-managed-app) and use the following settings: -- On the **Basics** tab, select **Select public apps**, then search for and select **Remote Desktop**. Selecting **Remote Desktop** applies to both Windows App and the Remote Desktop app.
+- On the **Basics** tab, select **Select public apps**, then search for and select **Remote Desktop** for Android and **Windows App** for iOS/iPadOS.
- On the **Settings** tab, expand **General configuration settings**, then enter the following name and value pairs for each redirection setting you want to configure exactly as shown. These values correspond to the RDP properties listed on [Supported RDP properties](/azure/virtual-desktop/rdp-properties#device-redirection), but the syntax is different:
To create and apply an app configuration policy for managed apps, follow the ste
## Create an app protection policy
-You need to create an [app protection policy](/mem/intune/apps/app-protection-policy) for Windows App and the Remote Desktop app, which enable you to control how data is accessed and shared by apps on mobile devices.
+You need to create a separate [app protection policy](/mem/intune/apps/app-protection-policy) for Windows App (iOS/iPadOS) and the Remote Desktop app (Android), which enables you to control how data is accessed and shared by apps on mobile devices. Don't configure both Android and iOS/iPadOS in the same protection policy, or you won't be able to configure policy targeting based on managed and unmanaged devices.
-To create and apply an app protection policy, follow the steps in [How to create and assign app protection policies](/mem/intune/apps/app-protection-policies) and use the following settings. You need to create an app protection policy for each platform you want to target.
+To create and apply an app protection policy, follow the steps in [How to create and assign app protection policies](/mem/intune/apps/app-protection-policies) and use the following settings.
-- On the **Apps** tab, select **Select public apps**, then search for and select **Remote Desktop**. Selecting **Remote Desktop** applies to both Windows App and the Remote Desktop app.
+- On the **Apps** tab, select **Select public apps**, then search for and select **Remote Desktop** for Android and **Windows App** for iOS/iPadOS.
-- On the **Data protection** tab, only the following settings are relevant to Windows App and the Remote Desktop app. The other settings don't apply as Windows App and the Remote Desktop app interact with the session host and not with data in the app. On mobile devices, unapproved keyboards are a source of keystroke logging and theft.
+- On the **Data protection** tab, only the following settings are relevant to Windows App and the Remote Desktop app. The other settings don't apply as Windows App and the Remote Desktop app interact with the session host and not with data in the app. On mobile devices, unapproved keyboards are a source of keystroke logging and theft.
- - For iOS and iPadOS you can configure the following settings:
+ - For iOS and iPadOS, you can configure the following settings:
- Restrict cut, copy, and paste between other apps - Third-party keyboards
- - For Android you can configure the following settings:
+ - For Android, you can configure the following settings:
    - Restrict cut, copy, and paste between other apps
    - Screen capture and Google Assistant
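Under the hood, these data protection choices map to properties on the Intune app protection policy object. The following is a rough Microsoft Graph sketch for iOS/iPadOS, assuming the `iosManagedAppProtection` resource type; the values are illustrative only, and you'd normally configure the policy in the Intune admin center as described in this section:

```json
{
  "@odata.type": "#microsoft.graph.iosManagedAppProtection",
  "displayName": "Windows App - data protection",
  "pinRequired": true,
  "thirdPartyKeyboardsBlocked": true,
  "allowedOutboundClipboardSharingLevel": "managedApps"
}
```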
To create and apply an app protection policy, follow the steps in [How to create
| Primary MTD service | Device condition | Based on your requirements.<br /><br />Your MTD connector must be set up. For **Microsoft Defender for Endpoint**, [configure Microsoft Defender for Endpoint in Intune](/mem/intune/protect/advanced-threat-protection-configure). | Block access | | Max allowed device threat level | Device condition | Secured | Block access |
- For version details, see [What's new in Windows App](/windows-app/whats-new), [What's new in the Remote Desktop client for iOS and iPadOS](whats-new-client-ios-ipados.md), and [What's new in the Remote Desktop client for Android and Chrome OS](whats-new-client-android-chrome-os.md).
+ For version details, see [What's new in Windows App](/windows-app/whats-new?tabs=ios-ipados), and [What's new in the Remote Desktop client for Android and Chrome OS](whats-new-client-android-chrome-os.md).
For more information about the available settings, see [Conditional launch in iOS app protection policy settings](/mem/intune/apps/app-protection-policy-settings-ios#conditional-launch) and [Conditional launch in Android app protection policy settings](/mem/intune/apps/app-protection-policy-settings-android#conditional-launch).
Now that you configure Intune to manage device redirection on personal devices,
## Known issues
-When creating an App Configuration Policy or an App Protection Policy, Remote Desktop is still shown instead of Windows App. This will be updated soon.
+When creating an app configuration policy or an app protection policy for Android, Remote Desktop is listed twice. Add both apps. This will be updated soon so Remote Desktop is only shown once.
virtual-desktop Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/prerequisites.md
You have a choice of operating systems (OS) that you can use for session hosts t
To learn more about licenses you can use, including per-user access pricing, see [Licensing Azure Virtual Desktop](licensing.md). > [!IMPORTANT]
-> - The following items are not supported:
+> - The following items aren't supported for session hosts:
> - 32-bit operating systems. > - N, KN, LTSC, and other editions of Windows operating systems not listed in the previous table. > - [Ultra disks](/azure/virtual-machines/disks-types#ultra-disks) for the OS disk type. > - [Ephemeral OS disks for Azure VMs](/azure/virtual-machines/ephemeral-os-disks). > - [Virtual Machine Scale Sets](/azure/virtual-machine-scale-sets/overview).
->
-> - Support for Windows 7 ended on January 10, 2023.
-> - Support for Windows Server 2012 R2 ended on October 10, 2023.
+> - Arm64-based Azure VMs.
For Azure, you can use operating system images provided by Microsoft in the [Azure Marketplace](https://azuremarketplace.microsoft.com), or create your own custom images stored in an Azure Compute Gallery or as a managed image. Using custom image templates for Azure Virtual Desktop enables you to easily create a custom image that you can use when deploying session host virtual machines (VMs). To learn more about how to create custom images, see:
virtual-network-manager Concept Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-limitations.md
This article provides an overview of the current limitations when you're using [
## Limitations for connected groups * A connected group can have up to 250 virtual networks. Virtual networks in a [mesh topology](concept-connectivity-configuration.md#mesh-network-topology) are in a [connected group](concept-connectivity-configuration.md#connected-group), so a mesh configuration has a limit of 250 virtual networks.
-* The following BareMetal Infrastructures are supported:
+* BareMetal Infrastructures aren't supported. This includes the following BareMetal Infrastructures:
  * [Azure NetApp Files](../azure-netapp-files/index.yml)
  * [Azure VMware Solution](../azure-vmware/index.yml)
  * [Nutanix Cloud Clusters on Azure](../baremetal-infrastructure/workloads/nc2-on-azure/about-nc2-on-azure.md)
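A minimal sketch of a mesh connectivity configuration, with a placeholder network group resource ID and other per-group properties omitted; the 250-virtual-network limit applies to the network group that the configuration targets:

```json
{
  "properties": {
    "connectivityTopology": "Mesh",
    "appliesToGroups": [
      {
        "networkGroupId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/networkManagers/<network-manager>/networkGroups/<network-group>"
      }
    ]
  }
}
```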
virtual-network-manager Concept User Defined Route https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-user-defined-route.md
description: Learn to automate and simplifying routing behaviors using user-defi
Previously updated : 10/23/2024 Last updated : 11/05/2024 # Customer Intent: As a network engineer, I want learn how I can automate and simplify routing within my Azure Network using User-defined routes.
You can also easily choose an Azure Firewall as the next hop by selecting **Impo
:::image type="content" source="media/how-to-deploy-user-defined-routes/add-routing-rule-azure-firewall.png" alt-text="Screenshot of routing rule with Azure Firewall option.":::
+### Use more user-defined routes in a single route table
+
+In Azure Virtual Network Manager UDR management, users can now create up to 1,000 user-defined routes (UDRs) in a single route table, compared to the traditional 400-route limit. This higher limit enables more complex routing configurations, such as directing traffic from on-premises data centers through a firewall to each spoke virtual network in a hub-and-spoke topology. This expanded capacity is especially useful for managing traffic inspection and security across large-scale network architectures with numerous spokes.
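As a rough sketch, a single routing rule in such a configuration might look like the following; the property names follow the Azure Virtual Network Manager routing rule resource, and the next hop address is a placeholder for your firewall's private IP:

```json
{
  "properties": {
    "destination": {
      "type": "AddressPrefix",
      "destinationAddress": "0.0.0.0/0"
    },
    "nextHop": {
      "nextHopType": "VirtualAppliance",
      "nextHopAddress": "<firewall-private-ip>"
    }
  }
}
```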
+ ## Common routing scenarios

Here are the common routing scenarios that you can simplify and automate by using UDR management.
Here are the common routing scenarios that you can simplify and automate by usin
When you add other virtual networks to a network group, the routing configuration is automatically applied to the new virtual network. Your network manager automatically detects the new virtual network and applies the routing configuration to it. When you remove a virtual network from the network group, the applied routing configuration is automatically removed as well.
-Newly created or deleted subnets have their route table updated with eventual consistency. The processing time may vary based on the volume of subnet creation and deletion.
-
+Newly created or deleted subnets have their route table updated with eventual consistency. The processing time can vary based on the volume of subnet creation and deletion.
-## Limitations of UDR management
+## Impact of UDR Management on routes and route tables
-The following are the limitations of UDR management with Azure Virtual Network
+The following are the impacts of UDR management with Azure Virtual Network Manager on routes and route tables:
- When conflicting routing rules exist (rules with same destination but different next hops), they aren't supported within or across rule collections that target the same virtual network or subnet. - When you create a route rule with the same destination as an existing route in the route table, the routing rule is ignored. - When a virtual network manager-created UDR is manually modified in the route table, the route isn't up when an empty commit is performed. Also, any update to the rule isn't reflected in the route with the same destination. - Existing Azure services in the Hub virtual network maintain their existing limitations with respect to Route Table and UDRs. - Azure Virtual Network Manager requires a managed resource group to store the route table. If you need to delete the resource group, deletion must happen before any new deployments are attempted for resources in the same subscription.-- UDR Management supports creating 1000 UDRs within a route table. This means that you can create a routing configuration with a maximum of 1,000 routing rules.
- UDR management allows users to create up to 1,000 UDRs per route table.
## Next step
virtual-network-manager How To Create User Defined Route https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-create-user-defined-route.md
In this step, you deploy the routing configuration to create the UDRs for the ne
1. Select **Next** and then **Deploy** to deploy the routing configuration. > [!NOTE]
-> When you create and deploy a routing configuration, you need to be aware of the impact of existing routing rules. For more information, see [limitations for UDR management](./concept-user-defined-route.md#limitations-of-udr-management).
+> When you create and deploy a routing configuration, you need to be aware of the impact of existing routing rules. For more information, see [Impact of UDR management on routes and route tables](./concept-user-defined-route.md#impact-of-udr-management-on-routes-and-route-tables).
## Next steps
virtual-network-manager How To Manage Ip Addresses Network Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-manage-ip-addresses-network-manager.md
Previously updated : 10/08/2024 Last updated : 10/25/2024 #customer intent: As a network administrator, I want to learn how to manage IP addresses with Azure Virtual Network Manager so that I can create and assign IP address pools to my virtual networks.
In this step, you delegate permissions to other users to manage IP address pools
In this step, you create a virtual network with a nonoverlapping CIDR range by allowing IP address manager to automatically provide a nonoverlapping CIDR.
+# [Azure Portal](#tab/azureportal)
+ 1. In the Azure portal, search for and select **Virtual networks**. 2. Select **+ Create**. 3. On the **Basics** tab, enter the following information:
In this step, you create a virtual network with a nonoverlapping CIDR range by a
8. Optionally create subnets referring to the selected pool. 9. Select **Review + create** and then **Create** to create the virtual network.
+# [Azure Resource Manager Template](#tab/armtemplate)
+
+In this step, you create a virtual network with a nonoverlapping CIDR range using an Azure Resource Manager template.
+
+1. Sign in to Azure and search for **Deploy a custom template**.
+2. In the **Custom deployment** window, select **Build your own template in the editor**.
+3. Copy the following template into the editor:
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "virtualNetworkName": {
+ "defaultValue": "virtual-network",
+ "type": "String",
+ "metadata": {
+ "description": "VNet name"
+ }
+ },
+ "location": {
+ "defaultValue": "[resourceGroup().location]",
+ "type": "String",
+ "metadata": {
+ "description": "Location for all resources."
+ }
+ },
+ "poolResourceID": {
+ "defaultValue": "/subscriptions/<subscriptionId>/resourceGroups/resourceGroupName/providers/Microsoft.Network/networkManagers/<networkManagerName>/ipamPools/<ipAddressPoolName>",
+ "type": "String",
+ "metadata": {
+ "description": "Enter the Resource ID for your IP Address Pool. You can find this in the JSON View in the resource's overview window."
+ }
+ },
+ "numberOfIPAddresses": {
+ "defaultValue": "256",
+ "type": "String",
+ "metadata": {
+ "description": "Enter the number of IP addresses for the virtual network."
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Network/virtualNetworks",
+ "apiVersion": "2024-01-01",
+ "name": "[parameters('virtualNetworkName')]",
+ "location": "[parameters('location')]",
+ "properties": {
+ "addressSpace": {
+ "ipamPoolPrefixAllocations": [
+ {
+ "pool": {
+ "id": "[parameters('poolResourceID')]"
+ },
+ "numberOfIpAddresses": "[parameters('numberOfIPAddresses')]"
+ }
+ ]
+ }
+ }
+ }
+ ]
+ }
+
+ ```
+
+4. In the **Custom deployment** windows, enter or select the following information:
+
+ | **Field** | **Description** |
+ | | |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select the resource group for the virtual network. In this case, the example uses **resource-group**. |
+ | **Instance details** | |
+ | Region | Select the region for the virtual network. IP address pools must be in the same region as your virtual network in order to be associated. |
+ | Virtual network name | Enter a name for the virtual network. The template defaults to **virtual-network**. |
+ | Location | Select the location for the virtual network. The location is the same as the region, except in lowercase and without spaces.</br>For example, if the region is **(US) West US 2**, the location is **westus2**. |
+
+ :::image type="content" source="media/how-to-manage-ip-addresses/custom-deployment-template.png" alt-text="Screenshot of custom deployment page with values.":::
+
+ > [!NOTE]
+ > The **poolResourceID** parameter is the Resource ID for your IP Address Pool. You can find this in the JSON View in the resource's overview window.
+
+5. Select **Review + create** and then **Create** to create the virtual network.
+ ## Next steps > [!div class="nextstepaction"]
virtual-wan Virtual Wan Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md
No. Virtual WAN doesn't support ASN changes for VPN gateways.
Local circuits can only be connected to ExpressRoute gateways in their corresponding Azure region. However, there is no limitation to route traffic to spoke virtual networks in other regions. -
-### <a name="update-router"></a>Why am I seeing a message and button called "Update router to latest software version" in portal?
-
-> [!NOTE]
-> As of July 1, 2024, hubs on the old version will be retired in phases and stop functioning as expected.
->
-
-Azure-wide Cloud Services-based infrastructure is being deprecated. As a result, the Virtual WAN team has been working on upgrading virtual routers from their current Cloud Services infrastructure to Virtual Machine Scale Sets based deployments. **All newly created Virtual Hubs will automatically be deployed on the latest Virtual Machine Scale Sets based infrastructure.** If you navigate to your Virtual WAN hub resource and see this message and button, you can upgrade your router to the latest version by selecting the button. If you want to take advantage of new Virtual WAN features, such as [BGP peering with the hub](create-bgp-peering-hub-portal.md), you must update your virtual hub router via the Azure portal. If the button isn't visible, open a support case.
-
-You'll only be able to update your virtual hub router if all the resources (gateways/route tables/VNet connections) in your hub are in a succeeded state. Make sure all your spoke virtual networks are in active/enabled subscriptions and that your spoke virtual networks aren't deleted. Additionally, because this operation requires deployment of new Virtual Machine Scale Sets based virtual hub routers, you'll face an expected downtime of 1-2 minutes for VNet-to-VNet traffic through the same hub and 5-7 minutes for all other traffic flows through the hub. Plan a maintenance window of at least 30 minutes, as downtime can last up to 30 minutes in the worst-case scenario. Within a single Virtual WAN resource, hubs should be updated one at a time instead of updating multiple at the same time. When the Router Version says "Latest", the hub is done updating. There will be no routing behavior changes after this update.
-
-There are several things to note with the virtual hub router upgrade:
-
-* If you already configured BGP peering between your Virtual WAN hub and an NVA in a spoke VNet, you'll have to [delete and then recreate the BGP peer](create-bgp-peering-hub-portal.md). Since the virtual hub router's IP addresses change after the upgrade, you'll also have to reconfigure your NVA to peer with the virtual hub router's new IP addresses. These IP addresses are represented as the "virtualRouterIps" field in the Virtual Hub's Resource JSON (see the lookup sketch after this list).
-
-* If you have a network virtual appliance (NVA) in the virtual hub, you'll have to work with your NVA partner to obtain instructions on how to upgrade your Virtual WAN hub.
-
-* If your virtual hub is configured with more than 15 routing infrastructure units, please scale in your virtual hub to 2 routing infrastructure units before attempting to upgrade. You can scale back out your hub to more than 15 routing infrastructure units after upgrading your hub.
-
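
As a rough illustration of the first point above, the router's current IP addresses can be read from the Virtual Hub's Resource JSON, for example with the Azure CLI. This is a sketch only; it assumes the `virtual-wan` CLI extension is installed, and the hub and resource group names are placeholders, not values from this article.

```azurecli
# Sketch: read the virtual hub router's IP addresses (the "virtualRouterIps"
# field) so an NVA BGP peer can be reconfigured against the new addresses.
# Requires the virtual-wan extension: az extension add --name virtual-wan
az network vhub show \
  --name <virtualHubName> \
  --resource-group <resourceGroupName> \
  --query virtualRouterIps \
  --output json
```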
-If the update fails for any reason, your hub is automatically recovered to the old version to ensure there's still a working setup.
-
-Additional things to note:
-* The user will need to have an **owner** or **contributor** role to see an accurate status of the hub router version. If a user is assigned a **reader** role to the Virtual WAN resource and subscription, the Azure portal displays to that user that the hub router needs to be upgraded to the latest version, even if the hub is already on the latest version.
-
-* If you change your spoke virtual network's subscription status from disabled to enabled and then upgrade the virtual hub, you'll need to update your virtual network connection after the virtual hub upgrade (for example, you can configure the virtual network connection to propagate to a dummy label).
-
-* If your hub is connected to a large number of spoke virtual networks (60 or more), you might notice that one or more spoke VNet peerings enter a failed state after the upgrade. To restore these VNet peerings to a successful state after the upgrade, you can configure the virtual network connections to propagate to a dummy label, or you can delete and recreate these respective VNet connections.
-
### Why does the virtual hub router require a public IP address with opened ports?

These public endpoints are required for Azure's underlying SDN and management platform to communicate with the virtual hub router. Because the virtual hub router is considered part of the customer's private network, Azure's underlying platform is unable to directly access and manage the hub router via its private endpoints due to compliance requirements. Connectivity to the hub router's public endpoints is authenticated via certificates, and Azure conducts routine security audits of these public endpoints. As a result, they don't constitute a security exposure of your virtual hub.