Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-b2c | Whats New Docs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/whats-new-docs.md | Title: "What's new in Azure Active Directory business-to-customer (B2C)" description: "New and updated documentation for the Azure Active Directory business-to-customer (B2C)." Previously updated : 09/11/2024 Last updated : 10/01/2024 +## September 2024 ++### Updated articles ++- [Developer notes for Azure Active Directory B2C](custom-policy-developer-notes.md) - Updated feature availability for China cloud + ## August 2024 This month, we changed Twitter to X in numerous articles and code samples. - [Developer notes for Azure Active Directory B2C](custom-policy-developer-notes.md) - Updated Twitter to X - [Custom email verification with SendGrid](custom-email-sendgrid.md) - Updated the localization script--## June 2024 --### Updated articles --- [Define an OAuth2 custom error technical profile in an Azure Active Directory B2C custom policy](oauth2-error-technical-profile.md) - Error code updates-- [Configure authentication in a sample Python web app by using Azure AD B2C](configure-authentication-sample-python-web-app.md) - Python version updates |
api-management | Genai Gateway Capabilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/genai-gateway-capabilities.md | In API Management, enable semantic caching by using Azure Redis Enterprise or an * [Labs for the GenAI gateway capabilities of Azure API Management](https://github.com/Azure-Samples/AI-Gateway) * [Azure API Management (APIM) - Azure OpenAI Sample (Node.js)](https://github.com/Azure-Samples/genai-gateway-apim) * [Python sample code for using Azure OpenAI with API Management](https://github.com/Azure-Samples/openai-apim-lb/blob/main/docs/sample-code.md)-* [AI hub gateway landing zone accelerator](https://github.com/Azure-Samples/ai-hub-gateway-solution-accelerator) ## Architecture and design considerations * [GenAI gateway reference architecture using API Management](/ai/playbook/technology-guidance/generative-ai/dev-starters/genai-gateway/reference-architectures/apim-based)+* [AI hub gateway landing zone accelerator](https://github.com/Azure-Samples/ai-hub-gateway-solution-accelerator) * [Designing and implementing a gateway solution with Azure OpenAI resources](/ai/playbook/technology-guidance/generative-ai/dev-starters/genai-gateway/) * [Use a gateway in front of multiple Azure OpenAI deployments or instances](/azure/architecture/ai-ml/guide/azure-openai-gateway-multi-backend) In API Management, enable semantic caching by using Azure Redis Enterprise or an * [Blog: Introducing GenAI capabilities in Azure API Management](https://techcommunity.microsoft.com/t5/azure-integration-services-blog/introducing-genai-gateway-capabilities-in-azure-api-management/ba-p/4146525) * [Blog: Integrating Azure Content Safety with API Management for Azure OpenAI Endpoints](https://techcommunity.microsoft.com/t5/fasttrack-for-azure/integrating-azure-content-safety-with-api-management-for-azure/ba-p/4202505)+* [Training: Manage your generative AI APIs with Azure API Management](/training/modules/api-management) * [Smart load balancing for OpenAI endpoints and Azure API Management](https://techcommunity.microsoft.com/t5/fasttrack-for-azure/smart-load-balancing-for-openai-endpoints-and-azure-api/ba-p/3991616) * [Authenticate and authorize access to Azure OpenAI APIs using Azure API Management](api-management-authenticate-authorize-azure-openai.md) |
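For the GenAI gateway entry above: semantic caching is wired up through API Management policies. The fragment below is a hedged sketch only — the backend ID `embeddings-backend`, the similarity threshold, and the cache duration are placeholder assumptions; see the linked article for the authoritative syntax.

```xml
<policies>
  <inbound>
    <!-- Check the semantic cache before forwarding the prompt to Azure OpenAI.
         score-threshold sets how similar a prompt must be to reuse a cached completion. -->
    <azure-openai-semantic-cache-lookup
        score-threshold="0.05"
        embeddings-backend-id="embeddings-backend"
        embeddings-backend-auth="system-assigned" />
  </inbound>
  <outbound>
    <!-- Cache the returned completion for 120 seconds. -->
    <azure-openai-semantic-cache-store duration="120" />
  </outbound>
</policies>
```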
api-management | Validate Azure Ad Token Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-azure-ad-token-policy.md | The `validate-azure-ad-token` policy enforces the existence and validity of a JS <!-- if there are multiple possible allowed values, then add additional value elements --> </required-claims> <decryption-keys>- <key>Base64 encoded signing key | certificate-id="mycertificate"</key> + <key certificate-id="mycertificate"/> <!-- if there are multiple keys, then add additional key elements --> </decryption-keys> </validate-azure-ad-token> The `validate-azure-ad-token` policy enforces the existence and validity of a JS | backend-application-ids | Contains a list of acceptable backend application IDs. This is only required in advanced cases for the configuration of options and can generally be removed. Policy expressions aren't allowed. | No | | client-application-ids | Contains a list of acceptable client application IDs. If multiple `application-id` elements are present, then each value is tried until either all are exhausted (in which case validation fails) or until one succeeds. If a client application ID isn't provided, one or more `audience` claims should be specified. Policy expressions aren't allowed. | No | | required-claims | Contains a list of `claim` elements for claim values expected to be present on the token for it to be considered valid. When the `match` attribute is set to `all`, every claim value in the policy must be present in the token for validation to succeed. When the `match` attribute is set to `any`, at least one claim must be present in the token for validation to succeed. Policy expressions are allowed. | No |-| decryption-keys | A list of Base64-encoded keys, in [`key`](#key-attributes) subelements, used to decrypt the tokens. If multiple security keys are present, then each key is tried until either all keys are exhausted (in which case validation fails) or a key succeeds.<br/><br/>To decrypt a token encrypted with an asymmetric key, optionally specify the public key using a `certificate-id` attribute with value set to the identifier of a certificate uploaded to API Management. | No | +| decryption-keys | A list of [`key`](#key-attributes) subelements, used to decrypt a token signed with an asymmetric key. If multiple keys are present, then each key is tried until either all keys are exhausted (in which case validation fails) or a key succeeds.<br/><br/>Specify the public key using a `certificate-id` attribute with value set to the identifier of a certificate uploaded to API Management. | No | ### claim attributes The `validate-azure-ad-token` policy enforces the existence and validity of a JS ### key attributes | Attribute | Description | Required | Default | | - | | -- | |-| certificate-id | Identifier of a certificate entity [uploaded](/rest/api/apimanagement/apimanagementrest/azure-api-management-rest-api-certificate-entity#Add) to API Management, used to specify the public key to verify a token signed with an asymmetric key. | No | N/A | +| certificate-id | Identifier of a certificate entity [uploaded](/rest/api/apimanagement/apimanagementrest/azure-api-management-rest-api-certificate-entity#Add) to API Management, used to specify the public key to verify a token signed with an asymmetric key. | Yes | N/A | ## Usage |
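To make the corrected `decryption-keys` syntax in the `validate-azure-ad-token` entry above concrete, here is a minimal, hedged sketch; the tenant ID, audience, and certificate name `mycertificate` are placeholders, and the certificate must already be uploaded to API Management.

```xml
<validate-azure-ad-token tenant-id="00000000-0000-0000-0000-000000000000">
    <audiences>
        <audience>api://my-backend-api</audience>
    </audiences>
    <!-- Decrypt tokens encrypted with an asymmetric key. Only certificate-id is
         supported here; it references a certificate uploaded to API Management. -->
    <decryption-keys>
        <key certificate-id="mycertificate" />
    </decryption-keys>
</validate-azure-ad-token>
```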
api-management | Validate Jwt Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-jwt-policy.md | The `validate-jwt` policy enforces existence and validity of a supported JSON we output-token-variable-name="name of a variable to receive a JWT object representing successfully validated token"> <openid-config url="full URL of the configuration endpoint, for example, https://login.constoso.com/openid-configuration" /> <issuer-signing-keys>- <key>Base64 encoded signing key | certificate-id="mycertificate" | n="modulus" e="exponent"</key> + <key id="kid-claim" certificate-id="mycertificate" n="modulus" e="exponent">Base64 encoded signing key</key> <!-- if there are multiple keys, then add additional key elements --> </issuer-signing-keys> <decryption-keys>- <key>Base64 encoded signing key | certificate-id="mycertificate" | n="modulus" e="exponent" </key> + <key certificate-id="mycertificate">Base64 encoded signing key</key> <!-- if there are multiple keys, then add additional key elements --> </decryption-keys> <audiences> The `validate-jwt` policy enforces existence and validity of a supported JSON we | - | -- | -- | | openid-config |Add one or more of these elements to specify a compliant OpenID configuration endpoint URL from which signing keys and issuer can be obtained.<br/><br/>Configuration including the JSON Web Key Set (JWKS) is pulled from the endpoint every 1 hour and cached. If the token being validated references a validation key (using `kid` claim) that is missing in cached configuration, or if retrieval fails, API Management pulls from the endpoint at most once per 5 min. These intervals are subject to change without notice. <br/><br/>The response should be according to specs as defined at URL: `https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderMetadata`. <br/><br/>For Microsoft Entra ID use the OpenID Connect [metadata endpoint](../active-directory/develop/v2-protocols-oidc.md#find-your-apps-openid-configuration-document-uri) configured in your app registration such as:<br/>- v2 `https://login.microsoftonline.com/{tenant-name}/v2.0/.well-known/openid-configuration`<br/>- v2 Multi-Tenant ` https://login.microsoftonline.com/organizations/v2.0/.well-known/openid-configuration`<br/>- v1 `https://login.microsoftonline.com/{tenant-name}/.well-known/openid-configuration` <br/>- Customer tenant (preview) `https://{tenant-name}.ciamlogin.com/{tenant-id}/v2.0/.well-known/openid-configuration` <br/><br/> Substituting your directory tenant name or ID, for example `contoso.onmicrosoft.com`, for `{tenant-name}`. | No | | issuer-signing-keys | A list of Base64-encoded security keys, in [`key`](#key-attributes) subelements, used to validate signed tokens. If multiple security keys are present, then each key is tried until either all are exhausted (in which case validation fails) or one succeeds (useful for token rollover). <br/><br/>Optionally specify a key by using the `id` attribute to match a `kid` claim. To validate a token signed with an asymmetric key, optionally specify the public key using a `certificate-id` attribute with value set to the identifier of a certificate uploaded to API Management, or the RSA modulus `n` and exponent `e` pair of the signing key in Base64url-encoded format. | No |-| decryption-keys | A list of Base64-encoded keys, in [`key`](#key-attributes) subelements, used to decrypt the tokens. 
If multiple security keys are present, then each key is tried until either all keys are exhausted (in which case validation fails) or a key succeeds.<br/><br/>Optionally specify a key by using the `id` attribute to match a `kid` claim. To decrypt a token encrypted with an asymmetric key, optionally specify the public key using a `certificate-id` attribute with value set to the identifier of a certificate uploaded to API Management, or the RSA modulus `n` and exponent `e` pair of the key in Base64url-encoded format. | No | +| decryption-keys | A list of Base64-encoded keys, in [`key`](#key-attributes) subelements, used to decrypt the tokens. If multiple security keys are present, then each key is tried until either all keys are exhausted (in which case validation fails) or a key succeeds.<br/><br/> To decrypt a token encrypted with an asymmetric key, optionally specify the public key using a `certificate-id` attribute with value set to the identifier of a certificate uploaded to API Management. | No | | audiences | A list of acceptable audience claims, in `audience` subelements, that can be present on the token. If multiple audience values are present, then each value is tried until either all are exhausted (in which case validation fails) or until one succeeds. At least one audience must be specified. | No | | issuers | A list of acceptable principals, in `issuer` subelements, that issued the token. If multiple issuer values are present, then each value is tried until either all are exhausted (in which case validation fails) or until one succeeds. | No | | required-claims | A list of claims, in [`claim`](#claim-attributes) subelements, expected to be present on the token for it to be considered valid. When multiple claims are present, the token must match claim values according to the value of the `match` attribute. | No | The `validate-jwt` policy enforces existence and validity of a supported JSON we ### key attributes | Attribute | Description | Required | Default | | - | | -- | |-| id | String. Identifier used to match `kid` claim presented in JWT. | No | N/A | +| id | (Issuer signing key only) String. Identifier used to match `kid` claim presented in JWT. | No | N/A | | certificate-id | Identifier of a certificate entity [uploaded](/rest/api/apimanagement/apimanagementrest/azure-api-management-rest-api-certificate-entity#Add) to API Management, used to specify the public key to verify a token signed with an asymmetric key. | No | N/A |-| n | Modulus of the public key used to verify the issuer of a token signed with an asymmetric key. Must be specified with the value of the exponent `e`. Policy expressions aren't allowed. | No | N/A| -| e | Exponent of the public key used to verify the issuer of a token signed with an asymmetric key. Must be specified with the value of the modulus `n`. Policy expressions aren't allowed. | No | N/A| +| n | (Issuer signing key only) Modulus of the public key used to verify the issuer of a token signed with an asymmetric key. Must be specified with the value of the exponent `e`. Policy expressions aren't allowed. | No | N/A| +| e | (Issuer signing key only) Exponent of the public key used to verify the issuer of a token signed with an asymmetric key. Must be specified with the value of the modulus `n`. Policy expressions aren't allowed. | No | N/A| |
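As a hedged illustration of the distinction the `validate-jwt` tables above draw (`id`, `n`, and `e` apply to issuer signing keys, while decryption keys take only `certificate-id`), a minimal policy fragment might look like the following; the OpenID endpoint, key values, audience, and certificate name are placeholders.

```xml
<validate-jwt header-name="Authorization" failed-validation-httpcode="401" require-scheme="Bearer">
    <openid-config url="https://login.microsoftonline.com/contoso.onmicrosoft.com/v2.0/.well-known/openid-configuration" />
    <issuer-signing-keys>
        <!-- Issuer signing key: can carry an id to match the token's kid claim,
             or the RSA modulus/exponent pair in Base64url form. -->
        <key id="my-kid" n="base64url-modulus" e="AQAB" />
    </issuer-signing-keys>
    <decryption-keys>
        <!-- Decryption key for an encrypted token: only certificate-id applies here. -->
        <key certificate-id="mycertificate" />
    </decryption-keys>
    <audiences>
        <audience>api://my-backend-api</audience>
    </audiences>
</validate-jwt>
```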
app-service | Configure Authentication Oauth Tokens | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-oauth-tokens.md | If a user revokes the permissions granted to your app, your call to `/.auth/me` ## Extend session token expiration grace period -The authenticated session expires after 8 hours. After an authenticated session expires, there is a 72-hour grace period by default. Within this grace period, you're allowed to refresh the session token with App Service without reauthenticating the user. You can just call `/.auth/refresh` when your session token becomes invalid, and you don't need to track token expiration yourself. Once the 72-hour grace period is lapses, the user must sign in again to get a valid session token. +The authenticated session expires after 8 hours. After an authenticated session expires, there is a 72-hour grace period by default. Within this grace period, you're allowed to refresh the session token with App Service without reauthenticating the user. You can just call `/.auth/refresh` when your session token becomes invalid, and you don't need to track token expiration yourself. Once the 72-hour grace period lapses, the user must sign in again to get a valid session token. If 72 hours isn't enough time for you, you can extend this expiration window. Extending the expiration over a long period could have significant security implications (such as when an authentication token is leaked or stolen). So you should leave it at the default 72 hours or set the extension period to the smallest value. |
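For the App Service token entry above, the refresh call itself is a single request. A hedged sketch follows: browser clients send their auth cookies automatically, while native clients pass the session token in the `X-ZUMO-AUTH` header; the app name and token are placeholders.

```bash
# Refresh the current session token (and any stored provider tokens).
curl -X GET "https://<app-name>.azurewebsites.net/.auth/refresh" \
     -H "X-ZUMO-AUTH: <session-token>"
# HTTP 200 means the session was refreshed; once the grace period lapses, this
# call fails and the user must sign in again.
```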
app-service | Configure Language Java Apm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-java-apm.md | To enable via the Azure CLI, you need to create an Application Insights resource 3. Set the instrumentation key, connection string, and monitoring agent version as app settings on the web app. Replace `<instrumentationKey>` and `<connectionString>` with the values from the previous step. - # [Windows](#tab/windows) -- ```azurecli - az webapp config appsettings set -n <webapp-name> -g <resource-group> --settings "APPINSIGHTS_INSTRUMENTATIONKEY=<instrumentationKey>" "APPLICATIONINSIGHTS_CONNECTION_STRING=<connectionString>" "ApplicationInsightsAgent_EXTENSION_VERSION=~3" "XDT_MicrosoftApplicationInsights_Mode=default" "XDT_MicrosoftApplicationInsights_Java=1" - ``` - # [Linux](#tab/linux) ```azurecli az webapp config appsettings set -n <webapp-name> -g <resource-group> --settings "APPINSIGHTS_INSTRUMENTATIONKEY=<instrumentationKey>" "APPLICATIONINSIGHTS_CONNECTION_STRING=<connectionString>" "ApplicationInsightsAgent_EXTENSION_VERSION=~3" "XDT_MicrosoftApplicationInsights_Mode=default" ``` + # [Windows](#tab/windows) ++ ```azurecli + az webapp config appsettings set -n <webapp-name> -g <resource-group> --settings "APPINSIGHTS_INSTRUMENTATIONKEY=<instrumentationKey>" "APPLICATIONINSIGHTS_CONNECTION_STRING=<connectionString>" "ApplicationInsightsAgent_EXTENSION_VERSION=~3" "XDT_MicrosoftApplicationInsights_Mode=default" "XDT_MicrosoftApplicationInsights_Java=1" + ``` + ## Configure New Relic -# [Windows](#tab/windows) +# [Linux](#tab/linux) 1. Create a NewRelic account at [NewRelic.com](https://newrelic.com/signup) 2. Download the Java agent from NewRelic. It has a file name similar to *newrelic-java-x.x.x.zip*. To enable via the Azure CLI, you need to create an Application Insights resource ::: zone-end -# [Linux](#tab/linux) +# [Windows](#tab/windows) 1. Create a NewRelic account at [NewRelic.com](https://newrelic.com/signup) 2. Download the Java agent from NewRelic. It has a file name similar to *newrelic-java-x.x.x.zip*. To enable via the Azure CLI, you need to create an Application Insights resource ## Configure AppDynamics -# [Windows](#tab/windows) +# [Linux](#tab/linux) 1. Create an AppDynamics account at [AppDynamics.com](https://www.appdynamics.com/community/register/) 2. Download the Java agent from the AppDynamics website. The file name is similar to *AppServerAgent-x.x.x.xxxxx.zip*-3. Use the [Kudu console](https://github.com/projectkudu/kudu/wiki/Kudu-console) to create a new directory */home/site/wwwroot/apm*. +3. [SSH into your App Service instance](configure-linux-open-ssh-session.md) and create a new directory */home/site/wwwroot/apm*. 4. Upload the Java agent files into a directory under */home/site/wwwroot/apm*. The files for your agent should be in */home/site/wwwroot/apm/appdynamics*. 5. In the Azure portal, browse to your application in App Service and create a new Application Setting. ::: zone pivot="java-javase"- + Create an environment variable named `JAVA_OPTS` with the value `-javaagent:/home/site/wwwroot/apm/appdynamics/javaagent.jar -Dappdynamics.agent.applicationName=<app-name>` where `<app-name>` is your App Service name. If you already have an environment variable for `JAVA_OPTS`, append the `-javaagent:/...` option to the end of the current value. ::: zone-end To enable via the Azure CLI, you need to create an Application Insights resource ::: zone-end -# [Linux](#tab/linux) +# [Windows](#tab/windows) 1. 
Create an AppDynamics account at [AppDynamics.com](https://www.appdynamics.com/community/register/) 2. Download the Java agent from the AppDynamics website. The file name is similar to *AppServerAgent-x.x.x.xxxxx.zip*-3. [SSH into your App Service instance](configure-linux-open-ssh-session.md) and create a new directory */home/site/wwwroot/apm*. +3. Use the [Kudu console](https://github.com/projectkudu/kudu/wiki/Kudu-console) to create a new directory */home/site/wwwroot/apm*. 4. Upload the Java agent files into a directory under */home/site/wwwroot/apm*. The files for your agent should be in */home/site/wwwroot/apm/appdynamics*. 5. In the Azure portal, browse to your application in App Service and create a new Application Setting. ::: zone pivot="java-javase"-+ Create an environment variable named `JAVA_OPTS` with the value `-javaagent:/home/site/wwwroot/apm/appdynamics/javaagent.jar -Dappdynamics.agent.applicationName=<app-name>` where `<app-name>` is your App Service name. If you already have an environment variable for `JAVA_OPTS`, append the `-javaagent:/...` option to the end of the current value. ::: zone-end To enable via the Azure CLI, you need to create an Application Insights resource ## Configure Datadog -# [Windows](#tab/windows) +# [Linux](#tab/linux) * The configuration options are different depending on which Datadog site your organization is using. See the official [Datadog Integration for Azure Documentation](https://docs.datadoghq.com/integrations/azure/) -# [Linux](#tab/linux) +# [Windows](#tab/windows) * The configuration options are different depending on which Datadog site your organization is using. See the official [Datadog Integration for Azure Documentation](https://docs.datadoghq.com/integrations/azure/) ## Configure Dynatrace -# [Windows](#tab/windows) +# [Linux](#tab/linux) * Dynatrace provides an [Azure Native Dynatrace Service](https://www.dynatrace.com/monitoring/technologies/azure-monitoring/). To monitor Azure App Services using Dynatrace, see the official [Dynatrace for Azure documentation](https://docs.datadoghq.com/integrations/azure/) -# [Linux](#tab/linux) +# [Windows](#tab/windows) * Dynatrace provides an [Azure Native Dynatrace Service](https://www.dynatrace.com/monitoring/technologies/azure-monitoring/). To monitor Azure App Services using Dynatrace, see the official [Dynatrace for Azure documentation](https://docs.datadoghq.com/integrations/azure/) |
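Where the APM steps above say to create an Application Setting such as `JAVA_OPTS`, that can be scripted with the same CLI command used for Application Insights. A hedged sketch for the Linux AppDynamics case (the names and agent path mirror the steps above; adjust for Windows or New Relic):

```azurecli
az webapp config appsettings set --name <webapp-name> --resource-group <resource-group> \
    --settings "JAVA_OPTS=-javaagent:/home/site/wwwroot/apm/appdynamics/javaagent.jar -Dappdynamics.agent.applicationName=<app-name>"
```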
app-service | Configure Language Java Data Sources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-java-data-sources.md | Next, determine if the data source should be available to one application or to ### Shared server-level resources +# [Linux](#tab/linux) ++Adding a shared, server-level data source requires you to edit Tomcat's server.xml. The most reliable way to do this is as follows: ++1. Upload a [startup script](./faq-app-service-linux.yml) and set the path to the script in **Configuration** > **Startup Command**. You can upload the startup script using [FTP](deploy-ftp.md). ++Your startup script makes an [xsl transform](https://www.w3schools.com/xml/xsl_intro.asp) to the server.xml file and output the resulting xml file to `/usr/local/tomcat/conf/server.xml`. The startup script should install libxslt via apk. Your xsl file and startup script can be uploaded via FTP. Below is an example startup script. ++```sh +# Install libxslt. Also copy the transform file to /home/tomcat/conf/ +apk add --update libxslt ++# Usage: xsltproc --output output.xml style.xsl input.xml +xsltproc --output /home/tomcat/conf/server.xml /home/tomcat/conf/transform.xsl /usr/local/tomcat/conf/server.xml +``` ++The following example XSL file adds a new connector node to the Tomcat server.xml. ++```xml +<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> + <xsl:output method="xml" indent="yes"/> ++ <xsl:template match="@* | node()" name="Copy"> + <xsl:copy> + <xsl:apply-templates select="@* | node()"/> + </xsl:copy> + </xsl:template> ++ <xsl:template match="@* | node()" mode="insertConnector"> + <xsl:call-template name="Copy" /> + </xsl:template> ++ <xsl:template match="comment()[not(../Connector[@scheme = 'https']) and + contains(., '<Connector') and + (contains(., 'scheme="https"') or + contains(., "scheme='https'"))]"> + <xsl:value-of select="." disable-output-escaping="yes" /> + </xsl:template> ++ <xsl:template match="Service[not(Connector[@scheme = 'https'] or + comment()[contains(., '<Connector') and + (contains(., 'scheme="https"') or + contains(., "scheme='https'"))] + )] + "> + <xsl:copy> + <xsl:apply-templates select="@* | node()" mode="insertConnector" /> + </xsl:copy> + </xsl:template> ++ <!-- Add the new connector after the last existing Connnector if there's one --> + <xsl:template match="Connector[last()]" mode="insertConnector"> + <xsl:call-template name="Copy" /> ++ <xsl:call-template name="AddConnector" /> + </xsl:template> ++ <!-- ... or before the first Engine if there's no existing Connector --> + <xsl:template match="Engine[1][not(preceding-sibling::Connector)]" + mode="insertConnector"> + <xsl:call-template name="AddConnector" /> ++ <xsl:call-template name="Copy" /> + </xsl:template> ++ <xsl:template name="AddConnector"> + <!-- Add new line --> + <xsl:text>
</xsl:text> + <!-- This is the new connector --> + <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" + maxThreads="150" scheme="https" secure="true" + keystoreFile="${{user.home}}/.keystore" keystorePass="changeit" + clientAuth="false" sslProtocol="TLS" /> + </xsl:template> + +</xsl:stylesheet> +``` ++#### Finalize configuration ++Finally, place the driver JARs in the Tomcat classpath and restart your App Service. ++1. Ensure that the JDBC driver files are available to the Tomcat classloader by placing them in the */home/site/lib* directory. In the [Cloud Shell](https://shell.azure.com), run `az webapp deploy --type=lib` for each driver JAR: ++```azurecli-interactive +az webapp deploy --resource-group <group-name> --name <app-name> --src-path <jar-name>.jar --type=lib --path <jar-name>.jar +``` ++If you created a server-level data source, restart the App Service Linux application. Tomcat resets `CATALINA_BASE` to `/home/tomcat` and uses the updated configuration. + # [Windows](#tab/windows) You can't directly modify a Tomcat installation for server-wide configuration because the installation location is read-only. To make server-level configuration changes to your Windows Tomcat installation, the simplest way is to do the following on app start: Finally, you place the driver JARs in the Tomcat classpath and restart your App az webapp deploy --resource-group <group-name> --name <app-name> --src-path <jar-name>.jar --type=lib --target-path <jar-name>.jar ``` -# [Linux](#tab/linux) --Adding a shared, server-level data source requires you to edit Tomcat's server.xml. The most reliable way to do this is as follows: --1. Upload a [startup script](./faq-app-service-linux.yml) and set the path to the script in **Configuration** > **Startup Command**. You can upload the startup script using [FTP](deploy-ftp.md). --Your startup script makes an [xsl transform](https://www.w3schools.com/xml/xsl_intro.asp) to the server.xml file and output the resulting xml file to `/usr/local/tomcat/conf/server.xml`. The startup script should install libxslt via apk. Your xsl file and startup script can be uploaded via FTP. Below is an example startup script. --```sh -# Install libxslt. Also copy the transform file to /home/tomcat/conf/ -apk add --update libxslt --# Usage: xsltproc --output output.xml style.xsl input.xml -xsltproc --output /home/tomcat/conf/server.xml /home/tomcat/conf/transform.xsl /usr/local/tomcat/conf/server.xml -``` --The following example XSL file adds a new connector node to the Tomcat server.xml. --```xml -<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> - <xsl:output method="xml" indent="yes"/> -- <xsl:template match="@* | node()" name="Copy"> - <xsl:copy> - <xsl:apply-templates select="@* | node()"/> - </xsl:copy> - </xsl:template> -- <xsl:template match="@* | node()" mode="insertConnector"> - <xsl:call-template name="Copy" /> - </xsl:template> -- <xsl:template match="comment()[not(../Connector[@scheme = 'https']) and - contains(., '<Connector') and - (contains(., 'scheme="https"') or - contains(., "scheme='https'"))]"> - <xsl:value-of select="." 
disable-output-escaping="yes" /> - </xsl:template> -- <xsl:template match="Service[not(Connector[@scheme = 'https'] or - comment()[contains(., '<Connector') and - (contains(., 'scheme="https"') or - contains(., "scheme='https'"))] - )] - "> - <xsl:copy> - <xsl:apply-templates select="@* | node()" mode="insertConnector" /> - </xsl:copy> - </xsl:template> -- <!-- Add the new connector after the last existing Connnector if there's one --> - <xsl:template match="Connector[last()]" mode="insertConnector"> - <xsl:call-template name="Copy" /> -- <xsl:call-template name="AddConnector" /> - </xsl:template> -- <!-- ... or before the first Engine if there's no existing Connector --> - <xsl:template match="Engine[1][not(preceding-sibling::Connector)]" - mode="insertConnector"> - <xsl:call-template name="AddConnector" /> -- <xsl:call-template name="Copy" /> - </xsl:template> -- <xsl:template name="AddConnector"> - <!-- Add new line --> - <xsl:text>
</xsl:text> - <!-- This is the new connector --> - <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" - maxThreads="150" scheme="https" secure="true" - keystoreFile="${{user.home}}/.keystore" keystorePass="changeit" - clientAuth="false" sslProtocol="TLS" /> - </xsl:template> - -</xsl:stylesheet> -``` --#### Finalize configuration --Finally, place the driver JARs in the Tomcat classpath and restart your App Service. --1. Ensure that the JDBC driver files are available to the Tomcat classloader by placing them in the */home/site/lib* directory. In the [Cloud Shell](https://shell.azure.com), run `az webapp deploy --type=lib` for each driver JAR: --```azurecli-interactive -az webapp deploy --resource-group <group-name> --name <app-name> --src-path <jar-name>.jar --type=lib --path <jar-name>.jar -``` --If you created a server-level data source, restart the App Service Linux application. Tomcat resets `CATALINA_BASE` to `/home/tomcat` and uses the updated configuration. - ::: zone-end |
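For the data-sources entry above, pointing the app at the uploaded startup script can also be done from the CLI instead of the portal. A hedged sketch — the script path is an assumption, and the script and XSL transform are uploaded over FTP first, as described:

```azurecli
az webapp config set --resource-group <group-name> --name <app-name> \
    --startup-file "/home/site/startup.sh"
```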
app-service | Configure Language Java Deploy Run | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-java-deploy-run.md | This article shows you the most common deployment and runtime configuration for ## Show Java version -# [Windows](#tab/windows) +# [Linux](#tab/linux) To show the current Java version, run the following command in the [Cloud Shell](https://shell.azure.com): ```azurecli-interactive-az webapp config show --name <app-name> --resource-group <resource-group-name> --query "[javaVersion, javaContainer, javaContainerVersion]" +az webapp config show --resource-group <resource-group-name> --name <app-name> --query linuxFxVersion ``` To show all supported Java versions, run the following command in the [Cloud Shell](https://shell.azure.com): ```azurecli-interactive-az webapp list-runtimes --os windows | grep java +az webapp list-runtimes --os linux | grep "JAVA\|TOMCAT\|JBOSSEAP" ``` -# [Linux](#tab/linux) +# [Windows](#tab/windows) To show the current Java version, run the following command in the [Cloud Shell](https://shell.azure.com): ```azurecli-interactive-az webapp config show --resource-group <resource-group-name> --name <app-name> --query linuxFxVersion +az webapp config show --name <app-name> --resource-group <resource-group-name> --query "[javaVersion, javaContainer, javaContainerVersion]" ``` To show all supported Java versions, run the following command in the [Cloud Shell](https://shell.azure.com): ```azurecli-interactive-az webapp list-runtimes --os linux | grep "JAVA\|TOMCAT\|JBOSSEAP" +az webapp list-runtimes --os windows | grep java ``` Performance reports, traffic visualizations, and health checkups are available f ### Stream diagnostic logs -# [Windows](#tab/windows) -- # [Linux](#tab/linux) [!INCLUDE [Access diagnostic logs](../../includes/app-service-web-logs-access-linux-no-h.md)] +# [Windows](#tab/windows) ++ For more information, see [Stream logs in Cloud Shell](troubleshoot-diagnostic-logs.md#in-cloud-shell). To learn more about the Java Profiler, visit the [Azure Application Insights doc All Java runtimes on App Service come with the Java Flight Recorder. You can use it to record JVM, system, and application events and troubleshoot problems in your Java applications. -# [Windows](#tab/windows) --#### Timed Recording --To take a timed recording, you need the PID (Process ID) of the Java application. To find the PID, open a browser to your web app's SCM site at `https://<your-site-name>.scm.azurewebsites.net/ProcessExplorer/`. This page shows the running processes in your web app. Find the process named "java" in the table and copy the corresponding PID (Process ID). --Next, open the **Debug Console** in the top toolbar of the SCM site and run the following command. Replace `<pid>` with the process ID you copied earlier. This command starts a 30-second profiler recording of your Java application and generates a file named `timed_recording_example.jfr` in the `C:\home` directory. --``` -jcmd <pid> JFR.start name=TimedRecording settings=profile duration=30s filename="C:\home\timed_recording_example.JFR" -``` - # [Linux](#tab/linux) SSH into your App Service and run the `jcmd` command to see a list of all the Java processes running. In addition to jcmd itself, you should see your Java application running with a process ID number (pid). 
Once the recording starts, you can dump the current recording data at any time u jcmd <pid> JFR.dump name=continuous_recording filename="/home/recording1.jfr" ``` +# [Windows](#tab/windows) ++#### Timed Recording ++To take a timed recording, you need the PID (Process ID) of the Java application. To find the PID, open a browser to your web app's SCM site at `https://<your-site-name>.scm.azurewebsites.net/ProcessExplorer/`. This page shows the running processes in your web app. Find the process named "java" in the table and copy the corresponding PID (Process ID). ++Next, open the **Debug Console** in the top toolbar of the SCM site and run the following command. Replace `<pid>` with the process ID you copied earlier. This command starts a 30-second profiler recording of your Java application and generates a file named `timed_recording_example.jfr` in the `C:\home` directory. ++``` +jcmd <pid> JFR.start name=TimedRecording settings=profile duration=30s filename="C:\home\timed_recording_example.JFR" +``` + #### Analyze `.jfr` files Use [FTPS](deploy-ftp.md) to download your JFR file to your local machine. To an ### App logging -# [Windows](#tab/windows) +# [Linux](#tab/linux) -Enable [application logging](troubleshoot-diagnostic-logs.md#enable-application-logging-windows) through the Azure portal or [Azure CLI](/cli/azure/webapp/log#az-webapp-log-config) to configure App Service to write your application's standard console output and standard console error streams to the local filesystem or Azure Blob Storage. Logging to the local App Service filesystem instance is disabled 12 hours after you enable it. If you need longer retention, configure the application to write output to a Blob storage container. +Enable [application logging](troubleshoot-diagnostic-logs.md#enable-application-logging-linuxcontainer) through the Azure portal or [Azure CLI](/cli/azure/webapp/log#az-webapp-log-config) to configure App Service to write your application's standard console output and standard console error streams to the local filesystem or Azure Blob Storage. If you need longer retention, configure the application to write output to a Blob storage container. ::: zone pivot="java-javase,java-tomcat" Your Java and Tomcat app logs can be found in the */home/LogFiles/Application/* ::: zone-end -# [Linux](#tab/linux) +Azure Blob Storage logging for Linux based apps can only be configured using [Azure Monitor](./troubleshoot-diagnostic-logs.md#send-logs-to-azure-monitor). -Enable [application logging](troubleshoot-diagnostic-logs.md#enable-application-logging-linuxcontainer) through the Azure portal or [Azure CLI](/cli/azure/webapp/log#az-webapp-log-config) to configure App Service to write your application's standard console output and standard console error streams to the local filesystem or Azure Blob Storage. If you need longer retention, configure the application to write output to a Blob storage container. +# [Windows](#tab/windows) ++Enable [application logging](troubleshoot-diagnostic-logs.md#enable-application-logging-windows) through the Azure portal or [Azure CLI](/cli/azure/webapp/log#az-webapp-log-config) to configure App Service to write your application's standard console output and standard console error streams to the local filesystem or Azure Blob Storage. Logging to the local App Service filesystem instance is disabled 12 hours after you enable it. If you need longer retention, configure the application to write output to a Blob storage container. 
::: zone pivot="java-javase,java-tomcat" Your Java and Tomcat app logs can be found in the */home/LogFiles/Application/* ::: zone-end -Azure Blob Storage logging for Linux based apps can only be configured using [Azure Monitor](./troubleshoot-diagnostic-logs.md#send-logs-to-azure-monitor). - If your application uses [Logback](https://logback.qos.ch/) or [Log4j](https://logging.apache.org/log4j) for tracing, you can forward these traces for review into Azure Application Insights using the logging framework configuration instructions in [Explore Java trace logs in Application Insights](/previous-versions/azure/azure-monitor/app/deprecated-java-2x#explore-java-trace-logs-in-application-insights). |
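The app-logging configuration described in the deploy-run entry above can likewise be scripted. A hedged Azure CLI sketch that enables filesystem application logging and then streams the output (names are placeholders):

```azurecli
# Enable application logging to the local filesystem, then stream the log output live.
az webapp log config --name <app-name> --resource-group <resource-group> \
    --application-logging filesystem --level information
az webapp log tail --name <app-name> --resource-group <resource-group>
```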
automation | Modules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/shared-resources/modules.md | Title: Manage modules in Azure Automation description: This article tells how to use PowerShell modules to enable cmdlets in runbooks and DSC resources in DSC configurations. Previously updated : 09/10/2024 Last updated : 10/01/2024 -> The AzureRM PowerShell module has been officially deprecated as of **February 29, 2024**. We recommend that you migrate from AzureRM module to the Az PowerShell module to ensure continued support and updates. While the AzureRM module may still work, it is no longer maintained or supported and continued use of AzureRM is at the user's own risk. For more information, see [migration resources](https://aka.ms/azpsmigrate) for guidance on transitioning to the Az module. +> Starting **February 1, 2025**, Azure Automation will *discontinue* the execution of all runbooks that use AzureRM modules. Starting **November 1, 2024**, you won't be able to create new runbooks using AzureRM modules. The AzureRM PowerShell module has been officially deprecated as of **February 29, 2024**. We recommend that you migrate from the AzureRM module to the Az PowerShell module to ensure continued support and updates. While the AzureRM module may still work, it is no longer maintained or supported, and continued use of the AzureRM module is at the user's own risk. For more information, see [migration resources](https://aka.ms/azpsmigrate) for guidance on transitioning to the Az module. Azure Automation uses a number of PowerShell modules to enable cmdlets in runbooks and DSC resources in DSC configurations. Supported modules include: |
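To illustrate what the AzureRM-to-Az migration mentioned above typically looks like inside a runbook, here is a hedged before/after sketch; the resource group name is a placeholder and your runbooks may authenticate differently.

```powershell
# Before (AzureRM, retired):
#   Login-AzureRmAccount
#   Get-AzureRmVM -ResourceGroupName "rg-prod"

# After (Az module), authenticating with the Automation account's managed identity:
Connect-AzAccount -Identity
Get-AzVM -ResourceGroupName "rg-prod"
```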
azure-netapp-files | Control Plane Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/control-plane-security.md | + + Title: Understand Azure NetApp Files control plane security +description: Learn about the different control plane security features in Azure NetApp Files ++++ Last updated : 09/30/2024+++# Understand Azure NetApp Files control plane security ++Learn about the different control plane security features in Azure NetApp Files to understand what is available to best serve your needs. ++## Control plane security concepts ++Azure NetApp Files operates within the Azure control plane, utilizing Azure Resource Manager (ARM) to manage resources efficiently. This integration allows for centralized management of all Azure resources, including Azure NetApp Files, through interfaces including APIs, PowerShell, CLI, or the Azure portal. With ARM, you can automate and script tasks, enhancing operational efficiency, and reducing the likelihood of manual errors. ++The control plane also integrates with Azure's security features, such as [identity and access management (IAM)](/entra/fundamentals/introduction-identity-access-management), to enforce access controls and compliance requirements. This integration ensures that only authorized users can access and manage resources, maintaining a secure environment. ++The control plane also provides tools for monitoring and auditing resource usage and changes, helping maintain visibility and compliance across the Azure environment. This comprehensive integration within the Azure control plane ensures that Azure NetApp Files can be managed effectively, securely, and consistently, providing a robust solution for data management and storage needs. ++## Identity and access management ++Identity and access management is the set of operations and services used to manage and control access to Azure NetApp Files resources. Utilize either built-in or custom role-based access control (RBAC) roles to ensure each user receives only the necessary access. Tailor individual permissions to create a custom RBAC role that suits both users and administrators. ++- Use either [built-in](../role-based-access-control/built-in-roles.md) or [custom RBAC](../role-based-access-control/custom-roles.md) roles to ensure only required access is given to each user. +- Use [individual permissions](../role-based-access-control/permissions/storage.md#microsoftnetapp) to create an appropriate custom RBAC role for users and administrators. ++## Encryption key management ++Managing Microsoft platform-managed keys or customer-managed keys involves control plane operations that affect: ++- **Key management:** The control plane allows you to manage the lifecycle of your encryption keys, including creation, rotation, and deletion. This ensures that you have full control over your data encryption keys. +- **Access control:** Through the control plane, you can define and enforce access policies using Azure RBAC, ensuring only authorized users and services can access or manage your keys. +- **Integration with Azure Key Vault:** The control plane facilitates the integration of Azure NetApp Files with Azure Key Vault, where your customer-managed keys are stored. This integration ensures secure key storage and management. +- **Encryption operations:** For encryption and decryption operations, the control plane handles Azure Key Vault requests to unwrap the account encryption key so your data is securely encrypted and decrypted as needed. 
+- **Auditing and monitoring:** The control plane provides capabilities for auditing and monitoring key usage. This helps you track who accessed your keys and when, enhancing security and compliance. +For more information, see [Configure customer-managed keys](configure-customer-managed-keys.md). ++## Network Security Groups management ++Managing network security groups (NSGs) in Azure NetApp Files relies on the control plane to oversee and secure network traffic. Benefits include: ++- **Traffic management:** The control plane allows you to define and enforce NSG rules, which control the flow of network traffic to and from your Azure NetApp Files. Controlling network traffic ensures that only authorized traffic is allowed, enhancing security. +- **Configuration and deployment:** Through the control plane, you can configure NSGs on the subnets where your Azure NetApp Files volumes are deployed, including establishing rules for inbound and outbound traffic based on IP addresses, ports, and protocols. +- **Integration with Azure +- **Monitoring and auditing:** The control plane provides tools for monitoring and auditing network traffic. You can track which rules are being applied and adjust them as needed to ensure optimal security and performance. +- **Policy Enforcement:** By using the control plane, you can enforce network policies across your Azure environment. This includes applying custom policies to meet specific security requirements and ensuring consistent policy enforcement. ++For more information, see [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md) and [Azure NSGs](../virtual-network/network-security-groups-overview.md). ++## Routing management ++The control plane enables the configuration of User-Defined Routes (UDRs) on the subnets where Azure NetApp Files volumes are deployed. UDRs allow for precise control over the routing of network traffic, ensuring data packets are directed through specific paths such as Network Virtual Appliances (NVAs) for traffic inspection. By defining these routes, network performance can be optimized, and security can be enhanced by controlling how traffic flows within the Azure environment. ++For more information, see [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md) and the [UDR overview](../virtual-network/virtual-networks-udr-overview.md#user-defined). ++## Resource lock management ++Resource locking at the control plane layer ensures that your Azure NetApp Files resources are protected from accidental or malicious deletions and modifications. Locking is important for maintaining the integrity and stability of your storage environment. ++[Resource locking](../azure-resource-manager/management/lock-resources.md) protects subscriptions, resource groups, or resources from accidental or malicious user deletions and modifications. The lock overrides any user permissions. Unlike RBAC, management locks apply a restriction across _all_ users and roles. Take careful consideration when locking any necessary resources to prevent changes after all configuration is in place. ++## Monitoring and audit logging ++Monitoring, auditing, and logging are critical for maintaining security and compliance in your Azure NetApp Files environment. The control plane logs events related to storage operations, providing a comprehensive record of activities. 
Logging allows administrators to monitor and detect any suspicious activity, investigate security incidents, and establish accountability. ++### Monitoring capabilities ++- **Azure Activity log:** + - **Function:** Provides insights into subscription-level events, such as resource modifications or virtual machine startups. These insights aid in tracking changes and identifying unauthorized activities. To understand how Activity log works, see [Azure Activity log](/azure/azure-monitor/essentials/activity-log). + - **Use case:** Useful for auditing and compliance, ensuring that all actions within your Azure NetApp Files environment are logged and traceable. +- **Azure NetApp Files metrics:** + - **Function:** Azure NetApp Files offers metrics on allocated storage, actual storage usage, volume I/OPS, and latency. These metrics help you understand usage patterns and volume performance. For more information, see [Metrics for Azure NetApp Files](azure-netapp-files-metrics.md). + - **Use case:** Metrics are essential for performance tuning and capacity planning, allowing you to optimize your storage resources effectively. +- **Azure Service Health:** + - **Function:** Azure Service Health keeps you informed about the health of your Azure services, providing a personalized view of the status of your environment. For more information, see [Service Health portal classic experience overview](/azure/service-health/service-health-overview). + - **Use case:** Azure Service Health helps you stay updated on planned maintenance and health advisories, ensuring minimal disruption to your operations. +- **Audit logging:** + - **Scope:** The control plane logs all PUT, POST, and DELETE API operations against Azure NetApp Files. These logs include actions such as creating snapshots, modifying volumes, and deleting resources. For more information, see [Are Azure activity logs supported in Azure NetApp Files?](faq-security.md#are-azure-activity-logs-supported-on-azure-netapp-files) + - **Details:** Logs capture detailed information about each operation, including who performed the action, when it was performed, and what changes were made. This level of detail is crucial for auditing and forensic investigations. For a complete list of API operations, see [Azure NetApp Files REST API](/rest/api/netapp). ++## Azure Policy ++When you use Azure Policy, the control plane ensures that your policies are enforced consistently across your environment. Azure Policy helps maintain compliance with organizational standards and regulatory requirements. ++### Azure Policy integration ++* **Enforcing standards:** + - **Custom policies:** You can create custom Azure Policy definitions tailored to your specific needs for Azure NetApp Files. These policies can enforce rules such as ensuring certain configurations, restricting the use of insecure protocols, or mandating encryption. For more information about custom policy definitions, see [Built-in policy definitions for Azure NetApp Files](azure-policy-definitions.md#custom-policy-definitions). + - **Built-in policies:** Azure provides built-in policy definitions that you can use to enforce common standards. For example, you can restrict the creation of unsecure volumes or audit existing volumes to ensure they meet your security requirements. For more information about built-in policies, see [Custom policy definitions for Azure NetApp Files](azure-policy-definitions.md#built-in-policy-definitions). 
+* **Policy evaluation:** + * **Continuous assessment:** The control plane continuously evaluates your resources against the defined policies. If a resource doesn't comply, the control plane can take actions such as denying resource creation, auditing it, or applying specific configurations. + - **Real-time enforcement:** Policies are enforced in real-time, ensuring any noncompliant actions are immediately addressed to maintain the integrity and security of your environment. ++## More information ++- [Security FAQs for Azure NetApp Files](faq-security.md) +- [Azure security baseline for Azure NetApp Files](/security/benchmark/azure/baselines/azure-netapp-files-security-baseline?toc=/azure/azure-netapp-files/TOC.json) +- [Configure customer-managed keys for Azure NetApp Files volume encryption](configure-customer-managed-keys.md) +- [Understand Azure NetApp Files data plane security](data-plane-security.md) |
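As a hedged sketch of two of the control-plane operations the new article above describes (resource-scoped RBAC and a management lock on a NetApp account; all names and IDs are placeholders):

```azurecli
# Grant a user read-only access to one NetApp account (RBAC at resource scope).
az role assignment create --assignee "user@contoso.com" --role "Reader" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.NetApp/netAppAccounts/<account-name>"

# Protect the account from accidental deletion with a CanNotDelete lock.
az lock create --name "anf-no-delete" --lock-type CanNotDelete \
    --resource-group <resource-group> --resource-name <account-name> \
    --resource-type "Microsoft.NetApp/netAppAccounts"
```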
azure-netapp-files | Cross Region Replication Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-introduction.md | Azure NetApp Files volume replication is supported between various [Azure region | Germany/Europe | Germany West Central | West Europe | | Germany/France | Germany West Central | France Central | | Spain/Sweden | Spain Central | Sweden Central |+| Sweden/Germany | Sweden Central | Germany West Central | | Qatar/Europe | Qatar Central | West Europe | | North America | East US | East US 2 | | North America | East US 2| West US 2 | |
azure-netapp-files | Data Plane Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/data-plane-security.md | + + Title: Understand Azure NetApp Files data plane security +description: Learn about the different data plane security features in Azure NetApp Files ++++ Last updated : 09/30/2024++++# Understand Azure NetApp Files data plane security ++Learn about the different data plane security features in Azure NetApp Files to understand what is available to best serve your needs. ++## Data plane security concepts ++Understanding the data plane is crucial when working with Azure NetApp Files. The data plane is responsible for data storage and management operations, playing a vital role in maintaining both security and efficiency. Azure NetApp Files provides a comprehensive suite of data plane security features, including permissions management, data encryption (in-flight and at-rest), LDAP (Lightweight Directory Access Protocol) encryption, and network security to ensure secure data handling and storage. ++### Managing permissions ++Azure NetApp Files secures network attached storage (NAS) data through permissions, categorized into Network File System (NFS) and Server Message Block (SMB) types. The first security layer is share access, limited to necessary users and groups. Share permissions, being the least restrictive, should follow a funnel logic, allowing broader access at the share level and more granular controls for underlying files and folders. ++Securing your NAS data in Azure NetApp Files involves managing permissions effectively. Permissions are categorized into two main types: ++* **Share access permissions**: These permissions control who can mount a NAS volume and basic permissions for read/write. + - NFS exports: Uses IP addresses or host names to control access. + - SMB shares: Uses user and group access control lists (ACLs). ++* **File access permissions:** These determine what users and groups can do once a NAS volume is mounted. They are: + - applied to individual files and folders. + - more granular than share permissions. ++#### Share access permissions ++**NFS export policies:** ++- Volumes are shared out to NFS clients by exporting a path accessible to a client or set of clients. +- Export policies control access. Export policies are containers for a set of access rules listed in order of desired access. Higher priority rules get read and applied first and subsequent rules for a client are ignored. +- Rules use client IP addresses or subnets to control access. If a client isn't listed in an export policy rule, it can't mount the NFS export. +- Export policies control how the root user is presented to a client. If the root user is "squashed" (Root Access = Off), the root for clients in that rule is resolved to anonymous UID 65534. ++**SMB Shares:** +- Access is controlled via user and group ACLs. +- Permissions can include read, change, and full control. ++For more information, see [Understand NAS share permissions](network-attached-storage-permissions.md). ++#### File access permissions ++**SMB file permissions:** +- Attributes include read, write, delete, change permissions, and take ownership and more granular permissions supported by Windows. +- Permissions can be inherited from parent folders to child objects. + +**NFS file permissions:** +- NFSv3 and NFSv4.x use traditional UNIX file permissions that are represented by mode bits. +- NFSv4.1 also supports advanced permissions using NFSv4.1 ACLs. 
++For more information on file access permissions, see [Understand NAS file permissions](network-attached-file-permissions.md) and [Understand SMB file permissions](network-attached-file-permissions-smb.md). ++### Permission inheritance ++Permission inheritance allows a parent folder to automatically apply its permissions to all its child objects including files and subdirectories. When you set permissions on a parent directory, those same permissions are applied to any new files and subdirectories created within it. ++**SMB:** +- Controlled in the advanced permission view. +- Inheritance flags can be set to propagate permissions from parent folders to child objects. ++**NFS:** +- NFSv3 uses `umask` and `setgid` flags to mimic inheritance. +- NFSv4.1 uses inheritance flags on ACLs. +- +For more details on permission inheritance, see [Understand NAS file permissions](network-attached-file-permissions.md), [Understand NFS mode bits](network-attached-file-permissions-nfs.md), and [Understand NFSv4.x ACLs](nfs-access-control-lists.md). ++#### Considerations ++- **Most restrictive permissions apply:** When conflicting permissions are present, the most restrictive permission takes precedence. For instance, if a user has read-only access at the share level but full control at the file level, the user will only have read-only access. +- **Funnel logic:** Share permissions should be more permissive than file and folder permissions. Apply more granular and restrictive controls at the file level. ++## Data encryption in transit ++Azure NetApp Files encryption in transit refers to the protection of data as it moves between your client and the Azure NetApp Files service. Encryption ensures that data is secure and can't be intercepted or read by unauthorized parties during transmission. ++### Protocols and encryption methods ++NFSv4.1 supports encryption using Kerberos with AES-256 encryption, ensuring data transferred between NFS clients and Azure NetApp Files volume is secure. ++- Kerberos modes: Azure NetApp Files supports Kerberos encryption modes krb5, krb5i, and krb5p. These modes provide various levels of security, with krb5p offering the highest level of protection by encrypting both the data and the integrity checks. ++For more information on NFSv4.1 encryption, see [Understand data encryption](understand-data-encryption.md#understand-data-in-transit-encryption) and [Configure NFSv4.1 Kerberos encryption](configure-kerberos-encryption.md). ++SMB3 supports encryption using AES-CCM and AES-GCM algorithms, providing secure data transfer over the network. ++- **End-to-end encryption**: SMB encryption is conducted end-to-end. The entire SMB conversation--encompassing all data packets exchanged between the client and the server--is encrypted. +- **Encryption algorithms**: Azure NetApp Files supports AES-256-GCM, AES-128-CCM cryptographic suites for SMB encryption. These algorithms provide robust security for data in transit. +- **Protocol versions**: SMB encryption is available with SMB 3.x protocol versions. This ensures compatibility with modern encryption standards and provides enhanced security features. ++For more information on SMB encryption, see [Understand data encryption](understand-data-encryption.md). ++## Data Encryption at rest ++Encryption at rest protects your data while it's stored on disk. Even if the physical storage media is accessed by unauthorized individuals, the data remains unreadable without the proper decryption keys. 
++There are two types of encryption at rest in Azure NetApp Files: ++* **Single encryption** uses software-based encryption to protect data at rest. Azure NetApp Files employs AES-256 encryption keys, which are compliant with the FIPS (Federal Information Processing Standards) 140-2 standard. ++* **Double encryption** provides two levels of encryption protection: a hardware-based encryption layer (encrypted SSD drives) and a software-encryption layer. The hardware-based encryption layer resides at the physical storage level, using FIPS 140-2 certified drives. The software-based encryption layer is at the volume level, completing the second level of encryption protection. ++For more information on data encryption at rest, see [Understand data encryption](understand-data-encryption.md) and [Double encryption at rest](double-encryption-at-rest.md). ++## Key management ++The data plane manages the encryption keys used to encrypt and decrypt data. These keys can be either platform-managed or customer-managed: ++- **Platform-managed keys** are automatically managed by Azure, ensuring secure storage and rotation of keys. +- **Customer-managed keys** are stored in Azure Key Vault, allowing you to manage the lifecycle, usage permissions, and auditing of your encryption keys. ++For more information about Azure NetApp Files key management, see [How are encryption keys managed](faq-security.md#how-are-encryption-keys-managed) or [Configure customer-managed keys](configure-customer-managed-keys.md). ++## Lightweight directory access protocol (LDAP) encryption ++Lightweight directory access protocol (LDAP) encryption at the data plane layer ensures secure communication between clients and the LDAP server. LDAP encryption operates in Azure NetApp Files with the following options: ++* **Encryption methods:** LDAP traffic can be encrypted using Transport Layer Security (TLS) or LDAP signing. TLS encrypts the entire communication channel, while LDAP signing ensures the integrity of the messages by adding a digital signature. +* **TLS configuration:** LDAP over StartTLS uses port 389 for the LDAP connection. After the initial LDAP connection is made, a StartTLS OID is exchanged, and certificates are compared. Then, all LDAP traffic is encrypted using TLS. +* **LDAP signing:** This method adds a layer of security by signing LDAP messages with AES encryption, which helps verify the authenticity and integrity of the data being transmitted. +* **Integration with Active Directory:** Azure NetApp Files supports integration with Active Directory, which can be configured to use these encryption methods to secure LDAP communications. Currently, only Active Directory can be used for LDAP services. ++## Network security ++Securing your data with Azure NetApp Files involves employing multiple layers of protection. Using private endpoints and network security groups (NSGs) is essential to ensuring that your data remains secure within your virtual network and is accessible only to authorized traffic. This combined approach offers a comprehensive security strategy to safeguard your data against potential threats. ++### Private endpoints ++Private endpoints are specialized network interfaces that facilitate a secure and private connection to Azure services via Azure Private Link. They utilize a private IP address within your virtual network, effectively integrating the service into your network's internal structure. 
++#### Security benefits ++- **Isolation:** Private endpoints ensure that Azure NetApp Files traffic stays within your virtual network, away from the public internet. This isolation minimizes the risk of exposure to external threats. +- **Access control:** You can enforce access policies for your Azure NetApp Files volumes by configuring network security rules on the subnet associated with the private endpoint. This control ensures that only authorized traffic can interact with your data. +- **Compliance:** Private endpoints support regulatory compliance by preventing data traffic from traversing the public internet, adhering to requirements for the secure handling of sensitive data. ++### Network security groups (NSGs) ++NSGs are collections of security rules that govern inbound and outbound traffic to network interfaces, virtual machines (VMs), and subnets within Azure. These rules are instrumental in defining the access controls and traffic patterns within your network. NSGs are only supported when using the Standard network feature in Azure NetApp Files. ++#### Security benefits ++- **Traffic filtering:** NSGs enable the creation of granular traffic filtering rules based on source and destination IP addresses, ports, and protocols. This ensures that only permitted traffic can reach your Azure NetApp Files volumes. +- **Segmentation:** By applying NSGs to the subnets housing your Azure NetApp Files volumes, you can segment and isolate network traffic. Segmentation effectively reduces the attack surface and enhances overall security. +- **Monitoring and logging:** NSGs offer monitoring and logging capabilities through Network Security Group Flow Logs. These logs are critical for tracking traffic patterns, detecting potential security threats, and ensuring compliance with security policies. ++For more information, see [Network Security Groups](../virtual-network/security-overview.md) and [What is a private endpoint?](../private-link/private-endpoint-overview.md) ++## More information ++- [Understand NAS share permissions in Azure NetApp Files](network-attached-storage-permissions.md) +- [Understand NAS protocols in Azure NetApp Files](network-attached-storage-protocols.md) +- [Understand data encryption in Azure NetApp Files](understand-data-encryption.md) +- [Security FAQs for Azure NetApp Files](faq-security.md) +- [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md) |
azure-vmware | Self Service Maintenance Orchestration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/self-service-maintenance-orchestration.md | + + Title: Self service maintenance orchestration (public preview) +description: Learn how to enable self service maintenance orchestration. ++++ Last updated : 06/25/2024++++# Self service maintenance orchestration (public preview) ++In this article, you learn about one of the advantages of the Azure VMware Solution private cloud: the managed platform where Microsoft handles the lifecycle management of VMware software (ESXi, vCenter Server, and vSAN) and NSX appliances. Microsoft also takes care of applying any patches, updates, or upgrades to ESXi, vCenter Server, vSAN, and NSX within your private cloud. +Regular upgrades of the Azure VMware Solution private cloud and VMware software ensure the latest security, stability, and feature sets are running in your private cloud. For more information, see [Host maintenance and lifecycle management](architecture-private-clouds.md). ++Microsoft schedules maintenance and notifies customers through Service Health notifications. The details of the planned maintenance are available under the planned maintenance section. Currently, customers must raise a support ticket if they wish to change a scheduled maintenance window. +The self-service maintenance orchestration feature provides customers with the flexibility to reschedule their planned maintenance directly from the Azure portal. ++## Prerequisites ++- An existing Azure VMware Solution private cloud. +- A subscription registered for the Microsoft Azure VMware Solution AFEC flags named Early Access and Self Serve for Maintenance. You can find these flags under **Preview Features** on the Azure portal. + ## Reschedule maintenance through Azure VMware Solution maintenance ++1. Sign in to your Azure VMware Solution private cloud. + + >[!Note] + > At least Contributor-level access on the Azure VMware Solution private cloud is required. ++1. From the left navigation, locate **Operations** and select **Maintenance** from the drop-down list. ++ :::image type="content" source="media/self-service-orchestration/operations-maintenance.png" alt-text="Screenshot that shows how to set up self-service maintenance." lightbox="media/self-service-orchestration/operations-maintenance.png"::: + +1. Under the **Upcoming maintenance** tab, select the **Reschedule** option located on the right side. ++ :::image type="content" source="media/self-service-orchestration/reschedule-maintenance.png" alt-text="Screenshot that shows how to reschedule maintenance." lightbox="media/self-service-orchestration/reschedule-maintenance.png"::: ++ + +1. Enter the revised date and time, then select **Reschedule**. + + :::image type="content" source="media/self-service-orchestration/upcoming-maintenance.png" alt-text="Screenshot that shows how to review upcoming maintenance schedule." lightbox="media/self-service-orchestration/upcoming-maintenance.png"::: + + After you've selected **Reschedule**, the system modifies the schedule to the new date and the new schedule is displayed in the portal. ++## Additional information +The following system errors or warning messages might appear when you try to reschedule maintenance tasks: ++ - Users aren't allowed to reschedule maintenance after the upgrade deadline and on freeze days. + - Users are allowed to reschedule up to 1 hour before and after the start of the maintenance. 
+ - Each maintenance task is assigned an internal deadline. Dates that exceed this deadline appear greyed out in the portal. If a customer needs to reschedule maintenance beyond this point, they should raise a support ticket. + - Maintenance that is critical or carries a fix for a critical security vulnerability might have the reschedule option greyed out. + - This feature is only enabled for a selected set of maintenance tasks; therefore, not all Azure VMware Solution maintenance shows up in this view or has the reschedule option. |
confidential-computing | Confidential Vm Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-vm-overview.md | Confidential VMs *don't support*: - Azure Batch - Azure Backup - Azure Site Recovery-- Azure Dedicated Host -- Microsoft Azure Virtual Machine Scale Sets with Confidential OS disk encryption enabled - Limited Azure Compute Gallery support - Shared disks-- Ultra disks - Accelerated Networking - Live migration - Screenshots under boot diagnostics |
data-factory | Connector Snowflake | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-snowflake.md | For more information about the properties, see [Lookup activity](control-flow-lo ## Upgrade the Snowflake linked service -To upgrade the Snowflake linked service, create a new Snowflake linked service and configure it by referring to [Linked service properties](#linked-service-properties). +To upgrade the Snowflake linked service, you can do a side-by-side upgrade or an in-place upgrade. ++### Side-by-side upgrade ++To perform a side-by-side upgrade, complete the following steps: ++1. Create a new Snowflake linked service and configure it by referring to the linked service properties. +1. Create a dataset based on the newly created Snowflake linked service. +1. In the pipelines that target the legacy objects, replace the existing linked service and dataset with the new ones. ++### In-place upgrade ++To perform an in-place upgrade, you need to edit the existing linked service payload. ++1. Update the type from `Snowflake` to `SnowflakeV2`. +2. Modify the linked service payload from its legacy format to the new pattern. You can either fill in each field from the user interface after changing the type mentioned above, or update the payload directly through the JSON Editor. Refer to the [Linked service properties](#linked-service-properties) section in this article for the supported connection properties. The following examples show the differences in payload for the legacy and new Snowflake connectors: ++ **Legacy Snowflake connector JSON payload:** + ```json + { +    "name": "Snowflake1", +    "type": "Microsoft.DataFactory/factories/linkedservices", +    "properties": { +        "annotations": [], +        "type": "Snowflake", +        "typeProperties": { +            "authenticationType": "Basic", +            "connectionString": "jdbc:snowflake://<fake_account>.snowflakecomputing.com/?user=FAKE_USER&db=FAKE_DB&warehouse=FAKE_DW&schema=PUBLIC", +            "encryptedCredential": "<placeholder>" +        }, +        "connectVia": { +            "referenceName": "AzureIntegrationRuntime", +            "type": "IntegrationRuntimeReference" +        } +    } + } + ``` ++ **New Snowflake connector JSON payload:** + ```json + { +    "name": "Snowflake2", +    "type": "Microsoft.DataFactory/factories/linkedservices", +    "properties": { +        "parameters": { +            "schema": { +                "type": "string", +                "defaultValue": "PUBLIC" +            } +        }, +        "annotations": [], +        "type": "SnowflakeV2", +        "typeProperties": { +            "authenticationType": "Basic", +            "accountIdentifier": "<FAKE_Account>", +            "user": "FAKE_USER", +            "database": "FAKE_DB", +            "warehouse": "FAKE_DW", +            "encryptedCredential": "<placeholder>" +        }, +        "connectVia": { +            "referenceName": "AutoResolveIntegrationRuntime", +            "type": "IntegrationRuntimeReference" +        } +    } + } + ``` ++3. Update the dataset to use the new linked service. You can either create a new dataset based on the newly created linked service, or update an existing dataset's type property from _SnowflakeTable_ to _SnowflakeV2Table_ (a minimal scripting sketch follows below). ## Differences between Snowflake and Snowflake (legacy) |
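If you have many datasets to update, the in-place type change can be scripted. The following is a minimal sketch that assumes you exported a dataset definition to a local JSON file (the file name is hypothetical); it isn't an official migration tool:

```python
import json

path = "SnowflakeDataset.json"  # hypothetical local export of the dataset definition

with open(path) as f:
    dataset = json.load(f)

# Flip the legacy dataset type to the new connector's type
if dataset["properties"]["type"] == "SnowflakeTable":
    dataset["properties"]["type"] = "SnowflakeV2Table"

with open(path, "w") as f:
    json.dump(dataset, f, indent=4)
```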
databox-gateway | Data Box Gateway 2409 Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-2409-release-notes.md | + + Title: Azure Data Box Gateway 2409 release notes | Microsoft Docs +description: Describes critical open issues and resolutions for Azure Data Box Gateway running the 2409 release. +++ +++ Last updated : 09/26/2024++++# Azure Data Box Gateway 2409 release notes ++The following release notes identify the critical open issues and the resolved issues for the 2409 release of Azure Data Box Gateway. ++The release notes are continuously updated. Critical issues that require a workaround are added when they're discovered. Carefully review the information in the release notes before deploying Azure Data Box Gateway. ++This release corresponds to the software version: ++- **Data Box Gateway 2409 (1.7.2816.3390)** - KB 5043357 ++> [!NOTE] +> Update 2409 can be applied only to devices that are running 2301 versions of the software or later. If you are running a version earlier than 2301, update your device to 2301 and then update to 2409. ++## What's new ++This release contains the following bug fixes: ++- **Web UI certificate format** - Implemented bug fixes for the web UI certificate format, which could potentially cause compatibility issues when using the web UI. ++This release contains the following updates: ++- **Migration to a newer OS version** - Provides better long-term security and vulnerability management. +- **Defense in depth:** + - Malware protection on the OS disk + - Defender-based Device Guard support for more stringent checks on the binaries running within the system. +- **Utilizing a newer .NET framework** - Provides better security. +- **Improved hypervisor support** - Support added for Hyper-V 2022. ++## Known issues in this release ++Because this release uses a new operating system (OS) version and includes filesystem metadata schema differences, rollback or downgrade isn't allowed. Any upgrade failure might result in downtime and the need for data recovery. The following precautions should be taken before initiating an upgrade: ++- Plan for an appropriate downtime window. +- Ensure that your data is stored in Azure before disconnecting any clients writing to Data Box Gateway. You can validate that data transfer is complete by ensuring that all top-level directories in the Data Box Gateway's share have the 'offline' attribute enabled (a short sketch for checking this attribute follows below). ++All the issues noted in previous releases are carried forward. For a list of known issues, see [Known issues in the GA release](data-box-gateway-release-notes.md#known-issues-in-ga-release). ++## Next steps ++- [Prepare to deploy Azure Data Box Gateway](data-box-gateway-deploy-prep.md) |
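As referenced in the known issues above, you can spot-check the 'offline' attribute on the share's top-level directories before upgrading. A minimal sketch, assuming a Windows client with the Data Box Gateway share mounted at a hypothetical drive letter:

```python
import os
import stat

share_root = "Z:\\"  # hypothetical drive letter where the Data Box Gateway share is mounted

for entry in os.scandir(share_root):
    if entry.is_dir():
        # st_file_attributes is available on Windows only
        attrs = os.stat(entry.path).st_file_attributes
        offline = bool(attrs & stat.FILE_ATTRIBUTE_OFFLINE)
        print(f"{entry.name}: offline={offline}")
```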
databox-gateway | Data Box Gateway Deploy Prep | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-deploy-prep.md | |
databox-gateway | Data Box Gateway Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-security.md | The Data Box Gateway device is a virtual device that's provisioned in the hyperv - Needs an activation key to access the Azure Stack Edge Pro/Data Box Gateway service. - Is protected at all times by a device password.++The Data Box Gateway device has the following capabilities, which offer defense in depth: ++- Defender-based malware protection on the OS disk +- Defender-based Device Guard support for more stringent checks on the binaries running in the system. <!-- secure boot enabled. - Runs Windows Defender Device Guard. Device Guard allows you to run only trusted applications that you define in your code integrity policies.--> |
defender-for-iot | Concept Supported Protocols | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/concept-supported-protocols.md | OT network sensors can detect the following protocols when identifying assets an |**DNP. org** | DNP3 | |**Emerson** | DeltaV<br> DeltaV - Discovery<br> Emerson OpenBSI/BSAP<br> Ovation DCS ADMD<br>Ovation DCS DPUSTAT<br> Ovation DCS SSRPC | |**Emerson Fischer** | ROC |+|**EVRoaming Foundation** | OCPI | |**FANUC** | FANUC FOCUS | |**FieldComm Group**| HART-IP | |**GE** | ADL (MarkVIe) <br>Bentley Nevada (System 1 / BN3500)<br>ClassicSDI (MarkVle) <br> EGD<br> GSM (GE MarkVI and MarkVIe)<br> InterSite<br> SDI (MarkVle) <br> SRTP (GE)<br> GE_CMP | |
defender-for-iot | Detect Windows Endpoints Script | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/detect-windows-endpoints-script.md | Before performing the procedures in this article, you must have: The script described in this article is supported for the following Windows operating systems: - Windows XP-- Windows 2000-- Windows NT - Windows 7 - Windows 10-- Windows Server 2003/2008/2012/2016/2019+- Windows Server 2003/2008/2012 ## Download and run the script The script detects enriched Windows data, and is run as a utility and not an ins 1. Sign into your OT sensor console, and select **System Settings** > **Import Settings** > **Windows Information**. -1. Select **Download script**. For example: +1. Select **Download script**. Your browser might ask you if you want to keep the file, select **Keep** or any similar options. :::image type="content" source="media/detect-windows-endpoints-script/download-wmi-script.png" alt-text="Screenshot of where to download WMI script." lightbox="media/detect-windows-endpoints-script/download-wmi-script.png"::: -1. Copy the script to a local drive and unzip it. The following files appear: +1. Copy the file to a local drive and unzip it. The following file appears: - - `start.bat` - - `settings.json` - - `data.bin` - - `run.bat` + - `Extract_system_info.bat` -1. Run the `run.bat` file. +1. Run the `Extract_system_info.bat` file. - After the script runs to probe the registry, a CX-snapshot file appears with the registry information. The filename indicates the machine name and the current date and time of the snapshot with the following syntax: `cx_snapshot_[machinename]_[current date time]`. +1. You'll be asked whether you want to display errors on screen or not. Make you own selection. -Files generated by the script include: +After the script runs to probe the registry, an output file appears with the registry information. The filename indicates the current date and time of the snapshot with the following syntax: `[current date time]_system_info_extractor`. ++Files generated by the script: - Remain on the local drive until you delete them.-- Must remain in the same location. Don't separate the generated files.-- Are overwritten if you run the script again.+- Are overwritten if you run the script again on the same day. +- Include an errorOutput file that is empty if no errors occurred during the running of the script. ## Import device details |
defender-for-iot | How To Accelerate Alert Incident Response | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-accelerate-alert-incident-response.md | If you're currently using an on-premises management console with cloud-connected Your rules are added to the list of suppression rules on the **Suppression rules (Preview)** page. Select a rule to edit or delete it as needed. -### Create alert exclusion rules on an on-premises management console --We recommend creating alert exclusion rules on an on-premises management console only for locally managed sensors. For cloud-connected sensors, any suppression rules created on the Azure portal will override exclusion rules created on the on-premises management console for that sensor. --**To create an alert exclusion rule**: --1. Sign into your on-premises management console and select **Alert Exclusion** on the left-hand menu. --1. On the **Alert Exclusion** page, select the **+** button at the top-right to add a new rule. --1. In the **Create Exclusion Rule** dialog, enter the following details: -- |Name |Description | - ||| - |**Name** | Enter a meaningful name for your rule. The name can't contain quotes (`"`). | - |**By Time Period** | Select a time zone and the specific time period you want the exclusion rule to be active, and then select **ADD**. <br><br>Use this option to create separate rules for different time zones. For example, you might need to apply an exclusion rule between 8:00 AM and 10:00 AM in three different time zones. In this case, create three separate exclusion rules that use the same time period and the relevant time zone. | - |**By Device Address** | Select and enter the following values, and then select **ADD**: <br><br>- Select whether the designated device is a source, destination, or both a source and destination device. <br>- Select whether the address is an IP address, MAC address, or subnet <br>- Enter the value of the IP address, MAC address, or subnet. | - |**By Alert Title** | Select one or more alerts to add to the exclusion rule and then select **ADD**. To find alert titles, enter all, or part of an alert title and select the one you want from the dropdown list. | - |**By Sensor Name** | Select one or more sensors to add to the exclusion rule and then select **ADD**. To find sensor names, enter all or part of the sensor name and select the one you want from the dropdown list. | -- > [!IMPORTANT] - > Alert exclusion rules are `AND` based, which means that alerts are only excluded when all rule conditions are met. - > If a rule condition is not defined, all options are included. For example, if you don't include the name of a sensor in the rule, the rule is applied to all sensors. -- A summary of the rule parameters is shown at the bottom of the dialog. --1. Check the rule summary shown at the bottom of the **Create Exclusion Rule** dialog and then select **SAVE** --**To create alert exclusion rules via API**: --Use the [Defender for IoT API](references-work-with-defender-for-iot-apis.md) to create on-premises management console alert exclusion rules from an external ticketing system or other system that manage network maintenance processes. --Use the [maintenanceWindow (Create alert exclusions)](api/management-alert-apis.md#maintenancewindow-create-alert-exclusions) API to define the sensors, analytics engines, start time, and end time to apply the rule. Exclusion rules created via API are shown in the on-premises management console as read-only. 
--For more information, see [Defender for IoT API reference](references-work-with-defender-for-iot-apis.md). -- ## Allow internet connections on an OT network Decrease the number of unauthorized internet alerts by creating an allowlist of domain names on your OT sensor. When a DNS allowlist is configured, the sensor checks each unauthorized internet connectivity attempt against the list before triggering an alert. If the domain's FQDN is included in the allowlist, the sensor doesn't trigger the alert and allows the traffic automatically. All OT sensor users can view a currently configured list of domains in a [data mining report](how-to-create-data-mining-queries.md), including the FQDNs, resolved IP addresses, and the last resolution time. - **To define a DNS allowlist:** 1. Sign into your OT sensor as the *admin* user and select the **Support** page. All OT sensor users can view a currently configured list of domains in a [data m 1. In the **Edit configuration** pane > **Fqdn allowlist** field, enter one or more domain names. Separate multiple domain names with commas. Your sensor won't generate alerts for unauthorized internet connectivity attempts on the configured domains. -1. Select **Submit** to save your changes. + You can use the `*` wildcard at any place in the domain name to easily add subdomains to the allowlist without having to enter each one, for example, `*.microsoft.com` or `teams.microsoft.*` (a short matching sketch follows below). +1. Select **Submit** to save your changes. **To view the current allowlist in a data mining report:** The generated data mining report shows a list of the allowed domains and each IP :::image type="content" source="media/how-to-accelerate-alert-incident-response/data-mining-report-allowlist.png" alt-text="Screenshot of data mining report of allowlist in the sensor console." lightbox="media/how-to-accelerate-alert-incident-response/data-mining-report-allowlist.png"::: - ## Create alert comments on an OT sensor 1. Sign into your OT sensor and select **System Settings** > **Network Monitoring** > **Alert Comments**. Use the [Defender for IoT API](references-work-with-defender-for-iot-apis.md) to Use the [maintenanceWindow (Create alert exclusions)](api/management-alert-apis.md#maintenancewindow-create-alert-exclusions) API to define the sensors, analytics engines, start time, and end time to apply the rule. Exclusion rules created via API are shown in the on-premises management console as read-only. -For more information, see -[Defender for IoT API reference](references-work-with-defender-for-iot-apis.md). +For more information, see [Defender for IoT API reference](references-work-with-defender-for-iot-apis.md). ## Next steps |
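Conceptually, the `*` wildcard in the DNS allowlist described above behaves like shell-style pattern matching. A rough illustration of that matching idea, not the sensor's actual implementation:

```python
from fnmatch import fnmatch

allowlist = ["*.microsoft.com", "teams.microsoft.*"]

for fqdn in ["login.microsoft.com", "teams.microsoft.us", "contoso.com"]:
    allowed = any(fnmatch(fqdn, pattern) for pattern in allowlist)
    print(f"{fqdn}: {'allowed' if allowed else 'alert'}")
```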
defender-for-iot | Activate Deploy Sensor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-deploy/activate-deploy-sensor.md | This article is one in a series of articles describing the [deployment path](ot- Several initial setup steps can be performed in the browser or via CLI. -- Use the browser if you can connect physical cables from your switch to the sensor to identify your interfaces correctly. Make sure to reconfigure your network adapter to match the default settings on the sensor.-- Use the CLI if you know your networking details without needing to connect physical cables. Use the CLI if you can only connect to the sensor via iLo / iDrac+- Use the browser if you can connect physical cables from your switch to the sensor to identify your interfaces correctly. Make sure to reconfigure your network adapter to match the default settings on the sensor. +- Use the CLI if you know your networking details without needing to connect physical cables. Use the CLI if you can only connect to the sensor via iLo / iDrac Configuring your setup via the CLI still requires you to complete the last few steps in the browser. If you've configured the initial settings [via the CLI](#configure-setup-via-the 1. Select the terms and conditions option and then select **Activate**. 1. Select **Next: Certificates**. +If you have a connection problem between the cloud-based sensor and the Azure portal during the activation process that causes the activation to fail, a message appears below the Activate button. To solve the connectivity problem, select **Learn more** to open the **Cloud connectivity** pane. The pane lists the causes for the problem and recommendations to solve it. ++Even without solving the problem, you can continue to the next stage by selecting **Next: Certificates**. ++The only connection problem that must be fixed before moving to the next stage is a detected time drift, where the sensor isn't synchronized to the cloud. In this case, the sensor must be correctly synchronized, as described in the recommendations, before moving to the next stage. + ### Define SSL/TLS certificate settings Use the **Certificates** tab to deploy an SSL/TLS certificate on your OT sensor. We recommend that you use a [CA-signed certificate](create-ssl-certificates.md) for all production environments. Use the **Certificates** tab to deploy an SSL/TLS certificate on your OT sensor. > > For more information, see [Manage SSL/TLS certificates](../how-to-manage-individual-sensors.md#manage-ssltls-certificates). -1. In the **Validation of on-premises management console certificate** area, select **Mandatory** to validate an on-premises management console's certificate against a certificate revocation list (CRL), as [configured in your certificate](../best-practices/certificate-requirements.md#crt-file-requirements). +1. In the **Validation of on-premises management console certificate** area, select **Mandatory** to validate an on-premises management console's certificate against a certificate revocation list (CRL), as [configured in your certificate](../best-practices/certificate-requirements.md#crt-file-requirements). For more information, see [SSL/TLS certificate requirements for on-premises resources](../best-practices/certificate-requirements.md) and [Create SSL/TLS certificates for OT appliances](create-ssl-certificates.md). 
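If you want to confirm that a certificate carries the CRL information the validation option above relies on, the following minimal sketch reads the CRL distribution points extension with the `cryptography` Python package. The file name is a placeholder, and this is a local inspection aid rather than part of the sensor's workflow:

```python
from cryptography import x509

# Placeholder path to the CA-signed certificate you plan to validate
with open("management-console.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

# Raises x509.ExtensionNotFound if the certificate has no CRL distribution points
cdp = cert.extensions.get_extension_for_class(x509.CRLDistributionPoints)
for point in cdp.value:
    for name in point.full_name or []:
        print("CRL distribution point:", name.value)
```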
Use this procedure to configure the following initial setup settings via CLI: Continue with [activating](#activate-your-ot-sensor) and [configuring SSL/TLS certificate settings](#define-ssltls-certificate-settings) in the browser. +> [!NOTE] +> The information in this article applies to the sensor version 24.1.5. If you are running an earlier version, see [configure ERSPAN mirroring](../traffic-mirroring/configure-mirror-erspan.md). +> + **To configure initial setup settings via CLI**: 1. In the installation screen, after the default networking details are shown, press **ENTER** to continue. Continue with [activating](#activate-your-ot-sensor) and [configuring SSL/TLS ce When prompted to confirm your password, enter your new password again. For more information, see [Default privileged users](../manage-users-sensor.md#default-privileged-users). - The `Package configuration` Linux configuration wizard opens. In this wizard, use the up or down arrows to navigate, and the **SPACE** bar to select an option. Press **ENTER** to advance to the next screen. +1. After changing the password, the `Sensor Config` wizard automatically starts. Continue to step 5. -1. In the wizard's `Select monitor interfaces` screen, select any of the interfaces you want to monitor with this sensor. + If you're logging in on subsequent occasions continue to step 4. - The system selects the first interface it finds as the management interface, and we recommend that you leave the default selection. If you decide to use a different port as the management interface, the change is implemented only after the sensor restarts. In such cases, make sure that the sensor is connected as needed. +1. To start the `Sensor Config` wizard, at the prompt type `network reconfigure`. If you are using the cyberx user, type `ERSPAN=1 python3 -m cyberx.config.configure`. - For example: +1. The `Sensor Config` screen shows the present setup of the interfaces. Ensure that one interface is set as the management interface. In this wizard, use the up or down arrows to navigate, and the **SPACE** bar to select an option. Press **ENTER** to advance to the next screen. - :::image type="content" source="../media/install-software-ot-sensor/select-monitor-interfaces.png" alt-text="Screenshot of the Select monitor interfaces screen."::: + Select the interface you want to configure, for example: - > [!IMPORTANT] - > Make sure that you select only interfaces that are connected. - > - > If you select interfaces that are enabled but not connected, the sensor will show a *No traffic monitored* health notification in the Azure portal. If you connect more traffic sources after installation and want to monitor them with Defender for IoT, you can add them later via the [CLI](../references-work-with-defender-for-iot-cli-commands.md). + :::image type="content" source="media/activate-deploy-sensor/ersp-cli-settings.png" alt-text="Screenshot of the Select monitor interfaces screen."::: -1. In the `Select management interface` screen, select the interface you want to use to connect to the Azure portal or an on-premises management console. +1. In the `Select type` screen select the new configuration type for this interface. - For example: -- :::image type="content" source="../media/install-software-ot-sensor/select-management-interface.png" alt-text="Screenshot of the Select management interface screen."::: --1. In the `Enter sensor IP address` screen, enter the IP address you want to use for this sensor. 
Use this IP address to connect to the sensor via CLI or the browser. For example: -- :::image type="content" source="../media/install-software-ot-sensor/enter-sensor-ip-address.png" alt-text="Screenshot of the Enter sensor IP address screen."::: +> [!IMPORTANT] +> Make sure that you select only interfaces that are connected. +> +> If you select interfaces that are enabled but not connected, the sensor will show a *No traffic monitored* health notification in the Azure portal. If you connect more traffic sources after installation and want to monitor them with Defender for IoT, you can add them later via the [CLI](../references-work-with-defender-for-iot-cli-commands.md). +> -1. In the `Enter path to the mounted backups folder` screen, enter the path to the sensor's mounted backups. We recommend using the default path of `/opt/sensor/persist/backups`. For example: +An interface can be set as either **Management**, **Monitor**, **Tunnel** or **Unused**. You may wish to set an interface as **Unused** as a temporary setting, to reset it, or if a mistake was made in the original setup. - :::image type="content" source="../media/install-software-ot-sensor/mounted-backups.png" alt-text="Screenshot of the mounted backups folder configuration."::: +1. To configure a **Management** interface: -1. In the `Enter Subnet Mask` screen, enter the IP address for the sensor's subnet mask. For example: + 1. Select the interface. + 1. Select **Management**. + 1. Type the sensor's **IP address**, **DNS server** IP address and the default **Gateway** IP address. - :::image type="content" source="../media/install-software-ot-sensor/subnet-mask.png" alt-text="Screenshot of the Enter Subnet Mask screen."::: + :::image type="content" source="media/activate-deploy-sensor/ersp-cli-management-settings.png" alt-text="Screenshot of the interface Management screen."::: -1. In the `Enter Gateway` screen, enter the sensor's default gateway IP address. For example: + 1. Select **Back**. - :::image type="content" source="../media/install-software-ot-sensor/enter-gateway.png" alt-text="Screenshot of the Enter Gateway screen."::: +1. To configure a **Monitor** interface: -1. In the `Enter DNS server` screen, enter the sensor's DNS server IP address. For example: + 1. Select the interface. + 1. Select **Monitor**. The **Sensor Config** screen updates. - :::image type="content" source="../media/install-software-ot-sensor/enter-dns-server.png" alt-text="Screenshot of the Enter DNS server screen."::: +1. To configure an **ERSPAN** interface: -1. In the `Enter hostname` screen, enter a name you want to use as the sensor hostname. Make sure that you use the same hostname as is defined in the DNS server. For example: + 1. Select **Type**. + 1. Select **ERSPAN**. + 1. Select **Confirm**. - :::image type="content" source="../media/install-software-ot-sensor/enter-hostname.png" alt-text="Screenshot of the Enter hostname screen."::: +1. To configure an interface as **Unused**: -1. In the `Run this sensor as a proxy server (Preview)` screen, select `<Yes>` only if you want to configure a proxy, and then enter the proxy credentials as prompted. For more information, see [Configure proxy settings on an OT sensor](../connect-sensors.md). + 1. Select the interface. + 1. Select the existing status. + 1. Select **Unused**. The **Sensor Config** screen updates. - The default configuration is without a proxy. +1. After configuring all of the interfaces, select **Save**. -1. 
The configuration process starts running, reboots, and then prompts you to sign in again. For example: - :::image type="content" source="../media/install-software-ot-sensor/final-cli-sign-in.png" alt-text="Screenshot of the final sign-in prompt at the end of the initial CLI configuration."::: +### Automatic backup folder location -At this point, open a browser to the IP address you'd defined for your sensor and continue the setup in the browser. For more information, see [Activate your OT sensor](#activate-your-ot-sensor). +The sensor automatically creates a backup folder. To change the location of the mounted backups, you must: +1. Sign in to the sensor using the **admin** user. +1. Type the following command in the CLI interface: `system backup path`, and then add the path location, for example `/opt/sensor/backup`. +1. The backup runs automatically and might take up to one minute. > [!NOTE] > During initial setup, options for ERSPAN monitoring ports are available only in the browser-based procedure. At this point, open a browser to the IP address you'd defined for your sensor an > [!div class="step-by-step"] > [« Validate an OT sensor software installation](post-install-validation-ot-software.md)- > [!div class="step-by-step"] > [Configure proxy settings on an OT sensor »](../connect-sensors.md) |
defender-for-iot | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md | Features released earlier than nine months ago are described in the [What's new > [!NOTE] > Noted features listed below are in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.+> [!INCLUDE [defender-iot-defender-reference](../includes/defender-for-iot-defender-reference.md)] +## October 2024 ++|Service area |Updates | +||| +| **OT networks** | - [Add wildcards to allowlist domain names](#add-wildcards-to-allowlist-domain-names)<br> - [Added protocol](#added-protocol) <br> - [Improved OT sensor onboarding](#improved-ot-sensor-onboarding) | ++### Add wildcards to allowlist domain names ++When adding domain names to the FQDN allowlist, use the `*` wildcard to include all subdomains. For more information, see [allow internet connections on an OT network](how-to-accelerate-alert-incident-response.md#allow-internet-connections-on-an-ot-network). ++### Added protocol ++We now support the OCPI protocol. See [the updated protocol list](concept-supported-protocols.md#supported-protocols-for-ot-device-discovery). ++### Improved OT sensor onboarding ++If there are connection problems between the OT sensor and the Azure portal at the configuration stage during sensor onboarding, the process can't be completed until the connection problem is solved. ++We now support completing the configuration process without needing to solve the communication problem first, allowing you to continue onboarding your OT sensor quickly and solve the problem at a later time. For more information, see [activate your OT sensor](ot-deploy/activate-deploy-sensor.md#activate-your-ot-sensor). + ## July 2024 |Service area |Updates | |||-| **OT networks** | - [Security update](#security-update) | +| **OT networks** | - [Security update](#security-update) | ### Security update |
devtest-labs | Connect Environment Lab Virtual Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/connect-environment-lab-virtual-network.md | -Azure DevTest Labs makes it easy to create VMs in a lab with [built-in networking](devtest-lab-configure-vnet.md). It has a great deal of flexibility with the ability to [create multi-VM environments](devtest-lab-test-env.md). This article shows you how to connect VMs in an environment to the lab virtual network. One scenario where you use this feature is setting up an N-tier app with a SQL Server data tier that is connected to the lab VNet allowing test VMs in the lab to access it. +Azure DevTest Labs makes it easy to create VMs in a lab with [built-in networking](devtest-lab-configure-vnet.md). It has a great deal of flexibility with the ability to [create multi-VM environments](devtest-lab-test-env.md). This article shows you how to connect VMs in an environment to the lab virtual network. One scenario where you use this feature is setting up an N-tier app with a SQL Server data tier that is connected to the lab VNet allowing test VMs in the lab to access it. + ## Sample environment that uses lab VNet Here is a simple environment template that connects the lab's subnet. In this sample, the `DTLSubnetId` parameter represents the ID of the subnet in which the lab exists. It's assigned to: `$(LabSubnetId)`, which is automatically resolved by DevTest Labs to the ID of the lab's subnet. The **subnet** property of the **network interface** of the VM in this definition is set to `DTLSubnetId` so that it joins the same subnet. |
devtest-labs | Create Environment Service Fabric Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/create-environment-service-fabric-cluster.md | This article provides information on how to create an environment with a self-co ## Overview DevTest Labs can create self-contained test environments as defined by Azure Resource Manager templates. These environments contain both IaaS resources, like virtual machines, and PaaS resources, like Service Fabric. DevTest Labs allows you to manage virtual machines in an environment by providing commands to control the virtual machines. These commands give you the ability to start or stop a virtual machine on a schedule. Similarly, DevTest Labs can also help you manage Service Fabric clusters in an environment. You can start or stop a Service Fabric cluster in an environment either manually or via a schedule. + ## Create a Service Fabric cluster Service Fabric clusters are created using environments in DevTest Labs. Each environment is defined by an Azure Resource Manager template in a Git repository. The [public Git repository](https://github.com/Azure/azure-devtestlab/tree/master/Environments/) for DevTest Labs contains the Resource Manager template to create a Service Fabric cluster in the [ServiceFabric-Cluster](https://github.com/Azure/azure-devtestlab/tree/master/Environments/ServiceFabric-LabCluster) folder. |
devtest-labs | Deploy Nested Template Environments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/deploy-nested-template-environments.md | A nested deployment runs secondary Azure Resource Manager (ARM) templates from w Decomposing a deployment into a set of targeted, purpose-specific templates provides testing, reuse, and readability benefits. For general information about nested templates, including code samples, see [Using linked and nested templates when deploying Azure resources](../azure-resource-manager/templates/linked-templates.md). ++ ## Deploy nested templates with Visual Studio The Azure Resource Group project template in Visual Studio makes it easy to develop and debug ARM templates. When you add a nested template to the main *azuredeploy.json* template file, Visual Studio adds the following items to make the template more flexible: |
devtest-labs | Devtest Lab Create Environment From Arm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-create-environment-from-arm.md | You can configure Azure DevTest Labs to use ARM templates from a public or priva :::image type="content" source="./media/devtest-lab-create-environment-from-arm/devtest-labs-create-environment-with-arm.png" alt-text="Diagram that shows how to create an environment with DevTest Labs by using an ARM template in a template repository." border="false" lightbox="./media/devtest-lab-create-environment-from-arm/devtest-labs-create-environment-with-arm.png"::: + ## Prerequisites - It's helpful to have experience configuring lab environments in DevTest Labs. If you're new to working with labs, start by reviewing the instructions in the [Configure public environment settings](#configure-public-environment-settings) section. You need to understand how to configure template repositories, enable or disable public environments, and select templates to create labs. |
devtest-labs | Devtest Lab Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-overview.md | Title: What is Azure DevTest Labs? -description: Learn how DevTest Labs makes it easy to create, manage, and monitor Azure virtual machines and environments. +description: Learn how DevTest Labs makes it easy to create, manage, and monitor Azure virtual machines. -[Azure DevTest Labs](https://azure.microsoft.com/services/devtest-lab) is a service for easily creating, using, and managing infrastructure-as-a-service (IaaS) virtual machines (VMs) and platform-as-a-service (PaaS) environments in labs. Labs offer preconfigured bases and artifacts for creating VMs, and Azure Resource Manager (ARM) templates for creating environments like Azure Web Apps or SharePoint farms. +[Azure DevTest Labs](https://azure.microsoft.com/services/devtest-lab) is a service for easily creating, using, and managing infrastructure-as-a-service (IaaS) virtual machines (VMs) in labs. Labs offer preconfigured bases and artifacts for creating VMs. -Lab owners can create preconfigured VMs that have tools and software lab users need. Lab users can claim preconfigured VMs, or create and configure their own VMs and environments. Lab policies and other methods track and control lab usage and costs. +Lab owners can create preconfigured VMs that have tools and software lab users need. Lab users can claim preconfigured VMs, or create and configure their own VMs. Lab policies and other methods track and control lab usage and costs. ### Common DevTest Labs scenarios -Common [DevTest Labs scenarios](devtest-lab-guidance-get-started.md) include development VMs, test environments, and classroom or training labs. DevTest Labs promotes efficiency, consistency, and cost control by keeping all resource usage within the lab context. +Common [DevTest Labs scenarios](devtest-lab-guidance-get-started.md) include VMs for development, testing, and classroom or training labs. DevTest Labs promotes efficiency, consistency, and cost control by keeping all resource usage within the lab context. ## Custom VM bases, artifacts, and templates -DevTest Labs can use custom images, formulas, artifacts, and templates to create and manage labs, VMs, and environments. The [DevTest Labs public GitHub repository](https://github.com/Azure/azure-devtestlab) has many ready-to-use VM artifacts and ARM templates for creating labs, environments, or sandbox resource groups. Lab owners can also create [custom images](devtest-lab-create-custom-image-from-vm-using-portal.md), [formulas](devtest-lab-manage-formulas.md), and ARM templates to use for creating and managing labs, [VMs](devtest-lab-use-resource-manager-template.md#view-edit-and-save-arm-templates-for-vms), and [environments](devtest-lab-create-environment-from-arm.md). +DevTest Labs can use custom images, formulas, artifacts, and templates to create and manage labs, and VMs. The [DevTest Labs public GitHub repository](https://github.com/Azure/azure-devtestlab) has many ready-to-use VM artifacts and ARM templates for creating labs, or sandbox resource groups. Lab owners can also create [custom images](devtest-lab-create-custom-image-from-vm-using-portal.md), [formulas](devtest-lab-manage-formulas.md), and ARM templates to use for creating and managing labs, [VMs](devtest-lab-use-resource-manager-template.md#view-edit-and-save-arm-templates-for-vms). 
Lab owners can store artifacts and ARM templates in private Git repositories, and connect the [artifact repositories](add-artifact-repository.md) and [template repositories](devtest-lab-use-resource-manager-template.md#add-template-repositories-to-labs) to their labs so lab users can access them directly from the Azure portal. Add the same repositories to multiple labs in your organization to promote consistency, reuse, and sharing. ## Development, test, and training scenarios -DevTest Labs users can quickly and easily create [IaaS VMs](devtest-lab-add-vm.md) and [PaaS environments](devtest-lab-create-environment-from-arm.md) from preconfigured bases, artifacts, and templates. Developers, testers, and trainers can: +DevTest Labs users can quickly and easily create [IaaS VMs](devtest-lab-add-vm.md) from preconfigured bases, artifacts, and templates. Developers, testers, and trainers can: - Create Windows and Linux training and demo environments, or sandbox resource groups for exploring Azure, by using reusable ARM templates and artifacts.-- Test app versions and scale up load testing by creating multiple test agents and environments.-- Create development or testing environments from [continuous integration and deployment (CI/CD)](devtest-lab-integrate-ci-cd.md) tools, integrated development environments (IDEs), or automated release pipelines. Integrate deployment pipelines with DevTest Labs to create environments on demand.-- Use the [Azure CLI](devtest-lab-vmcli.md) command-line tool to manage VMs and environments.+- Test app versions and scale up load testing by creating multiple test agents. +- Use the [Azure CLI](devtest-lab-vmcli.md) command-line tool to manage VMs. ## Lab policies and procedures to control costs |
devtest-labs | Environment Security Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/environment-security-alerts.md | As a lab user, you can view Microsoft Defender for Cloud alerts for your labs. D [Learn more about security alerts in Microsoft Defender for Cloud](../security-center//security-center-alerts-overview.md). ++ ## Prerequisites Currently, you can view security alerts only for platform as a service (PaaS) environments deployed into your lab. To test or use this feature, [deploy an environment into your lab](devtest-lab-create-environment-from-arm.md). |
devtest-labs | Use Managed Identities Environments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/use-managed-identities-environments.md | -As a lab owner, you can use a managed identity to deploy environments in a lab. This feature helps in scenarios where the environment contains or has references to Azure resources that are outside the environmentΓÇÖs resource group. These resources include key vaults, shared image galleries, and networks. Managed identities enable creation of sandbox environments that aren't limited to the resource group of that environment. +As a lab owner, you can use a managed identity to deploy environments in a lab. This feature helps in scenarios where the environment contains or has references to Azure resources that are outside the environment's resource group. These resources include key vaults, shared image galleries, and networks. Managed identities enable creation of sandbox environments that aren't limited to the resource group of that environment. -By default, when you create an environment, the lab creates a system-assigned identity while deploying the Azure Resource Manager template (ARM template). The system-assigned identity accesses Azure resources and services on a lab userΓÇÖs behalf. DevTest Labs creates a system-assigned identity by default the first time it creates the lab environment. Learn more about [why a lab creates a system-assigned identity](configure-lab-identity.md#scenarios-for-using-labs-system-assigned-identity). +By default, when you create an environment, the lab creates a system-assigned identity while deploying the Azure Resource Manager template (ARM template). The system-assigned identity accesses Azure resources and services on a lab user's behalf. DevTest Labs creates a system-assigned identity by default the first time it creates the lab environment. Learn more about [why a lab creates a system-assigned identity](configure-lab-identity.md#scenarios-for-using-labs-system-assigned-identity). -As a lab owner, you can choose to grant the labΓÇÖs system-assigned identity permissions to access Azure resources outside the lab. You can also use your user-assigned identity for the scenario. The labΓÇÖs system-assigned identity is valid only for the life of the lab. The system-assigned identify is deleted when you delete the lab. When you have environments in multiple labs that need to use an identity, consider using a user-assigned identity. +As a lab owner, you can choose to grant the lab's system-assigned identity permissions to access Azure resources outside the lab. You can also use your user-assigned identity for the scenario. The lab's system-assigned identity is valid only for the life of the lab. The system-assigned identity is deleted when you delete the lab. When you have environments in multiple labs that need to use an identity, consider using a user-assigned identity. -> [!NOTE] +> [!IMPORTANT] > Currently, a single user-assigned identity is supported per lab. ++ ## Prerequisites - [Create, list, delete, or assign a role to a user-assigned managed identity using the Azure portal](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md). |
devtest-labs | Use Paas Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/use-paas-services.md | The following image shows a SharePoint farm created as an environment in a lab. ![Screenshot of a SharePoint environment in a lab.](media/use-paas-services/environments.png) + ## PaaS scenarios DevTest Labs PaaS environments support the following scenarios: |
digital-twins | Concepts Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-security.md | You'll need to complete the following steps to enable the trusted Microsoft serv Similarly, you can use private access endpoints for your Azure Digital Twins instance to allow clients located in your virtual network to have secure REST API access to the instance over Private Link. Configuring a private access endpoint for your Azure Digital Twins instance enables you to secure your Azure Digital Twins instance and eliminate public exposure. Additionally, it helps avoid data exfiltration from your [Azure Virtual Network (VNet)](../virtual-network/virtual-networks-overview.md). -The private access endpoint uses an IP address from your Azure VNet address space. Network traffic between a client on your private network and the Azure Digital Twins instance traverses over the VNet and a Private Link on the Microsoft backbone network, eliminating exposure to the public internet. Here's a visual representation of this system: +The private access endpoint uses an IP address from your Azure VNet address space. Network traffic between a client on your private network and the Azure Digital Twins instance traverses over the VNet and a Private Link on the Microsoft backbone network, eliminating exposure to the public internet. :::image type="content" source="media/concepts-security/private-link.png" alt-text="Diagram showing a network that is a protected VNET with no public cloud access, connecting through Private Link to an Azure Digital Twins instance."::: |
event-hubs | Event Hubs Kafka Connect Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-kafka-connect-tutorial.md | consumer.security.protocol=SASL_SSL consumer.sasl.mechanism=PLAIN consumer.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="{YOUR.EVENTHUBS.CONNECTION.STRING}"; -plugin.path={KAFKA.DIRECTORY}/libs # path to the libs directory within the Kafka release +# path to the libs directory within the Kafka release +plugin.path={KAFKA.DIRECTORY}/libs ``` > [!IMPORTANT] |
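After editing `connect-distributed.properties` as shown in the entry above, the worker is typically started with Kafka's stock launcher. The following is a hedged sketch only: `{KAFKA.DIRECTORY}` and the properties-file path are placeholders, and port 8083 is the default Kafka Connect REST port.

```bash
# Start a Kafka Connect worker in distributed mode using the edited properties file
cd {KAFKA.DIRECTORY}
bin/connect-distributed.sh ./config/connect-distributed.properties

# From another shell, confirm the worker is up and that plugin.path was picked up
curl -s http://localhost:8083/connector-plugins
```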
expressroute | Expressroute Howto Linkvnet Portal Resource Manager | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-linkvnet-portal-resource-manager.md | -zone_pivot_groups: expressroute-experience # Connect a virtual network to ExpressRoute circuits using the Azure portal zone_pivot_groups: expressroute-experience > * [PowerShell](expressroute-howto-linkvnet-arm.md) > * [Azure CLI](expressroute-howto-linkvnet-cli.md) > * [PowerShell (classic)](expressroute-howto-linkvnet-classic.md)-> This article helps you create a connection to link a virtual network (virtual network) to Azure ExpressRoute circuits using the Azure portal. The virtual networks that you connect to your Azure ExpressRoute circuit can either be in the same subscription or part of another subscription. This article helps you create a connection to link a virtual network (virtual ne > [!NOTE] > BGP configuration information will not appear if the layer 3 provider configured your peerings. If your circuit is in a provisioned state, you should be able to create connections.-> ### To create a connection --1. Sign in to the Azure portal with this [Preview link](https://aka.ms/expressrouteguidedportal). This link is required to access the new preview connection create experience to an ExpressRoute circuit. -+1. Sign in to the [Azure portal](https://portal.azure.com). 2. Ensure that your ExpressRoute circuit and Azure private peering have been configured successfully. Follow the instructions in [Create an ExpressRoute circuit](expressroute-howto-circuit-arm.md) and [Create and modify peering for an ExpressRoute circuit](expressroute-howto-routing-arm.md). Your ExpressRoute circuit should look like the following image: This article helps you create a connection to link a virtual network (virtual ne :::image type="content" source="./media/expressroute-howto-linkvnet-portal-resource-manager/add-connection.png" alt-text="Add connection screenshot"::: - 4. Select the **Connection type** as **ExpressRoute** and then select **Next: Settings >**. - :::image type="content" source="./media/expressroute-howto-linkvnet-portal-resource-manager/create-connection-basic-new.png" alt-text="Screenshot of create a connection basic page."::: + :::image type="content" source="./media/expressroute-howto-linkvnet-portal-resource-manager/create-connection-basic.png" alt-text="Screenshot of create a connection basic page."::: 5. Select the resiliency type for your connection. You can choose **Maximum resiliency** or **Standard resiliency**. This article helps you create a connection to link a virtual network (virtual ne :::image type="content" source="./media/expressroute-howto-linkvnet-portal-resource-manager/connection-object.png" alt-text="Screenshot of a created connection resource."::: ---4. Enter a name for the connection and then select **Next: Settings >**. -- :::image type="content" source="./media/expressroute-howto-linkvnet-portal-resource-manager/create-connection-basic.png" alt-text="Create connection basic page"::: --5. Select the gateway that belongs to the virtual network that you want to link to the circuit and select **Review + create**. Then select **Create** after validation completes. -- :::image type="content" source="./media/expressroute-howto-linkvnet-portal-resource-manager/create-connection-settings.png" alt-text="Create connection settings page"::: --6. After your connection has been successfully configured, your connection object will show the information for the connection. 
-- :::image type="content" source="./media/expressroute-howto-linkvnet-portal-resource-manager/connection-object.png" alt-text="Connection object screenshot"::: - ## Connect a virtual network to a circuit - different subscription You can share an ExpressRoute circuit across multiple subscriptions. The following figure shows a simple schematic of how sharing works for ExpressRoute circuits across multiple subscriptions. If you want to delete the connection but retain the authorization key, you can d > [!NOTE] > To view your Gateway connections, go to your ExpressRoute circuit in Azure portal. From there, navigate to *Connections* underneath *Settings* for your ExpressRoute circuit. This will show you each ExpressRoute gateway that your circuit is connected to. If the gateway is under a different subscription than the circuit, the *Peer* field will display the circuit authorization key. -- :::image type="content" source="./media/expressroute-howto-linkvnet-portal-resource-manager/delete-connection-owning-circuit.png" alt-text="Delete connection owning circuit"::: ### Circuit user operations The circuit user needs the resource ID and an authorization key from the circuit :::image type="content" source="./media/expressroute-howto-linkvnet-portal-resource-manager/create-new-resources.png" alt-text="Create new resources"::: -1. Make sure the *Connection type* is set to **ExpressRoute**. Select the *Resource group* and *Location*, then select **OK** in the Basics page. +1. In the **Basics** page, make sure the *Connection type* is set to **ExpressRoute**. Select the *Resource group*, and then select **Next: Settings>**. - > [!NOTE] - > The location *must* match the virtual network gateway location you're creating the connection for. :::image type="content" source="./media/expressroute-howto-linkvnet-portal-resource-manager/connection-basics.png" alt-text="Basics page"::: -1. In the **Settings** page, Select the *Virtual network gateway* and check the **Redeem authorization** check box. Enter the *Authorization key* and the *Peer circuit URI* and give the connection a name. Select **OK**. +1. In the **Settings** page, select **High Resiliency** or **Standard Resiliency**, and then select the *Virtual network gateway*. Check the **Redeem authorization** check box. Enter the *Authorization key* and the *Peer circuit URI* and give the connection a name. > [!NOTE]- > The *Peer Circuit URI* is the Resource ID of the ExpressRoute circuit (which you can find under the Properties Setting pane of the ExpressRoute Circuit). + > - Connecting to circuits in a different subscription isn't supported for Maximum Resiliency. + > - You can connect a virtual network to a Metro circuit in a different subscription when choosing High Resiliency. + > - You can connect a virtual network to a regular (non-metro) circuit in a different subscription when choosing Standard Resiliency. + > - The *Peer Circuit URI* is the Resource ID of the ExpressRoute circuit (which you can find under the Properties Setting pane of the ExpressRoute Circuit). :::image type="content" source="./media/expressroute-howto-linkvnet-portal-resource-manager/connection-settings.png" alt-text="Settings page"::: -1. Review the information in the **Summary** page and select **OK**. +1. Select **Review + create**. ++1. Review the information in the **Summary** page and select **Create**. 
:::image type="content" source="./media/expressroute-howto-linkvnet-portal-resource-manager/connection-summary.png" alt-text="Summary page"::: ## Configure ExpressRoute FastPath -You can enable [ExpressRoute FastPath](expressroute-about-virtual-network-gateways.md) if your virtual network gateway is Ultra Performance or ErGw3AZ. FastPath improves data path performance such as packets per second and connections per second between your on-premises network and your virtual network. -+[FastPath](expressroute-about-virtual-network-gateways.md) improves data path performance such as packets per second and connections per second between your on-premises network and your virtual network. -**Configure FastPath on a new connection** +### Configure FastPath on a new connection When adding a new connection for your ExpressRoute gateway, select the checkbox for **FastPath**. When adding a new connection for your ExpressRoute gateway, select the checkbox > [!NOTE] > Enabling FastPath for a new connection is only available through creating a connection from the gateway resource. New connections created from the ExpressRoute circuit or from the Connection resource page is not supported.-> -**Configure FastPath on an existing connection** ++### Configure FastPath on an existing connection 1. Go to the existing connection resource either from the ExpressRoute gateway, the ExpressRoute circuit, or the Connection resource page. When adding a new connection for your ExpressRoute gateway, select the checkbox > [!NOTE] > You can use [Connection Monitor](how-to-configure-connection-monitor.md) to verify that your traffic is reaching the destination using FastPath.-> ## Enroll in ExpressRoute FastPath features (preview) You can delete a connection and unlink your virtual network to an ExpressRoute c :::image type="content" source="./media/expressroute-howto-linkvnet-portal-resource-manager/delete-connection.png" alt-text="Delete connection"::: -## Next steps +## Next step In this tutorial, you learned how to connect a virtual network to a circuit in the same subscription and in a different subscription. For more information about ExpressRoute gateways, see: [ExpressRoute virtual network gateways](expressroute-about-virtual-network-gateways.md). |
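The portal steps in the entry above (redeeming an authorization and supplying the peer circuit URI) can also be expressed with Azure CLI. This is a hedged sketch, not the article's procedure; the connection, gateway, and resource group names, the circuit resource ID, and the authorization key are placeholders.

```azurecli
# Link an ExpressRoute virtual network gateway to a circuit in another subscription
# by redeeming an authorization key (all values below are placeholders)
az network vpn-connection create \
  --resource-group myRg \
  --name myErConnection \
  --vnet-gateway1 myErGateway \
  --express-route-circuit2 "/subscriptions/<circuit-subscription-id>/resourceGroups/circuitRg/providers/Microsoft.Network/expressRouteCircuits/myCircuit" \
  --authorization-key "<authorization-key>"
```

The circuit resource ID here plays the same role as the *Peer circuit URI* field in the portal flow.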
firewall | Dns Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/dns-settings.md | If all DNS servers are unavailable, there's no fallback to another DNS server. DNS proxy performs five-second health check loops for as long as the upstream servers report as unhealthy. The health checks are a recursive DNS query to the root name server. Once an upstream server is considered healthy, the firewall stops health checks until the next error. When a healthy proxy returns an error, the firewall selects another DNS server in the list. +## Azure Firewall with Azure Private DNS Zones ++When you use an Azure Private DNS zone with Azure Firewall, make sure you don't create domain mappings that override the default domain names of the storage accounts and other endpoints created by Microsoft. If you override the default domain names, it breaks Azure Firewall management traffic access to Azure storage accounts and other endpoints. This breaks firewall updates, logging, and/or monitoring. ++For example, firewall management traffic requires access to the storage account with the domain name blob.core.windows.net and the firewall relies on Azure DNS for FQDN to IP address resolutions. ++Don't create a Private DNS Zone with the domain name `*.blob.core.windows.net` and associate it with the Azure Firewall virtual network. If you override the default domain names, all the DNS queries are directed to the private DNS zone, and this breaks firewall operations. Instead, create a unique domain name such as `*.<unique-domain-name>.blob.core.windows.net` for the private DNS zone. ++Alternatively, you can enable a private link for a storage account and integrate it with a private DNS zone, see [Inspect private endpoint traffic with Azure Firewall](../private-link/tutorial-inspect-traffic-azure-firewall.md). + ## Next steps - [Azure Firewall DNS Proxy details](dns-details.md) |
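As a hedged illustration of the naming guidance in the entry above (the zone, resource group, and VNet names are placeholder assumptions), a private DNS zone under a unique domain name can be created and linked to the firewall's virtual network without overriding the default `blob.core.windows.net` resolution:

```azurecli
# Create a private DNS zone under a unique domain name rather than overriding
# the default blob.core.windows.net domain used by firewall management traffic
az network private-dns zone create \
  --resource-group myRg \
  --name "contososhared.blob.core.windows.net"

# Link the zone to the firewall's virtual network (no auto-registration)
az network private-dns link vnet create \
  --resource-group myRg \
  --zone-name "contososhared.blob.core.windows.net" \
  --name firewallVnetLink \
  --virtual-network myFirewallVnet \
  --registration-enabled false
```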
firewall | Firewall Known Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-known-issues.md | Azure Firewall Standard has the following known issues: | Error encountered when creating more than 2,000 rule collections. | The maximal number of NAT/Application or Network rule collections is 2000 (Resource Manager limit). | This is a current limitation. | |XFF header in HTTP/S|XFF headers are overwritten with the original source IP address as seen by the firewall. This is applicable for the following use cases:<br>- HTTP requests<br>- HTTPS requests with TLS termination|A fix is being investigated.| |Can't deploy Firewall with Availability Zones with a newly created Public IP address|When you deploy a Firewall with Availability Zones, you can't use a newly created Public IP address.|First create a new zone redundant Public IP address, then assign this previously created IP address during the Firewall deployment.|-|Azure private DNS zone isn't supported with Azure Firewall|Azure private DNS zone doesn't work with Azure Firewall regardless of Azure Firewall DNS settings.|To achieve the desire state of using a private DNS server, use Azure Firewall DNS proxy instead of an Azure private DNS zone.| |Physical zone 2 in Japan East is unavailable for firewall deployments.|You can't deploy a new firewall with physical zone 2. Additionally, if you stop an existing firewall that is deployed in physical zone 2, it can't be restarted. For more information, see [Physical and logical availability zones](../reliability/availability-zones-overview.md#physical-and-logical-availability-zones).|For new firewalls, deploy with the remaining availability zones or use a different region. To configure an existing firewall, see [How can I configure availability zones after deployment?](firewall-faq.yml#how-can-i-configure-availability-zones-after-deployment). ## Azure Firewall Premium |
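For the availability-zones known issue above, the workaround of pre-creating a zone-redundant public IP might look like the following Azure CLI sketch; the resource group and IP names are placeholders, and a Standard SKU static address is assumed because Azure Firewall requires one.

```azurecli
# Create a zone-redundant Standard public IP first, then reference it
# when deploying the firewall with availability zones
az network public-ip create \
  --resource-group myRg \
  --name myFirewallPip \
  --sku Standard \
  --allocation-method Static \
  --zone 1 2 3
```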
firewall | Protect Azure Kubernetes Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/protect-azure-kubernetes-service.md | You can now start exposing services and deploying applications to this cluster. ![Public Service DNAT](~/reusable-content/ce-skilling/azure/media/aks/aks-create-svc.png) -Deploy the Azure voting app application by copying the following yaml to a file named `example.yaml`. --```yaml -# voting-storage-deployment.yaml -apiVersion: apps/v1 -kind: Deployment -metadata: - name: voting-storage -spec: - replicas: 1 - selector: - matchLabels: - app: voting-storage - template: - metadata: - labels: - app: voting-storage - spec: - containers: - - name: voting-storage - image: mcr.microsoft.com/azuredocs/voting/storage:2.0 - args: ["--ignore-db-dir=lost+found"] - resources: - requests: - cpu: 100m - memory: 128Mi - limits: - cpu: 250m - memory: 256Mi - ports: - - containerPort: 3306 - name: mysql - volumeMounts: - - name: mysql-persistent-storage - mountPath: /var/lib/mysql - env: - - name: MYSQL_ROOT_PASSWORD - valueFrom: - secretKeyRef: - name: voting-storage-secret - key: MYSQL_ROOT_PASSWORD - - name: MYSQL_USER - valueFrom: - secretKeyRef: - name: voting-storage-secret - key: MYSQL_USER - - name: MYSQL_PASSWORD - valueFrom: - secretKeyRef: - name: voting-storage-secret - key: MYSQL_PASSWORD - - name: MYSQL_DATABASE - valueFrom: - secretKeyRef: - name: voting-storage-secret - key: MYSQL_DATABASE - volumes: - - name: mysql-persistent-storage - persistentVolumeClaim: - claimName: mysql-pv-claim --# voting-storage-secret.yaml -apiVersion: v1 -kind: Secret -metadata: - name: voting-storage-secret -type: Opaque -data: - MYSQL_USER: ZGJ1c2Vy - MYSQL_PASSWORD: UGFzc3dvcmQxMg== - MYSQL_DATABASE: YXp1cmV2b3Rl - MYSQL_ROOT_PASSWORD: UGFzc3dvcmQxMg== --# voting-storage-pv-claim.yaml -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: mysql-pv-claim -spec: - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 1Gi --# voting-storage-service.yaml -apiVersion: v1 -kind: Service -metadata: - name: voting-storage - labels: - app: voting-storage -spec: - ports: - - port: 3306 - name: mysql - selector: - app: voting-storage --# voting-app-deployment.yaml -apiVersion: apps/v1 -kind: Deployment -metadata: - name: voting-app -spec: - replicas: 1 - selector: - matchLabels: - app: voting-app - template: - metadata: - labels: - app: voting-app - spec: - containers: - - name: voting-app - image: mcr.microsoft.com/azuredocs/voting/app:2.0 - imagePullPolicy: Always - ports: - - containerPort: 8080 - name: http - env: - - name: MYSQL_HOST - value: "voting-storage" - - name: MYSQL_USER - valueFrom: - secretKeyRef: - name: voting-storage-secret - key: MYSQL_USER - - name: MYSQL_PASSWORD - valueFrom: - secretKeyRef: - name: voting-storage-secret - key: MYSQL_PASSWORD - - name: MYSQL_DATABASE - valueFrom: - secretKeyRef: - name: voting-storage-secret - key: MYSQL_DATABASE - - name: ANALYTICS_HOST - value: "voting-analytics" --# voting-app-service.yaml -apiVersion: v1 -kind: Service -metadata: - name: voting-app - labels: - app: voting-app -spec: - type: LoadBalancer - ports: - - port: 80 - targetPort: 8080 - name: http - selector: - app: voting-app --# voting-analytics-deployment.yaml -apiVersion: apps/v1 -kind: Deployment -metadata: - name: voting-analytics -spec: - replicas: 1 - selector: - matchLabels: - app: voting-analytics - version: "2.0" - template: - metadata: - labels: - app: voting-analytics - version: "2.0" - spec: - containers: - - 
name: voting-analytics - image: mcr.microsoft.com/azuredocs/voting/analytics:2.0 - imagePullPolicy: Always - ports: - - containerPort: 8080 - name: http - env: - - name: MYSQL_HOST - value: "voting-storage" - - name: MYSQL_USER - valueFrom: - secretKeyRef: - name: voting-storage-secret - key: MYSQL_USER - - name: MYSQL_PASSWORD - valueFrom: - secretKeyRef: - name: voting-storage-secret - key: MYSQL_PASSWORD - - name: MYSQL_DATABASE - valueFrom: - secretKeyRef: - name: voting-storage-secret - key: MYSQL_DATABASE --# voting-analytics-service.yaml -apiVersion: v1 -kind: Service -metadata: - name: voting-analytics - labels: - app: voting-analytics -spec: - ports: - - port: 8080 - name: http - selector: - app: voting-analytics -``` +1. Review the [AKS Store Demo quickstart](https://github.com/Azure-Samples/aks-store-demo/blob/main/aks-store-quickstart.yaml) manifest to see all the resources that will be created. -Deploy the service by running: +2. Deploy the service using the `kubectl apply` command. -```bash -kubectl apply -f example.yaml -``` + ```azurecli-interactive + kubectl apply -f https://raw.githubusercontent.com/Azure-Samples/aks-store-demo/main/aks-store-quickstart.yaml + ``` ### Add a DNAT rule to Azure Firewall kubectl get services The IP address needed is listed in the EXTERNAL-IP column, similar to the following. ```bash-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -kubernetes ClusterIP 10.41.0.1 <none> 443/TCP 10h -voting-analytics ClusterIP 10.41.88.129 <none> 8080/TCP 9m -voting-app LoadBalancer 10.41.185.82 20.39.18.6 80:32718/TCP 9m -voting-storage ClusterIP 10.41.221.201 <none> 3306/TCP 9m +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +kubernetes ClusterIP 10.41.0.1 <none> 443/TCP 10h +store-front LoadBalancer 10.41.185.82 203.0.113.254 80:32718/TCP 9m +order-service ClusterIP 10.0.104.144 <none> 3000/TCP 11s +product-service ClusterIP 10.0.237.60 <none> 3002/TCP 10s +rabbitmq ClusterIP 10.0.161.128 <none> 5672/TCP,15672/TCP 11s ``` Get the service IP by running: ```bash-SERVICE_IP=$(kubectl get svc voting-app -o jsonpath='{.status.loadBalancer.ingress[*].ip}') +SERVICE_IP=$(kubectl get svc store-front -o jsonpath='{.status.loadBalancer.ingress[*].ip}') ``` Add the NAT rule by running: az network firewall nat-rule create --collection-name exampleset --destination-a Navigate to the Azure Firewall frontend IP address in a browser to validate connectivity. -You should see the AKS voting app. In this example, the Firewall public IP was `203.0.113.32`. +You should see the AKS store app. In this example, the Firewall public IP was `203.0.113.32`. + +On this page, you can view products, add them to your cart, and then place an order. ## Clean up resources |
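The entry above truncates the `az network firewall nat-rule create` command, so the following is only a hedged reconstruction of what such a DNAT rule could look like, not the article's exact command. The firewall name, resource group, rule name, and priority are assumptions; it reuses the `exampleset` collection and the `203.0.113.32` firewall public IP mentioned above, and it assumes the `azure-firewall` CLI extension is installed.

```bash
# Capture the internal IP of the store-front service exposed earlier
SERVICE_IP=$(kubectl get svc store-front -o jsonpath='{.status.loadBalancer.ingress[*].ip}')

# DNAT rule: forward TCP/80 traffic hitting the firewall public IP to the service IP
az network firewall nat-rule create \
  --resource-group myRg \
  --firewall-name myFirewall \
  --collection-name exampleset \
  --name store-front-dnat \
  --priority 100 \
  --action Dnat \
  --protocols TCP \
  --source-addresses '*' \
  --destination-addresses 203.0.113.32 \
  --destination-ports 80 \
  --translated-address "$SERVICE_IP" \
  --translated-port 80
```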
frontdoor | Front Door Route Matching | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-route-matching.md | The decision of how to process the request depends on whether caching is enabled ## Route matching -This section focuses on how Front Door matches to a routing rule. The basic concept is that Front Door always matches to the **most-specific request** looking only at the "left-hand side". Front Door first match based on protocol, then domain, and last the path. +This section focuses on how Front Door matches to a routing rule. The basic concept is that Front Door always matches to the **most-specific request** looking only at the "left-hand side". Front Door first matches based on protocol, then domain, and last the path. ### Frontend host matching |
governance | Policy For Kubernetes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/policy-for-kubernetes.md | Title: Learn Azure Policy for Kubernetes description: Learn how Azure Policy uses Rego and Open Policy Agent to manage clusters running Kubernetes in Azure or on-premises. Previously updated : 06/20/2024 Last updated : 09/30/2024 Azure Policy for Kubernetes also support custom definition creation at the compo With a [Resource Provider mode](./definition-structure.md#resource-provider-modes) of `Microsoft.Kubernetes.Data`, the effects [audit](./effects.md#audit), [deny](./effects.md#deny), [disabled](./effects.md#disabled), and [mutate](./effects.md#mutate-preview) are used to manage your Kubernetes clusters. -_Audit_ and _deny_ must provide **details** properties +_Audit_ and _deny_ must provide `details` properties specific to working with [OPA Constraint Framework](https://github.com/open-policy-agent/frameworks/tree/master/constraint) and Gatekeeper v3. Some other considerations: - If the cluster subscription is registered with Microsoft Defender for Cloud, then Microsoft Defender for Cloud Kubernetes policies are applied on the cluster automatically. -- When a deny policy is applied on cluster with existing Kubernetes resources, any pre-existing+- When a deny policy is applied on cluster with existing Kubernetes resources, any preexisting resource that isn't compliant with the new policy continues to run. When the non-compliant resource gets rescheduled on a different node the Gatekeeper blocks the resource creation. - When a cluster has a deny policy that validates resources, the user doesn't get a rejection message when creating a deployment. For example, consider a Kubernetes deployment that contains- replicasets and pods. When a user executes `kubectl describe deployment $MY_DEPLOYMENT`, it doesn't return a rejection message as part of events. However, + `replicasets` and pods. When a user executes `kubectl describe deployment $MY_DEPLOYMENT`, it doesn't return a rejection message as part of events. However, `kubectl describe replicasets.apps $MY_DEPLOYMENT` returns the events associated with rejection. > [!NOTE] Finally, to identify the AKS cluster version that you're using, follow the linke ### Add-on versions available per each AKS cluster version #### 1.7.0-Introducing expansion, a shift left feature that lets you know up front whether your workload resources (Deployments, ReplicaSets, Jobs, etc.) will produce admissible pods. Expansion shouldn't change the behavior of your policies; rather, it just shifts Gatekeeper's evaluation of pod-scoped policies to occur at workload admission time rather than pod admission time. However, to perform this evaluation it must generate and evaluate a what-if pod that is based on the pod spec defined in the workload, which may have incomplete metadata. For instance, the what-if pod will not contain the proper owner references. Because of this small risk of policy behavior changing, we're introducing expansion as disabled by default. To enable expansion for a given policy definition, set `.policyRule.then.details.source` to `All`. Built-ins will be updated soon to enable parameterization of this field. If you test your policy definition and find that the what-if pod being generated for evaluation purposes is incomplete, you can also use a mutation with source `Generated` to mutate the what-if pods. 
For more information on this option, view the [Gatekeeper documentation](https://open-policy-agent.github.io/gatekeeper/website/docs/expansion#mutating-example). ++Introducing expansion, a shift left feature that lets you know up front whether your workload resources (Deployments, ReplicaSets, Jobs, etc.) will produce admissible pods. Expansion shouldn't change the behavior of your policies; rather, it just shifts Gatekeeper's evaluation of pod-scoped policies to occur at workload admission time rather than pod admission time. However, to perform this evaluation it must generate and evaluate a what-if pod that is based on the pod spec defined in the workload, which might have incomplete metadata. For instance, the what-if pod won't contain the proper owner references. Because of this small risk of policy behavior changing, we're introducing expansion as disabled by default. To enable expansion for a given policy definition, set `.policyRule.then.details.source` to `All`. Built-ins will be updated soon to enable parameterization of this field. If you test your policy definition and find that the what-if pod being generated for evaluation purposes is incomplete, you can also use a mutation with source `Generated` to mutate the what-if pods. For more information on this option, view the [Gatekeeper documentation](https://open-policy-agent.github.io/gatekeeper/website/docs/expansion#mutating-example). Security improvements. - Released July 2024 Security improvements. - Gatekeeper 3.16.3 #### 1.6.1+ Security improvements. - Released May 2024 - Gatekeeper 3.14.2 #### 1.5.0+ Security improvements. - Released May 2024 - Kubernetes 1.27+ aligns with how the add-on was installed: _azure-policy_ to false: ```json- "addons": [{ + "addons": [ + { "name": "azure-policy", "enabled": false- }] + } + ] ``` For more information, see aligns with how the add-on was installed: - Using the `metadata.gatekeeper.sh/requires-sync-data` annotation in a constraint template to configure the [replication of data](https://open-policy-agent.github.io/gatekeeper/website/docs/sync) from your cluster into the OPA cache is currently only allowed for built-in policies. The reason is because it can dramatically increase the Gatekeeper pods resource usage if not used carefully. ### Configuring the Gatekeeper Config-Changing the Gatekeeper config is unsupported, as it contains critical security settings. Edits to the config will be reconciled. ++Changing the Gatekeeper config is unsupported, as it contains critical security settings. Edits to the config are reconciled. ### Using data.inventory in constraint templates-Currently, several built-in policies make use of [data replication](https://open-policy-agent.github.io/gatekeeper/website/docs/sync), which enables users to sync existing on-cluster resources to the OPA cache and reference them during evaluation of an AdmissionReview request. Data replication policies can be differentiated by the presence of `data.inventory` in the Rego, as well as the presence of the `metadata.gatekeeper.sh/requires-sync-data` annotation, which informs the Azure Policy addon what resources need to be cached for policy evaluation to work properly. (Note that this differs from standalone Gatekeeper, where this annotation is descriptive, not prescriptive.) -Data replication is currently blocked for use in custom policy definitions, because replicating resources with high instance counts can dramatically increase the Gatekeeper pods\' resource usage if not used carefully. 
You will see a `ConstraintTemplateInstallFailed` error when attempting to create a custom policy definition containing a constraint template with this annotation. -> Removing the annotation may appear to mitigate the error you see, but then the policy addon will not sync any required resources for that constraint template into the cache. Thus, your policies will be evaluated against an empty `data.inventory` (assuming that no built-in is assigned that replicates the requisite resources). This will lead to misleading compliance results. As noted [previously](#configuring-the-gatekeeper-config), manually editing the config to cache the required resources is also not permitted. +Currently, several built-in policies make use of [data replication](https://open-policy-agent.github.io/gatekeeper/website/docs/sync), which enables users to sync existing on-cluster resources to the OPA cache and reference them during evaluation of an `AdmissionReview` request. Data replication policies can be differentiated by the presence of `data.inventory` in the Rego, and the presence of the `metadata.gatekeeper.sh/requires-sync-data` annotation, which informs the Azure Policy addon which resources need to be cached for policy evaluation to work properly. This process differs from standalone Gatekeeper, where this annotation is descriptive, not prescriptive. ++Data replication is currently blocked for use in custom policy definitions, because replicating resources with high instance counts can dramatically increase the Gatekeeper pods\' resource usage if not used carefully. You'll see a `ConstraintTemplateInstallFailed` error when attempting to create a custom policy definition containing a constraint template with this annotation. ++Removing the annotation may appear to mitigate the error you see, but then the policy addon will not sync any required resources for that constraint template into the cache. Thus, your policies will be evaluated against an empty `data.inventory` (assuming that no built-in is assigned that replicates the requisite resources). This will lead to misleading compliance results. As noted [previously](#configuring-the-gatekeeper-config), manually editing the config to cache the required resources is also not permitted. The following limitations apply only to the Azure Policy Add-on for AKS: - [AKS Pod security policy](/azure/aks/use-pod-security-policies) and the Azure Policy Add-on for AKS can't both be enabled. For more information, see [AKS pod security limitation](/azure/aks/use-azure-policy). The Azure Policy Add-on requires three Gatekeeper components to run: One audit p ### How much resource consumption should I expect the Azure Policy Add-on / Extension to use on each cluster? The Azure Policy for Kubernetes components that run on your cluster consume more resources as the count of Kubernetes resources and policy assignments increases in the cluster, which requires audit and enforcement operations.+ The following are estimates to help you plan:- - For fewer than 500 pods in a single cluster with a max of 20 constraints: two vCPUs and 350 MB of memory per component. - - For more than 500 pods in a single cluster with a max of 40 constraints: three vCPUs and 600 MB of memory per component. ++- For fewer than 500 pods in a single cluster with a max of 20 constraints: two vCPUs and 350 MB of memory per component. +- For more than 500 pods in a single cluster with a max of 40 constraints: three vCPUs and 600 MB of memory per component. 
### Can Azure Policy for Kubernetes definitions be applied on Windows pods? collected: - Use system node pool with `CriticalAddonsOnly` taint to schedule Gatekeeper pods. For more information, see [Using system node pools](/azure/aks/use-system-pools#system-and-user-node-pools). - Secure outbound traffic from your AKS clusters. For more information, see [Control egress traffic](/azure/aks/limit-egress-traffic) for cluster nodes.- - If the cluster has `aad-pod-identity` enabled, Node Managed Identity (NMI) pods modify the nodes' iptables to intercept calls to the Azure Instance Metadata endpoint. This configuration means any request made to the Metadata endpoint is intercepted by NMI even if the pod doesn't use `aad-pod-identity`. - - AzurePodIdentityException CRD can be configured to inform `aad-pod-identity` that any requests to the Metadata endpoint originating from a pod that matches labels defined in CRD should be proxied without any processing in NMI. The system pods with kubernetes.azure.com/managedby: aks label in kube-system namespace should be excluded in `aad-pod-identity` by configuring the AzurePodIdentityException CRD. For more information, see [Disable aad-pod-identity for a specific pod or application](https://azure.github.io/aad-pod-identity/docs/configure/application_exception). To configure an exception, install the [mic-exception YAML](https://github.com/Azure/aad-pod-identity/blob/master/deploy/infra/mic-exception.yaml). + - If the cluster has `aad-pod-identity` enabled, Node Managed Identity (NMI) pods modify the nodes `iptables` to intercept calls to the Azure Instance Metadata endpoint. This configuration means any request made to the Metadata endpoint is intercepted by NMI even if the pod doesn't use `aad-pod-identity`. + - `AzurePodIdentityException` CRD can be configured to inform `aad-pod-identity` that any requests to the Metadata endpoint originating from a pod that matches labels defined in CRD should be proxied without any processing in NMI. The system pods with `kubernetes.azure.com/managedby: aks` label in kube-system namespace should be excluded in `aad-pod-identity` by configuring the `AzurePodIdentityException` CRD. For more information, see [Disable aad-pod-identity for a specific pod or application](https://azure.github.io/aad-pod-identity/docs/configure/application_exception). To configure an exception, install the [mic-exception YAML](https://github.com/Azure/aad-pod-identity/blob/master/deploy/infra/mic-exception.yaml). ## Next steps - Review examples at [Azure Policy samples](../samples/index.md).-- Review the [Policy definition structure](definition-structure.md).-- Review [Understanding policy effects](effects.md).+- Review the [Policy definition structure](definition-structure-basics.md). +- Review [Understanding policy effects](effect-basics.md). - Understand how to [programmatically create policies](../how-to/programmatically-create.md). - Learn how to [get compliance data](../how-to/get-compliance-data.md). - Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md). |
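The entry above discusses enabling the add-on and the resource footprint of its Gatekeeper components. As a hedged sketch (the cluster and resource group names are placeholders), the add-on can be enabled with Azure CLI and the resulting pods inspected:

```bash
# Enable the Azure Policy add-on on an existing AKS cluster
az aks enable-addons --addons azure-policy --name myAksCluster --resource-group myRg

# The Gatekeeper audit and webhook pods installed by the add-on run in gatekeeper-system;
# listing them is a quick way to check the components and resource usage discussed above
kubectl get pods -n gatekeeper-system
kubectl top pods -n gatekeeper-system   # requires metrics-server (enabled by default on AKS)
```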
governance | Determine Non Compliance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/determine-non-compliance.md | Title: Determine causes of non-compliance description: When a resource is non-compliant, there are many possible reasons. Discover what caused the non-compliance with the policy. Previously updated : 08/28/2024 Last updated : 09/30/2024 |
governance | Get Compliance Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/get-compliance-data.md | Title: Get policy compliance data description: Azure Policy evaluations and effects determine compliance. Learn how to get the compliance details of your Azure resources. Previously updated : 11/03/2022 Last updated : 09/30/2024 + # Get compliance data of Azure resources -One of the largest benefits of Azure Policy is the insight and controls it provides over resources -in a subscription or [management group](../../management-groups/overview.md) of subscriptions. This -control can be used to prevent resources from being created in the -wrong location, enforce common and consistent tag usage, or audit existing resources for -appropriate configurations and settings. In all cases, data is generated by Azure Policy to enable -you to understand the compliance state of your environment. +One of the largest benefits of Azure Policy is the insight and controls it provides over resources in a subscription or [management group](../../management-groups/overview.md) of subscriptions. This control can be used to prevent resources from being created in the wrong location, enforce common and consistent tag usage, or audit existing resources for appropriate configurations and settings. In all cases, data is generated by Azure Policy to enable you to understand the compliance state of your environment. -Before reviewing compliance data, it is important to [understand compliance states](../concepts/compliance-states.md) in Azure Policy. +Before reviewing compliance data, it's important to [understand compliance states](../concepts/compliance-states.md) in Azure Policy. -There are several ways to access the compliance information generated by your policy and initiative -assignments: +There are several ways to access the compliance information generated by your policy and initiative assignments: - Using the [Azure portal](#portal) - Through [command line](#command-line) scripting - By viewing [Azure Monitor logs](#azure-monitor-logs) - Through [Azure Resource Graph](#azure-resource-graph) queries -Before looking at the methods to report on compliance, let's look at when compliance information is -updated and the frequency and events that trigger an evaluation cycle. +Before looking at the methods to report on compliance, let's look at when compliance information is updated and the frequency and events that trigger an evaluation cycle. ## Evaluation triggers -The results of a completed evaluation cycle are available in the `Microsoft.PolicyInsights` Resource -Provider through `PolicyStates` and `PolicyEvents` operations. For more information about the -operations of the Azure Policy Insights REST API, see -[Azure Policy Insights](/rest/api/policy/). +The results of a completed evaluation cycle are available in the `Microsoft.PolicyInsights` Resource Provider through `PolicyStates` and `PolicyEvents` operations. For more information about the operations of the Azure Policy Insights REST API, see [Azure Policy Insights](/rest/api/policy/). Evaluations of assigned policies and initiatives happen as the result of various events: -- A policy or initiative is newly assigned to a scope. It takes around five minutes for the assignment- to be applied to the defined scope, then the evaluation cycle begins for applicable resources against the newly assigned policy or initiative. 
Depending on the effects - used, resources are marked as compliant, non-compliant, exempt, or unknown. A - large policy or initiative evaluated against a large scope of resources can take time, so - there's no pre-defined expectation of when the evaluation cycle completes. Once it completes, - updated compliance results are available in the portal and SDKs. +- A policy or initiative is newly assigned to a scope. It takes around five minutes for the assignment to be applied to the defined scope, then the evaluation cycle begins for applicable resources against the newly assigned policy or initiative. Depending on the effects used, resources are marked as compliant, non-compliant, exempt, or unknown. A large policy or initiative evaluated against a large scope of resources can take time, so there's no predefined expectation of when the evaluation cycle completes. After it completes, updated compliance results are available in the portal and SDKs. -- A policy or initiative already assigned to a scope is updated. The evaluation cycle and timing for- this scenario is the same as for a new assignment to a scope. +- A policy or initiative already assigned to a scope is updated. The evaluation cycle and timing for this scenario is the same as for a new assignment to a scope. -- A resource is deployed to or updated within a scope with an assignment via Azure Resource Manager,- REST API, or a supported SDK. In this scenario, the effect event (append, audit, deny, deploy) and - compliant status information for the individual resource becomes available in the portal and SDKs - around 15 minutes later. This event doesn't cause an evaluation of other resources. +- A resource is deployed to or updated within a scope with an assignment via Azure Resource Manager, REST API, or a supported SDK. In this scenario, the effect event (append, audit, deny, deploy) and compliant status information for the individual resource becomes available in the portal and SDKs around 15 minutes later. This event doesn't cause an evaluation of other resources. -- A subscription (resource type `Microsoft.Resources/subscriptions`) is created or moved within a- [management group hierarchy](../../management-groups/overview.md) with an assigned policy - definition targeting the subscription resource type. Evaluation of the subscription supported - effects (audit, auditIfNotExist, deployIfNotExists, modify), logging, and any remediation actions - takes around 30 minutes. +- A subscription (resource type `Microsoft.Resources/subscriptions`) is created or moved within a [management group hierarchy](../../management-groups/overview.md) with an assigned policy definition targeting the subscription resource type. Evaluation of the subscription supported effects (audit, auditIfNotExist, deployIfNotExists, modify), logging, and any remediation actions takes around 30 minutes. -- A [policy exemption](../concepts/exemption-structure.md) is created, updated, or deleted. In this- scenario, the corresponding assignment is evaluated for the defined exemption scope. +- A [policy exemption](../concepts/exemption-structure.md) is created, updated, or deleted. In this scenario, the corresponding assignment is evaluated for the defined exemption scope. -- Standard compliance evaluation cycle. Once every 24 hours, assignments are automatically- reevaluated. A large policy or initiative of many resources can take time, so there's no - pre-defined expectation of when the evaluation cycle completes. 
Once it completes, updated - compliance results are available in the portal and SDKs. +- Standard compliance evaluation cycle. Once every 24 hours, assignments are automatically reevaluated. A large policy or initiative of many resources can take time, so there's no pre-defined expectation of when the evaluation cycle completes. Once it completes, updated compliance results are available in the portal and SDKs. -- The [machine configuration](../../machine-configuration/overview.md) resource provider is updated with- compliance details by a managed resource. +- The [machine configuration](../../machine-configuration/overview.md) resource provider is updated with compliance details by a managed resource. - On-demand scan > [!NOTE]-> By design, Azure Policy exempts all resources under the `Microsoft.Resources` resource provider (RP) from -policy evaluation with the exception of subscriptions and resource groups, which can be evaluated. +> By design, Azure Policy exempts all resources under the `Microsoft.Resources` resource provider (RP) from policy evaluation with the exception of subscriptions and resource groups, which can be evaluated. ### On-demand evaluation scan -An evaluation scan for a subscription or a resource group can be started with Azure CLI, Azure -PowerShell, a call to the REST API, or by using the -[Azure Policy Compliance Scan GitHub Action](https://github.com/marketplace/actions/azure-policy-compliance-scan). -This scan is an asynchronous process. +An evaluation scan for a subscription or a resource group can be started with Azure CLI, Azure PowerShell, a call to the REST API, or by using the [Azure Policy Compliance Scan GitHub Action](https://github.com/marketplace/actions/azure-policy-compliance-scan). This scan is an asynchronous process. > [!NOTE] > Not all Azure resource providers support on-demand evaluation scans. For example, [Azure Virtual Network Manager (AVNM)](../../../virtual-network-manager/overview.md) currently doesn't support either manual triggers or the standard policy compliance evaluation cycle (daily scans). #### On-demand evaluation scan - GitHub Action -Use the -[Azure Policy Compliance Scan action](https://github.com/marketplace/actions/azure-policy-compliance-scan) -to trigger an on-demand evaluation scan from your -[GitHub workflow](https://docs.github.com/actions/writing-workflows/about-workflows) -on one or multiple resources, resource groups, or subscriptions, and gate the workflow based on the -compliance state of resources. You can also configure the workflow to run at a scheduled time so -that you get the latest compliance status at a convenient time. Optionally, GitHub Actions can -generate a report on the compliance state of scanned resources for further analysis or for -archiving. +Use the [Azure Policy Compliance Scan action](https://github.com/marketplace/actions/azure-policy-compliance-scan) to trigger an on-demand evaluation scan from your [GitHub workflow](https://docs.github.com/actions/writing-workflows/about-workflows) on one or multiple resources, resource groups, or subscriptions, and gate the workflow based on the compliance state of resources. You can also configure the workflow to run at a scheduled time so that you get the latest compliance status at a convenient time. Optionally, GitHub Actions can generate a report on the compliance state of scanned resources for further analysis or for archiving. The following example runs a compliance scan for a subscription. 
jobs: /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx ``` -For more information and workflow samples, see the -[GitHub Actions for Azure Policy Compliance Scan repo](https://github.com/Azure/policy-compliance-scan). +For more information and workflow samples, see the [GitHub Actions for Azure Policy Compliance Scan repo](https://github.com/Azure/policy-compliance-scan). #### On-demand evaluation scan - Azure CLI -The compliance scan is started with the -[az policy state trigger-scan](/cli/azure/policy/state#az-policy-state-trigger-scan) command. +The compliance scan is started with the [az policy state trigger-scan](/cli/azure/policy/state#az-policy-state-trigger-scan) command. -By default, `az policy state trigger-scan` starts an evaluation for all resources in the current -subscription. To start an evaluation on a specific resource group, use the **resource-group** -parameter. The following example starts a compliance scan in the current subscription for the _MyRG_ -resource group: +By default, `az policy state trigger-scan` starts an evaluation for all resources in the current subscription. To start an evaluation on a specific resource group, use the `resource-group` parameter. The following example starts a compliance scan in the current subscription for the _MyRG_ resource group: ```azurecli-interactive az policy state trigger-scan --resource-group "MyRG" ``` -You can choose not to wait for the asynchronous process to complete before continuing with the -**no-wait** parameter. +You can choose not to wait for the asynchronous process to complete before continuing with the `no-wait` parameter. #### On-demand evaluation scan - Azure PowerShell -The compliance scan is started with the -[Start-AzPolicyComplianceScan](/powershell/module/az.policyinsights/start-azpolicycompliancescan) -cmdlet. +The compliance scan is started with the [Start-AzPolicyComplianceScan](/powershell/module/az.policyinsights/start-azpolicycompliancescan) cmdlet. -By default, `Start-AzPolicyComplianceScan` starts an evaluation for all resources in the current -subscription. To start an evaluation on a specific resource group, use the **ResourceGroupName** -parameter. The following example starts a compliance scan in the current subscription for the _MyRG_ -resource group: +By default, `Start-AzPolicyComplianceScan` starts an evaluation for all resources in the current subscription. To start an evaluation on a specific resource group, use the `ResourceGroupName` parameter. The following example starts a compliance scan in the current subscription for the _MyRG_ resource group: ```azurepowershell-interactive Start-AzPolicyComplianceScan -ResourceGroupName 'MyRG' ``` -You can have PowerShell wait for the asynchronous call to complete before providing the results -output or have it run in the background as a -[job](/powershell/module/microsoft.powershell.core/about/about_jobs). To use a PowerShell job to run -the compliance scan in the background, use the **AsJob** parameter and set the value to an object, -such as `$job` in this example: +You can have PowerShell wait for the asynchronous call to complete before providing the results output or have it run in the background as a [job](/powershell/module/microsoft.powershell.core/about/about_jobs). 
To use a PowerShell job to run the compliance scan in the background, use the `AsJob` parameter and set the value to an object, such as `$job` in this example: ```azurepowershell-interactive $job = Start-AzPolicyComplianceScan -AsJob ``` -You can check on the status of the job by checking on the `$job` object. The job is of the type -`Microsoft.Azure.Commands.Common.AzureLongRunningJob`. Use `Get-Member` on the `$job` object to see -available properties and methods. +You can check on the status of the job by checking on the `$job` object. The job is of the type `Microsoft.Azure.Commands.Common.AzureLongRunningJob`. Use `Get-Member` on the `$job` object to see available properties and methods. While the compliance scan is running, checking the `$job` object outputs results such as these: Id Name PSJobTypeName State HasMoreData Locatio 2 Long Running O... AzureLongRunni... Running True localhost Start-AzPolicyCompliance... ``` -When the compliance scan completes, the **State** property changes to _Completed_. +When the compliance scan completes, the `State` property changes to _Completed_. #### On-demand evaluation scan - REST -As an asynchronous process, the REST endpoint to start the scan doesn't wait until the scan is -complete to respond. Instead, it provides a URI to query the status of the requested evaluation. +As an asynchronous process, the REST endpoint to start the scan doesn't wait until the scan is complete to respond. Instead, it provides a URI to query the status of the requested evaluation. -In each REST API URI, there are variables that are used that you need to replace with your own -values: +In each REST API URI, there are variables that are used that you need to replace with your own values: - `{YourRG}` - Replace with the name of your resource group - `{subscriptionId}` - Replace with your subscription ID -The scan supports evaluation of resources in a subscription or in a resource group. Start a scan by -scope with a REST API **POST** command using the following URI structures: +The scan supports evaluation of resources in a subscription or in a resource group. Start a scan by scope with a REST API POST command using the following URI structures: - Subscription scope with a REST API **POST** command using the following URI structures: POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{YourRG}/providers/Microsoft.PolicyInsights/policyStates/latest/triggerEvaluation?api-version=2019-10-01 ``` -The call returns a **202 Accepted** status. Included in the response header is a **Location** +The call returns a **202 Accepted** status. Included in the response header is a `location` property with the following format: ```http https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.PolicyInsights/asyncOperationResults/{ResourceContainerGUID}?api-version=2019-10-01 ``` -`{ResourceContainerGUID}` is statically generated for the scope requested. If a scope is already -running an on-demand scan, a new scan isn't started. Instead, the new request is provided the same -`{ResourceContainerGUID}` **location** URI for status. A REST API **GET** command to the -**Location** URI returns a **202 Accepted** while the evaluation is ongoing. When the evaluation -scan has completed, it returns a **200 OK** status. The body of a completed scan is a JSON response -with the status: +`{ResourceContainerGUID}` is statically generated for the scope requested. If a scope is already running an on-demand scan, a new scan isn't started. 
Instead, the new request is provided the same `{ResourceContainerGUID}` `location` URI for status. A REST API GET command to the `location` URI returns a **202 Accepted** while the evaluation is ongoing. When the evaluation scan is complete, it returns a **200 OK** status. The body of a completed scan is a JSON response with the status: ```json {- "status": "Succeeded" + "status": "Succeeded" } ``` #### On-demand evaluation scan - Visual Studio Code -The Azure Policy extension for Visual Studio Code is capable of running an evaluation scan for a -specific resource. This scan is a synchronous process, unlike the Azure PowerShell and REST methods. -For details and steps, see -[On-demand evaluation with the VS Code extension](./extension-for-vscode.md#on-demand-evaluation-scan). +The Azure Policy extension for Visual Studio Code is capable of running an evaluation scan for a specific resource. This scan is a synchronous process, unlike the Azure PowerShell and REST methods. For details and steps, see [On-demand evaluation with the VS Code extension](./extension-for-vscode.md#on-demand-evaluation-scan). ## Portal -The Azure portal showcases a graphical experience of visualizing and understanding the state of -compliance in your environment. On the **Policy** page, the **Overview** option provides details for -available scopes on the compliance of both policies and initiatives. Along with the compliance state -and count per assignment, it contains a chart showing compliance over the last seven days. The -**Compliance** page contains much of this same information (except the chart), but provide -additional filtering and sorting options. +The Azure portal showcases a graphical experience of visualizing and understanding the state of compliance in your environment. On the **Policy** page, the **Overview** option provides details for available scopes on the compliance of both policies and initiatives. Along with the compliance state and count per assignment, it contains a chart showing compliance over the last seven days. The **Compliance** page contains much of this same information (except the chart), but provides more filtering and sorting options. :::image type="content" source="../media/getting-compliance-data/compliance-page.png" alt-text="Screenshot of Compliance page, filtering options, and details." border="false"::: -Since a policy or initiative can be assigned to different scopes, the table includes the scope for -each assignment and the type of definition that was assigned. The number of non-compliant resources -and non-compliant policies for each assignment are also provided. Selecting on a policy or -initiative in the table provides a deeper look at the compliance for that particular assignment. +Since a policy or initiative can be assigned to different scopes, the table includes the scope for each assignment and the type of definition that was assigned. The number of non-compliant resources and non-compliant policies for each assignment are also provided. Selecting on a policy or initiative in the table provides a deeper look at the compliance for that particular assignment. :::image type="content" source="../media/getting-compliance-data/compliance-details.png" alt-text="Screenshot of Compliance Details page, including counts and resource compliant details." border="false"::: -The list of resources on the **Resource compliance** tab shows the evaluation status of existing -resources for the current assignment. The tab defaults to **Non-compliant**, but can be filtered. 
-Events (append, audit, deny, deploy, modify) triggered by the request to create a resource are shown -under the **Events** tab. +The list of resources on the **Resource compliance** tab shows the evaluation status of existing resources for the current assignment. The tab defaults to **Non-compliant**, but can be filtered. Events (append, audit, deny, deploy, modify) triggered by the request to create a resource are shown under the **Events** tab. :::image type="content" source="../media/getting-compliance-data/compliance-events.png" alt-text="Screenshot of the Events tab on Compliance Details page." border="false"::: <a name="component-compliance"></a>-For [Resource Provider mode](../concepts/definition-structure.md#resource-provider-modes) resources, -on the **Resource compliance** tab, selecting the resource or right-clicking on the row and -selecting **View compliance details** opens the component compliance details. This page also offers -tabs to see the policies that are assigned to this resource, events, component events, and change -history. +For [Resource Provider mode](../concepts/definition-structure.md#resource-provider-modes) resources, on the **Resource compliance** tab, selecting the resource or right-clicking on the row and selecting **View compliance details** opens the component compliance details. This page also offers tabs to see the policies that are assigned to this resource, events, component events, and change history. :::image type="content" source="../media/getting-compliance-data/compliance-components.png" alt-text="Screenshot of Component Compliance tab and compliance details for a Resource Provider mode assignment." border="false"::: -Back on the resource compliance page, select and hold (or right-click) on the row of the event you -would like to gather more details on and select **Show activity logs**. The activity log page opens -and is pre-filtered to the search showing details for the assignment and the events. The activity -log provides more context and information about those events. +Back on the resource compliance page, select and hold (or right-click) on the row of the event you would like to gather more details on and select **Show activity logs**. The activity log page opens and is prefiltered to the search showing details for the assignment and the events. The activity log provides more context and information about those events. :::image type="content" source="../media/getting-compliance-data/compliance-activitylog.png" alt-text="Screenshot of the Activity Log for Azure Policy activities and evaluations." border="false"::: log provides more context and information about those events. ## Command line -The same information available in the portal can be retrieved with the REST API (including with -[ARMClient](https://github.com/projectkudu/ARMClient)), Azure PowerShell, and Azure CLI. For full -details on the REST API, see the [Azure Policy](/rest/api/policy/) reference. The REST API reference -pages have a green 'Try It' button on each operation that allows you to try it right in the browser. +The same information available in the portal can be retrieved with the REST API (including with [ARMClient](https://github.com/projectkudu/ARMClient)), Azure PowerShell, and Azure CLI. For full details on the REST API, see the [Azure Policy](/rest/api/policy/) reference. The REST API reference pages have a green **Try It** button on each operation that allows you to try it right in the browser. 
Use ARMClient or a similar tool to handle authentication to Azure for the REST API examples. ### Summarize results -With the REST API, summarization can be performed by container, definition, or assignment. Here's -an example of summarization at the subscription level using Azure Policy Insight's [Summarize For -Subscription](/rest/api/policy/policy-states/summarize-for-subscription): +With the REST API, summarization can be performed by container, definition, or assignment. Here's an example of summarization at the subscription level using Azure Policy Insight's [Summarize For Subscription](/rest/api/policy/policy-states/summarize-for-subscription): ```http POST https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.PolicyInsights/policyStates/latest/summarize?api-version=2019-10-01 ``` -The output summarizes the subscription. In the example output below, the summarized compliance are -under **value.results.nonCompliantResources** and **value.results.nonCompliantPolicies**. This -request provides further details, including each assignment that made up the non-compliant numbers -and the definition information for each assignment. Each policy object in the hierarchy provides a -**queryResultsUri** that can be used to get more detail at that level. +The output summarizes the subscription. In the following example output, the summarized compliance are under `value.results.nonCompliantResources` and `value.results.nonCompliantPolicies`. This request provides further details, including each assignment that made up the non-compliant numbers and the definition information for each assignment. Each policy object in the hierarchy provides a `queryResultsUri` that can be used to get more detail at that level. ```json {- "@odata.context": "https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.PolicyInsights/policyStates/$metadata#summary", - "@odata.count": 1, - "value": [{ - "@odata.id": null, - "@odata.context": "https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.PolicyInsights/policyStates/$metadata#summary/$entity", - "results": { - "queryResultsUri": "https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.PolicyInsights/policyStates/latest/queryResults?api-version=2019-10-01&$from=2018-05-18 04:28:22Z&$to=2018-05-19 04:28:22Z&$filter=ComplianceState eq 'NonCompliant'", + "@odata.context": "https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.PolicyInsights/policyStates/$metadata#summary", + "@odata.count": 1, + "value": [ + { + "@odata.id": null, + "@odata.context": "https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.PolicyInsights/policyStates/$metadata#summary/$entity", + "results": { + "queryResultsUri": "https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.PolicyInsights/policyStates/latest/queryResults?api-version=2019-10-01&$from=2018-05-18 04:28:22Z&$to=2018-05-19 04:28:22Z&$filter=ComplianceState eq 'NonCompliant'", + "nonCompliantResources": 15, + "nonCompliantPolicies": 1 + }, + "policyAssignments": [ + { + "policyAssignmentId": "/subscriptions/{subscriptionId}/resourcegroups/rg-tags/providers/microsoft.authorization/policyassignments/37ce239ae4304622914f0c77", + "policySetDefinitionId": "", + "results": { + "queryResultsUri": "https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.PolicyInsights/policyStates/latest/queryResults?api-version=2019-10-01&$from=2018-05-18 
04:28:22Z&$to=2018-05-19 04:28:22Z&$filter=ComplianceState eq 'NonCompliant' and PolicyAssignmentId eq '/subscriptions/{subscriptionId}/resourcegroups/rg-tags/providers/microsoft.authorization/policyassignments/37ce239ae4304622914f0c77'", "nonCompliantResources": 15, "nonCompliantPolicies": 1- }, - "policyAssignments": [{ - "policyAssignmentId": "/subscriptions/{subscriptionId}/resourcegroups/rg-tags/providers/microsoft.authorization/policyassignments/37ce239ae4304622914f0c77", - "policySetDefinitionId": "", - "results": { - "queryResultsUri": "https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.PolicyInsights/policyStates/latest/queryResults?api-version=2019-10-01&$from=2018-05-18 04:28:22Z&$to=2018-05-19 04:28:22Z&$filter=ComplianceState eq 'NonCompliant' and PolicyAssignmentId eq '/subscriptions/{subscriptionId}/resourcegroups/rg-tags/providers/microsoft.authorization/policyassignments/37ce239ae4304622914f0c77'", - "nonCompliantResources": 15, - "nonCompliantPolicies": 1 - }, - "policyDefinitions": [{ - "policyDefinitionReferenceId": "", - "policyDefinitionId": "/providers/microsoft.authorization/policydefinitions/1e30110a-5ceb-460c-a204-c1c3969c6d62", - "effect": "deny", - "results": { - "queryResultsUri": "https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.PolicyInsights/policyStates/latest/queryResults?api-version=2019-10-01&$from=2018-05-18 04:28:22Z&$to=2018-05-19 04:28:22Z&$filter=ComplianceState eq 'NonCompliant' and PolicyAssignmentId eq '/subscriptions/{subscriptionId}/resourcegroups/rg-tags/providers/microsoft.authorization/policyassignments/37ce239ae4304622914f0c77' and PolicyDefinitionId eq '/providers/microsoft.authorization/policydefinitions/1e30110a-5ceb-460c-a204-c1c3969c6d62'", - "nonCompliantResources": 15 - } - }] - }] - }] + }, + "policyDefinitions": [ + { + "policyDefinitionReferenceId": "", + "policyDefinitionId": "/providers/microsoft.authorization/policydefinitions/1e30110a-5ceb-460c-a204-c1c3969c6d62", + "effect": "deny", + "results": { + "queryResultsUri": "https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.PolicyInsights/policyStates/latest/queryResults?api-version=2019-10-01&$from=2018-05-18 04:28:22Z&$to=2018-05-19 04:28:22Z&$filter=ComplianceState eq 'NonCompliant' and PolicyAssignmentId eq '/subscriptions/{subscriptionId}/resourcegroups/rg-tags/providers/microsoft.authorization/policyassignments/37ce239ae4304622914f0c77' and PolicyDefinitionId eq '/providers/microsoft.authorization/policydefinitions/1e30110a-5ceb-460c-a204-c1c3969c6d62'", + "nonCompliantResources": 15 + } + } + ] + } + ] + } + ] } ``` ### Query for resources -In the example above, **value.policyAssignments.policyDefinitions.results.queryResultsUri** provides -a sample URI for all non-compliant resources for a specific policy definition. In the -**$filter** value, ComplianceState is equal (eq) to 'NonCompliant', PolicyAssignmentId is specified -for the policy definition, and then the PolicyDefinitionId itself. The reason for including the -PolicyAssignmentId in the filter is because the PolicyDefinitionId could exist in several policy or -initiative assignments with different scopes. By specifying both the PolicyAssignmentId and the -PolicyDefinitionId, we can be explicit in the results we're looking for. Previously, for -PolicyStates we used **latest**, which automatically sets a **from** and **to** time window of the -last 24-hours. 
+In the previous example, `value.policyAssignments.policyDefinitions.results.queryResultsUri` provides a sample URI for all non-compliant resources for a specific policy definition. In the `$filter` value, `ComplianceState` is equal (`eq`) to `NonCompliant`, the `PolicyAssignmentId` for the assignment is specified, and then the `PolicyDefinitionId` itself. The `PolicyAssignmentId` is included in the filter because the `PolicyDefinitionId` could exist in several policy or initiative assignments with different scopes. By specifying both the `PolicyAssignmentId` and the `PolicyDefinitionId`, we can be explicit in the results we're looking for. Previously, for `PolicyStates` we used `latest`, which automatically sets a `from` and `to` time window of the last 24 hours. ```http https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.PolicyInsights/policyStates/latest/queryResults?api-version=2019-10-01&$from=2018-05-18 04:28:22Z&$to=2018-05-19 04:28:22Z&$filter=ComplianceState eq 'NonCompliant' and PolicyAssignmentId eq '/subscriptions/{subscriptionId}/resourcegroups/rg-tags/providers/microsoft.authorization/policyassignments/37ce239ae4304622914f0c77' and PolicyDefinitionId eq '/providers/microsoft.authorization/policydefinitions/1e30110a-5ceb-460c-a204-c1c3969c6d62' ``` -The example response below has been trimmed to a single non-compliant resource for brevity. The -detailed response has several pieces of data about the resource, the policy or initiative, and the -assignment. Notice that you can also see what assignment parameters were passed to the policy -definition. +The following example response was trimmed to a single non-compliant resource for brevity. The detailed response has several pieces of data about the resource, the policy or initiative, and the assignment. Notice that you can also see what assignment parameters were passed to the policy definition. 
```json {- "@odata.context": "https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.PolicyInsights/policyStates/$metadata#latest", - "@odata.count": 15, - "value": [{ - "@odata.id": null, - "@odata.context": "https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.PolicyInsights/policyStates/$metadata#latest/$entity", - "timestamp": "2018-05-19T04:41:09Z", - "resourceId": "/subscriptions/{subscriptionId}/resourceGroups/rg-tags/providers/Microsoft.Compute/virtualMachines/linux", - "policyAssignmentId": "/subscriptions/{subscriptionId}/resourceGroups/rg-tags/providers/Microsoft.Authorization/policyAssignments/37ce239ae4304622914f0c77", - "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/1e30110a-5ceb-460c-a204-c1c3969c6d62", - "effectiveParameters": "", - "ComplianceState": "NonCompliant", - "subscriptionId": "{subscriptionId}", - "resourceType": "/Microsoft.Compute/virtualMachines", - "resourceLocation": "westus2", - "resourceGroup": "RG-Tags", - "resourceTags": "tbd", - "policyAssignmentName": "37ce239ae4304622914f0c77", - "policyAssignmentOwner": "tbd", - "policyAssignmentParameters": "{\"tagName\":{\"value\":\"costCenter\"},\"tagValue\":{\"value\":\"Contoso-Test\"}}", - "policyAssignmentScope": "/subscriptions/{subscriptionId}/resourceGroups/RG-Tags", - "policyDefinitionName": "1e30110a-5ceb-460c-a204-c1c3969c6d62", - "policyDefinitionAction": "deny", - "policyDefinitionCategory": "tbd", - "policySetDefinitionId": "", - "policySetDefinitionName": "", - "policySetDefinitionOwner": "", - "policySetDefinitionCategory": "", - "policySetDefinitionParameters": "", - "managementGroupIds": "", - "policyDefinitionReferenceId": "" - }] + "@odata.context": "https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.PolicyInsights/policyStates/$metadata#latest", + "@odata.count": 15, + "value": [ + { + "@odata.id": null, + "@odata.context": "https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.PolicyInsights/policyStates/$metadata#latest/$entity", + "timestamp": "2018-05-19T04:41:09Z", + "resourceId": "/subscriptions/{subscriptionId}/resourceGroups/rg-tags/providers/Microsoft.Compute/virtualMachines/linux", + "policyAssignmentId": "/subscriptions/{subscriptionId}/resourceGroups/rg-tags/providers/Microsoft.Authorization/policyAssignments/37ce239ae4304622914f0c77", + "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/1e30110a-5ceb-460c-a204-c1c3969c6d62", + "effectiveParameters": "", + "ComplianceState": "NonCompliant", + "subscriptionId": "{subscriptionId}", + "resourceType": "/Microsoft.Compute/virtualMachines", + "resourceLocation": "westus2", + "resourceGroup": "RG-Tags", + "resourceTags": "tbd", + "policyAssignmentName": "37ce239ae4304622914f0c77", + "policyAssignmentOwner": "tbd", + "policyAssignmentParameters": "{\"tagName\":{\"value\":\"costCenter\"},\"tagValue\":{\"value\":\"Contoso-Test\"}}", + "policyAssignmentScope": "/subscriptions/{subscriptionId}/resourceGroups/RG-Tags", + "policyDefinitionName": "1e30110a-5ceb-460c-a204-c1c3969c6d62", + "policyDefinitionAction": "deny", + "policyDefinitionCategory": "tbd", + "policySetDefinitionId": "", + "policySetDefinitionName": "", + "policySetDefinitionOwner": "", + "policySetDefinitionCategory": "", + "policySetDefinitionParameters": "", + "managementGroupIds": "", + "policyDefinitionReferenceId": "" + } + ] } ``` ### View events -When a resource is created or updated, a policy evaluation result is 
generated. Results are called -_policy events_. Use the following URI to view recent policy events associated with the -subscription. +When a resource is created or updated, a policy evaluation result is generated. Results are called _policy events_. Use the following URI to view recent policy events associated with the subscription. ```http https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.PolicyInsights/policyEvents/default/queryResults?api-version=2019-10-01 Your results resemble the following example: ```json {- "@odata.context": "https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.PolicyInsights/policyEvents/$metadata#default", - "@odata.count": 1, - "value": [{ - "@odata.id": null, - "@odata.context": "https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.PolicyInsights/policyEvents/$metadata#default/$entity", - "NumAuditEvents": 16 - }] + "@odata.context": "https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.PolicyInsights/policyEvents/$metadata#default", + "@odata.count": 1, + "value": [ + { + "@odata.id": null, + "@odata.context": "https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.PolicyInsights/policyEvents/$metadata#default/$entity", + "NumAuditEvents": 16 + } + ] } ``` -For more information about querying policy events, see the -[Azure Policy Events](/rest/api/policy/policy-events) reference article. +For more information about querying policy events, see the [Azure Policy Events](/rest/api/policy/policy-events) reference article. ### Azure CLI -The [Azure CLI](/cli/azure/what-is-azure-cli) command group for Azure Policy covers most operations -that are available in REST or Azure PowerShell. For the full list of available commands, see -[Azure CLI - Azure Policy Overview](/cli/azure/policy). +The [Azure CLI](/cli/azure/what-is-azure-cli) command group for Azure Policy covers most operations that are available in REST or Azure PowerShell. For the full list of available commands, see [Azure CLI - Azure Policy Overview](/cli/azure/policy). -Example: Getting the state summary for the topmost assigned policy with the highest number of -non-compliant resources. +Example: Getting the state summary for the topmost assigned policy with the highest number of non-compliant resources. 
```azurecli-interactive az policy state summarize --top 1 The top portion of the response looks like this example: ```json {- "odatacontext": "https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.PolicyInsights/policyStates/$metadata#summary/$entity", - "odataid": null, - "policyAssignments": [{ - "policyAssignmentId": "/subscriptions/{subscriptionId}/providers/microsoft.authorization/policyassignments/e0704696df5e4c3c81c873e8", - "policyDefinitions": [{ - "effect": "audit", - "policyDefinitionGroupNames": [ - "" - ], - "policyDefinitionId": "/subscriptions/{subscriptionId}/providers/microsoft.authorization/policydefinitions/2e3197b6-1f5b-4b01-920c-b2f0a7e9b18a", - "policyDefinitionReferenceId": "", - "results": { - "nonCompliantPolicies": null, - "nonCompliantResources": 398, - "policyDetails": [{ - "complianceState": "noncompliant", - "count": 1 - }], - "policyGroupDetails": [{ - "complianceState": "noncompliant", - "count": 1 - }], - "queryResultsUri": "https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.PolicyInsights/policyStates/latest/queryResults?api-version=2019-10-01&$from=2020-07-14 14:01:22Z&$to=2020-07-15 14:01:22Z and PolicyAssignmentId eq '/subscriptions/{subscriptionId}/providers/microsoft.authorization/policyassignments/e0704696df5e4c3c81c873e8' and PolicyDefinitionId eq '/subscriptions/{subscriptionId}/providers/microsoft.authorization/policydefinitions/2e3197b6-1f5b-4b01-920c-b2f0a7e9b18a'", - "resourceDetails": [{ - "complianceState": "noncompliant", - "count": 398 - }, - { - "complianceState": "compliant", - "count": 4 - } - ] - } - }], - ... + "odatacontext": "https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.PolicyInsights/policyStates/$metadata#summary/$entity", + "odataid": null, + "policyAssignments": [ + { + "policyAssignmentId": "/subscriptions/{subscriptionId}/providers/microsoft.authorization/policyassignments/e0704696df5e4c3c81c873e8", + "policyDefinitions": [ + { + "effect": "audit", + "policyDefinitionGroupNames": [ + "" + ], + "policyDefinitionId": "/subscriptions/{subscriptionId}/providers/microsoft.authorization/policydefinitions/2e3197b6-1f5b-4b01-920c-b2f0a7e9b18a", + "policyDefinitionReferenceId": "", + "results": { + "nonCompliantPolicies": null, + "nonCompliantResources": 398, + "policyDetails": [ + { + "complianceState": "noncompliant", + "count": 1 + } + ], + "policyGroupDetails": [ + { + "complianceState": "noncompliant", + "count": 1 + } + ], + "queryResultsUri": "https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.PolicyInsights/policyStates/latest/queryResults?api-version=2019-10-01&$from=2020-07-14 14:01:22Z&$to=2020-07-15 14:01:22Z and PolicyAssignmentId eq '/subscriptions/{subscriptionId}/providers/microsoft.authorization/policyassignments/e0704696df5e4c3c81c873e8' and PolicyDefinitionId eq '/subscriptions/{subscriptionId}/providers/microsoft.authorization/policydefinitions/2e3197b6-1f5b-4b01-920c-b2f0a7e9b18a'", + "resourceDetails": [ + { + "complianceState": "noncompliant", + "count": 398 + }, + { + "complianceState": "compliant", + "count": 4 + } + ] + } + } + ], + ... ``` -Example: Getting the state record for the most recently evaluated resource (default is by timestamp -in descending order). +Example: Getting the state record for the most recently evaluated resource (default is by timestamp in descending order). 
```azurecli-interactive az policy state list --top 1 az policy state list --filter "ResourceType eq 'Microsoft.Network/virtualNetwork ] ``` -Example: Getting events related to non-compliant virtual network resources that occurred after a -specific date. +Example: Getting events related to non-compliant virtual network resources that occurred after a specific date. ```azurecli-interactive az policy state list --filter "ResourceType eq 'Microsoft.Network/virtualNetworks'" --from '2020-07-14T00:00:00Z' az policy state list --filter "ResourceType eq 'Microsoft.Network/virtualNetwork ### Azure PowerShell -The Azure PowerShell module for Azure Policy is available on the PowerShell Gallery as -[Az.PolicyInsights](https://www.powershellgallery.com/packages/Az.PolicyInsights). Using -PowerShellGet, you can install the module using `Install-Module -Name Az.PolicyInsights` (make sure -you have the latest [Azure PowerShell](/powershell/azure/install-azure-powershell) installed): +The Azure PowerShell module for Azure Policy is available on the PowerShell Gallery as [Az.PolicyInsights](https://www.powershellgallery.com/packages/Az.PolicyInsights). Using PowerShellGet, you can install the module using `Install-Module -Name Az.PolicyInsights` (make sure you have the latest [Azure PowerShell](/powershell/azure/install-azure-powershell) installed): ```azurepowershell-interactive # Install from PowerShell Gallery via PowerShellGet The module has the following cmdlets: - `Start-AzPolicyRemediation` - `Stop-AzPolicyRemediation` -Example: Getting the state summary for the topmost assigned policy with the highest number of -non-compliant resources. +Example: Getting the state summary for the topmost assigned policy with the highest number of non-compliant resources. ```azurepowershell-interactive PS> Get-AzPolicyStateSummary -Top 1 PolicyAssignments : {/subscriptions/{subscriptionId}/resourcegroups/RG-Tags/ oft.authorization/policyassignments/37ce239ae4304622914f0c77} ``` -Example: Getting the state record for the most recently evaluated resource (default is by timestamp -in descending order). +Example: Getting the state record for the most recently evaluated resource (default is by timestamp in descending order). ```azurepowershell-interactive PS> Get-AzPolicyState -Top 1 PolicyDefinitionAction : deny PolicyDefinitionCategory : tbd ``` -Example: Getting events related to non-compliant virtual network resources that occurred after a -specific date, converting to a CSV object, and exporting to a file. +Example: Getting events related to non-compliant virtual network resources that occurred after a specific date, converting to a CSV object, and exporting to a file. ```azurepowershell-interactive $policyEvents = Get-AzPolicyEvent -Filter "ResourceType eq '/Microsoft.Network/virtualNetworks'" -From '2020-09-19' TenantId : {tenantId} PrincipalOid : {principalOid} ``` -The **PrincipalOid** field can be used to get a specific user with the Azure PowerShell cmdlet -`Get-AzADUser`. Replace **{principalOid}** with the response you get from the previous -example. +The `PrincipalOid` field can be used to get a specific user with the Azure PowerShell cmdlet `Get-AzADUser`. Replace `{principalOid}` with the response you get from the previous example. 
```azurepowershell-interactive PS> (Get-AzADUser -ObjectId {principalOid}).DisplayName Trent Baker ## Azure Monitor logs -If you have a [Log Analytics workspace](/azure/azure-monitor/logs/log-query-overview) with -`AzureActivity` from the -[Activity Log Analytics solution](/azure/azure-monitor/essentials/activity-log) tied to your -subscription, you can also view non-compliance results from the evaluation of new and updated -resources using simple Kusto queries and the `AzureActivity` table. With details in Azure Monitor -logs, alerts can be configured to watch for non-compliance. +If you have a [Log Analytics workspace](/azure/azure-monitor/logs/log-query-overview) with `AzureActivity` from the [Activity Log Analytics solution](/azure/azure-monitor/essentials/activity-log) tied to your subscription, you can also view non-compliance results from the evaluation of new and updated resources using simple Kusto queries and the `AzureActivity` table. With details in Azure Monitor logs, alerts can be configured to watch for non-compliance. :::image type="content" source="../media/getting-compliance-data/compliance-loganalytics.png" alt-text="Screenshot of Azure Monitor logs showing Azure Policy actions in the AzureActivity table." border="false"::: Compliance records are stored in Azure Resource Graph (ARG). Data can be exporte ## Next steps - Review examples at [Azure Policy samples](../samples/index.md).-- Review the [Azure Policy definition structure](../concepts/definition-structure.md).-- Review [Understanding policy effects](../concepts/effects.md).+- Review the [Azure Policy definition structure](../concepts/definition-structure-basics.md). +- Review [Understanding policy effects](../concepts/effect-basics.md). - Understand how to [programmatically create policies](programmatically-create.md). - Learn how to [remediate non-compliant resources](remediate-resources.md). - Review what a management group is with [Organize your resources with Azure management groups](../../management-groups/overview.md). |
governance | Remediate Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/remediate-resources.md | Title: Remediate non-compliant resources description: This guide walks you through the remediation of resources that are non-compliant to policies in Azure Policy. Previously updated : 07/29/2022 Last updated : 09/30/2024 + # Remediate non-compliant resources with Azure Policy -Resources that are non-compliant to policies with **deployIfNotExists** or **modify** effects can be put into a -compliant state through **Remediation**. Remediation is accomplished through **remediation tasks** that deploy the **deployIfNotExists** template or the **modify** operations of the assigned policy on your existing resources and subscriptions, whether that assignment is on a management group, -subscription, resource group, or individual resource. This article shows the steps needed to -understand and accomplish remediation with Azure Policy. +Resources that are non-compliant to policies with `deployIfNotExists` or `modify` effects can be put into a compliant state through **Remediation**. Remediation is accomplished through **remediation tasks** that deploy the `deployIfNotExists` template or the `modify` operations of the assigned policy on your existing resources and subscriptions, whether that assignment is on a management group, subscription, resource group, or individual resource. This article shows the steps needed to understand and accomplish remediation with Azure Policy. ## How remediation access control works -When Azure Policy starts a template deployment when evaluating **deployIfNotExists** policies or modifies a resource when evaluating **modify** policies, it does so using -a [managed identity](../../../active-directory/managed-identities-azure-resources/overview.md) that is associated with the policy assignment. -Policy assignments use [managed identities](../../../active-directory/managed-identities-azure-resources/overview.md) for Azure resource authorization. You can use either a system-assigned managed identity that is created by the policy service or a user-assigned identity provided by the user. The managed identity needs to be assigned the minimum role-based access control (RBAC) role(s) required to remediate resources. -If the managed identity is missing roles, an error is displayed in the portal -during the assignment of the policy or an initiative. When using the portal, Azure Policy -automatically grants the managed identity the listed roles once assignment starts. When using an Azure software development kit (SDK), -the roles must manually be granted to the managed identity. The _location_ of the managed identity -doesn't impact its operation with Azure Policy. +When Azure Policy starts a template deployment when evaluating `deployIfNotExists` policies or modifies a resource when evaluating `modify` policies, it does so using a [managed identity](/entra/identity/managed-identities-azure-resources/overview) associated with the policy assignment. Policy assignments use managed identities for Azure resource authorization. You can use either a system-assigned managed identity created by the policy service or a user-assigned identity provided by the user. The managed identity needs to be assigned the minimum Azure role-based access control (Azure RBAC) role required to remediate resources. If the managed identity is missing roles, an error is displayed in the portal during the assignment of the policy or an initiative. 
When you use the portal, Azure Policy automatically grants the managed identity the listed roles once assignment starts. When you use an Azure software development kit (SDK), the roles must manually be granted to the managed identity. The _location_ of the managed identity doesn't affect its operation with Azure Policy. - > [!NOTE] - > Changing a policy definition does not automatically update the assignment or the associated managed identity. +> [!NOTE] +> Changing a policy definition does not automatically update the assignment or the associated managed identity. Remediation security can be configured through the following steps: - [Configure the policy definition](#configure-the-policy-definition) Remediation security can be configured through the following steps: ## Configure the policy definition -As a prerequisite, the policy definition must define the roles that **deployIfNotExists** and **modify** need to successfully deploy the content of the included template. No action is required for a built-in policy definition because these roles are prepopulated. For a custom policy definition, under the **details** -property, add a **roleDefinitionIds** property. This property is an array of strings that match -roles in your environment. For a full example, see the [deployIfNotExists -example](../concepts/effects.md#deployifnotexists-example) or the -[modify examples](../concepts/effects.md#modify-examples). +As a prerequisite, the policy definition must define the roles that `deployIfNotExists` and `modify` need to successfully deploy the content of the included template. No action is required for a built-in policy definition because these roles are prepopulated. For a custom policy definition, under the `details` property, add a `roleDefinitionIds` property. This property is an array of strings that match roles in your environment. For a full example, see [deployIfNotExists](../concepts/effect-deploy-if-not-exists.md) or [modify](../concepts/effect-modify.md). ```json "details": { ... "roleDefinitionIds": [- "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/{roleGUID}", - "/providers/Microsoft.Authorization/roleDefinitions/{builtinroleGUID}" - ] + "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/{roleGUID}", + "/providers/Microsoft.Authorization/roleDefinitions/{builtinroleGUID}" + ] } ``` -The **roleDefinitionIds** property uses the full resource identifier and doesn't take the short -**roleName** of the role. To get the ID for the 'Contributor' role in your environment, use the -following Azure CLI code: +The `roleDefinitionIds` property uses the full resource identifier and doesn't take the short `roleName` of the role. To get the ID for the Contributor role in your environment, use the following Azure CLI code: ```azurecli-interactive az role definition list --name "Contributor" ``` > [!IMPORTANT]-> Permissions should be restricted to the smallest possible set when defining **roleDefinitionIds** +> Permissions should be restricted to the smallest possible set when defining `roleDefinitionIds` > within a policy definition or assigning permissions to a managed identity manually. See-> [managed identity best practice recommendations](../../../active-directory/managed-identities-azure-resources/managed-identity-best-practice-recommendations.md) +> [managed identity best practice recommendations](/entra/identity/managed-identities-azure-resources/managed-identity-best-practice-recommendations) > for more best practices. 
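If you prefer Azure PowerShell, a comparable sketch follows. `Get-AzRoleDefinition` returns only the role's GUID, so the full resource identifier that `roleDefinitionIds` expects is composed around it; the variable names are illustrative:

```azurepowershell-interactive
# Sketch: build the full resource ID format that roleDefinitionIds expects.
# Get-AzRoleDefinition exposes the built-in role's GUID through its Id property.
$role = Get-AzRoleDefinition -Name 'Contributor'
$roleDefinitionId = "/providers/Microsoft.Authorization/roleDefinitions/$($role.Id)"
$roleDefinitionId
```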
## Configure the managed identity az role definition list --name "Contributor" Each Azure Policy assignment can be associated with only one managed identity. However, the managed identity can be assigned multiple roles. Configuration occurs in two steps: first create either a system-assigned or user-assigned managed identity, then grant it the necessary roles. > [!NOTE]- > When creating a managed identity through the portal, roles will be granted automatically to the managed identity. If **roleDefinitionIds** are later edited in the policy definition, the new permissions must be manually granted, even in the portal. + > When creating a managed identity through the portal, roles will be granted automatically to the managed identity. If `roleDefinitionIds` are later edited in the policy definition, the new permissions must be manually granted, even in the portal. ### Create the managed identity # [Portal](#tab/azure-portal) -When creating an assignment using the portal, Azure Policy can generate a system-assigned managed identity and grant it the roles defined in the policy definition's **roleDefinitionIds**. Alternatively, you can specify a user-assigned managed identity that receives the same role assignment. +When you create an assignment using the portal, Azure Policy can generate a system-assigned managed identity and grant it the roles defined in the policy definition's `roleDefinitionIds`. Alternatively, you can specify a user-assigned managed identity that receives the same role assignment. :::image type="content" source="../media/remediate-resources/remediation-tab.png" alt-text="Screenshot of a policy assignment creating a system-assigned managed identity in East US with Log Analytics Contributor permissions."::: is selected. 1. Specify the location at which the managed identity is to be located. -1. Don't assign a scope for system-assigned managed identity because the scope will be inherited from the assignment scope. +1. Don't assign a scope for system-assigned managed identity because the scope is inherited from the assignment scope. To set a user-assigned managed identity in the portal: 1. On the **Remediation** tab of the create/edit assignment view, under **Types of Managed Identity**, ensure that **User assigned managed identity** is selected. -1. Specify the scope where the managed identity is hosted. The scope of the managed identity does not have to equate to the scope of the assignment, but it must be in the same tenant. +1. Specify the scope where the managed identity is hosted. The scope of the managed identity doesn't have to equate to the scope of the assignment, but it must be in the same tenant. 1. Under **Existing user assigned identities**, select the managed identity. The `$assignment` variable now contains the principal ID of the managed identity # [Azure CLI](#tab/azure-cli) -To create an identity during the assignment of the policy, use [az policy assignment create](/cli/azure/policy/assignment?view=azure-cli-latest#az-policy-assignment-create&preserve-view=true) commands with the parameters **--location**, **--mi-system-assigned**, **--mi-user-assigned**, and **--identity-scope** depending on whether the managed identity should be system-assigned or user-assigned. 
+To create an identity during the assignment of the policy, use [az policy assignment create](/cli/azure/policy/assignment?view=azure-cli-latest#az-policy-assignment-create&preserve-view=true) commands with the parameters `--location`, `--mi-system-assigned`, `--mi-user-assigned`, and `--identity-scope` depending on whether the managed identity should be system-assigned or user-assigned. -To add a system-assigned identity or a user-assigned identity to a policy assignment, follow example [az policy assignment identity assign](/cli/azure/policy/assignment/identity?view=azure-cli-latest#az-policy-assignment-identity-assign&preserve-view=true) commands. +To add a system-assigned identity or a user-assigned identity to a policy assignment, follow example [az policy assignment identity assign](/cli/azure/policy/assignment/identity#az-policy-assignment-identity-assign) commands. To add a system-assigned identity or a user-assigned identity to a policy assign > [!IMPORTANT] >-> If the managed identity does not have the permissions needed to execute the required remediation task, it will be granted permissions *automatically* only through the portal. You may skip this step if creating a managed identity through the portal. +> If the managed identity does not have the permissions needed to execute the required remediation task, it will be granted permissions _automatically_ only through the portal. You may skip this step if creating a managed identity through the portal. > > For all other methods, the assignment's managed identity must be manually granted access through the addition of roles, or else the remediation deployment will fail. > > Example scenarios that require manual permissions: > - If the assignment is created through an Azure software development kit (SDK)-> - If a resource modified by **deployIfNotExists** or **modify** is outside the scope of the policy +> - If a resource modified by `deployIfNotExists` or `modify` is outside the scope of the policy > assignment > - If the template accesses properties on resources outside the scope of the policy assignment > # [Portal](#tab/azure-portal) -There are two ways to grant an assignment's managed identity the defined roles using the portal: by -using **Access control (IAM)** or by editing the policy or initiative assignment and selecting -**Save**. +There are two ways to grant an assignment's managed identity the defined roles using the portal: by using **Access control (IAM)** or by editing the policy or initiative assignment and selecting **Save**. To add a role to the assignment's managed identity, follow these steps: -1. Launch the Azure Policy service in the Azure portal by selecting **All services**, then searching - for and selecting **Policy**. +1. Launch the Azure Policy service in the Azure portal by selecting **All services**, then searching for and selecting **Policy**. 1. Select **Assignments** on the left side of the Azure Policy page. 1. Locate the assignment that has a managed identity and select the name. -1. Find the **Assignment ID** property on the edit page. The assignment ID will be something like: +1. Find the **Assignment ID** property on the edit page. 
The assignment ID looks like the following example: ```output- /subscriptions/{subscriptionId}/resourceGroups/PolicyTarget/providers/Microsoft.Authorization/policyAssignments/2802056bfc094dfb95d4d7a5 + /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Authorization/policyAssignments/2802056bfc094dfb95d4d7a5 ``` - The name of the managed identity is the last portion of the assignment resource ID, which is - `2802056bfc094dfb95d4d7a5` in this example. Copy this portion of the assignment resource ID. + The name of the managed identity is the last portion of the assignment resource ID, which is `2802056bfc094dfb95d4d7a5` in this example. Copy this portion of the assignment resource ID. -1. Navigate to the resource or the resources parent container (resource group, subscription, - management group) that needs the role definition manually added. +1. Navigate to the resource or the resources parent container (resource group, subscription, management group) that needs the role definition manually added. -1. Select the **Access control (IAM)** link in the resources page and then select **+ Add role - assignment** at the top of the access control page. +1. Select the **Access control (IAM)** link in the resources page and then select **+ Add role assignment** at the top of the access control page. -1. Select the appropriate role that matches a **roleDefinitionId** from the policy definition. - Leave **Assign access to** set to the default of 'Azure AD user, group, or application'. In the - **Select** box, paste or type the portion of the assignment resource ID located earlier. Once the - search completes, select the object with the same name to select ID and select **Save**. +1. Select the appropriate role that matches one of the `roleDefinitionIds` from the policy definition. Leave **Assign access to** set to the default of 'user, group, or application'. In the **Select** box, paste or type the portion of the assignment resource ID located earlier. Once the search completes, select the object with the same name to select ID and select **Save**. # [PowerShell](#tab/azure-powershell) -The new managed identity must complete replication through Azure Active Directory before it can be -granted the needed roles. Once replication is complete, the following examples iterate the policy -definition in `$policyDef` for the **roleDefinitionIds** and uses -[New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment) to -grant the new managed identity the roles. +The new managed identity must complete replication through Microsoft Entra ID before it can be granted the needed roles. Once replication is complete, the following examples iterate the policy definition in `$policyDef` for the `roleDefinitionIds` and use [New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment) to grant the new managed identity the roles. -Specifically, the first example shows you how to grant roles at the policy scope. The second -example demonstrates how to grant roles at the initiative (policy set) scope. +Specifically, the first example shows you how to grant roles at the policy scope. The second example demonstrates how to grant roles at the initiative (policy set) scope. ```azurepowershell-interactive ################################################### if ($InitiativeRoleDefinitionIds.Count -gt 0) { # [Azure CLI](#tab/azure-cli) -The new managed identity must complete replication through Azure Active Directory before it can be -granted the needed roles. 
Once replication is complete, the roles specified in the policy definition's **roleDefinitionIds** should be granted to the managed identity. +The new managed identity must complete replication through Microsoft Entra ID before it can be granted the needed roles. Once replication is complete, the roles specified in the policy definition's `roleDefinitionIds` should be granted to the managed identity. -Access the roles specified in the policy definition using the [az policy definition show](/cli/azure/policy/definition?view=azure-cli-latest#az-policy-definition-show&preserve-view=true) command, then iterate over each **roleDefinitionId** to create the role assignment using the [az role assignment create](/cli/azure/role/assignment?view=azure-cli-latest#az-role-assignment-create&preserve-view=true) command. +Access the roles specified in the policy definition using the [az policy definition show](/cli/azure/policy/definition?view=azure-cli-latest#az-policy-definition-show&preserve-view=true) command, then iterate over each `roleDefinitionIds` to create the role assignment using the [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create) command. Launch the Azure Policy service in the Azure portal by selecting **All services* :::image type="content" source="../media/remediate-resources/search-policy.png" alt-text="Screenshot of searching for Policy in All Services." border="false"::: ### Step 1: Initiate remediation task creation+ There are three ways to create a remediation task through the portal. #### Option 1: Create a remediation task from the Remediation page There are three ways to create a remediation task through the portal. :::image type="content" source="../media/remediate-resources/select-remediation.png" alt-text="Screenshot of the Remediation node on the Policy page." border="false"::: -1. All **deployIfNotExists** and **modify** policy assignments are - shown on the **Policies to remediate** tab. Select one with resources - that are non-compliant to open the **New remediation task** page. +1. All `deployIfNotExists` and `modify` policy assignments are shown on the **Policies to remediate** tab. Select one with resources that are non-compliant to open the **New remediation task** page. 1. Follow steps to [specify remediation task details](#step-2-specify-remediation-task-details). There are three ways to create a remediation task through the portal. 1. Select **Compliance** on the left side of the Azure Policy page. -1. Select a non-compliant policy or initiative assignment containing **deployIfNotExists** or **modify** effects. +1. Select a non-compliant policy or initiative assignment containing `deployIfNotExists` or `modify` effects. 1. Select the **Create Remediation Task** button at the top of the page to open the **New remediation task** page. There are three ways to create a remediation task through the portal. #### Option 3: Create a remediation task during policy assignment -If the policy or initiative definition to assign has a **deployIfNotExists** or a **Modify** effect, -the **Remediation** tab of the wizard offers a _Create a remediation task_ option, which creates a remediation task at the same time as the policy assignment. +If the policy or initiative definition to assign has a `deployIfNotExists` or a `modify` effect, the **Remediation** tab of the wizard offers a _Create a remediation task_ option, which creates a remediation task at the same time as the policy assignment. 
- > [!NOTE] - > This is the most streamlined approach for creating a remediation task and is supported for policies assigned on a _subscription_. For policies assigned on a _management group_, remediation tasks should be created using [Option 1](#option-1-create-a-remediation-task-from-the-remediation-page) or [Option 2](#option-2-create-a-remediation-task-from-a-non-compliant-policy-assignment) after evaluation has determined resource compliance. +> [!NOTE] +> This is the most streamlined approach for creating a remediation task and is supported for policies assigned on a _subscription_. For policies assigned on a _management group_, remediation tasks should be created using [Option 1](#option-1-create-a-remediation-task-from-the-remediation-page) or [Option 2](#option-2-create-a-remediation-task-from-a-non-compliant-policy-assignment) after evaluation has determined resource compliance. 1. From the assignment wizard in the portal, navigate to the **Remediation** tab. Select the check box for **Create a remediation task**. 1. If the remediation task is initiated from an initiative assignment, select the policy to remediate from the drop-down. -1. Configure the [managed identity](#configure-the-managed-identity) and fill out the rest of the wizard. The remediation task will be created when the assignment is created. +1. Configure the [managed identity](#configure-the-managed-identity) and fill out the rest of the wizard. The remediation task is created when the assignment is created. ### Step 2: Specify remediation task details This step is only applicable when using [Option 1](#option-1-create-a-remediation-task-from-the-remediation-page) or [Option 2](#option-2-create-a-remediation-task-from-a-non-compliant-policy-assignment) to initiate remediation task creation. -1. If the remediation task is initiated from an initiative assignment, select the policy to remediate from the drop-down. One **deployIfNotExists** or **modify** policy can be remediated through a single Remediation task at a time. +1. If the remediation task is initiated from an initiative assignment, select the policy to remediate from the drop-down. One `deployIfNotExists` or `modify` policy can be remediated through a single Remediation task at a time. 1. Optionally modify remediation settings on the page. For information on what each setting controls, see [remediation task structure](../concepts/remediation-structure.md). -1. On the same page, filter the resources to remediate by using the **Scope** - ellipses to pick child resources from where the policy is assigned (including down to the - individual resource objects). Additionally, use the **Locations** dropdown list to further filter - the resources. +1. On the same page, filter the resources to remediate by using the **Scope** ellipses to pick child resources from where the policy is assigned (including down to the individual resource objects). Additionally, use the **Locations** dropdown list to further filter the resources. :::image type="content" source="../media/remediate-resources/select-resources.png" alt-text="Screenshot of the Remediate node and the grid of resources to remediate." border="false"::: -1. Begin the remediation task once the resources have been filtered by selecting **Remediate**. The - policy compliance page opens to the **Remediation tasks** tab to show the state of the tasks - progress. Deployments created by the remediation task begin right away. +1. Begin the remediation task after the resources are filtered by selecting **Remediate**. 
The policy compliance page opens to the **Remediation tasks** tab to show the state of the tasks progress. Deployments created by the remediation task begin right away. :::image type="content" source="../media/remediate-resources/task-progress.png" alt-text="Screenshot of the Remediation tasks tab and progress of existing remediation tasks." border="false"::: ### Step 3: Track remediation task progress -1. Navigate to the **Remediation tasks** tab on the **Remediation** page. Click on a remediation task to view details about the filtering used, the current status, and a list of resources being remediated. +1. Navigate to the **Remediation tasks** tab on the **Remediation** page. Select a remediation task to view details about the filtering used, the current status, and a list of resources being remediated. -1. From the **Remediation task** details page, right-click on a resource to view either the remediation - task's deployment or the resource. At the end of the row, select **Related events** to see - details such as an error message. +1. From the **Remediation task** details page, right-click on a resource to view either the remediation task's deployment or the resource. At the end of the row, select **Related events** to see details such as an error message. :::image type="content" source="../media/remediate-resources/resource-task-context-menu.png" alt-text="Screenshot of the context menu for a resource on the Remediate task tab." border="false"::: Resources deployed through a **remediation task** are added to the **Deployed Re # [PowerShell](#tab/azure-powershell) -To create a **remediation task** with Azure PowerShell, use the `Start-AzPolicyRemediation` -commands. Replace `{subscriptionId}` with your subscription ID and `{myAssignmentId}` with your -**deployIfNotExists** or **modify** policy assignment ID. +To create a **remediation task** with Azure PowerShell, use the `Start-AzPolicyRemediation` commands. Replace `{subscriptionId}` with your subscription ID and `{myAssignmentId}` with your `deployIfNotExists` or `modify` policy assignment ID. ```azurepowershell-interactive # Login first with Connect-AzAccount if not using Cloud Shell commands. Replace `{subscriptionId}` with your subscription ID and `{myAssignmen Start-AzPolicyRemediation -Name 'myRemedation' -PolicyAssignmentId '/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/policyAssignments/{myAssignmentId}' ``` -You may also choose to adjust remediation settings through these optional parameters: +You might also choose to adjust remediation settings through these optional parameters: - `-FailureThreshold` - Used to specify whether the remediation task should fail if the percentage of failures exceeds the given threshold. Provided as a number between 0 to 100. By default, the failure threshold is 100%. - `-ResourceCount` - Determines how many non-compliant resources to remediate in a given remediation task. The default value is 500 (the previous limit). The maximum number is 50,000 resources. - `-ParallelDeploymentCount` - Determines how many resources to remediate at the same time. The allowed values are 1 to 30 resources at a time. The default value is 10. -For more remediation cmdlets and examples, see the [Az.PolicyInsights](/powershell/module/az.policyinsights/#policy_insights) -module. +For more remediation cmdlets and examples, see the [Az.PolicyInsights](/powershell/module/az.policyinsights/#policy_insights) module. 
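For example, a single call that combines those optional settings might look like the following sketch. The task name, assignment ID, and numeric values are illustrative and follow the parameter descriptions in the preceding list, not recommendations for your environment:

```azurepowershell-interactive
# Sketch: a remediation task that adjusts the optional settings described above.
# Replace the placeholders; the numeric values are illustrative only.
Start-AzPolicyRemediation -Name 'myRemediation' `
  -PolicyAssignmentId '/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/policyAssignments/{myAssignmentId}' `
  -FailureThreshold 10 `
  -ResourceCount 1000 `
  -ParallelDeploymentCount 20
```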
# [Azure CLI](#tab/azure-cli) -To create a **remediation task** with Azure CLI, use the `az policy remediation` commands. Replace -`{subscriptionId}` with your subscription ID and `{myAssignmentId}` with your **deployIfNotExists** -or **modify** policy assignment ID. +To create a **remediation task** with Azure CLI, use the `az policy remediation` commands. Replace `{subscriptionId}` with your subscription ID and `{myAssignmentId}` with your `deployIfNotExists` or `modify` policy assignment ID. ```azurecli-interactive # Login first with az login if not using Cloud Shell or **modify** policy assignment ID. az policy remediation create --name myRemediation --policy-assignment '/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/policyAssignments/{myAssignmentId}' ``` -For more remediation commands and examples, see the [az policy -remediation](/cli/azure/policy/remediation) commands. +For more remediation commands and examples, see the [az policy remediation](/cli/azure/policy/remediation) commands. ## Next steps - Review examples at [Azure Policy samples](../samples/index.md).-- Review the [Azure Policy definition structure](../concepts/definition-structure.md).-- Review [Understanding policy effects](../concepts/effects.md).+- Review the [Azure Policy definition structure](../concepts/definition-structure-basics.md). +- Review [Understanding policy effects](../concepts/effect-basics.md). - Understand how to [programmatically create policies](programmatically-create.md). - Learn how to [get compliance data](get-compliance-data.md). - Review what a management group is with [Organize your resources with Azure management groups](../../management-groups/overview.md). |
governance | Create Custom Policy Definition | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/tutorials/create-custom-policy-definition.md | Title: "Tutorial: Create a custom policy definition" description: In this tutorial, you craft a custom policy definition for Azure Policy to enforce custom business rules on your Azure resources. Previously updated : 08/17/2021 Last updated : 09/30/2024 + # Tutorial: Create a custom policy definition -A custom policy definition allows customers to define their own rules for using Azure. These rules -often enforce: +A custom policy definition allows customers to define their own rules for using Azure. These rules often enforce: -- Security practices-- Cost management-- Organization-specific rules (like naming or locations)+- Security practices. +- Cost management. +- Organization-specific rules (like naming or locations). -Whatever the business driver for creating a custom policy, the steps are the same for defining the -new custom policy. +Whatever the business driver for creating a custom policy, the steps are the same for defining the new custom policy. -Before creating a custom policy, check the [policy samples](../samples/index.md) to see whether a -policy that matches your needs already exists. +Before creating a custom policy, check the [policy samples](../samples/index.md) to see whether a policy that matches your needs already exists. The approach to creating a custom policy follows these steps: ## Prerequisites -If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) -before you begin. +If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. ## Identify requirements -Before creating the policy definition, it's important to understand the intent of the policy. For -this tutorial, we'll use a common enterprise security requirement as the goal to illustrate the -steps involved: +Before creating the policy definition, it's important to understand the intent of the policy. For this tutorial, use a common enterprise security requirement as the goal to illustrate the steps involved: -- Each storage account must be enabled for HTTPS-- Each storage account must be disabled for HTTP+- Each storage account must be enabled for HTTPS. +- Each storage account must be disabled for HTTP. Your requirements should clearly identify both the "to be" and the "not to be" resource states. -While we've defined the expected state of the resource, we've not yet defined what we want done with -non-compliant resources. Azure Policy supports many [effects](../concepts/effects.md). For this -tutorial, we'll define the business requirement as preventing the creation of resources if they -aren't compliant with the business rules. To meet this goal, we'll use the -[Deny](../concepts/effects.md#deny) effect. We also want the option to suspend the policy for -specific assignments. As such, we'll use the [Disabled](../concepts/effects.md#disabled) effect and -make the effect a [parameter](../concepts/definition-structure.md#parameters) in the policy -definition. +While we defined the expected state of the resource, we haven't defined what we want done with non-compliant resources. Azure Policy supports many [effects](../concepts/effect-basics.md). 
For this tutorial, we define the business requirement as preventing the creation of resources if they aren't compliant with the business rules. To meet this goal, we use the [deny](../concepts/effect-deny.md) effect. We also want the option to suspend the policy for specific assignments. Use the [disabled](../concepts/effect-disabled.md) effect and make the effect a [parameter](../concepts/definition-structure-parameters.md) in the policy definition. ## Determine resource properties -Based on the business requirement, the Azure resource to audit with Azure Policy is a storage -account. However, we don't know the properties to use in the policy definition. Azure Policy -evaluates against the JSON representation of the resource, so we'll need to understand the -properties available on that resource. +Based on the business requirement, the Azure resource to audit with Azure Policy is a storage account. However, we don't know the properties to use in the policy definition. Azure Policy evaluates against the JSON representation of the resource, so we need to understand the properties available on that resource. -There are many ways to determine the properties for an Azure resource. We'll look at each for -this tutorial: +There are many ways to determine the properties for an Azure resource. We look at each for this tutorial: -- Azure Policy extension for VS Code-- Azure Resource Manager templates (ARM templates)- - Export existing resource - - Creation experience - - Quickstart templates (GitHub) - - Template reference docs -- Azure Resource Explorer+- Azure Policy extension for VS Code. +- Azure Resource Manager templates (ARM templates). + - Export existing resource. + - Creation experience. + - Quickstart templates (GitHub). + - Template reference docs. +- Azure Resource Explorer. ### View resources in VS Code extension -The [VS Code extension](../how-to/extension-for-vscode.md#search-for-and-view-resources) can be used -to browse resources in your environment and see the Resource Manager properties on each resource. +The [VS Code extension](../how-to/extension-for-vscode.md#search-for-and-view-resources) can be used to browse resources in your environment and see the Resource Manager properties on each resource. ### ARM templates There are several ways to look at an-[ARM template](../../../azure-resource-manager/templates/template-tutorial-use-template-reference.md) that -includes the property you're looking to manage. +[ARM template](../../../azure-resource-manager/templates/template-tutorial-use-template-reference.md) that includes the property you're looking to manage. #### Existing resource in the portal -The simplest way to find properties is to look at an existing resource of the same type. Resources -already configured with the setting you want to enforce also provide the value to compare against. -Look at the **Export template** page (under **Settings**) in the Azure portal for that specific -resource. +The simplest way to find properties is to look at an existing resource of the same type. Resources already configured with the setting you want to enforce also provide the value to compare against. Look at the **Export template** page, in **Settings**, in the Azure portal for that specific resource. 
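If you'd rather not browse the portal, the generated template for a whole resource group can also be pulled down with Azure PowerShell and inspected locally. A sketch, assuming a hypothetical resource group that contains the storage account:

```azurepowershell-interactive
# Sketch: export the generated ARM template for a resource group to a local file.
# 'myResourceGroup' is a hypothetical name; use the group that holds your storage account.
Export-AzResourceGroup -ResourceGroupName 'myResourceGroup' -Path ./exported-template.json
```

The exported file contains the same property names you see on the **Export template** page.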
> [!WARNING] > The ARM template exported by Azure portal can't be plugged straight into the `deployment` property-> for an ARM template in a [deployIfNotExists](../concepts/effects.md#deployifnotexists) policy +> for an ARM template in a [deployIfNotExists](../concepts/effect-deploy-if-not-exists.md) policy > definition. :::image type="content" source="../media/create-custom-policy-definition/export-template.png" alt-text="Screenshot of the Export template page on an existing resource in Azure portal." border="false"::: resource. Doing so for a storage account reveals a template similar to this example: ```json-... -"resources": [{ +"resources": [ + { "comments": "Generalized from resource: '/subscriptions/{subscriptionId}/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount'.", "type": "Microsoft.Storage/storageAccounts", "sku": {- "name": "Standard_LRS", - "tier": "Standard" + "name": "Standard_LRS", + "tier": "Standard" }, "kind": "Storage", "name": "[parameters('storageAccounts_mystorageaccount_name')]", "apiVersion": "2018-07-01", "location": "westus", "tags": {- "ms-resource-usage": "azure-cloud-shell" + "ms-resource-usage": "azure-cloud-shell" }, "scale": null, "properties": {- "networkAcls": { - "bypass": "AzureServices", - "virtualNetworkRules": [], - "ipRules": [], - "defaultAction": "Allow" + "networkAcls": { + "bypass": "AzureServices", + "virtualNetworkRules": [], + "ipRules": [], + "defaultAction": "Allow" + }, + "supportsHttpsTrafficOnly": false, + "encryption": { + "services": { + "file": { + "enabled": true + }, + "blob": { + "enabled": true + } },- "supportsHttpsTrafficOnly": false, - "encryption": { - "services": { - "file": { - "enabled": true - }, - "blob": { - "enabled": true - } - }, - "keySource": "Microsoft.Storage" - } + "keySource": "Microsoft.Storage" + } }, "dependsOn": []-}] -... + } +] ``` -Under **properties** is a value named **supportsHttpsTrafficOnly** set to **false**. This property -looks like it may be the property we're looking for. Also, the **type** of the resource is -**Microsoft.Storage/storageAccounts**. The type lets us limit the policy to only resources of this -type. +Under `properties` is a value named `supportsHttpsTrafficOnly` set to `false`. This property looks like it might be the property we're looking for. Also, the `type` of the resource is `Microsoft.Storage/storageAccounts`. The type lets us limit the policy to only resources of this type. #### Create a resource in the portal -Another way through the portal is the resource creation experience. While creating a storage account -through the portal, an option under the **Advanced** tab is **Security transfer required**. This -property has _Disabled_ and _Enabled_ options. The info icon has additional text that confirms this -option is likely the property we want. However, the portal doesn't tell us the property name on this -screen. +Another way through the portal is the resource creation experience. When you create a storage account through the portal, an option under the **Advanced** tab is **Security transfer required**. This property has _Disabled_ and _Enabled_ options. The info icon has more text that confirms this option is likely the property we want. But the portal doesn't tell us the property name on this screen. -On the **Review + create** tab, a link is at the bottom of the page to **Download a template for -automation**. Selecting the link opens the template that creates the resource we configured. 
In this -case, we see two key pieces of information: +On the **Review + create** tab, a link is at the bottom of the page to **Download a template for automation**. Selecting the link opens the template that creates the resource we configured. In this case, we see two key pieces of information: ```json ... "supportsHttpsTrafficOnly": {- "type": "bool" + "type": "bool" } ... "properties": {- "accessTier": "[parameters('accessTier')]", - "supportsHttpsTrafficOnly": "[parameters('supportsHttpsTrafficOnly')]" + "accessTier": "[parameters('accessTier')]", + "supportsHttpsTrafficOnly": "[parameters('supportsHttpsTrafficOnly')]" } ... ``` -This information tells us the property type and also confirms **supportsHttpsTrafficOnly** is the -property we're looking for. +This information tells us the property type and also confirms `supportsHttpsTrafficOnly` is the property we're looking for. #### Quickstart templates on GitHub -The [Azure quickstart templates](https://github.com/Azure/azure-quickstart-templates) on GitHub has -hundreds of ARM templates built for different resources. These templates can be a great way to find -the resource property you're looking for. Some properties may appear to be what you're looking for, -but control something else. +The [Azure Quickstart Templates](https://github.com/Azure/azure-quickstart-templates) on GitHub has hundreds of ARM templates built for different resources. These templates can be a great way to find the resource property you're looking for. Some properties might appear to be what you're looking for, but control something else. #### Resource reference docs -To validate **supportsHttpsTrafficOnly** is correct property, check the ARM template reference for -the [storage account resource](/azure/templates/microsoft.storage/storageaccounts) on the -storage provider. The properties object has a list of valid parameters. Selecting the StorageAccountPropertiesCreateParameters-object -link shows a table of acceptable properties. **supportsHttpsTrafficOnly** is present and the -description matches what we are looking for to meet the business requirements. +To validate that `supportsHttpsTrafficOnly` is the correct property, check the ARM template reference for the [storage account resource](/azure/templates/microsoft.storage/storageaccounts) on the storage provider. The properties object has a list of valid parameters. Selecting the `StorageAccountPropertiesCreateParameters` object link shows a table of acceptable properties. `supportsHttpsTrafficOnly` is present and the description matches what we're looking for in regard to the business requirements. ### Azure Resource Explorer -Another way to explore your Azure resources is through the [Azure Resource -Explorer](https://resources.azure.com) (Preview). This tool uses the context of your subscription, -so you need to authenticate to the website with your Azure credentials. Once authenticated, you can -browse by providers, subscriptions, resource groups, and resources. +Another way to explore your Azure resources is through the [Azure Resource Explorer](https://resources.azure.com) (Preview). This tool uses the context of your subscription, so you need to authenticate to the website with your Azure credentials. Once authenticated, you can browse by providers, subscriptions, resource groups, and resources. -Locate a storage account resource and look at the properties. We see the -**supportsHttpsTrafficOnly** property here as well.
Selecting the **Documentation** tab, we see that -the property description matches what we found in the reference docs earlier. +Locate a storage account resource and look at the properties. We see the `supportsHttpsTrafficOnly` property here as well. Selecting the **Documentation** tab, we see that the property description matches what we found in the reference docs earlier. ## Find the property alias -We've identified the resource property, but we need to map that property to an -[alias](../concepts/definition-structure.md#aliases). +We identified the resource property, but we need to map that property to an [alias](../concepts/definition-structure-alias.md). -There are a few ways to determine the aliases for an Azure resource. We'll look at each for this -tutorial: +There are a few ways to determine the aliases for an Azure resource. We look at each for this tutorial: -- Azure Policy extension for VS Code-- Azure CLI-- Azure PowerShell+- Azure Policy extension for VS Code. +- Azure CLI. +- Azure PowerShell. ### Get aliases in VS Code extension -The Azure Policy extension for VS Code extension makes it easy to browse your resources and -[discover aliases](../how-to/extension-for-vscode.md#discover-aliases-for-resource-properties). +The Azure Policy extension for VS Code makes it easy to browse your resources and [discover aliases](../how-to/extension-for-vscode.md#discover-aliases-for-resource-properties). > [!NOTE] > The VS Code extension only exposes Resource Manager mode properties and doesn't display any-> [Resource Provider mode](../concepts/definition-structure.md#mode) properties. +> [Resource Provider mode](../concepts/definition-structure-basics.md#mode) properties. ### Azure CLI -In Azure CLI, the `az provider` command group is used to search for resource aliases. We'll filter -for the **Microsoft.Storage** namespace based on the details we got about the Azure resource -earlier. +In Azure CLI, the `az provider` command group is used to search for resource aliases. We filter for the `Microsoft.Storage` namespace based on the details we got about the Azure resource earlier. ```azurecli-interactive # Login first with az login if not using Cloud Shell earlier. az provider show --namespace Microsoft.Storage --expand "resourceTypes/aliases" --query "resourceTypes[].aliases[].name" ``` -In the results, we see an alias supported by the storage accounts named -**supportsHttpsTrafficOnly**. This existence of this alias means we can write the policy to enforce -our business requirements! +In the results, we see an alias supported by the storage accounts named `supportsHttpsTrafficOnly`. The existence of this alias means we can write the policy to enforce our business requirements! ### Azure PowerShell -In Azure PowerShell, the `Get-AzPolicyAlias` cmdlet is used to search for resource aliases. We'll -filter for the **Microsoft.Storage** namespace based on the details we got about the Azure resource -earlier. +In Azure PowerShell, the `Get-AzPolicyAlias` cmdlet is used to search for resource aliases. Filter for the `Microsoft.Storage` namespace based on the details we got about the Azure resource earlier. ```azurepowershell-interactive # Login first with Connect-AzAccount if not using Cloud Shell earlier. (Get-AzPolicyAlias -NamespaceMatch 'Microsoft.Storage').Aliases ``` -Like Azure CLI, the results show an alias supported by the storage accounts named -**supportsHttpsTrafficOnly**.
+Like Azure CLI, the results show an alias supported by the storage accounts named `supportsHttpsTrafficOnly`. ## Determine the effect to use -Deciding what to do with your non-compliant resources is nearly as important as deciding what to -evaluate in the first place. Each possible response to a non-compliant resource is called an -[effect](../concepts/effects.md). The effect controls if the non-compliant resource is logged, -blocked, has data appended, or has a deployment associated to it for putting the resource back into -a compliant state. +Deciding what to do with your non-compliant resources is nearly as important as deciding what to evaluate in the first place. Each possible response to a non-compliant resource is called an [effect](../concepts/effect-basics.md). The effect controls if the non-compliant resource is logged, blocked, has data appended, or has a deployment associated to it for putting the resource back into a compliant state. -For our example, Deny is the effect we want as we don't want non-compliant resources created in our -Azure environment. Audit is a good first choice for a policy effect to determine what the impact of -a policy is before setting it to Deny. One way to make changing the effect per assignment easier is -to parameterize the effect. See [parameters](#parameters) below for the details on how. +For our example, `deny` is the effect we want as we don't want non-compliant resources created in our Azure environment. Audit is a good first choice for a policy effect to determine what the effect of a policy is before setting it to `deny`. One way to make changing the effect per assignment easier is to parameterize the effect. See [parameters](#parameters) for the details. ## Compose the definition -We now have the property details and alias for what we plan to manage. Next, we'll compose the -policy rule itself. If you aren't yet familiar with the policy language, reference [policy -definition structure](../concepts/definition-structure.md) for how to structure the policy -definition. Here is an empty template of what a policy definition looks like: +We now have the property details and alias for what we plan to manage. Next, we compose the policy rule itself. If you aren't familiar with the policy language, reference [policy definition structure](../concepts/definition-structure-basics.md) for how to structure the policy definition. Here's an empty template of what a policy definition looks like: ```json {- "properties": { - "displayName": "<displayName>", - "description": "<description>", - "mode": "<mode>", - "parameters": { - <parameters> - }, - "policyRule": { - "if": { - <rule> - }, - "then": { - "effect": "<effect>" - } - } + "properties": { + "displayName": "<displayName>", + "description": "<description>", + "mode": "<mode>", + "parameters": { + <parameters> + }, + "policyRule": { + "if": { + <rule> + }, + "then": { + "effect": "<effect>" + } }+ } } ``` ### Metadata -The first three components are policy metadata. These components are easy to provide values for -since we know what we are creating the rule for. [Mode](../concepts/definition-structure.md#mode) is -primarily about tags and resource location. Since we don't need to limit evaluation to resources -that support tags, we'll use the _all_ value for **mode**. +The first three components are policy metadata. These components are easy to provide values for since we know what we are creating the rule for. 
[Mode](../concepts/definition-structure-basics.md#mode) is primarily about tags and resource location. Since we don't need to limit evaluation to resources that support tags, use the _all_ value for `mode`. ```json "displayName": "Deny storage accounts not using only HTTPS", that support tags, we'll use the _all_ value for **mode**. ### Parameters -While we didn't use a parameter for changing the evaluation, we do want to use a parameter to allow -changing the **effect** for troubleshooting. We'll define an **effectType** parameter and limit it -to only **Deny** and **Disabled**. These two options match our business requirements. The finished -parameters block looks like this example: +While we didn't use a parameter for changing the evaluation, we do want to use a parameter to allow changing the `effect` for troubleshooting. You define an `effectType` parameter and limit it to only `deny` and `disabled`. These two options match our business requirements. The finished parameters block looks like this example: ```json "parameters": {- "effectType": { - "type": "string", - "defaultValue": "Deny", - "allowedValues": [ - "Deny", - "Disabled" - ], - "metadata": { - "displayName": "Effect", - "description": "Enable or disable the execution of the policy" - } + "effectType": { + "type": "string", + "defaultValue": "Deny", + "allowedValues": [ + "Deny", + "Disabled" + ], + "metadata": { + "displayName": "Effect", + "description": "Enable or disable the execution of the policy" }+ } }, ``` ### Policy rule -Composing the [policy rule](../concepts/definition-structure.md#policy-rule) is the final step in -building our custom policy definition. We've identified two statements to test for: +Composing the [policy rule](../concepts/definition-structure-policy-rule.md) is the final step in building our custom policy definition. We identified two statements to test for: -- The storage account **type** is **Microsoft.Storage/storageAccounts**-- The storage account **supportsHttpsTrafficOnly** isn't **true**+- The storage account `type` is `Microsoft.Storage/storageAccounts`. +- The storage account `supportsHttpsTrafficOnly` isn't `true`. -Since we need both of these statements to be true, we'll use the **allOf** [logical -operator](../concepts/definition-structure.md#logical-operators). We'll pass the **effectType** -parameter to the effect instead of making a static declaration. Our finished rule looks like this -example: +Since we need both of these statements to be true, use the `allOf` [logical operator](../concepts/definition-structure-policy-rule.md#logical-operators). Pass the `effectType` parameter to the effect instead of making a static declaration. Our finished rule looks like this example: ```json "if": {- "allOf": [ - { - "field": "type", - "equals": "Microsoft.Storage/storageAccounts" - }, - { - "field": "Microsoft.Storage/storageAccounts/supportsHttpsTrafficOnly", - "notEquals": "true" - } - ] + "allOf": [ + { + "field": "type", + "equals": "Microsoft.Storage/storageAccounts" + }, + { + "field": "Microsoft.Storage/storageAccounts/supportsHttpsTrafficOnly", + "notEquals": "true" + } + ] }, "then": {- "effect": "[parameters('effectType')]" + "effect": "[parameters('effectType')]" } ``` With all three parts of the policy defined, here is our completed definition: ```json {- "properties": { - "displayName": "Deny storage accounts not using only HTTPS", - "description": "Deny storage accounts not using only HTTPS. 
Checks the supportsHttpsTrafficOnly property on StorageAccounts.", - "mode": "all", - "parameters": { - "effectType": { - "type": "string", - "defaultValue": "Deny", - "allowedValues": [ - "Deny", - "Disabled" - ], - "metadata": { - "displayName": "Effect", - "description": "Enable or disable the execution of the policy" - } - } - }, - "policyRule": { - "if": { - "allOf": [ - { - "field": "type", - "equals": "Microsoft.Storage/storageAccounts" - }, - { - "field": "Microsoft.Storage/storageAccounts/supportsHttpsTrafficOnly", - "notEquals": "true" - } - ] - }, - "then": { - "effect": "[parameters('effectType')]" - } + "properties": { + "displayName": "Deny storage accounts not using only HTTPS", + "description": "Deny storage accounts not using only HTTPS. Checks the supportsHttpsTrafficOnly property on StorageAccounts.", + "mode": "all", + "parameters": { + "effectType": { + "type": "string", + "defaultValue": "Deny", + "allowedValues": [ + "Deny", + "Disabled" + ], + "metadata": { + "displayName": "Effect", + "description": "Enable or disable the execution of the policy" }+ } + }, + "policyRule": { + "if": { + "allOf": [ + { + "field": "type", + "equals": "Microsoft.Storage/storageAccounts" + }, + { + "field": "Microsoft.Storage/storageAccounts/supportsHttpsTrafficOnly", + "notEquals": "true" + } + ] + }, + "then": { + "effect": "[parameters('effectType')]" + } }+ } } ``` -The completed definition can be used to create a new policy. Portal and each SDK (Azure CLI, Azure -PowerShell, and REST API) accept the definition in different ways, so review the commands for each -to validate correct usage. Then assign it, using the parameterized effect, to appropriate resources -to manage the security of your storage accounts. +The completed definition can be used to create a new policy. Portal and each SDK (Azure CLI, Azure PowerShell, and REST API) accept the definition in different ways, so review the commands for each to validate correct usage. Then assign it, using the parameterized effect, to appropriate resources to manage the security of your storage accounts. ## Clean up resources -If you're done working with resources from this tutorial, use the following steps to delete any of -the assignments or definitions created above: +If you're done working with resources from this tutorial, use the following steps to delete any of the assignments or definitions you created: -1. Select **Definitions** (or **Assignments** if you're trying to delete an assignment) under - **Authoring** in the left side of the Azure Policy page. +1. Select **Definitions** (or **Assignments** if you're trying to delete an assignment) under **Authoring** in the left side of the Azure Policy page. 1. Search for the new initiative or policy definition (or assignment) you want to remove. -1. Right-click the row or select the ellipses at the end of the definition (or assignment), and - select **Delete definition** (or **Delete assignment**). +1. Right-click the row or select the ellipses at the end of the definition (or assignment), and select **Delete definition** (or **Delete assignment**). ## Review |
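As a rough sketch of the command-line route mentioned above, the completed definition could be created and assigned with Azure CLI along these lines. This isn't part of the original tutorial; the file names, assignment name, and scope are placeholders, and the `--rules` and `--params` files are assumed to contain the `policyRule` and `parameters` blocks shown earlier.

```azurecli-interactive
# Log in first with az login if you're not using Cloud Shell.
# Create the custom definition from the policyRule and parameters blocks,
# saved locally as deny-http-rule.json and deny-http-params.json.
az policy definition create \
  --name 'deny-http-storage' \
  --display-name 'Deny storage accounts not using only HTTPS' \
  --description 'Deny storage accounts not using only HTTPS. Checks the supportsHttpsTrafficOnly property on StorageAccounts.' \
  --mode All \
  --rules deny-http-rule.json \
  --params deny-http-params.json

# Assign the definition and pass the parameterized effect.
az policy assignment create \
  --name 'deny-http-storage-assignment' \
  --policy 'deny-http-storage' \
  --scope "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>" \
  --params '{ "effectType": { "value": "Deny" } }'
```

Switching the `effectType` value to `Disabled` on a given assignment suspends the policy without deleting it, which is the scenario the parameter was added for.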
governance | Govern Tags | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/tutorials/govern-tags.md | Title: "Tutorial: Manage tag governance" -description: In this tutorial, you use the Modify effect of Azure Policy to create and enforce a tag governance model on new and existing resources. Previously updated : 08/17/2021+ Title: "Tutorial: Manage tag governance with Azure Policy" +description: In this tutorial, you use the modify effect of Azure Policy to create and enforce a tag governance model on new and existing resources. Last updated : 09/30/2024 + # Tutorial: Manage tag governance with Azure Policy -[Tags](../../../azure-resource-manager/management/tag-resources.md) are a crucial part of organizing -your Azure resources into a taxonomy. When following -[best practices for tag management](/azure/cloud-adoption-framework/ready/azure-best-practices/naming-and-tagging#naming-and-tagging-resources), -tags can be the basis for applying your business policies with Azure Policy or -[tracking costs with Cost Management](../../../cost-management-billing/costs/cost-mgt-best-practices.md#tag-shared-resources). -No matter how or why you use tags, it's important that you can quickly add, change, and remove those -tags on your Azure resources. To see whether your Azure resource supports tagging, see -[Tag support](../../../azure-resource-manager/management/tag-support.md). +[Tags](../../../azure-resource-manager/management/tag-resources.md) are a crucial part of organizing your Azure resources into a taxonomy. When following [best practices for tag management](/azure/cloud-adoption-framework/ready/azure-best-practices/naming-and-tagging#naming-and-tagging-resources), tags can be the basis for applying your business policies with Azure Policy or [tracking costs with Cost Management](../../../cost-management-billing/costs/cost-mgt-best-practices.md#tag-shared-resources). No matter how or why you use tags, it's important that you can quickly add, change, and remove those tags on your Azure resources. To see whether your Azure resource supports tagging, see [Tag support](../../../azure-resource-manager/management/tag-support.md). -Azure Policy's [Modify](../concepts/effects.md#modify) effect is designed to aid in the governance -of tags no matter what stage of resource governance you are in. **Modify** helps when: +Azure Policy's [modify](../concepts/effect-modify.md) effect is designed to aid in the governance of tags no matter what stage of resource governance you are in. `Modify` helps when: -- You're new to the cloud and have no tag governance-- Already have thousands of resources with no tag governance-- Already have an existing taxonomy that you need changed+- You're new to the cloud and have no tag governance. +- Already have thousands of resources with no tag governance. +- Already have an existing taxonomy that you need changed. -In this tutorial, you'll complete the following tasks: +In this tutorial, you complete the following tasks: > [!div class="checklist"] > - Identify your business requirements In this tutorial, you'll complete the following tasks: ## Prerequisites -To complete this tutorial, you need an Azure subscription. If you don't have one, create a -[free account](https://azure.microsoft.com/free/) before you begin. +To complete this tutorial, you need an Azure subscription. If you don't have one, create a [free account](https://azure.microsoft.com/free/) before you begin. 
## Identify requirements -Like any good implementation of governance controls, the requirements should come from your business -needs and be well understood before creating technical controls. For this scenario tutorial, the -following items are our business requirements: +Like any good implementation of governance controls, the requirements should come from your business needs and be well understood before creating technical controls. For this scenario tutorial, the following items are our business requirements: -- Two required tags on all resources: _CostCenter_ and _Env_-- _CostCenter_ must exist on all containers and individual resources- - Resources inherit from the container they're in, but may be individually overridden -- _Env_ must exist on all containers and individual resources- - Resources determine environment by container naming scheme and may not be overridden - - All resources in a container are part of the same environment +- Two required tags on all resources: _CostCenter_ and `Env`. +- _CostCenter_ must exist on all containers and individual resources. + - Resources inherit from the container they're in, but might be individually overridden. +- `Env` must exist on all containers and individual resources. + - Resources determine environment by container naming scheme and might not be overridden. + - All resources in a container are part of the same environment. ## Configure the CostCenter tag In terms specific to an Azure environment managed by Azure Policy, the _CostCenter_ tag requirements call for the following outcomes: -- Deny resource groups missing the _CostCenter_ tag-- Modify resources to add the _CostCenter_ tag from the parent resource group when missing+- Deny resource groups missing the _CostCenter_ tag. +- Modify resources to add the _CostCenter_ tag from the parent resource group when missing. ### Deny resource groups missing the CostCenter tag -Since the _CostCenter_ for a resource group can't be determined by the name of the resource group, -it must have the tag defined on the request to create the resource group. The following policy rule -with the [Deny](../concepts/effects.md#deny) effect prevents the creation or updating of resource -groups that don't have the _CostCenter_ tag: +Because the _CostCenter_ for a resource group can't be determined by the name of the resource group, it must have the tag defined on the request to create the resource group. The following policy rule with the [deny](../concepts/effect-deny.md) effect prevents the creation or updating of resource groups that don't have the _CostCenter_ tag: ```json "if": {- "allOf": [{ - "field": "type", - "equals": "Microsoft.Resources/subscriptions/resourceGroups" - }, - { - "field": "tags['CostCenter']", - "exists": false - } - ] + "allOf": [ + { + "field": "type", + "equals": "Microsoft.Resources/subscriptions/resourceGroups" + }, + { + "field": "tags['CostCenter']", + "exists": false + } + ] }, "then": {- "effect": "deny" + "effect": "deny" } ``` > [!NOTE]-> As this policy rule targets a resource group, the _mode_ on the policy definition must be 'All' -> instead of 'Indexed'. +> As this policy rule targets a resource group, the `mode` on the policy definition must be `All` +> instead of `Indexed`. ### Modify resources to inherit the CostCenter tag when missing The second _CostCenter_ need is for any resources to inherit the tag from the parent resource group when it's missing. 
If the tag is already defined on the resource, even if different from the parent resource group, it must be left alone. The following policy rule uses-[Modify](../concepts/effects.md#modify): +[modify](../concepts/effect-modify.md): ```json "policyRule": {- "if": { - "field": "tags['CostCenter']", - "exists": "false" - }, - "then": { - "effect": "modify", - "details": { - "roleDefinitionIds": [ - "/providers/microsoft.authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c" - ], - "operations": [{ - "operation": "add", - "field": "tags['CostCenter']", - "value": "[resourcegroup().tags['CostCenter']]" - }] + "if": { + "field": "tags['CostCenter']", + "exists": "false" + }, + "then": { + "effect": "modify", + "details": { + "roleDefinitionIds": [ + "/providers/microsoft.authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c" + ], + "operations": [ + { + "operation": "add", + "field": "tags['CostCenter']", + "value": "[resourcegroup().tags['CostCenter']]" }+ ] }+ } } ``` -This policy rule uses the **add** operation instead of **addOrReplace** as we don't want to alter +This policy rule uses the `add` operation instead of `addOrReplace` as we don't want to alter the tag value if it's present when [remediating](../how-to/remediate-resources.md) existing resources. It also uses the `[resourcegroup()]` template function to get the tag value from the parent resource group. parent resource group. ## Configure the Env tag -In terms specific to an Azure environment managed by Azure Policy, the _Env_ tag requirements call +In terms specific to an Azure environment managed by Azure Policy, the `Env` tag requirements call for the following outcomes: -- Modify the _Env_ tag on the resource group based on the naming scheme of the resource group-- Modify the _Env_ tag on all resources in the resource group to the same as the parent resource+- Modify the `Env` tag on the resource group based on the naming scheme of the resource group +- Modify the `Env` tag on all resources in the resource group to the same as the parent resource group ### Modify resource groups Env tag based on name -A [Modify](../concepts/effects.md#modify) policy is required for each environment that exists in -your Azure environment. The Modify policy for each looks something like this policy definition: +A [modify](../concepts/effect-modify.md) policy is required for each environment that exists in +your Azure environment. 
The `modify` policy for each looks something like this policy definition: ```json "policyRule": {- "if": { - "allOf": [{ - "field": "type", - "equals": "Microsoft.Resources/subscriptions/resourceGroups" - }, - { - "field": "name", - "like": "prd-*" - }, - { - "field": "tags['Env']", - "notEquals": "Production" - } -+ "if": { + "allOf": [ + { + "field": "type", + "equals": "Microsoft.Resources/subscriptions/resourceGroups" + }, + { + "field": "name", + "like": "prd-*" + }, + { + "field": "tags['Env']", + "notEquals": "Production" + } ]- }, - "then": { - "effect": "modify", - "details": { - "roleDefinitionIds": [ - "/providers/microsoft.authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c" - ], - "operations": [{ - "operation": "addOrReplace", - "field": "tags['Env']", - "value": "Production" - }] + }, + "then": { + "effect": "modify", + "details": { + "roleDefinitionIds": [ + "/providers/microsoft.authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c" + ], + "operations": [ + { + "operation": "addOrReplace", + "field": "tags['Env']", + "value": "Production" }+ ] }+ } } ``` > [!NOTE]-> As this policy rule targets a resource group, the _mode_ on the policy definition must be 'All' -> instead of 'Indexed'. +> As this policy rule targets a resource group, the `mode` on the policy definition must be `All` +> instead of `Indexed`. This policy only matches resource groups with the sample naming scheme used for production resources-of `prd-`. More complex naming scheme's can be achieved with several **match** conditions instead of -the single **like** in this example. +of `prd-`. More complex naming scheme's can be achieved with several `match` conditions instead of +the single `like` in this example. ### Modify resources to inherit the Env tag -The business requirement calls for all resources to have the _Env_ tag that their parent resource -group does. This tag can't be overridden, so we'll use the **addOrReplace** operation with the -[Modify](../concepts/effects.md#modify) effect. The sample Modify policy looks like the following +The business requirement calls for all resources to have the `Env` tag that their parent resource +group does. This tag can't be overridden, so use the `addOrReplace` operation with the +[modify](../concepts/effect-modify.md) effect. The sample `modify` policy looks like the following rule: ```json "policyRule": {- "if": { - "anyOf": [{ - "field": "tags['Env']", - "notEquals": "[resourcegroup().tags['Env']]" - }, - { - "field": "tags['Env']", - "exists": false - } + "if": { + "anyOf": [ + { + "field": "tags['Env']", + "notEquals": "[resourcegroup().tags['Env']]" + }, + { + "field": "tags['Env']", + "exists": false + } ]- }, - "then": { - "effect": "modify", - "details": { - "roleDefinitionIds": [ - "/providers/microsoft.authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c" - ], - "operations": [{ - "operation": "addOrReplace", - "field": "tags['Env']", - "value": "[resourcegroup().tags['Env']]" - }] + }, + "then": { + "effect": "modify", + "details": { + "roleDefinitionIds": [ + "/providers/microsoft.authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c" + ], + "operations": [ + { + "operation": "addOrReplace", + "field": "tags['Env']", + "value": "[resourcegroup().tags['Env']]" }+ ] }+ } } ``` > [!NOTE]-> As this policy rule targets resources that support tags, the _mode_ on the policy definition must -> be 'Indexed'. This configuration also ensures this policy skips resource groups. 
+> As this policy rule targets resources that support tags, the `mode` on the policy definition must +> be `Indexed`. This configuration also ensures this policy skips resource groups. This policy rule looks for any resource that doesn't have its parent resource group's value for the-_Env_ tag or is missing the _Env_ tag. Matching resources have their _Env_ tag set to the parent +`Env` tag or is missing the `Env` tag. Matching resources have their `Env` tag set to the parent resource group's value, even if the tag already existed on the resource but with a different value. ## Assign the initiative and remediate resources -Once the tag policies above are created, join them into a single initiative for tag governance and +After the tag policies are created, join them into a single initiative for tag governance and assign them to a management group or subscription. The initiative and included policies then evaluate compliance of existing resources and alter requests for new or updated resources that-match the **if** property in the policy rule. However, the policy doesn't automatically update +match the `if` property in the policy rule. However, the policy doesn't automatically update existing non-compliant resources with the defined tag changes. -Like [deployIfNotExists](../concepts/effects.md#deployifnotexists) policies, the **Modify** policy +Like [deployIfNotExists](../concepts/effect-deploy-if-not-exists.md) policies, the `modify` policy uses remediation tasks to alter existing non-compliant resources. Follow the directions on [How-to remediate resources](../how-to/remediate-resources.md) to identify your non-compliant-**Modify** resources and correct the tags to your defined taxonomy. +`modify` resources and correct the tags to your defined taxonomy. ## Clean up resources If you're done working with resources from this tutorial, use the following steps to delete any of-the assignments or definitions created above: +the assignments or definitions you created: 1. Select **Definitions** (or **Assignments** if you're trying to delete an assignment) under **Authoring** in the left side of the Azure Policy page. 1. Search for the new initiative or policy definition (or assignment) you want to remove. -1. Right-click the row or select the ellipses at the end of the definition (or assignment), and - select **Delete definition** (or **Delete assignment**). +1. Right-click the row or select the ellipses at the end of the definition or assignment, and + select **Delete definition** or **Delete assignment**. ## Review In this tutorial, you learned about the following tasks: To learn more about the structures of policy definitions, look at this article: > [!div class="nextstepaction"]-> [Azure Policy definition structure](../concepts/definition-structure.md) +> [Azure Policy definition structure](../concepts/definition-structure-basics.md) |
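For an idea of how the remediation step looks from the command line, here's a hedged Azure CLI sketch. It isn't part of the original tutorial; the assignment name, definition reference, scope, and location are placeholders, and the identity flags can differ slightly between CLI versions (check `az policy assignment create --help`).

```azurecli-interactive
# Log in first with az login if you're not using Cloud Shell.
# A modify policy needs a managed identity on the assignment so remediation
# tasks can add or replace tags on your behalf.
az policy assignment create \
  --name 'inherit-costcenter-tag' \
  --policy '<tag-policy-definition-name-or-id>' \
  --scope "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>" \
  --mi-system-assigned \
  --location westus \
  --role Contributor \
  --identity-scope "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>"

# Create a remediation task for resources that existed before the assignment.
az policy remediation create \
  --name 'remediate-costcenter-tag' \
  --policy-assignment 'inherit-costcenter-tag' \
  --resource-group <resourceGroup>
```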
governance | Route State Change Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/tutorials/route-state-change-events.md | Title: "Tutorial: Route policy state change events to Event Grid with Azure CLI" description: In this tutorial, you configure Event Grid to listen for policy state change events and call a webhook. Previously updated : 07/19/2022 Last updated : 09/30/2024 + # Tutorial: Route policy state change events to Event Grid with Azure CLI -In this article, you learn how to set up Azure Policy event subscriptions to send policy state -change events to a web endpoint. Azure Policy users can subscribe to events emitted when policy -state changes occur on resources. These events can trigger web hooks, -[Azure Functions](../../../azure-functions/index.yml), -[Azure Storage Queues](../../../storage/queues/index.yml), or any other event handler that is -supported by [Azure Event Grid](../../../event-grid/index.yml). Typically, you send events to an -endpoint that processes the event data and takes actions. However, to simplify this tutorial, you -send the events to a web app that collects and displays the messages. +In this article, you learn how to set up Azure Policy event subscriptions to send policy state change events to a web endpoint. Azure Policy users can subscribe to events emitted when policy state changes occur on resources. These events can trigger web hooks, [Azure Functions](../../../azure-functions/index.yml), [Azure Storage Queues](../../../storage/queues/index.yml), or any other event handler supported by [Azure Event Grid](../../../event-grid/index.yml). Typically, you send events to an endpoint that processes the event data and takes actions. To simplify this tutorial, you send the events to a web app that collects and displays the messages. ## Prerequisites -- If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/)- account before you begin. --- This quickstart requires that you run Azure CLI version 2.0.76 or later. To find the version, run- `az --version`. If you need to install or upgrade, see - [Install Azure CLI](/cli/azure/install-azure-cli). -+- If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin. +- This quickstart requires that you run Azure CLI version 2.0.76 or later. To find the version, run `az --version`. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). ## Create a resource group -Event Grid topics are Azure resources, and must be placed in an Azure resource group. The resource -group is a logical collection into which Azure resources are deployed and managed. +Event Grid topics are Azure resources, and must be placed in an Azure resource group. The resource group is a logical collection into which Azure resources are deployed and managed. Create a resource group with the [az group create](/cli/azure/group) command. -The following example creates a resource group named `<resource_group_name>` in the _westus_ -location. Replace `<resource_group_name>` with a unique name for your resource group. +The following example creates a resource group named `<resource_group_name>` in the _westus_ location. Replace `<resource_group_name>` with a unique name for your resource group. 
```azurecli-interactive # Log in first with az login if you're not using Cloud Shell az group create --name <resource_group_name> --location westus ## Create an Event Grid system topic -Now that we have a resource group, we create a -[system topic](../../../event-grid/system-topics.md). A system topic in Event Grid represents one or -more events published by Azure services such as Azure Policy and Azure Event Hubs. This system topic -uses the `Microsoft.PolicyInsights.PolicyStates` topic type for Azure Policy state changes. +Now that we have a resource group, we create a [system topic](../../../event-grid/system-topics.md). A system topic in Event Grid represents one or more events published by Azure services such as Azure Policy and Azure Event Hubs. This system topic uses the `Microsoft.PolicyInsights.PolicyStates` topic type for Azure Policy state changes. -First, you'll need to register the `PolicyInsights` and `EventGrid` resource providers (RPs) at the appropriate management scope. Whereas the Azure portal auto-registers any RPs you invoke for the first time, Azure CLI does not. +First, you need to register the `PolicyInsights` and `EventGrid` resource providers (RPs) at the appropriate management scope. Azure portal autoregisters any RPs you invoke for the first time, but Azure CLI doesn't. ```azurecli-interactive # Log in first with az login if you're not using Cloud Shell az provider register --namespace Microsoft.PolicyInsights az provider register --namespace Microsoft.EventGrid ``` -Next, replace `<subscriptionId>` in the **scope** parameter with the ID of your subscription and -`<resource_group_name>` in **resource-group** parameter with the previously created resource group. +Next, replace `<subscriptionId>` in the `scope` parameter with the ID of your subscription and `<resource_group_name>` in `resource-group` parameter with the previously created resource group. ```azurecli-interactive az eventgrid system-topic create --name PolicyStateChanges --location global --topic-type Microsoft.PolicyInsights.PolicyStates --source "/subscriptions/<subscriptionId>" --resource-group "<resource_group_name>" ``` -If your Event Grid system topic will be applied to the management group scope, then the Azure CLI `--source` parameter syntax is a bit different. Here's an example: +If your Event Grid system topic is applied to the management group scope, then the Azure CLI `--source` parameter syntax is a bit different. Here's an example: ```azurecli-interactive az eventgrid system-topic create --name PolicyStateChanges --location global --topic-type Microsoft.PolicyInsights.PolicyStates --source "/tenants/<tenantID>/providers/Microsoft.Management/managementGroups/<management_group_name>" --resource-group "<resource_group_name>" az eventgrid system-topic create --name PolicyStateChanges --location global --t ## Create a message endpoint -Before subscribing to the topic, let's create the endpoint for the event message. Typically, the -endpoint takes actions based on the event data. To simplify this quickstart, you deploy a -[pre-built web app](https://github.com/Azure-Samples/azure-event-grid-viewer) that displays the -event messages. The deployed solution includes an App Service plan, an App Service web app, and -source code from GitHub. +Before subscribing to the topic, let's create the endpoint for the event message. Typically, the endpoint takes actions based on the event data. 
To simplify this quickstart, you deploy a [prebuilt web app](https://github.com/Azure-Samples/azure-event-grid-viewer) that displays the event messages. The deployed solution includes an App Service plan, an App Service web app, and source code from GitHub. -Replace `<your-site-name>` with a unique name for your web app. The web app name must be unique -because it's part of the DNS entry. +Replace `<your-site-name>` with a unique name for your web app. The web app name must be unique because it's part of the Domain Name System (DNS) entry. ```azurecli-interactive # Log in first with az login if you're not using Cloud Shell az deployment group create \ --parameters siteName=<your-site-name> hostingPlanName=viewerhost ``` -The deployment may take a few minutes to complete. After the deployment has succeeded, view your web -app to make sure it's running. In a web browser, navigate to: -`https://<your-site-name>.azurewebsites.net` +The deployment might take a few minutes to complete. After a successful deployment, view your web app to make sure it's running. In a web browser, navigate to: `https://<your-site-name>.azurewebsites.net` You should see the site with no messages currently displayed. ## Subscribe to the system topic -You subscribe to a topic to tell Event Grid which events you want to track and where to send those -events. The following example subscribes to the system topic you created, and passes the URL from -your web app as the endpoint to receive event notifications. Replace `<event_subscription_name>` -with a name for your event subscription. For `<resource_group_name>` and `<your-site-name>`, use the -values you created earlier. +You subscribe to a topic to tell Event Grid which events you want to track and where to send those events. The following example subscribes to the system topic you created, and passes the URL from your web app as the endpoint to receive event notifications. Replace `<event_subscription_name>` with a name for your event subscription. For `<resource_group_name>` and `<your-site-name>`, use the values you created earlier. The endpoint for your web app must include the suffix `/api/updates/`. az eventgrid system-topic event-subscription create \ --endpoint https://<your-site-name>.azurewebsites.net/api/updates ``` -View your web app again, and notice that a subscription validation event has been sent to it. Select -the eye icon to expand the event data. Event Grid sends the validation event so the endpoint can -verify that it wants to receive event data. The web app includes code to validate the subscription. +View your web app again, and notice that a subscription validation event was sent to it. Select the eye icon to expand the event data. Event Grid sends the validation event so the endpoint can verify that it wants to receive event data. The web app includes code to validate the subscription. ## Create a policy assignment -In this quickstart, you create a policy assignment and assign the **Require a tag on resource -groups** definition. This policy definition identifies resource groups that are missing the tag -configured during policy assignment. +In this quickstart, you create a policy assignment and assign the **Require a tag on resource groups** definition. This policy definition identifies resource groups that are missing the tag configured during policy assignment. 
-Run the following command to create a policy assignment scoped to the resource group you created to -hold the Event Grid topic: +Run the following command to create a policy assignment scoped to the resource group you created to hold the Event Grid topic: ```azurecli-interactive # Log in first with az login if you're not using Cloud Shell az policy assignment create --name 'requiredtags-events' --display-name 'Require The preceding command uses the following information: - **Name** - The actual name of the assignment. For this example, _requiredtags-events_ was used.- **DisplayName** - Display name for the policy assignment. In this case, you're using _Require tag- on RG_. -- **Scope** - A scope determines what resources or grouping of resources the policy assignment gets- enforced on. It could range from a subscription to resource groups. Be sure to replace - <scope> with the name of your resource group. The format for a resource group scope is +- **DisplayName** - Display name for the policy assignment. In this case, you're using _Require tag on RG_. +- **Scope** - A scope determines what resources or grouping of resources the policy assignment gets enforced on. It could range from a subscription to resource groups. Be sure to replace `<scope>` with the name of your resource group. The format for a resource group scope is `/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>`.-- **Policy** - The policy definition ID, based on which you're using to create the assignment. In- this case, it's the ID of policy definition _Require a tag on resource groups_. To get the policy - definition ID, run this command: - `az policy definition list --query "[?displayName=='Require a tag on resource groups']"` +- **Policy** - The ID of the policy definition that you're using to create the assignment. In this case, it's the ID of the policy definition _Require a tag on resource groups_. To get the policy definition ID, run this command: `az policy definition list --query "[?displayName=='Require a tag on resource groups']"` -After creating the policy assignment, wait for a **Microsoft.PolicyInsights.PolicyStateCreated** -event notification to appear in the web app. The resource group we created show a -`data.complianceState` value of _NonCompliant_ to start. +After creating the policy assignment, wait for a `Microsoft.PolicyInsights.PolicyStateCreated` event notification to appear in the web app. The resource group we created shows a `data.complianceState` value of _NonCompliant_ to start. > [!NOTE] > If the resource group inherits other policy assignments from the subscription or management group event notification to appear in the web app. ## Trigger a change on the resource group -To make the resource group compliant, a tag with the name **EventTest** is required. Add the tag to -the resource group with the following command replacing `<subscriptionId>` with your subscription ID -and `<resourceGroup>` with the name of the resource group: +To make the resource group compliant, a tag with the name _EventTest_ is required.
Add the tag to the resource group with the following command replacing `<subscriptionId>` with your subscription ID and `<resourceGroup>` with the name of the resource group: ```azurecli-interactive # Log in first with az login if you're not using Cloud Shell and `<resourceGroup>` with the name of the resource group: az tag create --resource-id '/subscriptions/<SubscriptionID>/resourceGroups/<resourceGroup>' --tags EventTest=true ``` -After adding the required tag to the resource group, wait for a -**Microsoft.PolicyInsights.PolicyStateChanged** event notification to appear in the web app. Expand -the event and the `data.complianceState` value now shows _Compliant_. +After adding the required tag to the resource group, wait for a `Microsoft.PolicyInsights.PolicyStateChanged` event notification to appear in the web app. Expand the event and the `data.complianceState` value now shows _Compliant_. ## Troubleshooting -If you see an error similar to one of the following, please make sure that you've registered both resource providers at the scope to which you're subscribing (management group or subscription): +If you see an error similar to one of the following, make sure that you registered both resource providers at the scope to which you're subscribing (management group or subscription): - `Deployment has failed with the following error: {"code":"Publisher Notification Error","message":"Failed to enable publisher notifications.","details":[{"code":"Publisher Provider Error","message":"GET request for <uri> failed with status code: Forbidden, code: AuthorizationFailed and message: The client '<identifier>' with object id '<identifier>' does not have authorization to perform action 'microsoft.policyinsights/eventGridFilters/read' over scope '<scope>/providers/microsoft.policyinsights/eventGridFilters/_default' or the scope is invalid. If access was recently granted, please refresh your credentials.."}]}` - `Deployment has failed with the following error: {'code':'Publisher Notification Error','message':'Failed to enable publisher notifications.','details':[{'code':'ApiVersionNotSupported','message':'Event Grid notifications are currently not supported by microsoft.policyinsights in global. Try re-registering Microsoft.EventGrid provider if this is your first event subscription in this region.'}]}` ## Clean up resources -If you plan to continue working with this web app and Azure Policy event subscription, don't clean -up the resources created in this article. If you don't plan to continue, use the following command -to delete the resources you created in this article. +If you plan to continue working with this web app and Azure Policy event subscription, don't clean up the resources created in this article. If you don't plan to continue, use the following command to delete the resources you created in this article. -Replace `<resource_group_name>` with the resource group you created above. +Replace `<resource_group_name>` with the resource group you created. 
```azurecli-interactive az group delete --name <resource_group_name> az group delete --name <resource_group_name> ## Next steps -Now that you know how to create topics and event subscriptions for Azure Policy, learn more about -policy state change events and what Event Grid can help you do: +Now that you know how to create topics and event subscriptions for Azure Policy, learn more about policy state change events and Event Grid: - [Reacting to Azure Policy state change events](../concepts/event-overview.md) - [Azure Policy schema details for Event Grid](../../../event-grid/event-schema-policy.md) |
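If you want to cross-check the compliance state shown in the Event Grid viewer, one option outside this tutorial is to query Azure Policy compliance records directly. A minimal sketch, reusing the assignment and resource group names from the steps above:

```azurecli-interactive
# Log in first with az login if you're not using Cloud Shell.
# List recent compliance records for the resource group, limited to the
# assignment created in this tutorial, and show the fields that also appear
# in the Event Grid event payload.
az policy state list \
  --resource-group <resource_group_name> \
  --filter "policyAssignmentName eq 'requiredtags-events'" \
  --query "[].{resource:resourceId, state:complianceState, timestamp:timestamp}" \
  --output table
```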
hdinsight | Hdinsight Restrict Outbound Traffic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-restrict-outbound-traffic.md | Title: Configure outbound network traffic restriction - Azure HDInsight description: Learn how to configure outbound network traffic restriction for Azure HDInsight clusters. Previously updated : 05/23/2024 Last updated : 10/01/2024 # Configure outbound network traffic for Azure HDInsight clusters using Firewall Create an application rule collection that allows the cluster to send and receiv 1. Select the new firewall **Test-FW01** from the Azure portal. -1. Navigate to **Settings** > **Rules** > **Application rule collection** > **+ Add application rule collection**. +1. Navigate to **Settings** > **Rules** > **Application rule collection** > **+ `Add application rule collection`**. :::image type="content" source="./media/hdinsight-restrict-outbound-traffic/hdinsight-restrict-outbound-traffic-add-app-rule-collection.png" alt-text="Title: Add application rule collection."::: -1. On the **Add application rule collection** screen, provide the following information: +1. On the **`Add application rule collection`** screen, provide the following information: **Top section** Create an application rule collection that allows the cluster to send and receiv | | | | | | | Rule_2 | * | https:443 | login.windows.net | Allows Windows login activity | | Rule_3 | * | https:443 | login.microsoftonline.com | Allows Windows login activity |- | Rule_4 | * | https:443 | storage_account_name.blob.core.windows.net | Replace `storage_account_name` with your actual storage account name. Make sure ["secure transfer required"](../storage/common/storage-require-secure-transfer.md) is enabled on the storage account. If you are using Private endpoint to access storage accounts, this step is not needed and storage traffic is not forwarded to the firewall.| + | Rule_4 | * | https:443 | storage_account_name.blob.core.windows.net | Replace `storage_account_name` with your actual storage account name. Make sure ["secure transfer required"](../storage/common/storage-require-secure-transfer.md) is enabled on the storage account. If you're using Private endpoint to access storage accounts, this step isn't needed and storage traffic isn't forwarded to the firewall.| | Rule_5 | * | http:80 | azure.archive.ubuntu.com | Allows Ubuntu security updates to be installed on the cluster |+ | Rule_6 | * | https:433 | pypi.org, pypi.python.org, files.pythonhosted.org | Allows Python package installations for Azure monitoring | + :::image type="content" source="./media/hdinsight-restrict-outbound-traffic/hdinsight-restrict-outbound-traffic-add-app-rule-collection-details.png" alt-text="Title: Enter application rule collection details."::: Create the network rules to correctly configure your HDInsight cluster. | Name | Protocol | Source Addresses | Service Tags | Destination Ports | Notes | | | | | | | |- | Rule_6 | TCP | * | SQL | 1433, 11000-11999 | If you are using the default sql servers provided by HDInsight, configure a network rule in the Service Tags section for SQL that will allow you to log and audit SQL traffic. Unless you configured Service Endpoints for SQL Server on the HDInsight subnet, which will bypass the firewall. If you are using custom SQL server for Ambari, Oozie, Ranger and Hive metastore then you only need to allow the traffic to your own custom SQL Servers. 
Refer to [Azure SQL Database and Azure Synapse Analytics connectivity architecture](/azure/azure-sql/database/connectivity-architecture) to see why 11000-11999 port range is also needed in addition to 1433. | + | Rule_6 | TCP | * | SQL | 1433, 11000-11999 | If you're using the default sql servers provided by HDInsight, configure a network rule in the Service Tags section for SQL that will allow you to log and audit SQL traffic. Unless you configured Service Endpoints for SQL Server on the HDInsight subnet, which will bypass the firewall. If you're using custom SQL server for Ambari, Oozie, Ranger and Hive metastore then you only need to allow the traffic to your own custom SQL Servers. Refer to [Azure SQL Database and Azure Synapse Analytics connectivity architecture](/azure/azure-sql/database/connectivity-architecture) to see why 11000-11999 port range is also needed in addition to 1433. | | Rule_7 | TCP | * | Azure Monitor | * | (optional) Customers who plan to use auto scale feature should add this rule. | :::image type="content" source="./media/hdinsight-restrict-outbound-traffic/hdinsight-restrict-outbound-traffic-add-network-rule-collection.png" alt-text="Title: Enter application rule collection."::: Create the network rules to correctly configure your HDInsight cluster. Create a route table with the following entries: -* All IP addresses from [Health and management services](../hdinsight/hdinsight-management-ip-addresses.md#health-and-management-services-all-regions) with a next hop type of **Internet**. It should include 4 IPs of the generic regions as well as 2 IPs for your specific region. This rule is only needed if the ResourceProviderConnection is set to *Inbound*. If the ResourceProviderConnection is set to *Outbound* then these IPs are not needed in the UDR. +* All IP addresses from [Health and management services](../hdinsight/hdinsight-management-ip-addresses.md#health-and-management-services-all-regions) with a next hop type of **Internet**. It should include 4 IPs of the generic regions as well as 2 IPs for your specific region. This rule is only needed if the ResourceProviderConnection is set to *Inbound*. If the ResourceProviderConnection is set to *Outbound* then these IPs aren't needed in the UDR. * One Virtual Appliance route for IP address 0.0.0.0/0 with the next hop being your Azure Firewall private IP address. Once you've completed the logging setup, if you're using Log Analytics, you can AzureDiagnostics | where msg_s contains "Deny" | where TimeGenerated >= ago(1h) ``` -Integrating Azure Firewall with Azure Monitor logs is useful when first getting an application working. Especially when you aren't aware of all of the application dependencies. You can learn more about Azure Monitor logs from [Analyze log data in Azure Monitor](/azure/azure-monitor/logs/log-query-overview) +Integrating Azure Firewall with Azure Monitor logs is useful when first getting an application working, especially when you aren't aware of all of the application dependencies. You can learn more about Azure Monitor logs from [Analyze log data in Azure Monitor](/azure/azure-monitor/logs/log-query-overview) To learn about the scale limits of Azure Firewall and request increases, see [this](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-firewall-limits) document or refer to the [FAQs](../firewall/firewall-faq.yml). |
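To complement the portal steps, the route table described above could be created with Azure CLI roughly as follows. This is a sketch under assumed names: the resource group, virtual network, subnet, and the firewall private IP (`10.0.0.4`) are placeholders for your own values, and the health and management service routes with a next hop of **Internet** aren't shown.

```azurecli-interactive
# Create the route table for the HDInsight subnet.
az network route-table create \
  --resource-group Test-FW-RG \
  --name Test-FW-RT

# Send all other outbound traffic to the Azure Firewall private IP.
az network route-table route create \
  --resource-group Test-FW-RG \
  --route-table-name Test-FW-RT \
  --name DefaultViaFirewall \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.0.4

# Associate the route table with the subnet that hosts the HDInsight cluster.
az network vnet subnet update \
  --resource-group Test-FW-RG \
  --vnet-name Test-FW-VN \
  --name HDInsight-Subnet \
  --route-table Test-FW-RT
```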
healthcare-apis | Fhir App Registration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-app-registration.md | Last updated 09/27/2023 # Register the Microsoft Entra apps for Azure API for FHIR -You have several configuration options to choose from when you're setting up the Azure API for FHIR or the FHIR Server for Azure (OSS). For open source, you'll need to create your own resource application registration. For Azure API for FHIR, this resource application is created automatically. +There are several configuration options to choose from when you're setting up the Azure API for FHIR® or the FHIR Server for Azure (OSS). For open source, you need to create your own resource application registration. For Azure API for FHIR, this resource application is created automatically. ## Application registrations -In order for an application to interact with Microsoft Entra ID, it needs to be registered. In the context of the FHIR server, there are two kinds of application registrations to discuss: +In order for an application to interact with Microsoft Entra ID, it needs to be registered. In the context of the FHIR server, there are two kinds of application registrations: 1. Resource application registrations. 1. Client application registrations. -**Resource applications** are representations in Microsoft Entra ID of an API or resource that is secured with Microsoft Entra ID, specifically it would be the Azure API for FHIR. A resource application for Azure API for FHIR will be created automatically when you provision the service, but if you're using the open-source server, you'll need to [register a resource application](register-resource-azure-ad-client-app.md) in Microsoft Entra ID. This resource application will have an identifier URI. It's recommended that this URI be the same as the URI of the FHIR server. This URI should be used as the `Audience` for the FHIR server. A client application can request access to this FHIR server when it requests a token. +**Resource applications** are representations in Microsoft Entra ID of an API or resource that is secured with Microsoft Entra ID. Here we discuss the Azure API for FHIR. A resource application for Azure API for FHIR is created automatically when you provision the service. If you're using the open-source server, you need to [register a resource application](register-resource-azure-ad-client-app.md) in Microsoft Entra ID. This resource application has an identifier URI. It's recommended that this URI be the same as the URI of the FHIR server. This URI should be used as the `Audience` for the FHIR server. A client application can request access to this FHIR server when it requests a token. -*Client applications* are registrations of the clients that will be requesting tokens. Often in OAuth 2.0, we distinguish between at least three different types of applications: +**Client applications** are registrations of the clients that will be requesting tokens. In OAuth 2.0, we distinguish between at least three different types of applications: -1. **Confidential clients**, also known as web apps in Microsoft Entra ID. Confidential clients are applications that use [authorization code flow](../../active-directory/develop/v2-oauth2-auth-code-flow.md) to obtain a token on behalf of a signed in user presenting valid credentials. 
They're called confidential clients because they're able to hold a secret and will present this secret to Microsoft Entra ID when exchanging the authentication code for a token. Since confidential clients are able to authenticate themselves using the client secret, they're trusted more than public clients and can have longer lived tokens and be granted a refresh token. Read the details on how to [register a confidential client](register-confidential-azure-ad-client-app.md). Note it's important to register the reply URL at which the client will be receiving the authorization code. -1. **Public clients**. These are clients that can't keep a secret. Typically this would be a mobile device application or a single page JavaScript application, where a secret in the client could be discovered by a user. Public clients also use authorization code flow, but they aren't allowed to present a secret when obtaining a token and they may have shorter lived tokens and no refresh token. Read the details on how to [register a public client](register-public-azure-ad-client-app.md). -1. Service clients. These clients obtain tokens on behalf of themselves (not on behalf of a user) using the [client credentials flow](../../active-directory/develop/v2-oauth2-client-creds-grant-flow.md). They typically represent applications that access the FHIR server in a non-interactive way. An example would be an ingestion process. When using a service client, it isn't necessary to start the process of getting a token with a call to the `/authorize` endpoint. A service client can go straight to the `/token` endpoint and present client ID and client secret to obtain a token. Read the details on how to [register a service client](register-service-azure-ad-client-app.md) +1. **Confidential clients**, also known as web apps in Microsoft Entra ID. Confidential clients are applications that use [authorization code flow](../../active-directory/develop/v2-oauth2-auth-code-flow.md) to obtain a token on behalf of a signed in user presenting valid credentials. They're called confidential clients because they're able to hold a secret and will present this secret to Microsoft Entra ID when exchanging the authentication code for a token. Since confidential clients are able to authenticate themselves using the client secret, they're trusted more than public clients, can have longer lived tokens, and be granted a refresh token. Read the details on how to [register a confidential client](register-confidential-azure-ad-client-app.md). Note: It's important to register the reply URL at which the client will be receiving the authorization code. +1. **Public clients**. These are clients that can't keep a secret. Typically this would be a mobile device application or a single page JavaScript application, where a secret in the client could be discovered by a user. Public clients also use authorization code flow. However, they aren't allowed to present a secret when obtaining a token, and may have shorter lived tokens and no refresh token. Read the details on how to [register a public client](register-public-azure-ad-client-app.md). +1. **Service clients**. These clients obtain tokens on behalf of themselves (not on behalf of a user) using the [client credentials flow](../../active-directory/develop/v2-oauth2-client-creds-grant-flow.md). They typically represent applications that access the FHIR server in a non-interactive way. An example would be an ingestion process.
When using a service client, it isn't necessary to start the process of getting a token with a call to the `/authorize` endpoint. A service client can go straight to the `/token` endpoint and present the client ID and client secret to obtain a token. Read the details on how to [register a service client](register-service-azure-ad-client-app.md) ## Next steps -In this overview, you've gone through the types of application registrations you may need in order to work with a FHIR API. +In this overview, you reviewed the types of application registrations you may need in order to work with a FHIR API. Based on your setup, refer to the how-to-guides to register your applications: After you've registered your applications, you can deploy Azure API for FHIR. >[!div class="nextstepaction"] >[Deploy Azure API for FHIR](fhir-paas-portal-quickstart.md) -FHIR® is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7. |
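The service-client description above maps directly onto the OAuth 2.0 client credentials flow. The following sketch (not part of the original article) shows one way to exercise it with plain HTTP calls: it requests a token from the `/token` endpoint and presents it to the FHIR server as a bearer credential. The tenant ID, client ID, secret, patient ID, and FHIR URL are placeholders, and the `.default` scope shown assumes the resource application's identifier URI matches the FHIR server URL; adjust it to match your own registration.

```python
import requests

# Placeholder values; substitute your tenant, client registration, and FHIR endpoint.
TENANT_ID = "<tenant-id>"
CLIENT_ID = "<service-client-app-id>"
CLIENT_SECRET = "<service-client-secret>"
FHIR_URL = "https://<your-fhir-server>.azurehealthcareapis.com"

def get_token() -> str:
    """Obtain a token for the FHIR resource using the client credentials flow."""
    token_url = f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token"
    resp = requests.post(
        token_url,
        data={
            "grant_type": "client_credentials",
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            # The scope should match the resource application's identifier URI.
            "scope": f"{FHIR_URL}/.default",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def read_patient(patient_id: str) -> dict:
    """Read a Patient resource, presenting the token as a bearer credential."""
    headers = {"Authorization": f"Bearer {get_token()}"}
    resp = requests.get(f"{FHIR_URL}/Patient/{patient_id}", headers=headers, timeout=30)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(read_patient("<patient-id>")["id"])
```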
healthcare-apis | Fhir Features Supported | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-features-supported.md | -Azure API for FHIR provides a fully managed deployment of the Microsoft FHIR Server for Azure. The server is an implementation of the [FHIR](https://hl7.org/fhir) standard. This document lists the main features of the FHIR Server. +Azure API for FHIR® provides a fully managed deployment of the Microsoft FHIR Server for Azure. The server is an implementation of the [FHIR](https://hl7.org/fhir) standard. This document lists the main features of the FHIR Server. ## FHIR version Previous versions also currently supported include: `3.0.2` ## REST API -Below is a summary of the supported RESTful capabilities. For more information on the implementation of these capabilities, see [FHIR REST API capabilities](fhir-rest-api-capabilities.md). +Following is a summary of the supported RESTful capabilities. For more information on the implementation of these capabilities, see [FHIR REST API capabilities](fhir-rest-api-capabilities.md). | API | Azure API for FHIR | FHIR service in Azure Health Data Services | Comment | |--|--||| | read | Yes | Yes | | | vread | Yes | Yes | | | update | Yes | Yes | | -| update with optimistic locking | Yes | Yes | -| update (conditional) | Yes | Yes | +| update with optimistic locking | Yes | Yes | | +| update (conditional) | Yes | Yes | | | patch | Yes | Yes | Support for [JSON Patch and FHIRPath Patch](../fhir/rest-api-capabilities.md#patch-and-conditional-patch) only. | | patch (conditional) | Yes | Yes | Support for [JSON Patch and FHIRPath Patch](../fhir/rest-api-capabilities.md#patch-and-conditional-patch) only. |-| history | Yes | Yes | +| history | Yes | Yes | | | create | Yes | Yes | Support both POST/PUT | | create (conditional) | Yes | Yes | Issue [#1382](https://github.com/microsoft/fhir-server/issues/1382) | | search | Partial | Partial | See [Overview of FHIR Search](overview-of-search.md). |-| chained search | Yes | Yes | See Note below. | -| reverse chained search | Yes | Yes | See Note below. | -| batch | Yes | Yes | -| transaction | No | Yes | +| chained search | Yes | Yes | See following Note. | +| reverse chained search | Yes | Yes | See following Note. | +| batch | Yes | Yes | | +| transaction | No | Yes | | | paging | Partial | Partial | `self` and `next` are supported |-| intermediaries | No | No | +| intermediaries | No | No | | > [!Note] -> In the Azure API for FHIR and the open-source FHIR server backed by Azure Cosmos DB, the chained search and reverse chained search is an MVP implementation. To accomplish chained search on Azure Cosmos DB, the implementation walks down the search expression and issues sub-queries to resolve the matched resources. This is done for each level of the expression. If any query returns more than 1000 results, an error will be thrown. +> In the Azure API for FHIR and the open-source FHIR server backed by Azure Cosmos DB, the chained search and reverse chained search is an MVP implementation. To accomplish chained search on Azure Cosmos DB, the implementation walks down the search expression and issues sub-queries to resolve the matched resources. This is done for each level of the expression. If any query returns more than 1000 results, an error is thrown. 
## Extended Operations Azure Cosmos DB is a globally distributed multi-model (NoSQL, MongoDB, and other ## Role-based access control -The FHIR Server uses [Microsoft Entra ID](https://azure.microsoft.com/services/active-directory/) for access control. Specifically, role-based access control (RBAC) is enforced, if the `FhirServer:Security:Enabled` configuration parameter is set to `true`, and all requests (except `/metadata`) to the FHIR Server must have `Authorization` request header set to `Bearer <TOKEN>`. The token must contain one or more roles as defined in the `roles` claim. A request will be allowed if the token contains a role that allows the specified action on the specified resource. +The FHIR Server uses [Microsoft Entra ID](https://azure.microsoft.com/services/active-directory/) for access control. Specifically, role-based access control (RBAC) is enforced if the `FhirServer:Security:Enabled` configuration parameter is set to `true`, and all requests (except `/metadata`) to the FHIR Server must have `Authorization` request header set to `Bearer <TOKEN>`. The token must contain one or more roles as defined in the `roles` claim. A request is allowed if the token contains a role that allows the specified action on the specified resource. Currently, the allowed actions for a given role are applied *globally* on the API. ## Service limits -* [**Request Units (RUs)**](/azure/cosmos-db/concepts-limits) - You can configure up to 100,000 RUs in the portal for Azure API for FHIR. You'll need a minimum of 400 RUs or 40 RUs/GB, whichever is larger. If you need more than 100,000 RUs, you can put in a support ticket to have the RUs increased. The maximum available is 1,000,000. In addition, we support [autoscaling of RUs](autoscale-azure-api-fhir.md). +* [**Request Units (RUs)**](/azure/cosmos-db/concepts-limits) - You can configure up to 100,000 RUs in the portal for Azure API for FHIR. You need a minimum of 400 RUs or 40 RUs/GB, whichever is larger. If you need more than 100,000 RUs, you can put in a support ticket to have the RUs increased. The maximum available is 1,000,000. In addition, we support [autoscaling of RUs](autoscale-azure-api-fhir.md). * **Bundle size** - Each bundle is limited to 500 items. -* **Data size** - Data/Documents must each be slightly less than 2 MB. +* **Data size** - Data/Documents must each be less than 2 MB. * **Subscription limit** - By default, each subscription is limited to a maximum of 10 FHIR server instances. If you need more instances per subscription, open a support ticket and provide details about your needs. -* **Resource size** - Individual resource size including history should not exceed 20GB. +* **Resource size** - Individual resource size, including history, shouldn't exceed 20 GB. ## Next steps -In this article, you've read about the supported FHIR features in Azure API for FHIR. For information about deploying Azure API for FHIR, see +In this article, you read about the supported FHIR features in Azure API for FHIR. For information about deploying Azure API for FHIR, see >[!div class="nextstepaction"] >[Deploy Azure API for FHIR](fhir-paas-portal-quickstart.md) -FHIR® is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7. |
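One way to see the role-based access control behavior described above is to compare an anonymous call to `/metadata`, which is exempt from the `Authorization` requirement, with an anonymous resource read, which isn't. This is an illustrative sketch with a placeholder endpoint, not code from the article:

```python
import requests

FHIR_URL = "https://<your-fhir-server>.azurehealthcareapis.com"  # placeholder

# The capability statement is served without a token...
metadata = requests.get(f"{FHIR_URL}/metadata", timeout=30)
print("metadata:", metadata.status_code)  # expected 200

# ...but any other request needs an Authorization: Bearer <TOKEN> header.
patients = requests.get(f"{FHIR_URL}/Patient", timeout=30)
print("unauthenticated search:", patients.status_code)  # expected 401
```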
healthcare-apis | Fhir Github Projects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-github-projects.md | -We have many open-source projects on GitHub that provide you the source code and instructions to deploy services for various uses. You're always welcome to visit our GitHub repositories to learn and experiment with our features and products. +There are many open-source projects on GitHub that provide source code and instructions to deploy services for various uses. You're always welcome to visit our GitHub repositories to learn about and experiment with our features and products. -## FHIR Server +## FHIR Server GitHub projects -* [microsoft/fhir-server](https://github.com/microsoft/fhir-server/): open-source FHIR Server, which is the basis for Azure API for FHIR -* To see the latest releases, refer to the [Release Notes](https://github.com/microsoft/fhir-server/releases) +* [microsoft/fhir-server](https://github.com/microsoft/fhir-server/): an open-source FHIR Server, which is the basis for Azure API for FHIR * [microsoft/fhir-server-samples](https://github.com/microsoft/fhir-server-samples): a sample environment+* To see the latest releases, refer to the [Release Notes](https://github.com/microsoft/fhir-server/releases) ## Data Conversion & Anonymization #### FHIR Converter -* [microsoft/FHIR-Converter](https://github.com/microsoft/FHIR-Converter): a data conversion project that uses CLI tool and $convert-data FHIR endpoint to translate healthcare legacy data formats into FHIR -* Integrated with the FHIR service and FHIR server for Azure in the form of $convert-data operation +* [microsoft/FHIR-Converter](https://github.com/microsoft/FHIR-Converter) is a data conversion project that uses the CLI tool and the `$convert-data` FHIR endpoint to translate healthcare legacy data formats into FHIR. +* Integrated with the FHIR service and FHIR server for Azure in the form of `$convert-data` operation * Ongoing improvements in OSS, and continual integration to the FHIR servers #### FHIR Converter - VS Code Extension -* [microsoft/vscode-azurehealthcareapis-tools](https://github.com/microsoft/vscode-azurehealthcareapis-tools): a VS Code extension that contains a collection of tools to work with FHIR Converter -* Released to Visual Studio Marketplace, you can install it here: [FHIR Converter VS Code extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-health-fhir-converter) +* [microsoft/vscode-azurehealthcareapis-tools](https://github.com/microsoft/vscode-azurehealthcareapis-tools) is a VS Code extension that contains a collection of tools to work with FHIR Converter. +* Released to Visual Studio Marketplace, you can install it here: [FHIR Converter VS Code extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-health-fhir-converter). * Used for authoring Liquid conversion templates and managing templates on Azure Container Registry #### FHIR Tools for Anonymization -* [microsoft/Tools-for-Health-Data-Anonymization](https://github.com/microsoft/Tools-for-Health-Data-Anonymization): a data anonymization project that provides tools for de-identifying FHIR data and DICOM data +* [microsoft/Tools-for-Health-Data-Anonymization](https://github.com/microsoft/Tools-for-Health-Data-Anonymization) is a data anonymization project that provides tools for de-identifying FHIR data and DICOM data. 
* Integrated with the FHIR service and FHIR server for Azure in the form of `de-identified $export` operation-* For FHIR data, it can also be used with Azure Data Factory (ADF) pipeline by reading FHIR data from Azure blob storage and writing back the anonymized data +* For FHIR data, it can also be used with Azure Data Factory (ADF) pipeline by reading FHIR data from Azure blob storage and writing back the anonymized data. ## MedTech service We have many open-source projects on GitHub that provide you the source code and * [microsoft/iomt-fhir](https://github.com/microsoft/iomt-fhir): integration with IoT Hub or IoT Central to FHIR with data normalization and FHIR conversion of the normalized data * Normalization: device data information is extracted into a common format for further processing * FHIR Conversion: normalized and grouped data is mapped to FHIR. Observations are created or updated according to configured templates and linked to the device and patient.-* [Tools to help build the conversation map](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper): visualize the mapping configuration for normalizing the device input data and transform it to the FHIR resources. Developers can use this tool to edit and test the mappings, device mapping and FHIR mapping, and export them for uploading to the MedTech service in the Azure portal. +* [Tools to help build the conversion map](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper): visualize the mapping configuration for normalizing device input data and transform it to FHIR resources. Developers can use this tool to edit and test mappings, device mapping and FHIR mapping, and export mappings for uploading to the MedTech service in the Azure portal. #### HealthKit and FHIR Integration -* [microsoft/healthkit-on-fhir](https://github.com/microsoft/healthkit-on-fhir): a Swift library that automates the export of Apple HealthKit Data to a FHIR Server +* [microsoft/healthkit-on-fhir](https://github.com/microsoft/healthkit-on-fhir): A Swift library that automates the export of Apple HealthKit Data to a FHIR Server. ## Next steps -In this article, you've learned about the related GitHub Projects for Azure API for FHIR that provide source code and instructions to let you experiment and deploy services for various uses. For more information about Azure API for FHIR, see +In this article, you learned about the related GitHub Projects for Azure API for FHIR that provide source code and instructions to let you experiment and deploy services for various uses. For more information about Azure API for FHIR, see >[!div class="nextstepaction"] >[What is Azure API for FHIR?](overview.md) -FHIR® is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7. |
healthcare-apis | Fhir Paas Cli Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-paas-cli-quickstart.md | Title: 'Quickstart: Deploy Azure API for FHIR using Azure CLI' -description: In this quickstart, you'll learn how to deploy Azure API for FHIR in Azure using the Azure CLI. +description: In this quickstart, you learn how to deploy Azure API for FHIR in Azure using the Azure CLI. -In this quickstart, you'll learn how to deploy Azure API for FHIR in Azure using the Azure CLI. +In this quickstart, you learn how to deploy Azure API for FHIR in Azure using the Azure CLI. ## Add Azure Health Data Services (for example, HealthcareAPIs) extension In this quickstart, you'll learn how to deploy Azure API for FHIR in Azure using az extension add --name healthcareapis ``` -Get a list of commands for HealthcareAPIs: +To get a list of commands for HealthcareAPIs: ```azurecli-interactive az healthcareapis --help az healthcareapis --help ## Create Azure Resource Group -Pick a name for the resource group that will contain the Azure API for FHIR and create it: +Pick a name for the resource group that contains the Azure API for FHIR and create it: ```azurecli-interactive az group create --name "myResourceGroup" --location westus2 az healthcareapis create --resource-group myResourceGroup --name nameoffhiraccou ## Fetch FHIR API capability statement -Obtain a capability statement from the FHIR API with: +Obtain a capability statement from the FHIR API with the following command: ```azurecli-interactive curl --url "https://nameoffhiraccount.azurehealthcareapis.com/metadata" curl --url "https://nameoffhiraccount.azurehealthcareapis.com/metadata" ## Clean up resources -If you're not going to continue to use this application, delete the resource group with the following steps: +If you're not going to continue to use this application, delete the resource group with the following steps. ```azurecli-interactive az group delete --name "myResourceGroup" az group delete --name "myResourceGroup" ## Next steps -In this quickstart guide, you've deployed the Azure API for FHIR into your subscription. For information about how to register applications and the Azure API for FHIR configuration settings, see +In this quickstart guide, you deployed the Azure API for FHIR into your subscription. For information about how to register applications, and the Azure API for FHIR configuration settings, see the following. >[!div class="nextstepaction"] In this quickstart guide, you've deployed the Azure API for FHIR into your subsc >[!div class="nextstepaction"] >[Configure Private Link](configure-private-link.md) -FHIR® is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7. |
healthcare-apis | Fhir Paas Portal Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-paas-portal-quickstart.md | Title: 'Quickstart: Deploy Azure API for FHIR using Azure portal' -description: In this quickstart, you'll learn how to deploy Azure API for FHIR and configure settings using the Azure portal. +description: In this quickstart, you learn how to deploy Azure API for FHIR and configure settings using the Azure portal. -In this quickstart, you'll learn how to deploy Azure API for FHIR using the Azure portal. +In this quickstart, you learn how to deploy Azure API for FHIR using the Azure portal. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. Select **Create** to create a new Azure API for FHIR account: ## Enter account details -Select an existing resource group or create a new one, choose a name for the account, and finally select **Review + create**: +Select an existing resource group or create a new one. Choose a name for the account, and finally select **Review + create**: :::image type="content" source="media/quickstart-paas-portal/portal-new-healthcare-apis-details.png" alt-text="New healthcare api details"::: Confirm creation and await FHIR API deployment. ## Additional settings (optional) -You can also select **Next: Additional settings** to view the authentication settings. The default configuration for the Azure API for FHIR is to [use Azure RBAC for assigning data plane roles](configure-azure-rbac.md). When configured in this mode, the "Authority" for the FHIR service will be set to the Microsoft Entra tenant of the subscription: +You can also select **Next: Additional settings** to view the authentication settings. The default configuration for the Azure API for FHIR is to [use Azure role-based access control (RBAC) for assigning data plane roles](configure-azure-rbac.md). When configured in this mode, the "Authority" for the FHIR service is set to the Microsoft Entra tenant of the subscription. :::image type="content" source="media/rbac/confirm-azure-rbac-mode-create.png" alt-text="Default Authentication settings"::: Notice that the box for entering allowed object IDs is grayed out, since we use Azure RBAC for configuring role assignments in this case. -If you wish to configure the FHIR service to use an external or secondary Microsoft Entra tenant, you can change the Authority and enter object IDs for user and groups that should be allowed access to the server. For more information, see the [local RBAC configuration](configure-local-rbac.md) guide. +If you wish to configure the FHIR service to use an external or secondary Microsoft Entra tenant, you can change the Authority and enter object IDs for users and groups that should be allowed access to the server. For more information, see the [local RBAC configuration](configure-local-rbac.md) guide. ## Fetch FHIR API capability statement When no longer needed, you can delete the resource group, Azure API for FHIR, an ## Next steps -In this quickstart guide, you've deployed the Azure API for FHIR into your subscription. For information about how to register applications and the Azure API for FHIR configuration settings, see +In this quickstart guide, you deployed the Azure API for FHIR into your subscription. 
For information about how to register applications and the Azure API for FHIR configuration settings, see >[!div class="nextstepaction"] In this quickstart guide, you've deployed the Azure API for FHIR into your subsc >[!div class="nextstepaction"] >[Configure Private Link](configure-private-link.md) -FHIR® is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7. |
healthcare-apis | Fhir Paas Powershell Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-paas-powershell-quickstart.md | Title: 'Quickstart: Deploy Azure API for FHIR using PowerShell' -description: In this quickstart, you'll learn how to deploy Azure API for FHIR using PowerShell. +description: In this quickstart, you learn how to deploy Azure API for FHIR using PowerShell. -In this quickstart, you'll learn how to deploy Azure API for FHIR using PowerShell. +In this quickstart, you learn how to deploy Azure API for FHIR using PowerShell. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Register the Azure API for FHIR resource provider -If the `Microsoft.HealthcareApis` resource provider isn't already registered for your subscription, you can register it with: +If the `Microsoft.HealthcareApis` resource provider isn't already registered for your subscription, you can register it with the following command. ```azurepowershell-interactive Register-AzResourceProvider -ProviderNamespace Microsoft.HealthcareApis New-AzHealthcareApisService -Name nameoffhirservice -ResourceGroupName myResourc ``` > [!NOTE]-> Depending on the version of the `Az` PowerShell module you have installed, the provisioned FHIR server may be configured to use [local RBAC](configure-local-rbac.md) and have the currently signed in PowerShell user set in the list of allowed identity object IDs for the deployed FHIR service. Going forward, we recommend that you [use Azure RBAC](configure-azure-rbac.md) for assigning data plane roles and you may need to delete this users object ID after deployment to enable Azure RBAC mode. +> Depending on the version of the `Az` PowerShell module you have installed, the provisioned FHIR server may be configured to use [local role-based access control (RBAC)](configure-local-rbac.md) and have the currently signed in PowerShell user in the list of allowed identity object IDs for the deployed FHIR service. We recommend you [use Azure RBAC](configure-azure-rbac.md) for assigning data plane roles. You may need to delete this user's object ID after deployment to enable Azure RBAC mode. ## Fetch capability statement -You'll be able to validate that the Azure API for FHIR account is running by fetching a FHIR capability statement: +You can validate that the Azure API for FHIR account is running by fetching a FHIR capability statement with the following commands. ```azurepowershell-interactive $metadata = Invoke-WebRequest -Uri "https://nameoffhirservice.azurehealthcareapis.com/metadata" $metadata.RawContent ## Clean up resources -If you're not going to continue to use this application, delete the resource group with the following steps: +If you're not going to continue using this application, delete the resource group with the following steps. ```azurepowershell-interactive Remove-AzResourceGroup -Name myResourceGroupName Remove-AzResourceGroup -Name myResourceGroupName ## Next steps -In this quickstart guide, you've deployed the Azure API for FHIR into your subscription. For more information about the settings in Azure API for FHIR and to start using Azure API for FHIR, see +In this quickstart guide, you deployed the Azure API for FHIR into your subscription. 
For more information about the settings in Azure API for FHIR and to start using Azure API for FHIR, see >[!div class="nextstepaction"] >[Additional settings in Azure API for FHIR](azure-api-for-fhir-additional-settings.md) In this quickstart guide, you've deployed the Azure API for FHIR into your subsc >[!div class="nextstepaction"] >[Configure Private Link](configure-private-link.md) -FHIR® is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7. |
healthcare-apis | Fhir Rest Api Capabilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-rest-api-capabilities.md | -Azure API for FHIR supports create, conditional create, update, and conditional update as defined by the FHIR specification. One useful header in these scenarios is the [If-Match](https://www.hl7.org/fhir/http.html#concurrency) header. The `If-Match` header is used and will validate the version being updated before making the update. If the `ETag` doesn't match the expected `ETag`, it will produce the error message *412 Precondition Failed*. +Azure API for FHIR supports create, conditional create, update, and conditional update as defined by the FHIR specification. One useful header in these scenarios is the [If-Match](https://www.hl7.org/fhir/http.html#concurrency) header. The `If-Match` header validates the version being updated before making the update. If the `ETag` doesn't match the expected `ETag`, it produces the error message *412 Precondition Failed*. ## Delete and Conditional Delete Azure API for FHIR offers two delete types. There's [Delete](https://www.hl7.org/fhir/http.html#delete), which is also known as Hard + Soft Delete, and [Conditional Delete](https://www.hl7.org/fhir/http.html#3.1.0.7.1). -**Delete can be performed for individual resource id or in bulk. To learn more on deleting resources in bulk, visit [$bulk-delete operation](bulk-delete-operation.md).** +**Delete can be performed for individual resource IDs or in bulk. To learn more on deleting resources in bulk, visit [$bulk-delete operation](bulk-delete-operation.md).** ### Delete (Hard + Soft Delete) -Delete defined by the FHIR specification requires that after deleting a resource, subsequent nonversion specific reads of a resource returns a 410 HTTP status code. Therefore, the resource is no longer found through searching. Additionally, Azure API for FHIR enables you to fully delete (including all history) the resource. To fully delete the resource, you can pass a parameter settings `hardDelete` to true `(DELETE {{FHIR_URL}}/{resource}/{id}?hardDelete=true)`. If you don't pass this parameter or set `hardDelete` to false, the historic versions of the resource will still be available. +Delete defined by the FHIR specification requires that after deleting a resource, subsequent nonversion specific reads of a resource return a 410 HTTP status code. This means the resource is no longer found through searching. Additionally, Azure API for FHIR enables you to fully delete (including all history) the resource. To fully delete the resource, you can pass the `hardDelete` parameter set to true: `(DELETE {{FHIR_URL}}/{resource}/{id}?hardDelete=true)`. If you don't pass this parameter or set `hardDelete` to false, the historic versions of the resource are still available. > [!NOTE] > If you only want to delete the history, Azure API for FHIR supports a custom operation called `$purge-history`. This operation allows you to delete the history off of a resource. ### Conditional Delete  Conditional Delete allows you to pass search criteria to delete a resource. By default, the Conditional Delete allows you to delete one item at a time.
You can also specify the `_count` parameter to delete up to 100 items at a time. The following are some examples of using Conditional Delete. To delete a single item using Conditional Delete, you must specify search criteria that returns a single item. You can do the same search but include `hardDelete=true` to also delete all hist `DELETE https://{{FHIR_URL}}/Patient?identifier=1032704&hardDelete=true` -To delete multiple resources, include `_count=100` parameter. This parameter deletes up to 100 resources that match the search criteria. +To delete multiple resources, include the `_count=100` parameter. This parameter deletes up to 100 resources that match the search criteria. `DELETE https://{{FHIR_URL}}/Patient?identifier=1032704&_count=100` If you don't use the hard delete parameter, then the records in Azure API for FH If the ID of the resource that was deleted is known, use the following URL pattern: -`<FHIR_URL>/<resource-type>/<resource-id>/_history` +`<FHIR_URL>/<resource-type>/<resource-id>/_history`. -For example: `https://myworkspace-myfhirserver.fhir.azurehealthcareapis.com/Patient/123456789/_history` +For example: ++`https://myworkspace-myfhirserver.fhir.azurehealthcareapis.com/Patient/123456789/_history` If the ID of the resource isn't known, do a history search on the entire resource type: -`<FHIR_URL>/<resource-type>/_history` +`<FHIR_URL>/<resource-type>/_history`. For example: `https://myworkspace-myfhirserver.fhir.azurehealthcareapis.com/Patient/_history` -After you've found the record you want to restore, use the `PUT` operation to recreate the resource with the same ID, or use the `POST` operation to make a new resource with the same information. +After you find the record you want to restore, use the `PUT` operation to recreate the resource with the same ID, or use the `POST` operation to make a new resource with the same information. > [!NOTE] > There is no time-based expiration for history/soft delete data. The only way to remove history/soft deleted data is with a hard delete or the purge history operation. ## Batch Bundles -In FHIR, bundles can be considered as a container that holds multiple resources. Batch bundles enable users to submit a set of actions to be performed on a server in single HTTP request/response. +In FHIR, bundles can be considered as a container that holds multiple resources. Batch bundles enable users to submit a set of actions to be performed on a server in a single HTTP request/response. -A batch bundle interaction with FHIR service is performed with HTTP POST command at base URL. +A batch bundle interaction with FHIR service is performed with the HTTP POST command at the base URL. + ```rest POST {{fhir_url}} { In the case of a batch, each entry is treated as an individual interaction or op ### Batch bundle parallel processing Currently batch bundles are executed serially in FHIR service. To improve performance and throughput, we're enabling parallel processing of batch bundles in public preview. -To use the capability of parallel batch bundle processing- +To use the capability of parallel batch bundle processing: + * Set header 'x-bundle-processing-logic' value to 'parallel'. * Ensure there's no overlapping resource ID that is executing on DELETE, POST, PUT, or PATCH operations in the same bundle. ## Patch and Conditional Patch -Patch is a valuable RESTful operation when you need to update only a portion of the FHIR resource. 
Using patch allows you to specify the element(s) that you want to update in the resource without having to update the entire record. FHIR defines three ways to Patch resources: JSON Patch, XML Patch, and FHIRPath Patch. The FHIR Service support both JSON Patch and FHIRPath Patch along with Conditional JSON Patch and Conditional FHIRPath Patch (which allows you to Patch a resource based on a search criteria instead of a resource ID). To walk through some examples, refer to the sample [FHIRPath Patch REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/FhirPatchRequests.http) and the [JSON Patch REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/JsonPatchRequests.http) for each approach. For additional details, read the [HL7 documentation for patch operations with FHIR](https://www.hl7.org/fhir/http.html#patch). +Patch is a valuable RESTful operation when you need to update only a portion of a FHIR resource. Using patch allows you to specify the elements that you want to update in the resource without having to update the entire record. FHIR defines three ways to Patch resources: JSON Patch, XML Patch, and FHIRPath Patch. The FHIR Service supports both JSON Patch and FHIRPath Patch, along with Conditional JSON Patch and Conditional FHIRPath Patch (which allows you to Patch a resource based on search criteria instead of a resource ID). To walk through some examples, refer to the sample [FHIRPath Patch REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/FhirPatchRequests.http) and the [JSON Patch REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/JsonPatchRequests.http) for each approach. For additional details, read the [HL7 documentation for patch operations with FHIR](https://www.hl7.org/fhir/http.html#patch). > [!NOTE] > When using `PATCH` against STU3, and if you are requesting a History bundle, the patched resource's `Bundle.entry.request.method` is mapped to `PUT`. This is because STU3 doesn't contain a definition for the `PATCH` verb in the [HTTPVerb value set](http://hl7.org/fhir/STU3/valueset-http-verb.html). ### Patch with FHIRPath Patch -This method of patch is the most powerful as it leverages [FHIRPath](https://hl7.org/fhirpath/) for selecting which element to target. One common scenario is using FHIRPath Patch to update an element in a list without knowing the order of the list. For example, if you want to delete a patient's home telecom information without knowing the index, you can use the example below. +This method of patch is the most powerful, leveraging [FHIRPath](https://hl7.org/fhirpath/) for selecting which element to target. One common scenario is using FHIRPath Patch to update an element in a list without knowing the order of the list. For example, if you want to delete a patient's home telecom information without knowing the index, you can use the following example. PATCH `http://{FHIR-SERVICE-HOST-NAME}/Patient/{PatientID}`<br/> Content-type: `application/fhir+json` Any FHIRPath Patch operations must have the `application/fhir+json` Content-Type ### Patch with JSON Patch -JSON Patch in the FHIR Service conforms to the well-used [specification defined by the Internet Engineering Task Force](https://datatracker.ietf.org/doc/html/rfc6902). The payload format does not use FHIR resources and instead uses a JSON document leveraging JSON-Pointers for element selection.
JSON Patch is more compact and has a test operation that allows you to validate that a condition is true before doing the patch. For example, if you want to set a patient as deceased only if they're not already marked as deceased, you can use the example below. +JSON Patch in the FHIR Service conforms to the well-used [specification defined by the Internet Engineering Task Force](https://datatracker.ietf.org/doc/html/rfc6902). The payload format does not use FHIR resources and instead uses a JSON document leveraging JSON-Pointers for element selection. JSON Patch is more compact and has a test operation that allows you to validate that a condition is true before doing the patch. For example, if you want to set a patient as deceased only if they're not already marked as deceased, you can use the following example. PATCH `http://{FHIR-SERVICE-HOST-NAME}/Patient/{PatientID}`<br/> Content-type: `application/json-patch+json` Any JSON Patch operations must have the `application/json-patch+json` Content-Ty #### JSON Patch in Bundles -By default, JSON Patch isn't supported in Bundle resources. This is because a Bundle only supports with FHIR resources and the JSON Patch payload isn't a FHIR resource. To work around this, we'll use Binary resources with a Content-Type of `"application/json-patch+json"` and the base64 encoding of the JSON payload inside of a Bundle. For information about this workaround, view this topic on the [FHIR Chat Zulip](https://chat.fhir.org/#narrow/stream/179166-implementers/topic/Transaction.20with.20PATCH.20request). +By default, JSON Patch isn't supported in Bundle resources. This is because a Bundle only supports FHIR resources and the JSON Patch payload isn't a FHIR resource. To work around this, use Binary resources with a Content-Type of `"application/json-patch+json"` and the base64 encoding of the JSON payload inside of a Bundle. For information about this workaround, view this article on the [FHIR Chat Zulip](https://chat.fhir.org/#narrow/stream/179166-implementers/topic/Transaction.20with.20PATCH.20request). -In the example below, we want to change the gender on the patient to female. We've taken the JSON patch `[{"op":"replace","path":"/gender","value":"female"}]` and encoded it to base64. +In the following example, we want to change the gender on the patient to female. We've taken the JSON patch `[{"op":"replace","path":"/gender","value":"female"}]` and encoded it to base64. POST `https://{FHIR-SERVICE-HOST-NAME}/`<br/> Content-Type: `application/json` Content-Type: `application/json` ## Performance consideration with Conditional operations 1. Conditional interactions can be complex and performance-intensive. To enhance the latency of queries involving conditional interactions, you have the option to utilize the request header **x-conditionalquery-processing-logic** . Setting this header to **parallel** allows concurrent execution of queries with conditional interactions.-2. **x-ms-query-latency-over-efficiency** header value when set to "true", all queries are executed using maximum supported parallelism, which forces the scan of physical partitions to be executed concurrently. This feature was designed for accounts with a high number of physical partitions which queries can take longer due to the number of physical segments that need to be scanned. +2. When the **x-ms-query-latency-over-efficiency** header value is set to "true", all queries are executed using maximum supported parallelism, forcing the scan of physical partitions to be executed concurrently. 
This feature was designed for accounts with a high number of physical partitions. Such queries can take longer due to the number of physical segments that need to be scanned. ## Next steps |
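The Binary workaround for JSON Patch inside bundles is easier to follow with the payload written out. The sketch below (not from the original article) builds the base64-encoded Binary entry for the gender-change patch mentioned in this entry and posts it as a batch bundle; the FHIR URL, patient ID, and access token are placeholders, and the bundle shape follows the workaround as described rather than a fixed API contract.

```python
import base64
import json

import requests

FHIR_URL = "https://<your-fhir-server>.azurehealthcareapis.com"  # placeholder
PATIENT_ID = "<patient-id>"  # placeholder
TOKEN = "<access-token>"  # obtained via your registered client, as described earlier

# The JSON Patch document from the example above, base64-encoded for the Binary resource.
patch = [{"op": "replace", "path": "/gender", "value": "female"}]
encoded = base64.b64encode(json.dumps(patch).encode("utf-8")).decode("ascii")

bundle = {
    "resourceType": "Bundle",
    "type": "batch",
    "entry": [
        {
            "fullUrl": f"Patient/{PATIENT_ID}",
            "resource": {
                "resourceType": "Binary",
                "contentType": "application/json-patch+json",
                "data": encoded,
            },
            "request": {"method": "PATCH", "url": f"Patient/{PATIENT_ID}"},
        }
    ],
}

# POST the batch bundle to the service root with a bearer token.
resp = requests.post(
    FHIR_URL,
    json=bundle,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
print(resp.status_code)
```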
healthcare-apis | Find Identity Object Ids | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/find-identity-object-ids.md | -If you have a user with user name `myuser@contoso.com`, you can locate the user's `ObjectId` by using a Microsoft Graph PowerShell command or the Azure Command-Line Interface (CLI). +If you have a user with user name `myuser@contoso.com`, you can locate the user's `ObjectId` by using a Microsoft Graph PowerShell command or the Azure Command-Line Interface (CLI) as follows. #### [PowerShell](#tab/powershell) az ad user show --id myuser@contoso.com --query id --out tsv ## Find service principal object ID -Suppose you registered a [service client app](register-service-azure-ad-client-app.md) and you want to allow this service client to access the Azure API for FHIR. Find the object ID for the client service principal with a Microsoft Graph PowerShell command or the Azure CLI. +If you registered a [service client app](register-service-azure-ad-client-app.md) and you want to allow this service client to access the Azure API for FHIR, find the object ID for the client service principal with a Microsoft Graph PowerShell command or the Azure CLI as follows. #### [PowerShell](#tab/powershell) az ad sp show --id XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX --query id --out tsv ## Find a security group object ID -If you would like to locate the object ID of a security group, you can use a Microsoft Graph PowerShell command or the Azure CLI. +If you would like to locate the object ID of a security group, you can use a Microsoft Graph PowerShell command or the Azure CLI as follows. #### [PowerShell](#tab/powershell) az ad group show --group "mygroup" --query id --out tsv [Configure local RBAC settings](configure-local-rbac.md) |
iot-hub | Device Management Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-management-dotnet.md | In this section, you: * Use the reported properties to enable device twin queries to identify devices and when they were last rebooted. + To create the simulated device app, follow these steps: 1. Open Visual Studio and select **Create a new project**, then find and select the **Console App (.NET Framework)** project template, then select **Next**. To create the simulated device app, follow these steps: In this section, you create a .NET console app, using C#, that initiates a remote reboot on a device using a direct method. The app uses device twin queries to discover the last reboot time for that device. ++ 1. Open Visual Studio and select **Create a new project**. 1. In **Create a new project**, find and select the **Console App (.NET Framework)** project template, and then select **Next**. |
iot-hub | Device Management Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-management-java.md | This article shows you how to create: In this section, you create a Java console app that simulates a device. The app listens for the reboot direct method call from your IoT hub and immediately responds to that call. The app then sleeps for a while to simulate the reboot process before it uses a reported property to notify the **trigger-reboot** back-end app that the reboot is complete. + 1. In the **dm-get-started** folder, create a Maven project called **simulated-device** using the following command at your command prompt: ```cmd/sh In this section, you create a Java console app that: 3. Polls the reported properties sent from the device to determine when the reboot is complete. + This console app connects to your IoT Hub to invoke the direct method and read the reported properties. 1. Create an empty folder called **dm-get-started**. |
iot-hub | Device Management Node | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-management-node.md | In this section, you: * Use the reported properties to enable device twin queries to identify devices and when they last rebooted. + 1. Create an empty folder called **managed-device**. In the **managed-device** folder, create a package.json file using the following command at your command prompt. Accept all the defaults: ```cmd/sh In this section, you: In this section, you create a Node.js console app that initiates a remote reboot on a device using a direct method. The app uses device twin queries to discover the last reboot time for that device. + 1. Create an empty folder called **trigger-reboot-on-device**. In the **trigger-reboot-on-device** folder, create a package.json file using the following command at your command prompt. Accept all the defaults: ```cmd/sh |
iot-hub | Device Management Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-management-python.md | In this section, you: * Use the reported properties to enable device twin queries to identify devices and when they last rebooted. + In Azure Cloud Shell you used previously, or any other environment with Python, create the device code. 1. At your command prompt, run the following command to install the **azure-iot-device** package: In Azure Cloud Shell you used previously, or any other environment with Python, In this section, you create a Python console app that initiates a remote reboot on a device using a direct method. The app uses device twin queries to discover the last reboot time for that device. + In Azure Cloud Shell or any other environment with Python, create the console code. 1. At your command prompt, run the following command to install the **azure-iot-hub** package: |
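As a rough companion to the Python tutorial summarized above, the following sketch shows a simulated device that answers a reboot direct method and records the reboot time as a reported property using the `azure-iot-device` package. The method name `rebootDevice`, the `iothubDM.reboot.lastReboot` property path, and the connection string are assumptions for illustration; the tutorial's own sample remains the authoritative version.

```python
import datetime
import time

from azure.iot.device import IoTHubDeviceClient, MethodResponse

CONNECTION_STRING = "<device-connection-string>"  # placeholder

def main():
    client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)

    def method_handler(request):
        if request.name == "rebootDevice":  # assumed method name
            # Acknowledge the call immediately, then record the reboot time
            # as a reported property so back-end twin queries can find it.
            client.send_method_response(
                MethodResponse.create_from_method_request(request, 200, "Reboot started")
            )
            last_reboot = datetime.datetime.utcnow().isoformat() + "Z"
            client.patch_twin_reported_properties(
                {"iothubDM": {"reboot": {"lastReboot": last_reboot}}}
            )
        else:
            client.send_method_response(
                MethodResponse.create_from_method_request(request, 404, "Unknown method")
            )

    client.on_method_request_received = method_handler
    print("Waiting for direct method calls. Press Ctrl+C to exit.")
    try:
        while True:
            time.sleep(10)
    except KeyboardInterrupt:
        client.shutdown()

if __name__ == "__main__":
    main()
```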
iot-hub | Iot Hub Bulk Identity Mgmt | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-bulk-identity-mgmt.md | Identity registry operations use the job system when the operation: Instead of a single API call waiting or blocking on the result of the operation, the operation asynchronously creates a job for that IoT hub. The operation then immediately returns a **JobProperties** object. + The following C# code snippet shows how to create an export job: ```csharp |
iot-hub | Iot Hub Live Data Visualization In Web Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-live-data-visualization-in-web-apps.md | The service connection string should look similar to the following example: Note down the service connection string, you need it later in this tutorial. + ## Download the web app from GitHub Download or clone the web app sample from GitHub: [web-apps-node-iot-hub-data-visualization](https://github.com/Azure-Samples/web-apps-node-iot-hub-data-visualization.git). |
iot-hub | Iot Hub Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-managed-identity.md | Azure IoT Hub SDKs also support this functionality in the service client's regis ```csharp // Create an export job - using RegistryManager srcRegistryManager = RegistryManager.CreateFromConnectionString(hubConnectionString); - JobProperties jobProperties = JobProperties.CreateForExportJob( outputBlobContainerUri: blobContainerUri, excludeKeysInExport: false, Azure IoT Hub SDKs also support this functionality in the service client's regis ```csharp // Create an import job - using RegistryManager destRegistryManager = RegistryManager.CreateFromConnectionString(hubConnectionString); - JobProperties jobProperties = JobProperties.CreateForImportJob( inputBlobContainerUri: blobContainerUri, outputBlobContainerUri: blobContainerUri, Azure IoT Hub SDKs also support this functionality in the service client's regis ```python # see note below-iothub_job_manager = IoTHubJobManager("<IoT Hub connection string>") +iothub_job_manager = IoTHubJobManager("<IoT Hub connection information>") # Create an import job result = iothub_job_manager.create_import_export_job(JobProperties( |
iot-hub | Module Twins Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/module-twins-dotnet.md | openssl req -new -key d1m1.key.pem -out d1m1.csr -subj "/CN=device01\/module01" [!INCLUDE [iot-hub-include-find-registryrw-connection-string](../../includes/iot-hub-include-find-registryrw-connection-string.md)] + [!INCLUDE [iot-hub-get-started-create-module-identity-csharp](../../includes/iot-hub-get-started-create-module-identity-csharp.md)] ## Update the module twin using .NET device SDK Now let's communicate to the cloud from your simulated device. Once a module ide To retrieve your module connection string, navigate to your [IoT hub](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Microsoft.Devices%2FIotHubs) then select **Devices**. Find and select **myFirstDevice** to open it and then select **myFirstModule** to open it. In **Module Identity Details**, copy the **Connection string (primary key)** and save it for the console app. + 1. In Visual Studio, add a new project to your solution by selecting **File** > **New** > **Project**. In **Create a new project**, select **Console App (.NET Framework)**, and select **Next**. 1. In **Configure your new project**, name the project *UpdateModuleTwinReportedProperties*, then select **Next**. |
iot-hub | Module Twins Node | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/module-twins-node.md | openssl req -new -key d1m1.key.pem -out d1m1.csr -subj "/CN=device01\/module01" [!INCLUDE [iot-hub-include-find-registryrw-connection-string](../../includes/iot-hub-include-find-registryrw-connection-string.md)] + ## Create a device identity and a module identity in IoT Hub In this section, you create a Node.js app that creates a device identity and a module identity in the identity registry in your IoT hub. A device or module can't connect to IoT hub unless it has an entry in the identity registry. For more information, see [Understand the identity registry in your IoT hub](iot-hub-devguide-identity-registry.md). When you run this console app, it generates a unique ID and key for both device and module. The ID and key are case-sensitive. Your device and module use these values to identify itself when it sends device-to-cloud messages to IoT Hub. + 1. Create a directory to hold your code. 2. Inside of that directory, first run **npm init -y** to create an empty package.json with defaults. This is the project file for your code. In this section, you create a Node.js app on your simulated device that updates ![Azure portal module detail](./media/module-twins-node/module-detail.png) -2. Similar to what you did in the previous section, create a directory for your device code and use NPM to initialize it and install the device SDK (**npm install -S azure-iot-device-amqp\@modules-preview**). +2. Similar to what you did in the previous section, create a directory for your device code and use npm to initialize it and install the device SDK (**npm install -S azure-iot-device-amqp\@modules-preview**). > [!NOTE] > The npm install command may feel slow. Be patient; it's pulling down lots of code from the package repository. |
iot-hub | Module Twins Portal Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/module-twins-portal-dotnet.md | The module identity and module twin features are only available in the IoT Hub p ### Create UpdateModuleTwinReportedProperties console app + To create your app, follow these steps: 1. Add the following `using` statements at the top of the **Program.cs** file: |
iot-hub | Module Twins Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/module-twins-python.md | In this article, you create a back-end service that adds a device in the identit [!INCLUDE [iot-hub-include-find-service-regrw-connection-string](../../includes/iot-hub-include-find-service-regrw-connection-string.md)] + ## Create a device identity and a module identity in IoT Hub In this section, you create a Python service app that creates a device identity and a module identity in the identity registry in your IoT hub. A device or module can't connect to IoT hub unless it has an entry in the identity registry. For more information, see [Understand the identity registry in your IoT hub](iot-hub-devguide-identity-registry.md). When you run this console app, it generates a unique ID and key for both device and module. The ID and key are case-sensitive. Your device and module use these values to identify itself when it sends device-to-cloud messages to IoT Hub. + 1. At your command prompt, run the following command to install the **azure-iot-hub** package: ```cmd/sh |
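The identity-creation step described above can be sketched with the `azure-iot-hub` package's registry manager. The helper names below reflect that package as commonly used in samples, but treat them as assumptions and verify against the installed version; the connection string, device ID, and module ID are placeholders, and the keys are generated locally as base64 strings.

```python
import base64
import os

from azure.iot.hub import IoTHubRegistryManager

CONNECTION_STRING = "<iothub-service-connection-string>"  # placeholder
DEVICE_ID = "myFirstDevice"
MODULE_ID = "myFirstModule"

def generate_key() -> str:
    """Generate a random base64 key for SAS authentication."""
    return base64.b64encode(os.urandom(32)).decode("ascii")

registry_manager = IoTHubRegistryManager(CONNECTION_STRING)

# Create the device identity, then a module identity scoped to that device.
device = registry_manager.create_device_with_sas(
    DEVICE_ID, generate_key(), generate_key(), "enabled"
)
module = registry_manager.create_module_with_sas(
    DEVICE_ID, MODULE_ID, "", generate_key(), generate_key()
)

print(f"Created device {device.device_id} and module {module.module_id}")
```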
iot-hub | Schedule Jobs Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/schedule-jobs-dotnet.md | This article shows you how to create two .NET (C#) console apps: In this section, you create a .NET console app that responds to a direct method called by the solution back end. + 1. In Visual Studio, select **Create a new project**, and then choose the **Console App (.NET Framework)** project template. Select **Next** to continue. 1. In **Configure your new project**, name the project *SimulateDeviceMethods* then select **Next**. In this section, you create a .NET console app that responds to a direct method [!INCLUDE [iot-hub-include-find-registryrw-connection-string](../../includes/iot-hub-include-find-registryrw-connection-string.md)] + ## Schedule jobs for calling a direct method and sending device twin updates In this section, you create a .NET console app (using C#) that uses jobs to call the **LockDoor** direct method and send desired property updates to multiple devices. |
iot-hub | Schedule Jobs Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/schedule-jobs-java.md | This article shows you how to create two Java apps: [!INCLUDE [iot-hub-include-find-registryrw-connection-string](../../includes/iot-hub-include-find-registryrw-connection-string.md)] + ## Create the service app In this section, you create a Java console app that uses jobs to: To create the app: In this section, you create a Java console app that handles the desired properties sent from IoT Hub and implements the direct method call. + 1. In the **iot-java-schedule-jobs** folder, create a Maven project called **simulated-device** using the following command at your command prompt. Note this is a single, long command: ```cmd/sh |
iot-hub | Schedule Jobs Node | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/schedule-jobs-node.md | This article shows you how to create two Node.js apps: In this section, you create a Node.js console app that responds to a direct method called by the cloud, which triggers a simulated **lockDoor** method. + 1. Create a new empty folder called **simDevice**. In the **simDevice** folder, create a package.json file using the following command at your command prompt. Accept all the defaults: ```console In this section, you create a Node.js console app that responds to a direct meth [!INCLUDE [iot-hub-include-find-registryrw-connection-string](../../includes/iot-hub-include-find-registryrw-connection-string.md)] + ## Schedule jobs for calling a direct method and updating a device twin's properties In this section, you create a Node.js console app that initiates a remote **lockDoor** on a device using a direct method and updates the device twin's properties. |
iot-hub | Schedule Jobs Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/schedule-jobs-python.md | This article shows you how to create two Python apps: In this section, you create a Python console app that responds to a direct method called by the cloud, which triggers a simulated **lockDoor** method. + 1. At your command prompt, run the following command to install the **azure-iot-device** package: ```cmd/sh To create a shared access policy that grants **service connect**, **registry rea For more information about IoT Hub shared access policies and permissions, see [Access control and permissions](./iot-hub-dev-guide-sas.md#access-control-and-permissions). + ## Schedule jobs for calling a direct method and updating a device twin's properties In this section, you create a Python console app that initiates a remote **lockDoor** on a device using a direct method and also updates the device twin's desired properties. |
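As a companion to the Python schedule-jobs update above, the following is a minimal, hedged sketch of the device side: a simulated device that answers the **lockDoor** direct method by using the **azure-iot-device** package. The connection string is a placeholder, and the response payload shape is an assumption.

```python
# Minimal sketch: respond to a "lockDoor" direct method on a simulated device
# using the azure-iot-device SDK. Replace the placeholder connection string.
import time

from azure.iot.device import IoTHubDeviceClient, MethodResponse

DEVICE_CONNECTION_STRING = "<device-connection-string>"  # placeholder

client = IoTHubDeviceClient.create_from_connection_string(DEVICE_CONNECTION_STRING)


def method_request_handler(method_request):
    if method_request.name == "lockDoor":
        print("Locking door (simulated)...")
        status, payload = 200, {"result": "Door locked."}
    else:
        status, payload = 404, {"result": f"Unknown method: {method_request.name}"}
    # Send the result back to the back end that scheduled the job.
    response = MethodResponse.create_from_method_request(method_request, status, payload)
    client.send_method_response(response)


client.on_method_request_received = method_request_handler

print("Waiting for direct method calls. Press Ctrl+C to exit.")
try:
    while True:
        time.sleep(10)
except KeyboardInterrupt:
    client.shutdown()
```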
lab-services | How To Configure Firewall Settings 1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-configure-firewall-settings-1.md | -Each organization or school will configure their own network in a way that best fits their needs. Sometimes that includes setting firewall rules that block Remote Desktop Protocol (RDP) or Secure Shell (SSH) connections to machines outside their own network. Because Azure Lab Services runs in the public cloud, some extra configuration maybe needed to allow students to access their VM when connecting from the campus network. +Each organization or school configures their own network in a way that best fits their needs. Sometimes that includes setting firewall rules that block Remote Desktop Protocol (RDP) or Secure Shell (SSH) connections to machines outside their own network. Because Azure Lab Services runs in the public cloud, some extra configuration may be needed to allow students to access their VM when connecting from the campus network. -Each lab uses single public IP address and multiple ports. All VMs, both the template VM and student VMs, will use this public IP address. The public IP address won't change for the life of lab. Each VM will have a different port number. The port numbers range is 49152 - 65535. The combination of public IP address and port number is used to connect educators and students to the correct VM. This article will cover how to find the specific public IP address used by a lab. That information can be used to update inbound and outbound firewall rules so students can access their VMs. +Each lab uses a single public IP address and multiple ports. All VMs, both the template VM and student VMs, use this public IP address. The public IP address doesn't change for the life of the lab. Each VM has a different port number. The port number range is 49152 - 65535. The combination of public IP address and port number is used to connect educators and students to the correct VM. This article covers how to find the specific public IP address used by a lab. That information can be used to update inbound and outbound firewall rules so students can access their VMs. >[!IMPORTANT]->Each lab will have a different public IP address. +>Each lab has a different public IP address. > [!NOTE]-> If your school needs to perform content filtering, such as for compliance with the [Children's Internet Protection Act (CIPA)](https://www.fcc.gov/consumers/guides/childrens-internet-protection-act), you will need to use 3rd party software. For more information, read guidance on [content filtering with Lab Services](./administrator-guide.md#content-filtering). +> If your school needs to perform content filtering, such as for compliance with the [Children's Internet Protection Act (CIPA)](https://www.fcc.gov/consumers/guides/childrens-internet-protection-act), you will need to use 3rd party software. For more information, read guidance on [content filtering with Lab Services](./administrator-guide.md#content-filtering). ## Find public IP for a lab -The public IP addresses for each lab are listed in the **All labs** page of the Lab Services lab account. For directions how to find the **All labs** page, see [View labs in a lab account](manage-labs-1.md#view-labs-in-a-lab-account). -+The public IP addresses for each lab are listed in the **All labs** page of the Lab Services lab account. For directions on how to find the **All labs** page, see [View labs in a lab account](manage-labs-1.md#view-labs-in-a-lab-account).
>[!NOTE] >You won't see the public IP address if the template machine for your lab isn't published yet. ## Conclusion -Now we know the public IP address for the lab. Inbound and outbound rules can be created for the organization's firewall for the public IP address and the port range 49152 - 65535. Once the rules are updated, students can access their VMs without the network firewall blocking access. +Now that you know the public IP address for the lab, you can create inbound and outbound rules for the organization's firewall for the public IP address and the port range 49152 - 65535. Once the rules are updated, students can access their VMs without the network firewall blocking access. ## Next steps -- As an admin, [enable labs to connect your vnet](how-to-connect-vnet-injection.md).+- As an admin, [enable labs to connect your virtual network](how-to-connect-vnet-injection.md). - As an educator, work with your admin to [create a lab with a shared resource](how-to-create-a-lab-with-shared-resource.md). - As an educator, [publish your lab](how-to-create-manage-template.md#publish-the-template-vm). |
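For the firewall article above, a quick way to confirm that the new inbound rules work is a connectivity probe against the lab's public IP address and a VM port. Here's a minimal sketch using only the Python standard library; the IP address and port values are placeholders.

```python
# Minimal sketch: verify that the lab's public IP address and a VM's port
# (in the 49152-65535 range) are reachable through the organization's firewall.
import socket

LAB_PUBLIC_IP = "203.0.113.10"   # placeholder - use your lab's public IP
VM_PORT = 49152                  # placeholder - use the port shown for the VM


def can_connect(host: str, port: int, timeout: float = 5.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if can_connect(LAB_PUBLIC_IP, VM_PORT):
    print(f"TCP connection to {LAB_PUBLIC_IP}:{VM_PORT} succeeded - firewall rules look correct.")
else:
    print(f"Could not reach {LAB_PUBLIC_IP}:{VM_PORT} - check inbound and outbound firewall rules.")
```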
logic-apps | Create Single Tenant Workflows Azure Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-azure-portal.md | More workflows in your logic app raise the risk of longer load times, which nega > [!NOTE] > - > By default, the language worker runtime value for your Standard logic app is **`dotnet`**. - > Previously, **`node`** was the default value. However, **`dotnet`** is now the default value - > for all new and existing deployed Standard logic apps, even for apps that had a different value. - > This change shouldn't affect your workflow's runtime, and everything should work the same way - > as before. For more information, see the [**FUNCTIONS_WORKER_RUNTIME** app setting](edit-app-settings-host-settings.md#reference-local-settings-json). - > - > The **APP_KIND** app setting for your Standard logic app is set to **workflowApp**, but in some - > scenarios, this app setting is missing, for example, due to automation using Azure Resource Manager - > templates or other scenarios where the setting isn't included. If certain actions don't work, - > such as the **Execute JavaScript Code** action or the workflow stops working, check that the - > **APP_KIND** app setting exists and is set to to **workflowApp**. For more information, see the - > [**APP_KIND** app setting](edit-app-settings-host-settings.md#reference-local-settings-json). + > The **FUNCTIONS_WORKER_RUNTIME** app setting is required for your Standard logic app, and the + > value was previously set to **node**. However, the required value is now **dotnet** for all new + > and existing deployed Standard logic apps. This change in value shouldn't affect your workflow's + > runtime, so everything should work the same way as before. For more information, see the + > [**FUNCTIONS_WORKER_RUNTIME** app setting](edit-app-settings-host-settings.md#reference-local-settings-json). + > + > The **APP_KIND** app setting value is required for your Standard logic app, and the value must + > be **workflowApp**. However, in some scenarios, this app setting might be missing, for example, + > due to automation using Azure Resource Manager templates or other scenarios where the setting + > isn't included. If certain actions don't work, such as the **Execute JavaScript Code** action, + > or if the workflow stops working, check that the **APP_KIND** app setting exists and is set to **workflowApp**. + > For more information, see the [**APP_KIND** app setting](edit-app-settings-host-settings.md#reference-local-settings-json). 1. When you finish, select **Next: Storage**. |
logic-apps | Create Single Tenant Workflows Visual Studio Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-visual-studio-code.md | This how-to guide shows how to create an example integration workflow that runs When you're ready, you can deploy your logic app to Azure where your workflow can run in the single-tenant Azure Logic Apps environment or in an App Service Environment v3 (Windows-based App Service plans only). You can also deploy and run your workflow anywhere that Kubernetes can run, including Azure, Azure Kubernetes Service, on premises, or even other cloud providers, due to the Azure Logic Apps containerized runtime. > [!NOTE]+ > > Deploying your logic app to a Kubernetes cluster is currently in public preview. For more information about single-tenant Azure Logic Apps, review [Single-tenant versus multitenant in Azure Logic Apps](single-tenant-overview-compare.md#resource-environment-differences). Before you can create your logic app, create a local project so that you can man > [!NOTE] > - > By default, in your **local.settings.json** file, the language worker runtime value for your - > Standard logic app is **`dotnet`**. Previously, **`node`** was the default value. However, - > **`dotnet`** is now the default value for all new and existing deployed Standard logic apps, - > even for apps that had a different value. This change shouldn't affect your workflow's runtime, - > and everything should work the same way as before. For more information, see the + > The **FUNCTIONS_WORKER_RUNTIME** app setting is required for your Standard logic app, and the + > value was previously set to **node**. However, the required value is now **dotnet** for all new + > and existing deployed Standard logic apps. This change in value shouldn't affect your workflow's + > runtime, so everything should work the same way as before. For more information, see the > [**FUNCTIONS_WORKER_RUNTIME** app setting](edit-app-settings-host-settings.md#reference-local-settings-json).- > - > The **APP_KIND** app setting for your Standard logic app is set to **workflowApp**, but in some - > scenarios, this app setting is missing, for example, due to automation using Azure Resource Manager - > templates or other scenarios where the setting isn't included. If certain actions don't work, - > such as the **Execute JavaScript Code** action or the workflow stops working, check that the - > **APP_KIND** app setting exists and is set to to **workflowApp**. For more information, see the - > [**APP_KIND** app setting](edit-app-settings-host-settings.md#reference-local-settings-json). + > + > The **APP_KIND** app setting is required for your Standard logic app, and the value + > must be **workflowApp**. However, in some scenarios, this app setting might be missing, + > for example, due to automation using Azure Resource Manager templates or other scenarios + > where the setting isn't included. If certain actions don't work, such as the **Execute JavaScript Code** + > action, or if the workflow stops working, check that the **APP_KIND** app setting exists and is set to **workflowApp**. + > For more information, see the [**APP_KIND** app setting](edit-app-settings-host-settings.md#reference-local-settings-json). <a name="convert-project-nuget"></a> |
logic-apps | Edit App Settings Host Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/edit-app-settings-host-settings.md | In Visual Studio Code, at your logic app project's root level, the **local.setti App settings in Azure Logic Apps work similarly to app settings in Azure Functions or Azure Web Apps. If you've used these other services before, you might already be familiar with app settings. For more information, review [App settings reference for Azure Functions](../azure-functions/functions-app-settings.md) and [Work with Azure Functions Core Tools - Local settings file](../azure-functions/functions-develop-local.md#local-settings-file). -For your workflow to run properly, some app settings are marked as "required". +For your workflow to run properly, some app settings are required. -| Setting | Required | Default value | Description | -||-||-| -| `APP_KIND` | Yes | `workflowApp` | Required to set the app type for the Standard logic app resource. | +| Setting | Required | Value | Description | +||-|-|-| +| `APP_KIND` | Yes | `workflowApp` | Required to set the app type for the Standard logic app resource. The value must be set to **`workflowApp`**. <br><br>**Note**: In some scenarios, this app setting might be missing, for example, due to automation using Azure Resource Manager templates or other scenarios where the setting isn't included. If certain actions don't work, such as the **Execute JavaScript Code** action, or if the workflow stops working, check that the **APP_KIND** app setting exists and is set to **`workflowApp`**. | | `AzureWebJobsStorage` | Yes | None | Required to set the connection string for an Azure storage account. For more information, see [AzureWebJobsStorage](../azure-functions/functions-app-settings.md#azurewebjobsstorage). | | `FUNCTIONS_EXTENSION_VERSION` | Yes | `~4` | Required to set the Azure Functions version. For more information, see [FUNCTIONS_EXTENSION_VERSION](/azure/azure-functions/functions-app-settings#functions_extension_version). |-| `FUNCTIONS_WORKER_RUNTIME` | Yes | `dotnet` | Required to set the language worker runtime for your logic app resource and workflows. <br><br>**Note**: Previously, this setting's default value was **`node`**. Now, **`dotnet`** is the default value for all new and existing deployed Standard logic apps, even for apps that had a different different value. This change shouldn't affect your workflow's runtime, and everything should work the same way as before.<br><br>For more information, see [FUNCTIONS_WORKER_RUNTIME](../azure-functions/functions-app-settings.md#functions_worker_runtime). | +| `FUNCTIONS_WORKER_RUNTIME` | Yes | `dotnet` | Required to set the language worker runtime for your logic app resource and workflows. <br><br>**Note**: This setting's value was previously set to **`node`**, but now the required value is **`dotnet`** for all new and existing deployed Standard logic apps. This change shouldn't affect your workflow's runtime, so everything should work the same way as before. <br><br>For more information, see [FUNCTIONS_WORKER_RUNTIME](../azure-functions/functions-app-settings.md#functions_worker_runtime). | | `ServiceProviders.Sftp.FileUploadBufferTimeForTrigger` | No | `00:00:20` <br>(20 seconds) | Sets the buffer time to ignore files that have a last modified timestamp that's greater than the current time. This setting is useful when large file writes take a long time and avoids fetching data for a partially written file. |
| | `ServiceProviders.Sftp.OperationTimeout` | No | `00:02:00` <br>(2 min) | Sets the time to wait before timing out on any operation. | | `ServiceProviders.Sftp.ServerAliveInterval` | No | `00:30:00` <br>(30 min) | Sends a "keep alive" message to keep the SSH connection active if no data exchange with the server happens during the specified period. | |
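Because the required values in the table above are a common source of deployment drift, here's a small, hedged sketch that checks a Standard logic app project's **local.settings.json** for the required settings. The file path is an assumption, and the check assumes the usual local project layout where settings live under the **Values** object.

```python
# Minimal sketch: confirm that required Standard logic app settings are present
# in local.settings.json with the expected values.
import json
from pathlib import Path

SETTINGS_PATH = Path("local.settings.json")  # assumed location: project root

REQUIRED = {
    "APP_KIND": "workflowApp",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "FUNCTIONS_EXTENSION_VERSION": "~4",
}

values = json.loads(SETTINGS_PATH.read_text()).get("Values", {})

for name, expected in REQUIRED.items():
    actual = values.get(name)
    if actual != expected:
        print(f"{name}: expected '{expected}', found '{actual}'")
    else:
        print(f"{name}: OK")
```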
migrate | Migrate V1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-v1.md | - Title: Work with the previous version of Azure Migrate -description: Describes how to work with the previous version of Azure Migrate. --- Previously updated : 03/08/2023-----# Work with the previous version of Azure Migrate --This article provides information about working with the previous version of Azure Migrate. ---There are two versions of the Azure Migrate service: --- **Current version**: Use this version to create Azure Migrate projects, discover on-premises machines, and orchestrate assessments and migrations. [Learn more](whats-new.md) about what's new in this version.-- **Previous version**: If you're using the previous version of Azure Migrate (only assessment of on-premises VMware VMs was supported), you should now use the current version. The previous version projects are referred to as Classic projects in this article. Classic Azure Migrate is retiring in Feb 2024. After Feb 2024, the classic version of Azure Migrate will no longer be supported and the inventory metadata in classic projects will be deleted. If you still need to use classic Azure Migrate projects, this is what you can and can't do:- - You can no longer create migration projects. - - We recommend that you don't perform new discoveries. - - You can still access existing projects. - - You can still run assessments. - --## Upgrade between versions --You can't upgrade projects or components in the previous version to the new version. You need to [create a new Azure Migrate project](create-manage-projects.md), and [add assessment and migration tools](./create-manage-projects.md#next-steps) to it. Use the tutorials to understand how to use the assessment and migration tools available. If you had a Log Analytics workspace attached to a Classic project, you can attach it to a project of current version after you delete the Classic project. --## Find projects from previous version --Find projects from the previous version as follows: --1. In the Azure portal > **All services**, search for and select **Azure Migrate**. -2. On the Azure Migrate dashboard, there's a notification and a link to access old Azure Migrate projects. -3. Click the link to open Classic projects. --## Delete projects from previous version --Find and delete projects from the previous version as follows: --1. In the Azure portal > **All services**, search for and select **Azure Migrate**. -2. On the Azure Migrate dashboard, there's a notification and a link to access old Azure Migrate projects. -3. Click the link to open Classic projects. -4. Select the project you would like to delete and delete it. ---## Create an assessment --After VMs are discovered in the portal, you group them and create assessments. --- You can create on-premises assessments immediately after VMs are discovered in the portal.-- For performance-based assessments, we recommend you wait at least a day before creating a performance-based assessment, to get reliable size recommendations.--Create an assessment as follows: --1. In the project **Overview** page, click **+Create assessment**. -2. Click **View all** to review the assessment properties. -3. Create the group, and specify a group name. -4. Select the machines that you want to add to the group. -5. Click **Create Assessment**, to create the group and the assessment. -6. After the assessment is created, view it in **Overview** > **Dashboard**. -7. Click **Export assessment**, to download it as an Excel file. 
--If you would like to update an existing assessment with the latest performance data, you can use the **Recalculate** command on the assessment to update it. --## Review an assessment --An assessment has three stages: --- An assessment starts with a suitability analysis to figure out whether machines are compatible in Azure.-- Sizing estimations.-- Monthly cost estimation.--A machine only moves along to a later stage if it passes the previous one. For example, if a machine fails the suitability check, itΓÇÖs marked as unsuitable for Azure, and sizing and costing isn't done. ---### Review Azure readiness --The Azure readiness view in the assessment shows the readiness status of each VM. --**Readiness** | **State** | **Details** - | | -Ready for Azure | No compatibility issues. The machine can be migrated as-is to Azure, and it will boot in Azure with full Azure support. | For VMs that are ready, Azure Migrate recommends a VM size in Azure. -Conditionally ready for Azure | The machine might boot in Azure, but might not have full Azure support. For example, a machine with an older version of Windows Server that isn't supported in Azure. | Azure Migrate explains the readiness issues, and provides remediation steps. -Not ready for Azure | The VM won't boot in Azure. For example, if a VM has a disk that's more than 4 TB, it can't be hosted on Azure. | Azure Migrate explains the readiness issues and provides remediation steps. -Readiness unknown | Azure Migrate can't identify Azure readiness, usually because data isn't available. | --#### Azure VM properties -Readiness takes into account a number of VM properties, to identify whether the VM can run in Azure. ---**Property** | **Details** | **Readiness** - | | -**Boot type** | BIOS supported. UEFI not supported. | Conditionally ready if boot type is UEFI. -**Cores** | Machines core <= the maximum number of cores (128) supported for an Azure VM.<br/><br/> If performance history is available, Azure Migrate considers the utilized cores.<br/>If a comfort factor is specified in the assessment settings, the number of utilized cores is multiplied by the comfort factor.<br/><br/> If there's no performance history, Azure Migrate uses the allocated cores, without applying the comfort factor. | Ready if less than or equal to limits. -**Memory** | The machine memory size <= the maximum memory (3892 GB on Azure M series Standard_M128m <sup>2</sup>) for an Azure VM. [Learn more](/azure/virtual-machines/sizes).<br/><br/> If performance history is available, Azure Migrate considers the utilized memory.<br/><br/>If a comfort factor is specified, the utilized memory is multiplied by the comfort factor.<br/><br/> If there's no history, the allocated memory is used, without applying the comfort factor.<br/><br/> | Ready if within limits. -**Storage disk** | Allocated size of a disk must be 4 TB (4096 GB) or less.<br/><br/> The number of disks attached to the machine must be 65 or less, including the OS disk. | Ready if within limits. -**Networking** | A machine must have 32 or less NICs attached to it. | Ready if within limits. --#### Guest operating system --Along with VM properties, Azure Migrate also looks at the guest OS of the on-premises VM to identify if the VM can run in Azure. --- Azure Migrate considers the OS specified in the vCenter Server.-- Since the discovery done by Azure Migrate is appliance-based, it does not have a way to verify if the OS running inside the VM is same as the one specified in vCenter Server.--The following logic is used. 
--**Operating System** | **Details** | **Readiness** - | | -Windows Server 2016 and all SPs | Azure provides full support. | Ready for Azure -Windows Server 2012 R2 and all SPs | Azure provides full support. | Ready for Azure -Windows Server 2012 and all SPs | Azure provides full support. | Ready for Azure -Windows Server 2008 R2 and all SPs | Azure provides full support.| Ready for Azure -Windows Server 2008 (32-bit and 64-bit) | Azure provides full support. | Ready for Azure -Windows Server 2003, 2003 R2 | Out-of-support and need a [Custom Support Agreement (CSA)](/troubleshoot/azure/virtual-machines/server-software-support) for support in Azure. | Conditionally ready for Azure. Consider upgrading the OS before migrating to Azure. -Windows 2000, 98, 95, NT, 3.1, MS-DOS | Out-of-support. The machine might boot in Azure, but no OS support is provided by Azure. | Conditionally ready for Azure. It is recommended to upgrade the OS before migrating to Azure. -Windows Client 7, 8 and 10 | Azure provides support with [Visual Studio subscription only.](/azure/virtual-machines/windows/client-images) | Conditionally ready for Azure. -Windows 10 Pro Desktop | Azure provides support with [Multitenant Hosting Rights](/azure/virtual-machines/windows/windows-desktop-multitenant-hosting-deployment). | Conditionally ready for Azure. -Windows Vista, XP Professional | Out-of-support. The machine might boot in Azure, but no OS support is provided by Azure. | Conditionally ready for Azure. It is recommended to upgrade the OS before migrating to Azure. -Linux | Azure endorses these [Linux operating systems](/azure/virtual-machines/linux/endorsed-distros). Other Linux operating systems might boot in Azure, but we recommend upgrading the OS to an endorsed version, before migrating to Azure. | Ready for Azure if the version is endorsed.<br/><br/>Conditionally ready if the version is not endorsed. -Other operating systems<br/><br/> For example, Oracle Solaris, Apple macOS etc., FreeBSD, etc. | Azure doesn't endorse these operating systems. The machine may boot in Azure, but no OS support is provided by Azure. | Conditionally ready for Azure. It is recommended to install a supported OS before migrating to Azure. -OS specified as **Other** in vCenter Server | Azure Migrate cannot identify the OS in this case. | Unknown readiness. Ensure that the OS running inside the VM is supported in Azure. -32-bit operating systems | The machine may boot in Azure, but Azure may not provide full support. | Conditionally ready for Azure, consider upgrading the OS of the machine from 32-bit OS to 64-bit OS before migrating to Azure. ---### Review sizing -- The Azure Migrate size recommendation depends on the sizing criterion specified in the assessment properties. --- If sizing is performance-based, the size recommendation considers the performance history of the VMs (CPU and memory) and disks (IOPS and throughput).-- If the sizing criterion is 'as on-premises', the size recommendation in Azure is based on the size of the VM on-premises. Disk sizing is based on the Storage type specified in the assessment properties (default is premium disks). Azure Migrate doesn't consider the performance data for the VM and disks.--### Review cost estimates --Cost estimates show the total compute and storage cost of running the VMs in Azure, along with the details for each machine. 
--- Cost estimates are calculated using the size recommendation for a VM machine, and its disks, and the assessment properties.-- Estimated monthly costs for compute and storage are aggregated for all VMs in the group.-- The cost estimation is for running the on-premises VM as Azure Infrastructure as a service (IaaS) VMs. Azure Migrate doesn't consider costs for Platform as a service (PaaS) or Software as a service (SaaS).--### Review confidence rating (performance-based assessment) --Each performance-based assessment is associated with a confidence rating. --- A confidence rating ranges from one-star to five-star (one-star being the lowest and five-star the highest).-- The confidence rating is assigned to an assessment, based on the availability of data points needed to compute the assessment.-- The confidence rating of an assessment helps you estimate the reliability of the size recommendations provided by Azure Migrate.-- Confidence rating isn't available for "as-is" on-premises assessments.--For performance-based sizing, Azure Migrate needs the following: -- Utilization data for CPU.-- VM memory data.-- For every disk attached to the VM, it needs the disk IOPS and throughput data.-- For each network adapter attached to a VM, Azure Migrate needs the network input/output.-- If any of the above aren't available, size recommendations (and thus confidence ratings) might not be reliable.---Depending on the percentage of data points available, the possible confidence ratings are summarized in the table. --**Availability of data points** | **Confidence rating** - | -0%-20% | 1 Star -21%-40% | 2 Star -41%-60% | 3 Star -61%-80% | 4 Star -81%-100% | 5 Star ---#### Assessment issues affecting confidence ratings --An assessment might not have all the data points available due to a number of reasons: --- You didn't profile your environment for the duration of the assessment. For example, if you create the assessment with performance duration set to one day, you must wait for at least a day after you start the discovery, or all the data points to be collected.-- Some VMs were shut down during the period for which the assessment was calculated. If any VMs were powered off for part of the duration, Azure Migrate can't collect performance data for that period.-- Some VMs were created in between during the assessment calculation period. For example, if you create an assessment using the last month's performance history but create a number of VMs in the environment a week ago, the performance history of the new VMs won't be for the entire duration.--> [!NOTE] -> If the confidence rating of any assessment is below five-stars, wait for at least a day for the appliance to profile the environment, and then recalculate the assessment. If you don't performance-based sizing might not be reliable. If you don't want to recalculate, we recommended switching to as on-premises sizing, by changing the assessment properties. ----## Create groups using dependency visualization --In addition to creating groups manually, you can create groups using dependency visualization. -- You typically use this method when you want to assess groups with higher levels of confidence by cross-checking machine dependencies, before you run an assessment.-- Dependency visualization can help you effectively plan your migration to Azure. 
It helps you ensure that nothing is left behind, and that surprise outages do not occur when you are migrating to Azure.-- You can discover all interdependent systems that need to migrate together and identify whether a running system is still serving users or is a candidate for decommissioning instead of migration.-- Azure Migrate uses the Service Map solution in Azure Monitor to enable dependency visualization.--> [!NOTE] -> Dependency visualization is not available in Azure Government. --To set up dependency visualization, you associate a Log Analytics workspace with an Azure Migrate project, install agents on machines for which you want to visualize dependencies, and then create groups using dependency information. ----### Associate a Log Analytics workspace --To use dependency visualization, you associate a Log Analytics workspace with a migration project. You can only create or attach a workspace in the same subscription where the migration project is created. --1. To attach a Log Analytics workspace to a project, in **Overview** > **Essentials**, click **Requires configuration**. -2. You can create a new workspace, or attach an existing one: - - To create a new workspace, specify a name. The workspace is created in a region in the same [Azure geography](https://azure.microsoft.com/global-infrastructure/geographies/) as the migration project. - - When you attach an existing workspace, you can pick from all the available workspaces in the same subscription as the migration project. Only those workspaces are listed which were created in a [supported Service Map region](https://azure.microsoft.com/global-infrastructure/services/?products=monitor®ions=all). You can monitor Azure VMs in any region. The VMs themselves aren't limited to the regions supported by the Log Analytics workspace. To attach a workspace, ensure that you have 'Reader' access to the workspace. --> [!NOTE] -> You can't change the workspace associated with a migration project. --### Download and install VM agents --After you configure a workspace, you download and install agents on each on-premises machine that you want to evaluate. In addition, if you have machines with no internet connectivity, you need to download and install [Log Analytics gateway](/azure/azure-monitor/agents/gateway) on them. --1. In **Overview**, click **Manage** > **Machines**, and select the required machine. -2. In the **Dependencies** column, click **Install agents**. -3. On the **Dependencies** page, download and install the Microsoft Monitoring Agent (MMA), and the Dependency agent on each VM you want to assess. -4. Copy the workspace ID and key. You need these when you install the MMA on the on-premises machine. --> [!NOTE] -> To automate the installation of agents you can use a deployment tool such as Configuration Manager or a partner tool such as Intigua, that provides an agent deployment solution for Azure Migrate. ---#### Install the MMA agent on a Windows machine --To install the agent on a Windows machine: --1. Double-click the downloaded agent. -2. On the **Welcome** page, click **Next**. On the **License Terms** page, click **I Agree** to accept the license. -3. In **Destination Folder**, keep or modify the default installation folder > **Next**. -4. In **Agent Setup Options**, select **Azure Log Analytics** > **Next**. -5. Click **Add** to add a new Log Analytics workspace. Paste in the workspace ID and key that you copied from the portal. Click **Next**. 
--You can install the agent from the command line or using an automated method such as Configuration Manager. [Learn more](/azure/azure-monitor/agents/log-analytics-agent#installation-options) about using these methods to install the MMA agent. --#### Install the MMA agent on a Linux machine --To install the agent on a Linux machine: --1. Transfer the appropriate bundle (x86 or x64) to your Linux computer using scp/sftp. -2. Install the bundle by using the --install argument. -- `sudo sh ./omsagent-<version>.universal.x64.sh --install -w <workspace id> -s <workspace key>` --[Learn more](/azure/azure-monitor/agents/agents-overview#supported-operating-systems) about the list of Linux operating systems support by MMA. --### Install the MMA agent on a machine monitored by Operations Manager --For machines monitored by System Center Operations Manager 2012 R2 or later, there is no need to install the MMA agent. Service Map integrates with the Operations Manager MMA to gather the necessary dependency data. [Learn more](/previous-versions/azure/azure-monitor/vm/service-map-scom#prerequisites). The Dependency agent needs to be installed. --### Install the Dependency agent --1. To install the Dependency agent on a Windows machine, double-click the setup file and follow the wizard. -2. To install the Dependency agent on a Linux machine, install as root using the following command: -- `sh InstallDependencyAgent-Linux64.bin` -- - Learn more about the [Dependency agent support](/azure/azure-monitor/vm/vminsights-enable-overview#supported-operating-systems) for the Windows and Linux operating systems. - - [Learn more](/azure/azure-monitor/vm/vminsights-dependency-agent-maintenance#install-or-upgrade-dependency-agent) about how you can use scripts to install the Dependency agent. -->[!NOTE] -> The Azure Monitor for VMs article referenced to provide an overview of the system prerequisites and methods to deploy the Dependency agent are also applicable to the Service Map solution. --### Create a group with dependency mapping --1. After you install the agents, go to the portal, and click **Manage** > **Machines**. -2. Search for the machine where you installed the agents. -3. The **Dependencies** column for the machine should now show as **View Dependencies**. Click the column to view the dependencies of the machine. -4. The dependency map for the machine shows the following details: - - Inbound (Clients) and outbound (Servers) TCP connections to/from the machine - - The dependent machines that do not have the MMA and dependency agent installed are grouped by port numbers. - - The dependent machines that have the MMA and the dependency agent installed are shown as separate boxes. - - Processes running inside the machine, you can expand each machine box to view the processes - - Machine properties, including the FQDN, operating System, MAC address are shown. You can click on each machine box to view the details. --4. You can view dependencies for different time durations by clicking on the time duration in the time range label. By default the range is an hour. You can modify the time range, or specify start and end dates, and the duration. -- > [!NOTE] - > A time range of up to an hour is supported. Use Azure Monitor logs to [query dependency data](./how-to-create-group-machine-dependencies.md) over a longer duration. --5. After you've identified dependent machines that you want to group together, use Ctrl+Click to select multiple machines on the map, and click **Group machines**. -6. Specify a group name. 
Verify that the dependent machines are discovered by Azure Migrate. -- > [!NOTE] - > If a dependent machine is not discovered by Azure Migrate, you can't add it to the group. To add such machines to the group, you need to run the discovery process again with the right scope in vCenter Server and ensure that the machine is discovered by Azure Migrate. --7. If you want to create an assessment for this group, select the checkbox to create a new assessment for the group. -8. Click **OK** to save the group. --Once the group is created, it is recommended to install agents on all the machines of the group and refine the group by visualizing the dependency of the entire group. --## Query dependency data from Azure Monitor logs --Dependency data captured by Service Map is available for querying in the Log Analytics workspace associated with your Azure Migrate project. [Learn more](/previous-versions/azure/azure-monitor/vm/service-map#log-analytics-records) about the Service Map data tables to query in Azure Monitor logs. --To run the Kusto queries: --1. After you install the agents, go to the portal and click **Overview**. -2. In **Overview**, go to **Essentials** section of the project and click on workspace name provided next to **OMS Workspace**. -3. On the Log Analytics workspace page, click **General** > **Logs**. -4. Write your query to gather dependency data using Azure Monitor logs. Find sample queries in the next section. -5. Run your query by clicking on Run. --[Learn more](/azure/azure-monitor/logs/log-analytics-tutorial) about how to write Kusto queries. --### Sample Azure Monitor logs queries --Following are sample queries you can use to extract dependency data. You can modify the queries to extract your preferred data points. An exhaustive list of the fields in dependency data records is available [here](/previous-versions/azure/azure-monitor/vm/service-map#log-analytics-records). Find more sample queries [here](/previous-versions/azure/azure-monitor/vm/service-map#sample-log-searches). --#### Summarize inbound connections on a set of machines --The records in the table for connection metrics, VMConnection, do not represent individual physical network connections. Multiple physical network connections are grouped into a logical connection. [Learn more](/previous-versions/azure/azure-monitor/vm/service-map#connections) about how physical network connection data is aggregated into a single logical record in VMConnection. 
--``` -// the machines of interest -let ips=materialize(ServiceMapComputer_CL -| summarize ips=makeset(todynamic(Ipv4Addresses_s)) by MonitoredMachine=ResourceName_s -| mvexpand ips to typeof(string)); -let StartDateTime = datetime(2019-03-25T00:00:00Z); -let EndDateTime = datetime(2019-03-30T01:00:00Z); -VMConnection -| where Direction == 'inbound' -| where TimeGenerated > StartDateTime and TimeGenerated < EndDateTime -| join kind=inner (ips) on $left.DestinationIp == $right.ips -| summarize sum(LinksEstablished) by Computer, Direction, SourceIp, DestinationIp, DestinationPort -``` --#### Summarize volume of data sent and received on inbound connections between a set of machines --``` -// the machines of interest -let ips=materialize(ServiceMapComputer_CL -| summarize ips=makeset(todynamic(Ipv4Addresses_s)) by MonitoredMachine=ResourceName_s -| mvexpand ips to typeof(string)); -let StartDateTime = datetime(2019-03-25T00:00:00Z); -let EndDateTime = datetime(2019-03-30T01:00:00Z); -VMConnection -| where Direction == 'inbound' -| where TimeGenerated > StartDateTime and TimeGenerated < EndDateTime -| join kind=inner (ips) on $left.DestinationIp == $right.ips -| summarize sum(BytesSent), sum(BytesReceived) by Computer, Direction, SourceIp, DestinationIp, DestinationPort -``` ---## Next steps -[Learn about](migrate-services-overview.md) the latest version of Azure Migrate. |
modeling-simulation-workbench | Resources Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/resources-troubleshoot.md | Title: Troubleshoot Azure Modeling and Simulation Workbench -description: Learn how to troubleshoot issues with an Azure Modeling and Simulation Workbench. +description: Learn how to troubleshoot issues with an Azure Modeling and Simulation Workbench. Last updated 07/19/2023 This troubleshooting guide contains general troubleshooting steps and information for Azure Modeling and Simulation Workbench. The content is organized by topic type. +Additional troubleshooting steps for transient issues can be found on the [Known Issues](./troubleshoot-known-issues.md) page. + ## Remote desktop troubleshooting ### Remote desktop access error |
modeling-simulation-workbench | Troubleshoot Known Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/troubleshoot-known-issues.md | + + Title: Known issues for Azure Modeling and Simulation Workbench +description: "Troubleshooting guide for known issues on Azure Modeling and Simulation Workbench." ++++ Last updated : 09/29/2024++#customer intent: As a user, I want to troubleshoot and understand known issues with Azure Modeling and Simulation Workbench. ++++# Known issues: Azure Modeling and Simulation Workbench ++The Modeling and Simulation Workbench is a secure, cloud-based platform for collaborative engineering, design, and simulation workloads that require security and confidentiality. The Workbench provides isolation for separate enterprises, allowing each to bring in code, data, or applications and apply them to a shared environment without exposing confidential intellectual property. ++This Known Issues guide provides troubleshooting and advisory information for resolving or acknowledging issues to be addressed. Where applicable, workaround or mitigation steps are provided. ++## EDA license upload failures on server name ++When uploading Electronic Design Automation (EDA) license files with server names that contain a dash ("-") symbol, the chamber license file server fails to process the file. For some license files, the `SERVER` line server name isn't being parsed correctly. The parser fails to tokenize this line so that it can be reformatted for the chamber license server environment. ++### Troubleshooting steps ++If your license server has any dash ("-") characters in the name and fails when uploading a license file, this issue might be present for your release. Replace the server name with any single word placeholder using only alphanumeric characters (A-Z, a-z, 0-9) and no special characters or "-". For example, if your `SERVER` line looks like this: ++```INI +SERVER license-server-01 6045BDEB339C 1717 +``` ++Replace the license server name with a name that contains no dashes. The name itself is irrelevant because the license server replaces whatever is in the server position with the correctly formatted name. ++```INI +SERVER serverplaceholder 6045BDEB339C 1717 +``` ++## Synopsys license file upload failures due to missing port numbers ++Certain Synopsys EDA license files fail when uploaded to the Modeling and Simulation Workbench chamber license service without a port number. ++### Troubleshooting steps ++A Synopsys license file issued without a port number on the `VENDOR` line won't successfully upload unless edited by hand to include the port number. The port number can be found on the chamber license server overview page. ++A license file issued without a port number on the `VENDOR` line is shown. ++```INI +VENDOR snpslmd /path/to/snpslmd +``` ++Add the license server port to the end of the `VENDOR` line. You don't need to update the tool file path, indicated in the example as */path/to/snpslmd*, or any other content. ++```INI +VENDOR snpslmd /path/to/snpslmd 27021 +``` ++## Users on public IP connector with IP on allowlist can't access workbench desktop or data pipeline ++On a chamber with a public IP connector, users whose IP is listed after the first entry of the allowlist can't access the chamber, either through the desktop or through the data pipeline using AzCopy.
If the allowlist on a public IP connector contains overlapping networks, in some instances the preprocessor might fail to detect the overlapping networks before attempting to commit them to the active NSG. Failures aren't reported back to the user. Other NSG rules elsewhere - either before or after the interfering rule - might not be processed, defaulting to the "deny all" rule. Access to the connector might be blocked unexpectedly for users that previously had access and appear elsewhere in the list. Access is blocked for all connector interactions including desktop, data pipeline upload, and data pipeline download. The connector still responds to port queries, but doesn't allow interactions from an IP or IP range shown in the connector networking allowlist. ++### Prerequisites ++* A chamber is configured with a public IP connector (gateway appears as "None"). ++* The allowlist has entries with CIDR-masked IP ranges less than a single host /32 (/31 and smaller). ++* The IP ranges of two or more entries with subnet masking are overlapping. Overlapping ranges can sometimes be identified with leading octets being identical, but trailing octets marked with a "0". ++### Troubleshooting steps ++If a user that could previously access the workbench loses connectivity even though their IP is still on the allowlist, an overlapping but unhandled error with the allowlist might be blocking. Loss of connectivity doesn't preclude that any onsite or local firewall, VPN, or gateway might also be blocking access. ++Users should identify overlapping IP ranges by checking the allowlist of masked subnets less than a single host (less than /32) and ensure that those subnets have no overlap. Those overlapping subnets should be replaced with nonoverlapping subnets. An indicator of this is that the first allowlist entry is acknowledged, but other rules aren't. ++## Data pipeline upload file corruption or truncation ++Uploaded files to a chamber through the data pipeline might be truncated or otherwise corrupt. ++### Troubleshooting steps ++While uploading files into a chamber, you might see a file that isn't the expected length, corrupt, or otherwise doesn't pass a hash check. ++### Possible causes ++The file isn't corrupt or truncated, but still uploading. The data pipeline isn't a single stage and files placed in the upload pipeline don't appear instantaneously in the */mount/datapipeline/datain* directory and are likely still completing. Check back later and verify the length or hash. ++## Azure VMs located in the same region as a workbench can't access a public IP connector ++Resources deployed outside of a workbench, in particular virtual machines (VM), can't access a chamber through a public IP connector if located in the same region. A VM deployed in the same region or even the same resource group as a workbench isn't able to connect to the chamber's connector. The VM's public facing IP address is on the allowlist. A locally installed version of AzCopy is unable to access the chamber's data pipeline. Errors include timeouts or not authorized. ++### Prerequisites ++* A workbench chamber is deployed using a public IP connector in one region. ++* A virtual machine or other resource with a public facing IP address is deployed in the same region. ++* The connector's allowlist has the public facing IP address of the VM. ++### Troubleshooting steps ++Azure resources in the same region don't use public IP or internet to communicate. 
Rather, if an Azure resource initiates communication to another Azure resource in the same region, private Azure networking is used. As a result, both the source and destination IP addresses are private network addresses, which aren't permitted on the connector's allowlist. ++The VM or other directly communicating resource should be located in a region other than the workbench's region. Networking continues to happen on Azure's backbone network and doesn't pass over the general internet, but instead uses the public IP address. The new region can be any permissible region for the resource and doesn't need to be an active region for the Modeling and Simulation Workbench. ++<!-- KEEP THIS FOR FUTURE CHANGES +## [Issue] +Required: Issue - H2 ++Each known issue should be in its own section. +Provide a title for the section so that users can +easily identify the issue that they are experiencing. ++[Describe the issue.] ++<!-- Required: Issue description (no heading) ++Describe the issue. ++### Prerequisites ++<!--Optional: Prerequisites - H3 ++Use clear and unambiguous language, and use +an unordered list format. ++### Troubleshooting steps ++<!-- Optional: Troubleshooting steps - H3 ++Not all known issues can be corrected, but if a solution +is known, describe the steps to take to correct the issue. ++### Possible causes ++<!-- Optional: Possible causes - H3 ++In an H3 section, describe potential causes. ++--> |
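For the overlapping-allowlist known issue described above, overlapping CIDR entries can be caught before they're submitted to the connector. Here's a minimal sketch using Python's standard `ipaddress` module; the allowlist entries shown are placeholders.

```python
# Minimal sketch: detect overlapping CIDR ranges in a public IP connector allowlist
# before submitting them, so the "deny all" fallback isn't triggered unexpectedly.
from ipaddress import ip_network
from itertools import combinations

# Placeholder allowlist entries - replace with your connector's entries.
allowlist = [
    "198.51.100.0/28",
    "198.51.100.8/30",   # overlaps the /28 above
    "203.0.113.25/32",
]

networks = [ip_network(entry, strict=False) for entry in allowlist]

overlaps = [
    (str(a), str(b))
    for a, b in combinations(networks, 2)
    if a.overlaps(b)
]

if overlaps:
    for a, b in overlaps:
        print(f"Overlap found: {a} and {b} - replace with nonoverlapping ranges.")
else:
    print("No overlapping ranges found in the allowlist.")
```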
network-watcher | Diagnose Vm Network Traffic Filtering Problem Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-traffic-filtering-problem-cli.md | When no longer needed, use [az group delete](/cli/azure/group) to delete **myRes az group delete --name 'myResourceGroup' --yes ``` -## Next steps --In this quickstart, you created a VM and diagnosed inbound and outbound network traffic filters. You learned that network security group rules allow or deny traffic to and from a VM. Learn more about [security rules](../virtual-network/network-security-groups-overview.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json) and how to [create security rules](../virtual-network/manage-network-security-group.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json#create-a-security-rule). +## Next step -Even with the proper network traffic filters in place, communication to a virtual machine can still fail, due to routing configuration. To learn how to diagnose virtual machine routing problems, see [Diagnose a virtual machine network routing problem](diagnose-vm-network-routing-problem-powershell.md). To diagnose outbound routing, latency, and traffic filtering problems with one tool, see [Troubleshoot connections with Azure Network Watcher](network-watcher-connectivity-powershell.md). +> [!div class="nextstepaction"] +> [Diagnose a virtual machine network routing problem](diagnose-vm-network-routing-problem.md) |
network-watcher | Diagnose Vm Network Traffic Filtering Problem Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-traffic-filtering-problem-powershell.md | When no longer needed, use [Remove-AzResourceGroup](/powershell/module/az.resour Remove-AzResourceGroup -Name 'myResourceGroup' -Force ``` -## Next steps --In this quickstart, you created a VM and diagnosed inbound and outbound network traffic filters. You learned that network security group rules allow or deny traffic to and from a VM. Learn more about [security rules](../virtual-network/network-security-groups-overview.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json) and how to [create security rules](../virtual-network/manage-network-security-group.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json#create-a-security-rule). +## Next step -Even with the proper network traffic filters in place, communication to a virtual machine can still fail, due to routing configuration. To learn how to diagnose virtual machine routing problems, see [Diagnose a virtual machine network routing problem](diagnose-vm-network-routing-problem-powershell.md). To diagnose outbound routing, latency, and traffic filtering problems with one tool, see [Troubleshoot connections with Azure Network Watcher](network-watcher-connectivity-powershell.md). +> [!div class="nextstepaction"] +> [Diagnose a virtual machine network routing problem](diagnose-vm-network-routing-problem.md) |
network-watcher | Diagnose Vm Network Traffic Filtering Problem | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-traffic-filtering-problem.md | Last updated 10/26/2023 In this quickstart, you deploy a virtual machine and use Network Watcher [IP flow verify](network-watcher-ip-flow-verify-overview.md) to test the connectivity to and from different IP addresses. Using the IP flow verify results, you determine the security rule that's blocking the traffic and causing the communication failure and learn how you can resolve it. You also learn how to use the [effective security rules](effective-security-rules-overview.md) for a network interface to determine why a security rule is allowing or denying traffic. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. |
operator-nexus | How To Credential Manager Key Vault | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/how-to-credential-manager-key-vault.md | az networkcloud cluster update --ids /subscriptions/<subscription ID>/resourceGr az networkcloud cluster show --ids /subscriptions/<subscription ID>/resourceGroups/<Resource Group Name>/providers/Microsoft.NetworkCloud/clusters/<Nexus Cluster Name> --query secretArchive ``` +## Add a permission to User-assigned identity ++When using a User-assigned identity, add the following role assignment to the UAI resource: ++1. Open the Azure Portal and locate the User-assigned identity in question. +2. Under **Access control (IAM)**, click **Add role assignment**. +3. Select **Role**: Managed Identity Operator. (See the permissions that the role provides [managed-identity-operator](/azure/role-based-access-control/built-in-roles/identity#managed-identity-operator)). +4. Assign access to: **User, group, or service principal**. +5. Select **Member**: AFOI-NC-MGMT-PME-PROD application. +6. Review and assign. ++ For more help: ```console |
operator-nexus | Howto Configure Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-configure-cluster.md | az networkcloud cluster create --name "$CLUSTER_NAME" --location "$LOCATION" \ | COMPX_RACK_SKU | Rack SKU for CompX Rack; repeat for each rack in compute-rack-definitions | | COMPX_RACK_SN | Rack Serial Number for CompX Rack; repeat for each rack in compute-rack-definitions | | COMPX_RACK_LOCATION | Rack physical location for CompX Rack; repeat for each rack in compute-rack-definitions |-| COMPX_SVRY_BMC_PASS | CompX Rack ServerY BMC password; repeat for each rack in compute-rack-definitions and for each server in rack | +| COMPX_SVRY_BMC_PASS | CompX Rack ServerY Baseboard Management Controller (BMC) password; repeat for each rack in compute-rack-definitions and for each server in rack | | COMPX_SVRY_BMC_USER | CompX Rack ServerY BMC user; repeat for each rack in compute-rack-definitions and for each server in rack | | COMPX_SVRY_BMC_MAC | CompX Rack ServerY BMC MAC address; repeat for each rack in compute-rack-definitions and for each server in rack |-| COMPX_SVRY_BOOT_MAC | CompX Rack ServerY boot NIC MAC address; repeat for each rack in compute-rack-definitions and for each server in rack | +| COMPX_SVRY_BOOT_MAC | CompX Rack ServerY boot Network Interface Card (NIC) MAC address; repeat for each rack in compute-rack-definitions and for each server in rack | | COMPX_SVRY_SERVER_DETAILS | CompX Rack ServerY details; repeat for each rack in compute-rack-definitions and for each server in rack |-| COMPX_SVRY_SERVER_NAME | CompX Rack ServerY name, repeat for each rack in compute-rack-definitions and for each server in rack | +| COMPX_SVRY_SERVER_NAME | CompX Rack ServerY name; repeat for each rack in compute-rack-definitions and for each server in rack | | MRG_NAME | Cluster managed resource group name | | MRG_LOCATION | Cluster Azure region | | NF_ID | Reference to Network Fabric | az networkcloud cluster create --name "$CLUSTER_NAME" --location "$LOCATION" \ | TENANT_ID | Subscription tenant ID | | SUBSCRIPTION_ID | Subscription ID | | KV_RESOURCE_ID | Key Vault ID |-| CLUSTER_TYPE | Type of cluster, Single, or MultiRack | -| CLUSTER_VERSION | NC Version of cluster | +| CLUSTER_TYPE | Type of cluster, Single, or MultiRack | +| CLUSTER_VERSION | Network Cloud (NC) Version of cluster | | TAG_KEY1 | Optional tag1 to pass to Cluster Create | | TAG_VALUE1 | Optional tag1 value to pass to Cluster Create | | TAG_KEY2 | Optional tag2 to pass to Cluster Create | You can find examples for an 8-Rack 2M16C SKU cluster using these two files: >[!NOTE] >To get the correct formatting, copy the raw code file. The values within the cluster.parameters.jsonc file are customer specific and may not be a complete list. Update the value fields for your specific environment. -1. In a web browser, go to the [Azure portal](https://portal.azure.com/) and sign in. -1. From the Azure portal search bar, search for 'Deploy a custom template' and then select it from the available services. +1. Navigate to [Azure portal](https://portal.azure.com/) in a web browser and sign in. +1. Search for 'Deploy a custom template' in the Azure portal search bar, and then select it from the available services. 1. Click on Build your own template in the editor. 1. Click on Load file. Locate your cluster.jsonc template file and upload it. 1. Click Save. You can find examples for an 8-Rack 2M16C SKU cluster using these two files: 1. Click Load file. 
Locate your cluster.parameters.jsonc parameters file and upload it. 1. Click Save. 1. Select the correct Subscription.-1. Search for the Resource group to see if it already exists. If not, create a new Resource group. +1. Search for the Resource group to see if it already exists. If not, create a new Resource group. 1. Make sure all Instance Details are correct. 1. Click Review + create. ### Cluster validation -A successful Operator Nexus Cluster creation results in the creation of an AKS cluster +A successful Operator Nexus Cluster creation results in the creation of an Azure Kubernetes Service (AKS) cluster inside your subscription. The cluster ID, cluster provisioning state, and deployment state are returned as a result of a successful `cluster create`. Cluster create Logs can be viewed in the following locations: ## Deploy Cluster -After creating the cluster, the deploy cluster action can be triggered. +The deploy Cluster action can be triggered after creating the Cluster. The deploy Cluster action creates the bootstrap image and deploys the Cluster. Deploy Cluster initiates a sequence of events that occur in the Cluster Manager. -1. Validation of the cluster/rack properties +1. Validation of the cluster/rack properties. 2. Generation of a bootable image for the ephemeral bootstrap cluster (Validation of Infrastructure).-3. Interaction with the IPMI interface of the targeted bootstrap machine. -4. Perform hardware validation checks +3. Interaction with the Intelligent Platform Management Interface (IPMI) interface of the targeted bootstrap machine. +4. Performing hardware validation checks. 5. Monitoring of the Cluster deployment process. Deploy the on-premises Cluster: az networkcloud cluster deploy \ > See the section [Cluster Deploy Failed](#cluster-deploy-failed) for more detailed steps. > Optionally, the command can run asynchronously using the `--no-wait` flag. -### Cluster Deploy with hardware validation +### Cluster Deployment with hardware validation During a Cluster deploy process, one of the steps executed is hardware validation. The hardware validation procedure runs various test and checks against the machines passed and/or are available to meet the thresholds necessary for deployment to c > Additionally, the provided Service Principal in the Cluster object is used for authentication against the Log Analytics Workspace Data Collection API. > This capability is only visible during a new deployment (Green Field); existing cluster will not have the logs available retroactively. +> [!NOTE] +> The RAID controller is reset during Cluster deployment wiping all data from the server's virtual disks. Any Baseboard Management Controller (BMC) virtual disk alerts can typically be ignored unless there are additional physical disk and/or RAID controllers alerts. + By default, the hardware validation process writes the results to the configured Cluster `analyticsWorkspaceId`. However, due to the nature of Log Analytics Workspace data collection and schema evaluation, there can be ingestion delay that can take several minutes or more. For this reason, the Cluster deployment proceeds even if there was a failure to write the results to the Log Analytics Workspace. To help address this possible event, the results, for redundancy, are also logge In the provided Cluster object's Log Analytics Workspace, a new custom table with the Cluster's name as prefix and the suffix `*_CL` should appear. 
In the _Logs_ section of the LAW resource, a query can be executed against the new `*_CL` Custom Log table. -#### Cluster Deploy Action with skipping specific bare-metal-machine +#### Cluster Deployment with skipping specific bare-metal-machine -A parameter can be passed in to the deploy command that represents the names of +The `--skip-validation-for-machines` parameter represents the names of bare metal machines in the cluster that should be skipped during hardware validation. Nodes skipped aren't validated and aren't added to the node pool. Additionally, nodes skipped don't count against the total used by threshold calculations. az networkcloud cluster show --resource-group "$CLUSTER_RG" \ ``` The Cluster deployment is in-progress when detailedStatus is set to `Deploying` and detailedStatusMessage shows the progress of deployment.-Some examples of deployment progress shown in detailedStatusMessage are `Hardware validation is in progress.` (if cluster is deployed with hardware validation) ,`Cluster is bootstrapping.`, `KCP initialization in progress.`, `Management plane deployment in progress.`, `Cluster extension deployment in progress.`, `waiting for "<rack-ids>" to be ready`, etc. +Some examples of deployment progress shown in detailedStatusMessage are `Hardware validation is in progress.` (if cluster is deployed with hardware validation), `Cluster is bootstrapping.`, `KCP initialization in progress.`, `Management plane deployment in progress.`, `Cluster extension deployment in progress.`, `waiting for "<rack-ids>" to be ready`, etc. :::image type="content" source="./media/nexus-deploy-kcp-status.png" lightbox="./media/nexus-deploy-kcp-status.png" alt-text="Screenshot of Azure portal showing cluster deploy progress kcp init."::: Note, `<APIVersion>` is the API version 2024-07-01 or newer. ## Delete a cluster -When deleting a cluster, it deletes the resources in Azure and the cluster that resides in the on-premises environment. +Deleting a cluster deletes the resources in Azure and the cluster that resides in the on-premises environment. >[!NOTE] >If there are any tenant resources that exist in the cluster, it will not be deleted until those resources are deleted. |
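To make the `--skip-validation-for-machines` option described above concrete, here's a minimal sketch of a deploy call that skips one machine; the shell variables mirror the parameter table earlier in this entry and are assumed to be set, and the exact value format the parameter accepts should be confirmed with the CLI help.

```bash
# Sketch only: deploy the Cluster while skipping hardware validation for one
# named bare metal machine, running asynchronously.
az networkcloud cluster deploy \
  --name "$CLUSTER_NAME" \
  --resource-group "$CLUSTER_RG" \
  --skip-validation-for-machines "$COMPX_SVRY_SERVER_NAME" \
  --no-wait
```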
operator-nexus | Troubleshoot Hardware Validation Failure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/troubleshoot-hardware-validation-failure.md | Expanding `result_detail` for a given category shows detailed results. * Memory/RAM Related Failure (memory_capacity_GB) (measured in GiB) * Memory specs are defined in the SKU. Memory below threshold value indicates missing or failed Dual In-Line Memory Module (DIMM). A failed DIMM would also be reflected in the `health_info` category. The following example shows a failed memory check. - ```json + ```yaml { "field_name": "memory_capacity_GB", "comparison_result": "Fail", Expanding `result_detail` for a given category shows detailed results. * CPU Related Failure (cpu_sockets) * CPU specs are defined in the SKU. Failed `cpu_sockets` check indicates a failed CPU or CPU count mismatch. The following example shows a failed CPU check. - ```json + ```yaml { "field_name": "cpu_sockets", "comparison_result": "Fail", Expanding `result_detail` for a given category shows detailed results. * Model Check Failure (Model) * Failed `Model` check indicates that wrong server is racked in the slot or there's a cabling mismatch. The following example shows a failed model check. - ```json + ```yaml { "field_name": "Model", "comparison_result": "Fail", Expanding `result_detail` for a given category shows detailed results. * Serial Number Check Failure (Serial_Number) * The server's serial number, also referred as the service tag, is defined in the cluster. Failed `Serial_Number` check indicates a mismatch between the serial number in the cluster and the actual serial number of the machine. The following example shows a failed serial number check. - ```json + ```yaml { "field_name": "Serial_Number", "comparison_result": "Fail", Expanding `result_detail` for a given category shows detailed results. * iDRAC License Check Failure * All iDRACs require a perpetual/production iDRAC datacenter or enterprise license. Trial licenses are valid for only 30 days. A failed `iDRAC License Check` indicates that the required iDRAC license is missing. The following examples show a failed iDRAC license check for a trial license and missing license respectively. - ```json + ```yaml { "field_name": "iDRAC License Check", "comparison_result": "Fail", Expanding `result_detail` for a given category shows detailed results. } ``` - ```json + ```yaml { "field_name": "iDRAC License Check", "comparison_result": "Fail", Expanding `result_detail` for a given category shows detailed results. * Firmware Version Checks * Firmware version checks were introduced in release 3.9. The following example shows the expected log for release versions before 3.9. - ```json + ```yaml { "system_info": { "system_info_result": "Pass", Expanding `result_detail` for a given category shows detailed results. * Firmware versions are determined based on the `cluster version` value in the cluster object. The following example shows a failed check due to indeterminate cluster version. If this problem is encountered, verify the version in the cluster object. - ```json + ```yaml { "system_info": { "system_info_result": "Fail", Expanding `result_detail` for a given category shows detailed results. ### Drive info category -* Disk Check Failure +* Disk Checks Failure * Drive specs are defined in the SKU. Mismatched capacity values indicate incorrect drives or drives inserted in to incorrect slots. 
Missing capacity and type fetched values indicate drives that are failed, missing, or inserted in to incorrect slots. - ```json + ```yaml { "field_name": "Disk_0_Capacity_GB", "comparison_result": "Fail", Expanding `result_detail` for a given category shows detailed results. } ``` - ```json + ```yaml { "field_name": "Disk_0_Capacity_GB", "comparison_result": "Fail", Expanding `result_detail` for a given category shows detailed results. } ``` - ```json + ```yaml { "field_name": "Disk_0_Type", "comparison_result": "Fail", Expanding `result_detail` for a given category shows detailed results. * Network Interface Cards (NIC) Check Failure * Dell server NIC specs are defined in the SKU. A mismatched link status indicates loose or faulty cabling or crossed cables. A mismatched model indicates incorrect NIC card is inserted in to slot. Missing link/model fetched values indicate NICs that are failed, missing, or inserted in to incorrect slots. - ```json + ```yaml { "field_name": "NIC.Slot.3-1-1_LinkStatus", "comparison_result": "Fail", Expanding `result_detail` for a given category shows detailed results. } ``` - ```json + ```yaml { "field_name": "NIC.Embedded.2-1-1_LinkStatus", "comparison_result": "Fail", Expanding `result_detail` for a given category shows detailed results. } ``` - ```json + ```yaml { "field_name": "NIC.Slot.3-1-1_Model", "comparison_result": "Fail", Expanding `result_detail` for a given category shows detailed results. } ``` - ```json + ```yaml { "field_name": "NIC.Slot.3-1-1_LinkStatus", "comparison_result": "Fail", Expanding `result_detail` for a given category shows detailed results. } ``` - ```json + ```yaml { "field_name": "NIC.Slot.3-1-1_Model", "comparison_result": "Fail", Expanding `result_detail` for a given category shows detailed results. * NIC Check L2 Switch Information * HWV reports L2 switch information for each of the server interfaces. The switch connection ID (switch interface MAC) and switch port connection ID (switch interface label) are informational. - ```json + ```yaml { "field_name": "NIC.Slot.3-1-1_SwitchConnectionID", "comparison_result": "Info", Expanding `result_detail` for a given category shows detailed results. } ``` - ```json + ```yaml { "field_name": "NIC.Slot.3-1-1_SwitchPortConnectionID", "comparison_result": "Info", Expanding `result_detail` for a given category shows detailed results. * Cabling Checks for Bonded Interfaces * Mismatched cabling is reported in the result_log. Cable check validates that that bonded NICs connect to switch ports with same Port ID. In the following example Peripheral Component Interconnect (PCI) 3/1 and 3/2 connect to "Ethernet1/1" and "Ethernet1/3" respectively on TOR, triggering a failure for HWV. - ```json + ```yaml { "network_info": { "network_info_result": "Fail", Expanding `result_detail` for a given category shows detailed results. * iDRAC (BMC) MAC Address Check Failure * The iDRAC MAC address is defined in the cluster for each BMM. A failed `iDRAC_MAC` check indicates a mismatch between the iDRAC/BMC MAC in the cluster and the actual MAC address retrieved from the machine. - ```json + ```yaml { "field_name": "iDRAC_MAC", "comparison_result": "Fail", Expanding `result_detail` for a given category shows detailed results. * Preboot execution environment (PXE) MAC Address Check Failure * The PXE MAC address is defined in the cluster for each BMM. A failed `PXE_MAC` check indicates a mismatch between the PXE MAC in the cluster and the actual MAC address retrieved from the machine. 
- ```json + ```yaml { "field_name": "NIC.Embedded.1-1_PXE_MAC", "comparison_result": "Fail", Expanding `result_detail` for a given category shows detailed results. * Health Check Sensor Failure * Server health checks cover various hardware component sensors. A failed health sensor indicates a problem with the corresponding hardware component. The following examples indicate fan, drive, and CPU failures respectively. - ```json + ```yaml { "field_name": "System Board Fan1A", "comparison_result": "Fail", Expanding `result_detail` for a given category shows detailed results. } ``` - ```json + ```yaml { "field_name": "Solid State Disk 0:1:1", "comparison_result": "Fail", Expanding `result_detail` for a given category shows detailed results. } ``` - ```json + ```yaml { "field_name": "CPU.Socket.1", "comparison_result": "Fail", Expanding `result_detail` for a given category shows detailed results. * To troubleshoot a server health failure engage vendor. * Health Check LifeCycle (LC) Log Failures- * Dell server health checks fail for recent Critical LC Log Alarms. The hardware validation plugin logs the alarm ID, name, and timestamp. Recent LC Log critical alarms indicate need for further investigation. The following example shows a failure for a critical backplane voltage alarm. + * Dell server health checks fail for recent Critical LC Log Alarms. The hardware validation plugin logs the alarm ID, name, and timestamp. Recent critical alarms indicate need for further investigation. The following example shows a failure for a critical backplane voltage alarm. - ```json + ```yaml { "field_name": "LCLog_Critical_Alarms", "comparison_result": "Fail", Expanding `result_detail` for a given category shows detailed results. ``` * Virtual disk errors typically indicate a RAID cleanup false positive condition and are logged due to the timing of raid cleanup and system power off pre HWV. The following example shows an LC log critical error on virtual disk 238. If multiple errors are encountered blocking deployment, delete cluster, wait two hours, then reattempt cluster deployment. If the failures aren't deployment blocking, wait two hours then run BMM replace.+ * Virtual disk errors are allowlisted starting with release 3.13 and don't trigger a health check failure. - ```json + ```yaml { "field_name": "LCLog_Critical_Alarms", "comparison_result": "Fail", Expanding `result_detail` for a given category shows detailed results. * If `Backplane Comm` critical errors are logged, perform flea drain. Engage vendor to troubleshoot any other LC log critical failures. -* Health Check Server Power Action Failures +* Health Check Server Power Control Action Failures * Dell server health checks fail for failed server power-up or failed iDRAC reset. A failed server control action indicates an underlying hardware issue. The following example shows failed power on attempt. - ```json + ```yaml { "field_name": "Server Control Actions", "comparison_result": "Fail", Expanding `result_detail` for a given category shows detailed results. } ``` - ```json + ```yaml "result_log": [ "Server power up failed with: server OS is powered off after successful power on attempt", ] Expanding `result_detail` for a given category shows detailed results. * To troubleshoot server power-on failure attempt a flea drain. If problem persists engage vendor. +* RAID cleanup failures + * As part of RAID cleanup, the RAID controller configuration is reset. Dell server health check fails for RAID controller reset failure. 
A failed RAID cleanup action indicates an underlying hardware issue. The following example shows a failed RAID controller reset. ++ ```yaml + { + "field_name": "Server Control Actions", + "comparison_result": "Fail", + "expected": "Success", + "fetched": "Failed" + } + ``` ++ ```yaml + "result_log": [ + "RAID cleanup failed with: raid deletion failed after 2 attempts", + ] + ``` ++ * To clear RAID in BMC webui: ++ `BMC` -> `Dashboard` -> `Storage` -> `Controllers` -> `Actions` -> `Reset Configuration` ++ * To clear RAID with racadm check for RAID controllers then clear config: ++ ```bash + racadm --nocertwarn -r $IP -u $BMC_USR -p $BMC_PWD storage get controllers | grep "RAID" + racadm --nocertwarn -r $IP -u $BMC_USR -p $BC_PWD storage resetconfig:RAID.SL.3-1 #substitute with RAID controller from get command + racadm --nocertwarn -r $IP -u $BMC_USR -p $BC_PWD jobqueue create RAID.SL.3-1 --realtime #substitute with RAID controller from get command + ``` ++ * To troubleshoot RAID cleanup failure check for any errors logged. For Dell R650/660, ensure that only slots 0 and 1 contain physical drives. For Dell R750/760, ensure that only slots 0 through 3 contain physical drives. For any other models, confirm there are no extra drives inserted based on SKU definition. All extra drives should be removed to align with the SKU. If the problem persists engage vendor. + * BMC virtual disk critical alerts triggered during HWV can be ignored. + * Health Check Power Supply Failure and Redundancy Considerations * Dell server health checks warn when one power supply is missing or failed. Power supply "field_name" might be displayed as 0/PS0/Power Supply 0 and 1/PS1/Power Supply 1 for the first and second power supplies respectively. A failure of one power supply doesn't trigger an HWV device failure. - ```json + ```yaml { "field_name": "Power Supply 1", "comparison_result": "Warning", Expanding `result_detail` for a given category shows detailed results. } ``` - ```json + ```yaml { "field_name": "System Board PS Redundancy", "comparison_result": "Warning", Expanding `result_detail` for a given category shows detailed results. * The `boot_device_name` check is currently informational. * Mismatched boot device name shouldn't trigger a device failure. - ```json + ```yaml { "field_name": "boot_device_name", "comparison_result": "Info", Expanding `result_detail` for a given category shows detailed results. } ``` -* PXE Device Check Considerations +* PXE Device Checks Considerations * This check validates the PXE device settings.+ * Starting with the 2024-07-01 GA API version, HWV attempts to auto fix the BIOS boot configuration. * Failed `pxe_device_1_name` or `pxe_device_1_state` checks indicate a problem with the PXE configuration. * Failed settings need to be fixed to enable system boot during deployment. - ```json + ```yaml { "field_name": "pxe_device_1_name", "comparison_result": "Fail", Expanding `result_detail` for a given category shows detailed results. } ``` - ```json + ```yaml { "field_name": "pxe_device_1_state", "comparison_result": "Fail", Expanding `result_detail` for a given category shows detailed results. * Device Login Check Considerations * The `device_login` check fails if the iDRAC isn't accessible or if the hardware validation plugin isn't able to sign-in. - ```json + ```yaml { "device_login": "Fail" } Expanding `result_detail` for a given category shows detailed results. * To troubleshoot, ping the iDRAC from a jumpbox with access to the BMC network. 
If iDRAC pings check that passwords match. -### Special considerations --* Servers Failing Multiple Health and Network Checks - * Raid deletion is performed during cluster deploy and cluster delete actions for all releases inclusive of 3.12. - * If we observe servers getting powered off during hardware validation with multiple failed health and network checks, we need to reattempt cluster deployment. - * If issues persist, raid deletion needs to be performed manually on `control` nodes in the cluster. -- * To clear raid in BMC webui: -- `BMC` -> `Storage` -> `Virtual Disks` -> `Action` -> `Delete` -> `Apply Now` -- * To clear raid with racadm: -- ```bash - racadm --nocertwarn -r $IP -u $BMC_USR -p $BMC_PWD raid deletevd:Disk.Virtual.239:RAID.SL.3-1 - racadm --nocertwarn -r $IP -u $BMC_USR -p $BMC_PWD jobqueue create RAID.SL.3-1 --realtime - ``` - ## Adding servers back into the Cluster after a repair After Hardware is fixed, run BMM Replace following instructions from the following page [BMM actions](howto-baremetal-functions.md). |
operator-nexus | Troubleshoot Reboot Reimage Replace | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/troubleshoot-reboot-reimage-replace.md | Servers contain many physical components that can fail over time. It's important A hardware validation process is invoked to ensure the integrity of the physical host in advance of deploying the OS image. Like the reimage action, the tenant data isn't modified during replacement. +> [!IMPORTANT] +> Starting with the 2024-07-01 GA API version, the RAID controller is reset during BMM replace, wiping all data from the server's virtual disks. Baseboard Management Controller (BMC) virtual disk alerts triggered during BMM replace can be ignored unless there are additional physical disk and/or RAID controller alerts. + As a best practice, first issue a `cordon` command to remove the bare metal machine from workload scheduling and then shut down the BMM in advance of physical repairs. When you're performing a physical hot-swappable power supply repair, a replace action isn't required because the BMM host will continue to function normally after the repair. |
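As a hedged sketch of the cordon-then-shut-down practice described above, assuming the `cordon` and `power-off` subcommands of the `az networkcloud baremetalmachine` command group, the sequence might look like the following; the names are placeholders.

```bash
# Remove the bare metal machine from workload scheduling, then shut it down
# before physical repairs. Placeholder names; verify the subcommands with --help.
az networkcloud baremetalmachine cordon \
  --name "<bare metal machine name>" \
  --resource-group "<cluster managed resource group>"

az networkcloud baremetalmachine power-off \
  --name "<bare metal machine name>" \
  --resource-group "<cluster managed resource group>"
```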
operator-service-manager | Best Practices Onboard Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/best-practices-onboard-deploy.md | For the cert-manager operator, our current deployed version is 1.14.5. Users sh For the CRD resources, our current deployed version is 1.14.5. Users should test for compatibility with this version. Since management of a common cluster CRD is something typically handled by a cluster administrator, we are working to enable CRD resource upgrades via the standard Nexus Add-on process. +## NfApp Sequential Ordering Behavior ++### Overview ++By default, containerized network function applications (NfApps) are installed or updated based on the sequential order in which they appear in the network function design version (NFDV). For delete, the NfApps are deleted in the reverse order specified. Where a publisher needs to define specific ordering of NfApps, different from the default, a dependsOnProfile is used to define a unique sequence for install, update, and delete operations. ++### How to use dependsOnProfile ++A publisher can use the dependsOnProfile in the NFDV to control the sequence of Helm executions for NfApps. Given the following example, on an install operation, the NfApps will be deployed in the following order: dummyApplication1, dummyApplication2, then dummyApplication. On an update operation, the NfApps will be updated in the following order: dummyApplication2, dummyApplication1, then dummyApplication. On a delete operation, the NfApps will be deleted in the following order: dummyApplication2, dummyApplication1, then dummyApplication. ++```json +{ + "location": "eastus", + "properties": { + "networkFunctionTemplate": { + "networkFunctionApplications": [ + { + "dependsOnProfile": { + "installDependsOn": [ + "dummyApplication1", + "dummyApplication2" + ], + "uninstallDependsOn": [ + "dummyApplication1" + ], + "updateDependsOn": [ + "dummyApplication1" + ] + }, + "name": "dummyApplication" + }, + { + "dependsOnProfile": { + "installDependsOn": [ + ], + "uninstallDependsOn": [ + "dummyApplication2" + ], + "updateDependsOn": [ + "dummyApplication2" + ] + }, + "name": "dummyApplication1" + }, + { + "dependsOnProfile": null, + "name": "dummyApplication2" + } + ], + "nfviType": "AzureArcKubernetes" + }, + "networkFunctionType": "ContainerizedNetworkFunction" + } +} +``` ++### Common Errors ++Currently, if the dependsOnProfile provided in the NFDV is invalid, the NF operation fails with a validation error. The validation error message is shown in the operation status resource and looks similar to the following example. ++```json + { + "id": "/providers/Microsoft.HybridNetwork/locations/EASTUS2EUAP/operationStatuses/ca051ddf-c8bc-4cb2-945c-a292bf7b654b*C9B39996CFCD97AB3A121AE136ED47F67BB13946C573EF90628C47628BC5EF5F", + "name": "ca051ddf-c8bc-4cb2-945c-a292bf7b654b*C9B39996CFCD97AB3A121AE136ED47F67BB13946C573EF90628C47628BC5EF5F", + "resourceId": "/subscriptions/4a0479c0-b795-4d0f-96fd-c7edd2a2928f/resourceGroups/xinrui-publisher/providers/Microsoft.HybridNetwork/networkfunctions/testnfDependsOn02", + "status": "Failed", + "startTime": "2023-07-17T20:48:01.4792943Z", + "endTime": "2023-07-17T20:48:10.0191285Z", + "error": { + "code": "DependenciesValidationFailed", + "message": "CyclicDependencies: Circular dependencies detected at hellotest." + } +} +``` |
operator-service-manager | Safe Upgrade Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/safe-upgrade-practices.md | In the NFDV resource, under deployParametersMappingRuleProfile there is the prop ### Publisher changes For the applicationEnablement property, the publisher has two options: either provide a default value or parameterize it. +#### Sample NFDV +```json +{ + "location":"<location>", + "properties": { + "networkFunctionTemplate": { + "networkFunctionApplications": [ + { + "artifactProfile": { + "helmArtifactProfile": { + "var":"var" + }, + "artifactStore": { + "id": "<artifactStore id>" + } + }, + "deployParametersMappingRuleProfile": { + "helmMappingRuleProfile": { + "releaseNamespace": "{deployParameters.role1releasenamespace}", + "releaseName": "{deployParameters.role1releasename}" + }, + "applicationEnablement": "Enabled" + }, + "artifactType": "HelmPackage", + "dependsOnProfile": "null", + "name": "hellotest" + }, + { + "artifactProfile": { + "helmArtifactProfile": { + "var":"var" + }, + "artifactStore": { + "id": "<artifactStore id>" + } + }, + "deployParametersMappingRuleProfile": { + "helmMappingRuleProfile": { + "releaseNamespace": "{deployParameters.role2releasenamespace}", + "releaseName": "{deployParameters.role2releasename}" + }, + "applicationEnablement": "Enabled" + }, + "artifactType": "HelmPackage", + "dependsOnProfile": "null", + "name": "hellotest1" + } + ], + "nfviType": "AzureArcKubernetes" + }, + "description": "null", + "deployParameters": {"type":"object","properties":{"role1releasenamespace":{"type":"string"},"role1releasename":{"type":"string"},"role2releasenamespace":{"type":"string"},"role2releasename":{"type":"string"}},"required":["role1releasenamespace","role1releasename","role2releasenamespace","role2releasename"]}, + "networkFunctionType": "ContainerizedNetworkFunction" + } +} +``` + ### Operator changes Operators specify applicationEnablement as defined by the NFDV. If applicationEnablement for specific application is parameterized, then it must be passed through the deploymentValues property at runtime. 
+#### Sample configuration group schema (CGS) resource +```json +{ + "type": "object", + "properties": { + "location": { + "type": "string" + }, + "nfviType": { + "type": "string" + }, + "nfdvId": { + "type": "string" + }, + "helloworld-cnf-config": { + "type": "object", + "properties": { + "role1releasenamespace": { + "type": "string" + }, + "role1releasename": { + "type": "string" + }, + "role2releasenamespace": { + "type": "string" + }, + "role2releasename": { + "type": "string" + }, + "roleOverrideValues1": { + "type": "string" + }, + "roleOverrideValues2": { + "type": "string" + } + }, + "required": [ + "role1releasenamespace", + "role1releasename", + "role2releasenamespace", + "role2releasename", + "roleOverrideValues1", + "roleOverrideValues2" + ] + } + }, + "required": [ + "nfviType", + "nfdvId", + "location", + "helloworld-cnf-config" + ] +} +``` + +#### Sample configuration group value (CGV) resource +```json +{ + "location": "<location>", + "nfviType": "AzureArcKubernetes", + "nfdvId": "<nfdv_id>", + "helloworld-cnf-config": { + "role1releasenamespace": "hello-test-releasens", + "role1releasename": "hello-test-release", + "role2releasenamespace": "hello-test-2-releasens", + "role2releasename": "hello-test-2-release", + "roleOverrideValues1": "{\"name\":\"hellotest\",\"deployParametersMappingRuleProfile\":{\"applicationEnablement\":\"Enabled\",\"helmMappingRuleProfile\":{\"releaseName\":\"override-release\",\"releaseNamespace\":\"override-namespace\",\"helmPackageVersion\":\"1.0.0\",\"values\":\"\",\"options\":{\"installOptions\":{\"atomic\":\"true\",\"wait\":\"true\",\"timeout\":\"30\",\"injectArtifactStoreDetails\":\"true\"},\"upgradeOptions\":{\"atomic\":\"true\",\"wait\":\"true\",\"timeout\":\"30\",\"injectArtifactStoreDetails\":\"true\"}}}}}", + "roleOverrideValues2": "{\"name\":\"hellotest1\",\"deployParametersMappingRuleProfile\":{\"applicationEnablement\" : \"Enabled\"}}" + } +} +``` ++#### Sample NF template +```json +{ + "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "nameValue": { + "type": "string", + "defaultValue": "HelloWorld" + }, + "locationValue": { + "type": "string", + "defaultValue": "eastus" + }, + "nfviTypeValue": { + "type": "string", + "defaultValue": "AzureArcKubernetes" + }, + "nfviIdValue": { + "type": "string" + }, + "config": { + "type": "object", + "defaultValue": {} + }, + "nfdvId": { + "type": "string" + } + }, + "variables": { + "deploymentValuesValue": "[string(createObject('role1releasenamespace', parameters('config').role1releasenamespace, 'role1releasename',parameters('config').role1releasename, 'role2releasenamespace', parameters('config').role2releasenamespace, 'role2releasename',parameters('config').role2releasename))]", + "nfName": "[concat(parameters('nameValue'), '-CNF')]", + "roleOverrideValues1": "[string(parameters('config').roleOverrideValues1)]", + "roleOverrideValues2": "[string(parameters('config').roleOverrideValues2)]" + }, + "resources": [ + { + "type": "Microsoft.HybridNetwork/networkFunctions", + "apiVersion": "2023-09-01", + "name": "[variables('nfName')]", + "location": "[parameters('locationValue')]", + "properties": { + "networkFunctionDefinitionVersionResourceReference": { + "id": "[parameters('nfdvId')]", + "idType": "Open" + }, + "nfviType": "[parameters('nfviTypeValue')]", + "nfviId": "[parameters('nfviIdValue')]", + "allowSoftwareUpdate": true, + "configurationType": "Open", + "deploymentValues": 
"[string(variables('deploymentValuesValue'))]", + "roleOverrideValues": [ + "[variables('roleOverrideValues1')]", + "[variables('roleOverrideValues2')]" + ] + } + } + ] +} +``` + ## Support for in service upgrades Azure Operator Service Manager, where possible, supports in service upgrades, an upgrade method which advances a deployment version without interrupting the service. However, the ability for a given service to be upgraded without interruption is a feature of the service itself. Consult further with the service publisher to understand the in-service upgrade capabilities. |
oracle | Database Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/oracle/oracle-db/database-overview.md | |
sap | High Availability Guide Suse Nfs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-nfs.md | -[sles-hae-guides]:https://www.suse.com/documentation/sle-ha-12/ -[sles-for-sap-bp]:https://www.suse.com/documentation/sles-for-sap-12/ -[suse-ha-12sp3-relnotes]:https://www.suse.com/releasenotes/x86_64/SLE-HA/12-SP3/ - [template-file-server]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fapplication-workloads%2Fsap%2Fsap-file-server-md%2Fazuredeploy.json [sap-hana-ha]:sap-hana-high-availability.md Read the following SAP Notes and papers first * [Azure Virtual Machines planning and implementation for SAP on Linux][planning-guide] * [Azure Virtual Machines deployment for SAP on Linux (this article)][deployment-guide] * [Azure Virtual Machines DBMS deployment for SAP on Linux][dbms-guide]-* [SUSE Linux Enterprise High Availability Extension 12 SP3 best practices guides][sles-hae-guides] - * Highly Available NFS Storage with DRBD and Pacemaker -* [SUSE Linux Enterprise Server for SAP Applications 12 SP3 best practices guides][sles-for-sap-bp] -* [SUSE High Availability Extension 12 SP3 Release Notes][suse-ha-12sp3-relnotes] +* [SUSE Linux Enterprise High Availability Extension 12 SP5 Highly Available NFS Storage with DRBD and Pacemaker](https://documentation.suse.com/fr-fr/sle-ha/12-SP5/html/SLE-HA-all/art-ha-quick-nfs.html) +* [SUSE Linux Enterprise Server for SAP Applications 12 SP5 best practices guides](https://documentation.suse.com/en-us/sles-sap/12-SP5/html/SLES4SAP-guide/pre-s4s.html) +* [SUSE Linux Enterprise Server for SAP Applications 12 SP5 Release Notes](https://www.suse.com/releasenotes/x86_64/SLE-HA/12-SP5) + ## Overview |
security | Log Audit | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/log-audit.md | The following table lists the most important types of logs available in Azure: |[Activity logs](/azure/azure-monitor/essentials/platform-logs-overview)|Control-plane events on Azure Resource Manager resources| Provides insight into the operations that were performed on resources in your subscription.| REST API, [Azure Monitor](/azure/azure-monitor/essentials/platform-logs-overview)| |[Azure Resource logs](/azure/azure-monitor/essentials/platform-logs-overview)|Frequent data about the operation of Azure Resource Manager resources in subscription| Provides insight into operations that your resource itself performed.| Azure Monitor| |[Microsoft Entra ID reporting](../../active-directory/reports-monitoring/overview-reports.md)|Logs and reports | Reports user sign-in activities and system activity information about users and group management.|[Microsoft Graph](/graph/overview)|-|[Virtual machines and cloud services](/azure/azure-monitor/vm/monitor-virtual-machine)|Windows Event Log service and Linux Syslog| Captures system data and logging data on the virtual machines and transfers that data into a storage account of your choice.| Windows (using [Azure Diagnostics](/azure/azure-monitor/agents/diagnostics-extension-overview)] storage) and Linux in Azure Monitor| +|[Virtual machines and cloud services](/azure/azure-monitor/vm/monitor-virtual-machine)|Windows Event Log service and Linux Syslog| Captures system data and logging data on the virtual machines and transfers that data into a storage account of your choice.| Windows (using [Azure Diagnostics](/azure/azure-monitor/agents/diagnostics-extension-overview) storage) and Linux in Azure Monitor| |[Azure Storage Analytics](/rest/api/storageservices/fileservices/storage-analytics)|Storage logging, provides metrics data for a storage account|Provides insight into trace requests, analyzes usage trends, and diagnoses issues with your storage account.| REST API or the [client library](/dotnet/api/overview/azure/storage)| |[Network security group (NSG) flow logs](../../network-watcher/network-watcher-nsg-flow-logging-overview.md)|JSON format, shows outbound and inbound flows on a per-rule basis|Displays information about ingress and egress IP traffic through a Network Security Group.|[Azure Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md)| |[Application insight](/azure/azure-monitor/app/app-insights-overview)|Logs, exceptions, and custom diagnostics| Provides an application performance monitoring (APM) service for web developers on multiple platforms.| REST API, [Power BI](https://powerbi.microsoft.com/documentation/powerbi-azure-and-power-bi/)| |
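As a small illustrative sketch for the activity log row above, control-plane events can be pulled with the Azure CLI; the resource group and time window are placeholders.

```bash
# List activity log (control-plane) events for a resource group over the last
# seven days and print a condensed table.
az monitor activity-log list \
  --resource-group "myResourceGroup" \
  --offset 7d \
  --output table
```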
sentinel | Create Codeless Connector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-codeless-connector.md | description: Learn how to create a codeless connector in Microsoft Sentinel usin Previously updated : 06/26/2024 Last updated : 09/26/2024 # Create a codeless connector for Microsoft Sentinel We recommend testing your components with an API testing tool like one of the fo ## Build the data connector -There are 4 components required to build the CCP data connector. +There are four components required to build the CCP data connector. 1. [Output table definition](#output-table-definition) 1. [Data Collection Rule (DCR)](#data-collection-rule) Build the data connector user interface with the [**Data Connector Definition** Notes: 1) The `kind` property for API polling connector should always be `Customizable`. 2) Since this is a type of API polling connector, set the `connectivityCriteria` type to `hasDataConnectors`-3) The example `instructionsSteps` include a button of type `ConnectionToggleButton`. This button helps trigger the deployment of data connector rules based on the connection parameters specified. +3) The example `instructionSteps` include a button of type `ConnectionToggleButton`. This button helps trigger the deployment of data connector rules based on the connection parameters specified. Use an [API testing tool](#testing-apis) to call the data connector definitions API to create the data connector UI in order to validate it in the data connectors gallery. To learn from an example, see the [Data connector definitions reference example ### Data connection rules -This portion defines the connection rules including: -- polling-- authentication-- paging+There are currently two kinds of data connection rules possible for defining your CCP data connector. -For more information on building this section, see the [Data connector connection rules reference](data-connector-connection-rules-reference.md). --To learn from an example, see the [Data connector connection rules reference example](data-connector-connection-rules-reference.md#example-ccp-data-connector). +- `RestApiPoller` kind allows you to customize paging, authorization and expected request/response payloads for your data source. For more information, see [RestApiPoller data connector connection rules reference](data-connector-connection-rules-reference.md). +- `GCP` kind allows you to decrease your development time by automatically configuring paging and expected response payloads for your Google Cloud Platform (GCP) data source. For more information, see [GCP data connector connection rules reference](data-connection-rules-reference-gcp.md) Use an [API testing tool](#testing-apis) to call the data connector API to create the data connector which combines the connection rules and previous components. Verify the connector is now connected in the UI. Finally, the CCP utilizes the credential objects in the data connector section. ## Create the deployment template -Manually package an Azure Resource Management (ARM) template using the [example template code samples](#example-arm-template) as your guide. These code samples are divided by ARM template sections for you to splice together. +Manually package an Azure Resource Management (ARM) template using the [example template code samples](#example-arm-template) as your guide. These code samples are divided by ARM template sections which you must splice together. 
++If you're creating a Google Cloud Platform (GCP) CCP data connector, package the deployment template using the [example GCP CCP template](https://github.com/austinmccollum/Azure-Sentinel/blob/patch-5/DataConnectors/Templates/Connector_GCP_CCP_template.json). For information on how to fill out the GCP CCP template, see [GCP data connector connection rules reference](data-connection-rules-reference-gcp.md). -In addition to the example template, published solutions available in the Microsoft Sentinel content hub use the CCP for their data connector. Review the following solutions as more examples of how to stitch the components together into an ARM template. +In addition to the example templates, published solutions available in the Microsoft Sentinel content hub use the CCP for their data connectors. Review the following solutions as more examples of how to stitch the components together into an ARM template. +**`RestApiPoller`** CCP data connector examples - [Ermes Browser Security](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/Ermes%20Browser%20Security/Data%20Connectors/ErmesBrowserSecurityEvents_ccp) - [Palo Alto Prisma Cloud CWPP](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/Palo%20Alto%20Prisma%20Cloud%20CWPP/Data%20Connectors/PaloAltoPrismaCloudCWPP_ccp) - [Sophos Endpoint Protection](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/Sophos%20Endpoint%20Protection/Data%20Connectors/SophosEP_ccp) In addition to the example template, published solutions available in the Micros - [Atlassian Jira](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/AtlassianJiraAudit/Data%20Connectors/JiraAuditAPISentinelConnector_ccpv2) - [Okta Single Sign-On](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/Okta%20Single%20Sign-On/Data%20Connectors/OktaNativePollerConnectorV2) +**`GCP`** CCP data connector examples +- [GCP audit logs](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Google%20Cloud%20Platform%20Audit%20Logs/Package/mainTemplate.json) +- [GCP security command center](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Google%20Cloud%20Platform%20Security%20Command%20Center/Package/mainTemplate.json) + ## Deploy the connector Deploy your codeless connector as a custom template. There are 5 ARM deployment resources in this template guide which house the 4 CC } ``` -## Next steps +## Related content For more information, see - [About Microsoft Sentinel solutions](sentinel-solutions.md). |
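For the custom-template deployment step described above, here's a minimal CLI sketch; the template file name and parameter names are hypothetical and depend on how the packaged template was authored.

```bash
# Hypothetical template and parameter names for a packaged CCP connector.
az deployment group create \
  --resource-group "<resource group of the Log Analytics workspace>" \
  --template-file ./ccpConnectorDeploymentTemplate.json \
  --parameters workspace="<workspace name>" workspace-location="<workspace region>"
```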
sentinel | Create Custom Connector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-custom-connector.md | Title: Resources for creating Microsoft Sentinel custom connectors | Microsoft Docs + Title: Resources for creating Microsoft Sentinel custom connectors description: Learn about available resources for creating custom connectors for Microsoft Sentinel. Methods include the Log Analytics agent and API, Logstash, Logic Apps, PowerShell, and Azure Functions.-+ Previously updated : 01/09/2023- Last updated : 09/26/2024+ # Resources for creating Microsoft Sentinel custom connectors -Microsoft Sentinel provides a wide range of [built-in connectors for Azure services and external solutions](connect-data-sources.md), and also supports ingesting data from some sources without a dedicated connector. +Microsoft Sentinel provides a wide range of [out-of-the-box connectors for Azure services and external solutions](connect-data-sources.md), and also supports ingesting data from some sources without a dedicated connector. If you're unable to connect your data source to Microsoft Sentinel using any of the existing solutions available, consider creating your own data source connector. |
sentinel | Data Connection Rules Reference Gcp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connection-rules-reference-gcp.md | + + Title: GCP data connector reference for the Codeless Connector Platform ++description: This article provides reference JSON fields and properties for creating the GCP data connector type and its data connection rules as part of the Codeless Connector Platform. +++ Last updated : 9/30/2024+++++# GCP data connector reference for the Codeless Connector Platform ++To create a Google Cloud Platform (GCP) data connector with the Codeless Connector Platform (CCP), use this reference as a supplement to the [Microsoft Sentinel REST API for Data Connectors](/rest/api/securityinsights/data-connectors/create-or-update?view=rest-securityinsights-2024-01-01-preview&tabs=HTTP#gcpdataconnector&preserve-view=true) docs. ++Each `dataConnector` represents a specific *connection* of a Microsoft Sentinel data connector. One data connector might have multiple connections, which fetch data from different endpoints. The JSON configuration built using this reference document is used to complete the deployment template for the CCP data connector. ++For more information, see [Create a codeless connector for Microsoft Sentinel](create-codeless-connector.md#create-the-deployment-template). ++## Build the GCP CCP data connector ++Simplify the development of connecting your GCP data source with a sample GCP CCP data connector deployment template. ++[**GCP CCP example template**](https://github.com/Azure/Azure-Sentinel/blob/master/DataConnectors/Templates/Connector_GCP_CCP_template.json) ++With most of the deployment template sections filled out, you only need to build the first two components, the output table and the DCR. For more information, see the [Output table definition](create-codeless-connector.md#output-table-definition) and [Data Collection Rule (DCR)](create-codeless-connector.md#data-collection-rule) sections. ++## Data Connectors - Create or update ++Reference the [Create or Update](/rest/api/securityinsights/data-connectors/create-or-update) operation in the REST API docs to find the latest stable or preview API version. The difference between the *create* and the *update* operation is the update requires the **etag** value. ++**PUT** method +```http +https://management.azure.com/subscriptions/{{subscriptionId}}/resourceGroups/{{resourceGroupName}}/providers/Microsoft.OperationalInsights/workspaces/{{workspaceName}}/providers/Microsoft.SecurityInsights/dataConnectors/{{dataConnectorId}}?api-version={{apiVersion}} +``` ++## URI parameters ++For more information about the latest API version, see [Data Connectors - Create or Update URI Parameters](/rest/api/securityinsights/data-connectors/create-or-update#uri-parameters). ++|Name | Description | +||| +| **dataConnectorId** | The data connector ID must be a unique name and is the same as the `name` parameter in the [request body](#request-body).| +| **resourceGroupName** | The name of the resource group, not case sensitive. | +| **subscriptionId** | The ID of the target subscription. | +| **workspaceName** | The *name* of the workspace, not the ID.<br>Regex pattern: `^[A-Za-z0-9][A-Za-z0-9-]+[A-Za-z0-9]$` | +| **api-version** | The API version to use for this operation. 
| ++## Request body ++The request body for a `GCP` CCP data connector has the following structure: ++```json +{ + "name": "{{dataConnectorId}}", + "kind": "GCP", + "etag": "", + "properties": { + "connectorDefinitionName": "", + "auth": {}, + "request": {}, + "dcrConfig": "" + } +} ++``` ++### GCP ++**GCP** represents a CCP data connector where the paging and expected response payloads for your Google Cloud Platform (GCP) data source has already been configured. Configuring your GCP service to send data to a GCP Pub/Sub must be done separately. For more information, see [Publish message in Pub/Sub overview](https://cloud.google.com/pubsub/docs/publish-message-overview). ++| Name | Required | Type | Description | +| - | - | - | - | +| **name** | True | string | The unique name of the connection matching the URI parameter | +| **kind** | True | string | Must be `GCP` | +| **etag** | | GUID | Leave empty for creation of new connectors. For update operations, the etag must match the existing connector's etag (GUID). | +| properties.connectorDefinitionName | | string | The name of the DataConnectorDefinition resource that defines the UI configuration of the data connector. For more information, see [Data Connector Definition](create-codeless-connector.md#data-connector-user-interface). | +| properties.**auth** | True | Nested JSON | Describes the credentials for polling the GCP data. For more information, see [authentication configuration](#authentication-configuration). | +| properties.**request** | True | Nested JSON | Describes the GCP project Id and GCP subscription for polling the data. For more information, see [request configuration](#request-configuration). | +| properties.**dcrConfig** | | Nested JSON | Required parameters when the data is sent to a Data Collection Rule (DCR). For more information, see [DCR configuration](#dcr-configuration). | ++## Authentication configuration ++Authentication to GCP from Microsoft Sentinel uses a GCP Pub/Sub. You must configure the authentication separately. Use the Terraform scripts [here](https://github.com/Azure/Azure-Sentinel/blob/master/DataConnectors/GCP/Terraform/sentinel_resources_creation/GCPInitialAuthenticationSetup/GCPInitialAuthenticationSetup.tf). For more information, see [GCP Pub/Sub authentication from another cloud provider](https://cloud.google.com/docs/authentication/provide-credentials-adc#wlif). ++As a best practice, use parameters in the auth section instead of hard-coding credentials. For more information, see [Secure confidential input](create-codeless-connector.md#secure-confidential-input). ++In order to create the deployment template which also uses parameters, you need to escape the parameters in this section with an extra starting `[`. This allows the parameters to assign a value based on the user interaction with the connector. For more information, see [Template expressions escape characters](../azure-resource-manager/templates/template-expressions.md#escape-characters). ++To enable the credentials to be entered from the UI, the `connectorUIConfig` section requires `instructions` with the desired parameters. For more information, see [Data connector definitions reference for the Codeless Connector Platform](data-connector-ui-definitions-reference.md#instructions). 
++GCP auth example: +```json +"auth": { + "serviceAccountEmail": "[[parameters('GCPServiceAccountEmail')]", + "projectNumber": "[[parameters('GCPProjectNumber')]", + "workloadIdentityProviderId": "[[parameters('GCPWorkloadIdentityProviderId')]" +} +``` ++## Request configuration ++The request section requires the `projectId` and `subscriptionNames` from the GCP Pub/Sub. ++GCP request example: +```json +"request": { + "projectId": "[[parameters('GCPProjectId')]", + "subscriptionNames": [ + "[[parameters('GCPSubscriptionName')]" + ] +} +``` ++## DCR configuration ++| Field | Required | Type | Description | +|-|-|-|-| +| **DataCollectionEndpoint** | True | String | DCE (Data Collection Endpoint) for example: `https://example.ingest.monitor.azure.com`. | +| **DataCollectionRuleImmutableId** | True | String | The DCR immutable ID. Find it by viewing the DCR creation response or using the [DCR API](/rest/api/monitor/data-collection-rules/get) | +| **StreamName** | True | string | This value is the `streamDeclaration` defined in the DCR (prefix must begin with *Custom-*) | ++## Example CCP data connector ++Here's an example of all the components of the `GCP` CCP data connector JSON together. ++```json +{ + "kind": "GCP", + "properties": { + "connectorDefinitionName": "[[parameters('connectorDefinitionName')]", + "dcrConfig": { + "streamName": "[variables('streamName')]", + "dataCollectionEndpoint": "[[parameters('dcrConfig').dataCollectionEndpoint]", + "dataCollectionRuleImmutableId": "[[parameters('dcrConfig').dataCollectionRuleImmutableId]" + }, + "dataType": "[variables('dataType')]", + "auth": { + "serviceAccountEmail": "[[parameters('GCPServiceAccountEmail')]", + "projectNumber": "[[parameters('GCPProjectNumber')]", + "workloadIdentityProviderId": "[[parameters('GCPWorkloadIdentityProviderId')]" + }, + "request": { + "projectId": "[[parameters('GCPProjectId')]", + "subscriptionNames": [ + "[[parameters('GCPSubscriptionName')]" + ] + } + } +} +``` ++For more information, see [Create GCP data connector REST API example](/rest/api/securityinsights/data-connectors/create-or-update?view=rest-securityinsights-2024-01-01-preview&tabs=HTTP#creates-or-updates-a-gcp-data-connector&preserve-view=true). |
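To tie the request body and URI parameters above together, here's a hedged sketch of submitting the example payload with `az rest`; the API version matches the preview version referenced in this article, and the body file name is a placeholder.

```bash
# PUT the GCP data connector payload (saved locally as gcp-data-connector.json);
# replace the bracketed URI segments with your subscription, resource group,
# workspace, and connector ID.
az rest --method put \
  --url "https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/providers/Microsoft.SecurityInsights/dataConnectors/{dataConnectorId}?api-version=2024-01-01-preview" \
  --body @gcp-data-connector.json
```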
sentinel | Data Connector Connection Rules Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connector-connection-rules-reference.md | Title: Data connectors reference for the Codeless Connector Platform + Title: RestApiPoller data connector reference for the Codeless Connector Platform description: This article provides reference JSON fields and properties for creating the RestApiPoller data connector type and its data connection rules as part of the Codeless Connector Platform. Previously updated : 11/13/2023 Last updated : 9/30/2024 -# Data connector reference for the Codeless Connector Platform +# RestApiPoller data connector reference for the Codeless Connector Platform -To create a data connector with the Codeless Connector Platform (CCP), use this document as a supplement to the [Microsoft Sentinel REST API for Data Connectors](/rest/api/securityinsights/data-connectors) reference docs. Specifically this reference document expands on the following details: --- The data connector kind, `RestApiPoller`, which is used for the CCP.-- Authorization configuration-- Data source request and response configuration options-- Data stream paging options-- Data collection rule map +To create a `RestApiPoller` data connector with the Codeless Connector Platform (CCP), use this reference as a supplement to the [Microsoft Sentinel REST API for Data Connectors](/rest/api/securityinsights/data-connectors) docs. Each `dataConnector` represents a specific *connection* of a Microsoft Sentinel data connector. One data connector might have multiple connections, which fetch data from different endpoints. The JSON configuration built using this reference document is used to complete the deployment template for the CCP data connector. For more information about the latest API version, see [Data Connectors - Create ## Request body -The request body for the CCP data connector has the following structure: +The request body for a `RestApiPoller` CCP data connector has the following structure: ```json { The request body for the CCP data connector has the following structure: ``` -**RestApiPoller** represents the codeless API Poller connector. +### RestApiPoller ++**RestApiPoller** represents an API Poller CCP data connector where you customize paging, authorization and request/response payloads for your data source. | Name | Required | Type | Description | | - | - | - | - | The CCP supports the following authentication types: - [Basic](#basic-auth) - [APIKey](#apikey) - [OAuth2](#oauth2)+- [Jwt](#jwt) > [!NOTE]-> CCP OAuth2 implementation does not support certificate credentials. +> CCP OAuth2 implementation does not support client certificate credentials. As a best practice, use parameters in the auth section instead of hard-coding credentials. For more information, see [Secure confidential input](create-codeless-connector.md#secure-confidential-input). After the user returns to the client via the redirect URL, the application will Auth code flow is for fetching data on behalf of a user's permissions and client credentials is for fetching data with application permissions. The data server grants access to the application. Since there is no user in client credentials flow, no authorization endpoint is needed, only a token endpoint. 
Example:-OAuth2 auth code grant +OAuth2 `authorization_code` grant type ```json "auth": { "type": "OAuth2",- "ClientId": "[parameters('appId')]", - "ClientSecret": "[parameters('appSecret')]", + "ClientId": "[[parameters('appId')]", + "ClientSecret": "[[parameters('appSecret')]", "tokenEndpoint": "https://login.microsoftonline.com/{{tenantId}}/oauth2/v2.0/token", "authorizationEndpoint": "https://login.microsoftonline.com/{{tenantId}}/oauth2/v2.0/authorize", "authorizationEndpointHeaders": {}, OAuth2 auth code grant } ``` Example:+OAuth2 `client_credentials` grant type + ```json "auth": { "type": "OAuth2",- "ClientId": "[parameters('appId')]", - "ClientSecret": "[parameters('appSecret')]", + "ClientId": "[[parameters('appId')]", + "ClientSecret": "[[parameters('appSecret')]", "tokenEndpoint": "https://login.microsoftonline.com/{{tenantId}}/oauth2/v2.0/token", "tokenEndpointHeaders": { "Accept": "application/json", Example: } ``` +#### Jwt ++Example: +JSON web token (JWT) ++```json +"auth": { + "type": "JwtToken", + "userName": { + "key":"username", + "value":"[[parameters('UserName')]" + }, + "password": { + "key":"password", + "value":"[[parameters('Password')]" + }, + "TokenEndpoint": "https://token_endpoint.contoso.com", + "IsJsonRequest": true +} +``` + ## Request configuration  The request section defines how the CCP data connector sends requests to your data source, like the API endpoint and how often to poll that endpoint. Here's an example of all the components of the CCP data connector JSON together. "dataType": "ExampleLogs", "auth": { "type": "Basic",- "password": "[parameters('username')]", - "userName": "[parameters('password')]" + "password": "[[parameters('password')]", + "userName": "[[parameters('username')]" }, "request": { "apiEndpoint": "https://rest.contoso.com/example", |
sentinel | Migration Splunk Automation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration-splunk-automation.md | Title: Migrate Splunk SOAR automation to Microsoft Sentinel | Microsoft Docs description: Learn how to identify SOAR use cases, and how to migrate your Splunk SOAR automation to Microsoft Sentinel.--++ Previously updated : 05/03/2022 Last updated : 09/11/2024 +#customer intent: As a SOC administrator, I want to migrate Splunk SOAR automations to Microsoft Sentinel playbooks or automation rules. # Migrate Splunk SOAR automation to Microsoft Sentinel -Microsoft Sentinel provides Security Orchestration, Automation, and Response (SOAR) capabilities with [automation rules](automate-incident-handling-with-automation-rules.md) and [playbooks](tutorial-respond-threats-playbook.md). Automation rules automate incident handling and response, and playbooks run predetermined sequences of actions to response and remediate threats. This article discusses how to identify SOAR use cases, and how to migrate your Splunk SOAR automation to Microsoft Sentinel. +Microsoft Sentinel provides Security Orchestration, Automation, and Response (SOAR) capabilities with automation rules and playbooks. Automation rules facilitate simple incident handling and response, while playbooks run more complex sequences of actions to respond and remediate threats. This article discusses how to identify SOAR use cases, and how to migrate your Splunk SOAR automation to Microsoft Sentinel automation rules and playbooks. -Automation rules simplify complex workflows for your incident orchestration processes, and allow you to centrally manage your incident handling automation. --With automation rules, you can: -- Perform simple automation tasks without necessarily using playbooks. For example, you can assign, tag incidents, change status, and close incidents. -- Automate responses for multiple analytics rules at once. -- Control the order of actions that are executed. -- Run playbooks for those cases where more complex automation tasks are necessary. +For more information about the differences between automation rules and playbooks, see the following articles: +- [Automate threat response with automation rules](automate-incident-handling-with-automation-rules.md) +- [Automate threat response with playbooks](automation/automate-responses-with-playbooks.md) ## Identify SOAR use cases -Here's what you need to think about when migrating SOAR use cases from Splunk. -- **Use case quality**. Choose good use cases for automation. Use cases should be based on procedures that are clearly defined, with minimal variation, and a low false-positive rate. Automation should work with efficient use cases.-- **Manual intervention**. Automated response can have wide ranging effects and high impact automations should have human input to confirm high impact actions before they're taken.-- **Binary criteria**. To increase response success, decision points within an automated workflow should be as limited as possible, with binary criteria. Binary criteria reduces the need for human intervention, and enhances outcome predictability.-- **Accurate alerts or data**. Response actions are dependent on the accuracy of signals such as alerts. Alerts and enrichment sources should be reliable. Microsoft Sentinel resources such as watchlists and reliable threat intelligence can enhance reliability.-- **Analyst role**. 
While automation where possible is great, reserve more complex tasks for analysts, and provide them with the opportunity for input into workflows that require validation. In short, response automation should augment and extend analyst capabilities. +Here's what you need to think about when migrating SOAR use cases from Splunk. +- **Use case quality**. Choose automation use cases based on procedures that are clearly defined, with minimal variation, and a low false-positive rate. +- **Manual intervention**. Automated responses can have wide ranging effects. High impact automations should have human input to confirm high impact actions before they're taken. +- **Binary criteria**. To increase response success, decision points within an automated workflow should be as limited as possible, with binary criteria. When there are only two variables in the automated decision making, the need for human intervention is reduced and outcome predictability is enhanced. +- **Accurate alerts or data**. Response actions are dependent on the accuracy of signals such as alerts. Alerts and enrichment sources should be reliable. Microsoft Sentinel resources such as watchlists and threat intelligence with high confidence ratings enhance reliability. +- **Analyst role**. While automation is great, reserve the most complex tasks for analysts. Provide them with the opportunity for input into workflows that require validation. In short, response automation should augment and extend analyst capabilities. ## Migrate SOAR workflow |
sentinel | Migration Splunk Detection Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration-splunk-detection-rules.md | Title: Migrate Splunk detection rules to Microsoft Sentinel description: Learn how to identify, compare, and migrate your Splunk detection rules to Microsoft Sentinel built-in rules.--- Previously updated : 03/11/2024+++ Last updated : 09/23/2024 #customer intent: As a SOC administrator, I want to migrate Splunk detection rules to KQL so I can migrate to Microsoft Sentinel. # Migrate Splunk detection rules to Microsoft Sentinel -This article describes how to identify, compare, and migrate your Splunk detection rules to Microsoft Sentinel built-in rules. +Splunk detection rules are security information and event management (SIEM) components that compare to analytics rules in Microsoft Sentinel. This article describes the concepts to identify, compare, and migrate them to Microsoft Sentinel. The best way is to start with the [SIEM migration experience](siem-migration.md), which identifies out-of-the-box (OOTB) analytics rules to automatically translate to. If you want to migrate your Splunk Observability deployment, learn more about how to [migrate from Splunk to Azure Monitor Logs](/azure/azure-monitor/logs/migrate-splunk-to-azure-monitor-logs). -## Identify and migrate rules +## Audit rules -Microsoft Sentinel uses machine learning analytics to create high-fidelity and actionable incidents, and some of your existing detections may be redundant in Microsoft Sentinel. Therefore, don't migrate all of your detection and analytics rules blindly. Review these considerations as you identify your existing detection rules. +Microsoft Sentinel uses machine learning analytics to create high-fidelity and actionable incidents. Some of your existing Splunk detections may be redundant in Microsoft Sentinel, so don't migrate them all blindly. Review these considerations as you identify your existing detection rules. - Make sure to select use cases that justify rule migration, considering business priority and efficiency. - Check that you [understand Microsoft Sentinel rule types](detect-threats-built-in.md). - Check that you understand the [rule terminology](#compare-rule-terminology).-- Review any rules that haven't triggered any alerts in the past 6-12 months, and determine whether they're still relevant.+- Review outdated rules that don't have alerts for the past 6-12 months, and determine whether they're still relevant. - Eliminate low-level threats or alerts that you routinely ignore.-- Use existing functionality, and check whether Microsoft Sentinel's [built-in analytics rules](https://github.com/Azure/Azure-Sentinel/tree/master/Detections) might address your current use cases. Because Microsoft Sentinel uses machine learning analytics to produce high-fidelity and actionable incidents, it's likely that some of your existing detections won't be required anymore.-- Confirm connected data sources and review your data connection methods. Revisit data collection conversations to ensure data depth and breadth across the use cases you plan to detect.-- Test the capabilities of the [SIEM migration experience](siem-migration.md) to determine if the automated translation is suitable.-- Explore community resources such as the [SOC Prime Threat Detection Marketplace](https://my.socprime.com/platform-overview/) to check whether your rules are available.-- Consider whether an online query converter such as Uncoder.io might work for your rules. 
-- If rules aren't available or can't be converted, they need to be created manually, using a KQL query. Review the [rules mapping](#map-and-compare-rule-samples) to create new queries. +- Confirm connected data sources and review your data connection methods. Microsoft Sentinel Analytics require that the data type is present in the Log Analytics workspace before a rule is enabled. Revisit data collection conversations to ensure data depth and breadth across the use cases you plan to detect. Then use the [SIEM migration experience](siem-migration.md) to ensure the data sources are mapped appropriately. -Learn more about [best practices for migrating detection rules](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/best-practices-for-migrating-detection-rules-from-arcsight/ba-p/2216417). +## Migrate rules -**To migrate your analytics rules to Microsoft Sentinel**: +After you identify the Splunk detections to migrate, review these considerations for the migration process: ++- Compare the existing functionality of Microsoft Sentinel's OOTB analytics rules with your current use cases. Use the [SIEM migration experience](siem-migration.md) to see which Splunk detections are automatically converted to OOTB templates. +- Translate detections that don't align to OOTB analytics rules. The best way to translate Splunk detections automatically is with the [SIEM migration experience](siem-migration.md). +- Discover more algorithms for your use cases by exploring community resources such as the [SOC Prime Threat Detection Marketplace](https://my.socprime.com/platform-overview/). +- Manually translate detections if built-in rules aren't available or aren't automatically translated. Create the new KQL queries and review the [rules mapping](#map-and-compare-rule-samples). ++For more information, see [best practices for migrating detection rules](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/best-practices-for-migrating-detection-rules-from-arcsight/ba-p/2216417). ++### Rule migration steps 1. Verify that you have a testing system in place for each rule you want to migrate. Learn more about [best practices for migrating detection rules](https://techcomm 1. **Ensure that your team has useful resources** to test your migrated rules. - 1. **Confirm that you have any required data sources connected,** and review your data connection methods. + 1. **Confirm that you have the required data sources connected,** and review your data connection methods. -1. Verify whether your detections are available as built-in templates in Microsoft Sentinel: +1. Verify whether your detections are available as OOTB templates in Microsoft Sentinel: - - **Use the SIEM migration experience** to automate translation and migration. + - **Use the SIEM migration experience** to automate translation and installation of the OOTB templates. For more information, see [Use the SIEM migration experience](siem-migration.md). - - **If the built-in rules are sufficient**, use built-in rule templates to create rules for your own workspace. + - **If you have use cases not reflected in the detections**, create rules for your own workspace with OOTB rule templates. - In Microsoft Sentinel, go to the **Configuration > Analytics > Rule templates** tab, and create and update each relevant analytics rule. + In Microsoft Sentinel, go to the **Content hub**. - For more information, see [Detect threats out-of-the-box](detect-threats-built-in.md). + Filter **Content type** for **Analytics rule** templates. 
++ Find and **Install/Update** each corresponding Content hub solution or standalone analytics rule template. - - **If you have detections that aren't covered by Microsoft Sentinel's built-in rules**, try an online query converter, such as [Uncoder.io](https://uncoder.io/) or [SPL2KQL](https://azure.github.io/spl2kql) to convert your queries to KQL. + For more information, see [Detect threats out-of-the-box](detect-threats-built-in.md). - Identify the trigger condition and rule action, and then construct and review your KQL query. + - **If you have detections that aren't covered by Microsoft Sentinel's OOTB rules**, first try the [SIEM migration experience](siem-migration.md) for automatic translation. - - **If neither the built-in rules nor an online rule converter is sufficient**, you'll need to create the rule manually. In such cases, use the following steps to start creating your rule: + - **If neither the OOTB rules nor the SIEM migration completely translate the detection**, create the rule manually. In such cases, use the following steps to create your rule: - 1. **Identify the data sources you want to use in your rule**. You'll want to create a mapping table between data sources and data tables in Microsoft Sentinel to identify the tables you want to query. + 1. **Identify the data sources you want to use in your rule**. Identify the Microsoft Sentinel tables you want to query by creating a mapping table between data sources and data tables. 1. **Identify any attributes, fields, or entities** in your data that you want to use in your rules. - 1. **Identify your rule criteria and logic**. At this stage, you may want to use rule templates as samples for how to construct your KQL queries. + 1. **Identify your rule criteria and logic**. At this stage, consider finding rule templates as samples for how to construct your KQL queries. Consider filters, correlation rules, active lists, reference sets, watchlists, detection anomalies, aggregations, and so on. You might use references provided by your legacy SIEM to understand [how to best map your query syntax](#map-and-compare-rule-samples). 1. **Identify the trigger condition and rule action, and then construct and review your KQL query**. When reviewing your query, consider KQL optimization guidance resources. -1. Test the rule with each of your relevant use cases. If it doesn't provide expected results, you may want to review the KQL and test it again. +1. Test the rule with each of your relevant use cases. If it doesn't provide expected results, review and edit the KQL and test it again. -1. When you're satisfied, you can consider the rule migrated. Create a playbook for your rule action as needed. For more information, see [Automate threat response with playbooks in Microsoft Sentinel](automate-responses-with-playbooks.md). +1. When you're satisfied, consider the rule migrated. Create a playbook for your rule action as needed. For more information, see [Automate threat response with playbooks in Microsoft Sentinel](automate-responses-with-playbooks.md). Learn more about analytics rules: Learn more about analytics rules: ## Compare rule terminology -This table helps you to clarify the concept of a rule in Microsoft Sentinel compared to Splunk. +This table helps you to clarify the concept of a rule based on Kusto Query Language (KQL) in Microsoft Sentinel compared to a Splunk detection based on Search Processing Language (SPL). 
| |Splunk |Microsoft Sentinel | |||| Use these samples to compare and map rules from Splunk to Microsoft Sentinel in ||||| |`chart/ timechart` |Returns results in a tabular output for time-series charting. |[render operator](/azure/data-explorer/kusto/query/renderoperator?pivots=azuredataexplorer) |`… | render timechart` | |`dedup` |Removes subsequent results that match a specified criterion. |• [distinct](/azure/data-explorer/kusto/query/distinctoperator)<br>• [summarize](/azure/data-explorer/kusto/query/summarizeoperator) |`… | summarize by Computer, EventID` |-|`eval` |Calculates an expression. Learn about [common eval commands](https://github.com/Azure/Azure-Sentinel/blob/master/Tools/RuleMigration/SPL%20to%20KQL.md#common-eval-commands). |[extend](/azure/data-explorer/kusto/query/extendoperator) |`T | extend duration = endTime - startTime` | +|`eval` |Calculates an expression. Learn about [common `eval` commands](https://github.com/Azure/Azure-Sentinel/blob/master/Tools/RuleMigration/SPL%20to%20KQL.md#common-eval-commands). |[extend](/azure/data-explorer/kusto/query/extendoperator) |`T | extend duration = endTime - startTime` | |`fields` |Removes fields from search results. |• [project](/azure/data-explorer/kusto/query/projectoperator)<br>• [project-away](/azure/data-explorer/kusto/query/projectawayoperator) |`T | project cost=price*quantity, price` | |`head/tail` |Returns the first or last N results. |[top](/azure/data-explorer/kusto/query/topoperator) |`T | top 5 by Name desc nulls last` | |`lookup` |Adds field values from an external source. |• [externaldata](/azure/data-explorer/kusto/query/externaldata-operator?pivots=azuredataexplorer)<br>• [lookup](/azure/data-explorer/kusto/query/lookupoperator) |[KQL example](#lookup-command-kql-example) | Use these samples to compare and map rules from Splunk to Microsoft Sentinel in |`anomalydetection` |Find anomalies in the specified field.<br><br>[SPL example](#anomalydetection-command-spl-example) |[series_decompose_anomalies()](/azure/data-explorer/kusto/query/series-decompose-anomaliesfunction) |[KQL example](#anomalydetection-command-kql-example) | |`where` |Filters search results using `eval` expressions. Used to compare two different fields. |[where](/azure/data-explorer/kusto/query/whereoperator) |`T | where fruit=="apple"` | -#### lookup command: KQL example +#### `lookup` command: KQL example ```kusto Users Users h@"?...SAS..." // Secret token to access the blob ])) | ... ```-#### stats command: KQL example +#### `stats` command: KQL example ```kusto Sales Sales Total=sum(UnitPrice * NumUnits) by Fruit, StartOfMonth=startofmonth(SellDateTime) ```-#### mstats command: KQL example +#### `mstats` command: KQL example ```kusto T | summarize count() by price_range=bin(price, 10.0) ``` -#### transaction command: SPL example +#### `transaction` command: SPL example ```spl sourcetype=MyLogTable type=Event sourcetype=MyLogTable type=Event | Rename timestamp as StartTime | Table City, ActivityId, StartTime, Duration ```-#### transaction command: KQL example +#### `transaction` command: KQL example ```kusto let Events = MyLogTable | where type=="Event"; on ActivityId Duration = StopTime – StartTime ``` -Use `row_window_session()` to the calculate session start values for a column in a serialized row set. +Use `row_window_session()` to calculate session start values for a column in a serialized row set. 
```kusto ...| extend SessionStarted = row_window_session( Timestamp, 1h, 5m, ID != prev(ID)) ```-#### eventstats command: SPL example +#### `eventstats` command: SPL example ```spl … | bin span=1m _time |stats count AS count_i by _time, category | eventstats sum(count_i) as count_total by _time ```-#### eventstats command: KQL example +#### `eventstats` command: KQL example Here's an example with the `join` statement: groupBin =bin(TimeGenerated, binSize) sum(TotalEvents) by groupBin | mvexpand list_EventID, list_TotalEvents ```-#### anomalydetection command: SPL example +#### `anomalydetection` command: SPL example ```spl sourcetype=nasdaq earliest=-10y | anomalydetection Close _ Price ```-#### anomalydetection command: KQL example +#### `anomalydetection` command: KQL example ```kusto let LookBackPeriod= 7d; LineFit)=series_fit_line(Trend) | extend (anomalies,score) = series_decompose_anomalies(Trend) ```-### Common eval commands +### Common `eval` commands |SPL command |Description |SPL example |KQL command |KQL example | ||||||-|`abs(X)` |Returns the absolute value of X. |`abs(number)` |[abs()](/azure/data-explorer/kusto/query/abs-function) |`abs(X)` | -|`case(X,"Y",…)` |Takes pairs of `X` and `Y` arguments, where the `X` arguments are boolean expressions. When evaluated to `TRUE`, the arguments return the corresponding `Y` argument. |[SPL example](#casexy-spl-example) |[case](/azure/data-explorer/kusto/query/casefunction) |[KQL example](#casexy-kql-example) | -|`ceil(X)` |Ceiling of a number X. |`ceil(1.9)` |[ceiling()](/azure/data-explorer/kusto/query/ceilingfunction) |`ceiling(1.9)` | -|`cidrmatch("X",Y)` |Identifies IP addresses that belong to a particular subnet. |`cidrmatch`<br>`("123.132.32.0/25",ip)` |• [ipv4_is_match()](/azure/data-explorer/kusto/query/ipv4-is-matchfunction)<br>• [ipv6_is_match()](/azure/data-explorer/kusto/query/ipv6-is-matchfunction) |`ipv4_is_match('192.168.1.1', '192.168.1.255')`<br>`== false` | -|`coalesce(X,…)` |Returns the first value that isn't null. |`coalesce(null(), "Returned val", null())` |[coalesce()](/azure/data-explorer/kusto/query/coalescefunction) |`coalesce(tolong("not a number"),`<br> `tolong("42"), 33) == 42` | +|`abs(X)` |Returns the absolute value of X. |`abs(number)` |[`abs()`](/azure/data-explorer/kusto/query/abs-function) |`abs(X)` | +|`case(X,"Y",…)` |Takes pairs of `X` and `Y` arguments, where the `X` arguments are boolean expressions. When evaluated to `TRUE`, the arguments return the corresponding `Y` argument. |[SPL example](#casexy-spl-example) |[`case`](/azure/data-explorer/kusto/query/casefunction) |[KQL example](#casexy-kql-example) | +|`ceil(X)` |Ceiling of a number X. |`ceil(1.9)` |[`ceiling()`](/azure/data-explorer/kusto/query/ceilingfunction) |`ceiling(1.9)` | +|`cidrmatch("X",Y)` |Identifies IP addresses that belong to a particular subnet. |`cidrmatch`<br>`("123.132.32.0/25",ip)` |• [`ipv4_is_match()`](/azure/data-explorer/kusto/query/ipv4-is-matchfunction)<br>• [ipv6_is_match()](/azure/data-explorer/kusto/query/ipv6-is-matchfunction) |`ipv4_is_match('192.168.1.1', '192.168.1.255')`<br>`== false` | +|`coalesce(X,…)` |Returns the first value that isn't null. |`coalesce(null(), "Returned val", null())` |[`coalesce()`](/azure/data-explorer/kusto/query/coalescefunction) |`coalesce(tolong("not a number"),`<br> `tolong("42"), 33) == 42` | |`cos(X)` |Calculates the cosine of X. 
|`n=cos(0)` |[cos()](/azure/data-explorer/kusto/query/cosfunction) |`cos(X)` |-|`exact(X)` |Evaluates an expression X using double precision floating point arithmetic. |`exact(3.14*num)` |[todecimal()](/azure/data-explorer/kusto/query/todecimalfunction) |`todecimal(3.14*2)` | +|`exact(X)` |Evaluates an expression X using double precision floating point arithmetic. |`exact(3.14*num)` |[`todecimal()`](/azure/data-explorer/kusto/query/todecimalfunction) |`todecimal(3.14*2)` | |`exp(X)` |Returns eX. |`exp(3)` |[exp()](/azure/data-explorer/kusto/query/exp-function) |`exp(3)` |-|`if(X,Y,Z)` |If `X` evaluates to `TRUE`, the result is the second argument `Y`. If `X` evaluates to `FALSE`, the result evaluates to the third argument `Z`. |`if(error==200,`<br> `"OK", "Error")` |[iif()](/azure/data-explorer/kusto/query/iiffunction) |[KQL example](#ifxyz-kql-example) | -|`isbool(X)` |Returns `TRUE` if `X` is boolean. |`isbool(field)` |• [iif()](/azure/data-explorer/kusto/query/iiffunction)<br>• [gettype](/azure/data-explorer/kusto/query/gettypefunction) |`iif(gettype(X) =="bool","TRUE","FALSE")` | -|`isint(X)` |Returns `TRUE` if `X` is an integer. |`isint(field)` |• [iif()](/azure/data-explorer/kusto/query/iiffunction)<br>• [gettype](/azure/data-explorer/kusto/query/gettypefunction) |[KQL example](#isintx-kql-example) | -|`isnull(X)` |Returns `TRUE` if `X` is null. |`isnull(field)` |[isnull()](/azure/data-explorer/kusto/query/isnullfunction) |`isnull(field)` | -|`isstr(X)` |Returns `TRUE` if `X` is a string. |`isstr(field)` |• [iif()](/azure/data-explorer/kusto/query/iiffunction)<br>• [gettype](/azure/data-explorer/kusto/query/gettypefunction) |[KQL example](#isstrx-kql-example) | -|`len(X)` |This function returns the character length of a string `X`. |`len(field)` |[strlen()](/azure/data-explorer/kusto/query/strlenfunction) |`strlen(field)` | -|`like(X,"y")` |Returns `TRUE` if and only if `X` is like the SQLite pattern in `Y`. |`like(field, "addr%")` |• [has](/azure/data-explorer/kusto/query/has-anyoperator)<br>• [contains](/azure/data-explorer/kusto/query/datatypes-string-operators)<br>• [startswith](/azure/data-explorer/kusto/query/datatypes-string-operators)<br>• [matches regex](/azure/data-explorer/kusto/query/re2) |[KQL example](#likexy-example) | -|`log(X,Y)` |Returns the log of the first argument `X` using the second argument `Y` as the base. The default value of `Y` is `10`. |`log(number,2)` |• [log](/azure/data-explorer/kusto/query/log-function)<br>• [log2](/azure/data-explorer/kusto/query/log2-function)<br>• [log10](/azure/data-explorer/kusto/query/log10-function) |`log(X)`<br><br>`log2(X)`<br><br>`log10(X)` | +|`if(X,Y,Z)` |If `X` evaluates to `TRUE`, the result is the second argument `Y`. If `X` evaluates to `FALSE`, the result evaluates to the third argument `Z`. |`if(error==200,`<br> `"OK", "Error")` |[`iif()`](/azure/data-explorer/kusto/query/iiffunction) |[KQL example](#ifxyz-kql-example) | +|`isbool(X)` |Returns `TRUE` if `X` is boolean. |`isbool(field)` |• [`iif()`](/azure/data-explorer/kusto/query/iiffunction)<br>• [`gettype`](/azure/data-explorer/kusto/query/gettypefunction) |`iif(gettype(X) =="bool","TRUE","FALSE")` | +|`isint(X)` |Returns `TRUE` if `X` is an integer. |`isint(field)` |• [`iif()`](/azure/data-explorer/kusto/query/iiffunction)<br>• [`gettype`](/azure/data-explorer/kusto/query/gettypefunction) |[KQL example](#isintx-kql-example) | +|`isnull(X)` |Returns `TRUE` if `X` is null. 
|`isnull(field)` |[`isnull()`](/azure/data-explorer/kusto/query/isnullfunction) |`isnull(field)` | +|`isstr(X)` |Returns `TRUE` if `X` is a string. |`isstr(field)` |• [`iif()`](/azure/data-explorer/kusto/query/iiffunction)<br>• [`gettype`](/azure/data-explorer/kusto/query/gettypefunction) |[KQL example](#isstrx-kql-example) | +|`len(X)` |This function returns the character length of a string `X`. |`len(field)` |[`strlen()`](/azure/data-explorer/kusto/query/strlenfunction) |`strlen(field)` | +|`like(X,"y")` |Returns `TRUE` if and only if `X` is like the SQLite pattern in `Y`. |`like(field, "addr%")` |• [`has`](/azure/data-explorer/kusto/query/has-anyoperator)<br>• [`contains`](/azure/data-explorer/kusto/query/datatypes-string-operators)<br>• [`startswith`](/azure/data-explorer/kusto/query/datatypes-string-operators)<br>• [matches regex](/azure/data-explorer/kusto/query/re2) |[KQL example](#likexy-example) | +|`log(X,Y)` |Returns the log of the first argument `X` using the second argument `Y` as the base. The default value of `Y` is `10`. |`log(number,2)` |• [`log`](/azure/data-explorer/kusto/query/log-function)<br>• [`log2`](/azure/data-explorer/kusto/query/log2-function)<br>• [`log10`](/azure/data-explorer/kusto/query/log10-function) |`log(X)`<br><br>`log2(X)`<br><br>`log10(X)` | |`lower(X)` |Returns the lowercase value of `X`. |`lower(username)` |[tolower](/azure/data-explorer/kusto/query/tolowerfunction) |`tolower(username)` |-|`ltrim(X,Y)` |Returns `X` with the characters in parameter `Y` trimmed from the left side. The default output of `Y` is spaces and tabs. |`ltrim(" ZZZabcZZ ", " Z")` |[trim_start()](/azure/data-explorer/kusto/query/trimstartfunction) |`trim_start(“ ZZZabcZZ”,” ZZZ”)` | -|`match(X,Y)` |Returns if X matches the regex pattern Y. |`match(field, "^\d{1,3}.\d$")` |[matches regex](/azure/data-explorer/kusto/query/re2) |`… | where field matches regex @"^\d{1,3}.\d$")` | -|`max(X,…)` |Returns the maximum value in a column. |`max(delay, mydelay)` |• [max()](/azure/data-explorer/kusto/query/max-aggfunction)<br>• [arg_max()](/azure/data-explorer/kusto/query/arg-max-aggfunction) |`… | summarize max(field)` | -|`md5(X)` |Returns the MD5 hash of a string value `X`. |`md5(field)` |[hash_md5](/azure/data-explorer/kusto/query/md5hashfunction) |`hash_md5("X")` | -|`min(X,…)` |Returns the minimum value in a column. |`min(delay, mydelay)` |• [min_of()](/azure/data-explorer/kusto/query/min-offunction)<br>• [min()](/azure/data-explorer/kusto/query/min-aggfunction)<br>• [arg_min](/azure/data-explorer/kusto/query/arg-min-aggfunction) |[KQL example](#minx-kql-example) | -|`mvcount(X)` |Returns the number (total) of `X` values. |`mvcount(multifield)` |[dcount](/azure/data-explorer/kusto/query/dcount-aggfunction) |`…| summarize dcount(X) by Y` | -|`mvfilter(X)` |Filters a multi-valued field based on the boolean `X` expression. |`mvfilter(match(email, "net$"))` |[mv-apply](/azure/data-explorer/kusto/query/mv-applyoperator) |[KQL example](#mvfilterx-kql-example) | -|`mvindex(X,Y,Z)` |Returns a subset of the multi-valued `X` argument from a start position (zero-based) `Y` to `Z` (optional). |`mvindex( multifield, 2)` |[array_slice](/azure/data-explorer/kusto/query/arrayslicefunction) |`array_slice(arr, 1, 2)` | -|`mvjoin(X,Y)` |Given a multi-valued field `X` and string delimiter `Y`, and joins the individual values of `X` using `Y`. 
|`mvjoin(address, ";")` |[strcat_array](/azure/data-explorer/kusto/query/strcat-arrayfunction) |[KQL example](#mvjoinxy-kql-example) | -|`now()` |Returns the current time, represented in Unix time. |`now()` |[now()](/azure/data-explorer/kusto/query/nowfunction) |`now()`<br><br>`now(-2d)` | +|`ltrim(X,Y)` |Returns `X` with the characters in parameter `Y` trimmed from the left side. The default output of `Y` is spaces and tabs. |`ltrim(" ZZZabcZZ ", " Z")` |[`trim_start()`](/azure/data-explorer/kusto/query/trimstartfunction) |`trim_start(“ ZZZabcZZ”,” ZZZ”)` | +|`match(X,Y)` |Returns if X matches the regex pattern Y. |`match(field, "^\d{1,3}.\d$")` |[`matches regex`](/azure/data-explorer/kusto/query/re2) |`… | where field matches regex @"^\d{1,3}.\d$")` | +|`max(X,…)` |Returns the maximum value in a column. |`max(delay, mydelay)` |• [`max()`](/azure/data-explorer/kusto/query/max-aggfunction)<br>• [`arg_max()`](/azure/data-explorer/kusto/query/arg-max-aggfunction) |`… | summarize max(field)` | +|`md5(X)` |Returns the MD5 hash of a string value `X`. |`md5(field)` |[`hash_md5`](/azure/data-explorer/kusto/query/md5hashfunction) |`hash_md5("X")` | +|`min(X,…)` |Returns the minimum value in a column. |`min(delay, mydelay)` |• [`min_of()`](/azure/data-explorer/kusto/query/min-offunction)<br>• [min()](/azure/data-explorer/kusto/query/min-aggfunction)<br>• [arg_min](/azure/data-explorer/kusto/query/arg-min-aggfunction) |[KQL example](#minx-kql-example) | +|`mvcount(X)` |Returns the number (total) of `X` values. |`mvcount(multifield)` |[`dcount`](/azure/data-explorer/kusto/query/dcount-aggfunction) |`…| summarize dcount(X) by Y` | +|`mvfilter(X)` |Filters a multi-valued field based on the boolean `X` expression. |`mvfilter(match(email, "net$"))` |[`mv-apply`](/azure/data-explorer/kusto/query/mv-applyoperator) |[KQL example](#mvfilterx-kql-example) | +|`mvindex(X,Y,Z)` |Returns a subset of the multi-valued `X` argument from a start position (zero-based) `Y` to `Z` (optional). |`mvindex( multifield, 2)` |[`array_slice`](/azure/data-explorer/kusto/query/arrayslicefunction) |`array_slice(arr, 1, 2)` | +|`mvjoin(X,Y)` |Given a multi-valued field `X` and string delimiter `Y`, and joins the individual values of `X` using `Y`. |`mvjoin(address, ";")` |[`strcat_array`](/azure/data-explorer/kusto/query/strcat-arrayfunction) |[KQL example](#mvjoinxy-kql-example) | +|`now()` |Returns the current time, represented in Unix time. |`now()` |[`now()`](/azure/data-explorer/kusto/query/nowfunction) |`now()`<br><br>`now(-2d)` | |`null()` |Doesn't accept arguments and returns `NULL`. |`null()` |[null](/azure/data-explorer/kusto/query/scalar-data-types/null-values?pivots=azuredataexplorer) |`null`-|`nullif(X,Y)` |Includes two arguments, `X` and `Y`, and returns `X` if the arguments are different. Otherwise, returns `NULL`. |`nullif(fieldA, fieldB)` |[iif](/azure/data-explorer/kusto/query/iiffunction) |`iif(fieldA==fieldB, null, fieldA)` | -|`random()` |Returns a pseudo-random number between `0` to `2147483647`. |`random()` |[rand()](/azure/data-explorer/kusto/query/randfunction) |`rand()` | +|`nullif(X,Y)` |Includes two arguments, `X` and `Y`, and returns `X` if the arguments are different. Otherwise, returns `NULL`. |`nullif(fieldA, fieldB)` |[`iif`](/azure/data-explorer/kusto/query/iiffunction) |`iif(fieldA==fieldB, null, fieldA)` | +|`random()` |Returns a pseudo-random number between `0` to `2147483647`. 
|`random()` |[`rand()`](/azure/data-explorer/kusto/query/randfunction) |`rand()` | |`relative_ time(X,Y)` |Given an epoch time `X` and relative time specifier `Y`, returns the epoch time value of `Y` applied to `X`. |`relative_time(now(),"-1d@d")` |[unix time](/azure/data-explorer/kusto/query/datetime-timespan-arithmetic#example-unix-time) |[KQL example](#relative-timexy-kql-example) |-|`replace(X,Y,Z)` |Returns a string formed by substituting string `Z` for every occurrence of regular expression string `Y` in string `X`. |Returns date with the month and day numbers switched.<br>For example, for the `4/30/2015` input, the output is `30/4/2009`:<br><br>`replace(date, "^(\d{1,2})/ (\d{1,2})/", "\2/\1/")` |[replace()](/azure/data-explorer/kusto/query/replacefunction) |[KQL example](#replacexyz-kql-example) | -|`round(X,Y)` |Returns `X` rounded to the number of decimal places specified by `Y`. The default is to round to an integer. |`round(3.5)` |[round](/azure/data-explorer/kusto/query/roundfunction) |`round(3.5)` | -|`rtrim(X,Y)` |Returns `X` with the characters of `Y` trimmed from the right side. If `Y` isn't specified, spaces and tabs are trimmed. |`rtrim(" ZZZZabcZZ ", " Z")` |[trim_end()](/azure/data-explorer/kusto/query/trimendfunction) |`trim_end(@"[ Z]+",A)` | +|`replace(X,Y,Z)` |Returns a string formed by substituting string `Z` for every occurrence of regular expression string `Y` in string `X`. |Returns date with the month and day numbers switched.<br>For example, for the `4/30/2015` input, the output is `30/4/2009`:<br><br>`replace(date, "^(\d{1,2})/ (\d{1,2})/", "\2/\1/")` |[`replace()`](/azure/data-explorer/kusto/query/replacefunction) |[KQL example](#replacexyz-kql-example) | +|`round(X,Y)` |Returns `X` rounded to the number of decimal places specified by `Y`. The default is to round to an integer. |`round(3.5)` |[`round`](/azure/data-explorer/kusto/query/roundfunction) |`round(3.5)` | +|`rtrim(X,Y)` |Returns `X` with the characters of `Y` trimmed from the right side. If `Y` isn't specified, spaces and tabs are trimmed. |`rtrim(" ZZZZabcZZ ", " Z")` |[`trim_end()`](/azure/data-explorer/kusto/query/trimendfunction) |`trim_end(@"[ Z]+",A)` | |`searchmatch(X)` |Returns `TRUE` if the event matches the search string `X`. |`searchmatch("foo AND bar")` |[iif()](/azure/data-explorer/kusto/query/iiffunction) |`iif(field has "X","Yes","No")` |-| `split(X,"Y")` |Returns `X` as a multi-valued field, split by delimiter `Y`. |`split(address, ";")` |[split()](/azure/data-explorer/kusto/query/splitfunction) |`split(address, ";")` | -|`sqrt(X)` |Returns the square root of `X`. |`sqrt(9)` |[sqrt()](/azure/data-explorer/kusto/query/sqrtfunction) |`sqrt(9)` | -|`strftime(X,Y)` |Returns the epoch time value `X` rendered using the format specified by `Y`. |`strftime(_time, "%H:%M")` |[format_datetime()](/azure/data-explorer/kusto/query/format-datetimefunction) |`format_datetime(time,'HH:mm')` | +| `split(X,"Y")` |Returns `X` as a multi-valued field, split by delimiter `Y`. |`split(address, ";")` |[`split()`](/azure/data-explorer/kusto/query/splitfunction) |`split(address, ";")` | +|`sqrt(X)` |Returns the square root of `X`. |`sqrt(9)` |[`sqrt()`](/azure/data-explorer/kusto/query/sqrtfunction) |`sqrt(9)` | +|`strftime(X,Y)` |Returns the epoch time value `X` rendered using the format specified by `Y`. 
|`strftime(_time, "%H:%M")` |[`format_datetime()`](/azure/data-explorer/kusto/query/format-datetimefunction) |`format_datetime(time,'HH:mm')` | | `strptime(X,Y)` |Given a time represented by a string `X`, returns value parsed from format `Y`. |`strptime(timeStr, "%H:%M")` |[format_datetime()](/azure/data-explorer/kusto/query/format-datetimefunction) |[KQL example](#strptimexy-kql-example) |-|`substr(X,Y,Z)` |Returns a substring field `X` from start position (one-based) `Y` for `Z` (optional) characters. |`substr("string", 1, 3)` |[substring()](/azure/data-explorer/kusto/query/substringfunction) |`substring("string", 0, 3)` | -|`time()` |Returns the wall-clock time with microsecond resolution. |`time()` |[format_datetime()](/azure/data-explorer/kusto/query/format-datetimefunction) |[KQL example](#time-kql-example) | -|`tonumber(X,Y)` |Converts input string `X` to a number, where `Y` (optional, default value is `10`) defines the base of the number to convert to. |`tonumber("0A4",16)` |[toint()](/azure/data-explorer/kusto/query/tointfunction) |`toint("123")` | -|`tostring(X,Y)` |[Description](#tostringxy) |[SPL example](#tostringxy-spl-example) |[tostring()](/azure/data-explorer/kusto/query/tostringfunction) |`tostring(123)` | -|`typeof(X)` |Returns a string representation of the field type. |`typeof(12)` |[gettype()](/azure/data-explorer/kusto/query/gettypefunction) |`gettype(12)` | -|`urldecode(X)` |Returns the URL `X` decoded. |[SPL example](#urldecodex-spl-example) |[url_decode](/azure/data-explorer/kusto/query/urldecodefunction) |[KQL example](#urldecodex-spl-example) | +|`substr(X,Y,Z)` |Returns a substring field `X` from start position (one-based) `Y` for `Z` (optional) characters. |`substr("string", 1, 3)` |[`substring()`](/azure/data-explorer/kusto/query/substringfunction) |`substring("string", 0, 3)` | +|`time()` |Returns the wall-clock time with microsecond resolution. |`time()` |[`format_datetime()`](/azure/data-explorer/kusto/query/format-datetimefunction) |[KQL example](#time-kql-example) | +|`tonumber(X,Y)` |Converts input string `X` to a number, where `Y` (optional, default value is `10`) defines the base of the number to convert to. |`tonumber("0A4",16)` |[`toint()`](/azure/data-explorer/kusto/query/tointfunction) |`toint("123")` | +|`tostring(X,Y)` |[Description](#tostringxy) |[SPL example](#tostringxy-spl-example) |[`tostring()`](/azure/data-explorer/kusto/query/tostringfunction) |`tostring(123)` | +|`typeof(X)` |Returns a string representation of the field type. |`typeof(12)` |[`gettype()`](/azure/data-explorer/kusto/query/gettypefunction) |`gettype(12)` | +|`urldecode(X)` |Returns the URL `X` decoded. 
|[SPL example](#urldecodex-spl-example) |[`url_decode`](/azure/data-explorer/kusto/query/urldecodefunction) |[KQL example](#urldecodex-spl-example) | -#### case(X,"Y",…) SPL example +#### `case(X,"Y",…)` SPL example ```SPL case(error == 404, "Not found", error == 500,"Internal Server Error", error == 200, "OK") ```-#### case(X,"Y",…) KQL example +#### `case(X,"Y",…)` KQL example ```kusto T | extend Message = case(error == 404, "Not found", error == 500,"Internal Server Error", "OK") ```-#### if(X,Y,Z) KQL example +#### `if(X,Y,Z)` KQL example ```kusto iif(floor(Timestamp, 1d)==floor(now(), 1d), "today", "anotherday") ```-#### isint(X) KQL example +#### `isint(X)` KQL example ```kusto iif(gettype(X) =="long","TRUE","FALSE") ```-#### isstr(X) KQL example +#### `isstr(X)` KQL example ```kusto iif(gettype(X) =="string","TRUE","FALSE") ```-#### like(X,"y") example +#### `like(X,"y")` example ```kusto … | where field has "addr" iif(gettype(X) =="string","TRUE","FALSE") … | where field matches regex "^addr.*" ```-#### min(X,…) KQL example +#### `min(X,…)` KQL example ```kusto min_of (expr_1, expr_2 ...) min_of (expr_1, expr_2 ...) …| summarize arg_min(Price,*) by Product ```-#### mvfilter(X) KQL example +#### `mvfilter(X)` KQL example ```kusto T | mv-apply Metric to typeof(real) on T | mv-apply Metric to typeof(real) on top 2 by Metric desc ) ```-#### mvjoin(X,Y) KQL example +#### `mvjoin(X,Y)` KQL example ```kusto strcat_array(dynamic([1, 2, 3]), "->") ```-#### relative time(X,Y) KQL example +#### `relative time(X,Y)` KQL example ```kusto let toUnixTime = (dt:datetime) let toUnixTime = (dt:datetime) (dt - datetime(1970-01-01))/1s }; ```-#### replace(X,Y,Z) KQL example +#### `replace(X,Y,Z)` KQL example ```kusto replace( @'^(\d{1,2})/(\d{1,2})/', @'\2/\1/',date) ```-#### strptime(X,Y) KQL example +#### `strptime(X,Y)` KQL example ```kusto format_datetime(datetime('2017-08-16 11:25:10'), 'HH:mm') ```-#### time() KQL example +#### `time()` KQL example ```kusto format_datetime(datetime(2015-12-14 02:03:04), 'h:m:s') ```-#### tostring(X,Y) +#### `tostring(X,Y)` Returns a field value of `X` as a string. - If the value of `X` is a number, `X` is reformatted to a string value. - If `X` is a boolean value, `X` is reformatted to `TRUE` or `FALSE`. - If `X` is a number, the second argument `Y` is optional and can either be `hex` (converts `X` to a hexadecimal), `commas` (formats `X` with commas and two decimal places), or `duration` (converts `X` from a time format in seconds to a readable time format: `HH:MM:SS`). -##### tostring(X,Y) SPL example +##### `tostring(X,Y)` SPL example This example returns: foo=615 and foo2=00:10:15: … | eval foo=615 | eval foo2 = tostring( foo, "duration") ```-#### urldecode(X) SPL example +#### `urldecode(X)` SPL example ```SPL urldecode("http%3A%2F%2Fwww.splunk.com%2Fdownload%3Fr%3Dheader") ```-### Common stats commands KQL example +### Common `stats` commands KQL example |SPL command |Description |KQL command |KQL example | ||||| |
sentinel | Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration.md | Title: Plan your migration to Microsoft Sentinel | Microsoft Docs description: Discover the reasons for migrating from a legacy SIEM, and learn how to plan out the different phases of your migration.--++ Previously updated : 03/11/2024 Last updated : 09/23/2024 # Plan your migration to Microsoft Sentinel In this guide, you learn how to migrate your legacy SIEM to Microsoft Sentinel. |Track migration with a workbook |[Track your Microsoft Sentinel migration with a workbook](migration-track.md) | |Use the SIEM Migration experience | [SIEM Migration (Preview)](siem-migration.md) | |Migrate from ArcSight |• [Migrate detection rules](migration-arcsight-detection-rules.md)<br>• [Migrate SOAR automation](migration-arcsight-automation.md)<br>• [Export historical data](migration-arcsight-historical-data.md) |-|Migrate from Splunk |• [Migrate detection rules](migration-splunk-detection-rules.md)<br>• [Migrate SOAR automation](migration-splunk-automation.md)<br>• [Export historical data](migration-splunk-historical-data.md)<br><br>If you want to migrate your Splunk Observability deployment, learn more about how to [migrate from Splunk to Azure Monitor Logs](/azure/azure-monitor/logs/migrate-splunk-to-azure-monitor-logs). | +|Migrate from Splunk |• [Start with the SIEM Migration experience](siem-migration.md)<br>• [Migrate detection rules](migration-splunk-detection-rules.md)<br>• [Migrate SOAR automation](migration-splunk-automation.md)<br>• [Export historical data](migration-splunk-historical-data.md)<br><br>If you want to migrate your Splunk Observability deployment, learn more about how to [migrate from Splunk to Azure Monitor Logs](/azure/azure-monitor/logs/migrate-splunk-to-azure-monitor-logs). | |Migrate from QRadar |• [Migrate detection rules](migration-qradar-detection-rules.md)<br>• [Migrate SOAR automation](migration-qradar-automation.md)<br>• [Export historical data](migration-qradar-historical-data.md) | |Ingest historical data |• [Select a target Azure platform to host the exported historical data](migration-ingestion-target-platform.md)<br>• [Select a data ingestion tool](migration-ingestion-tool.md)<br>• [Ingest historical data into your target platform](migration-export-ingest.md) | |Convert dashboards to workbooks |[Convert dashboards to Azure Workbooks](migration-convert-dashboards.md) | When planning the discover phase, use the following guidance to identify your us - Prepare a validation process. Define test scenarios and build a test script. - Can you apply a methodology to prioritize use cases? You can follow a methodology such as MoSCoW to prioritize a leaner set of use cases for migration. -## Next steps +## Next step In this article, you learned how to plan and prepare for your migration. |
sentinel | Sap Solution Security Content | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-solution-security-content.md | The following tables list the built-in [analytics rules](deploy-sap-security-con | **SAP - Multiple Logons from the same IP** | Identifies the sign-in of several users from same IP address within a scheduled time interval. <br><br>**Sub-use case**: [Persistency](#persistency) | Sign in using several users through the same IP address. <br><br>**Data sources**: SAPcon - Audit Log | Initial Access | | **SAP - Multiple Logons by User** | Identifies sign-ins of the same user from several terminals within scheduled time interval. <br><br>Available only via the Audit SAL method, for SAP versions 7.5 and higher. | Sign in using the same user, using different IP addresses. <br><br>**Data sources**: SAPcon - Audit Log | PreAttack, Credential Access, Initial Access, Collection <br><br>**Sub-use case**: [Persistency](#persistency) | | **SAP - Informational - Lifecycle - SAP Notes were implemented in system** | Identifies SAP Note implementation in the system. | Implement an SAP Note using SNOTE/TCI. <br><br>**Data sources**: SAPcon - Change Requests | - |+| **SAP - (Preview) AS JAVA - Sensitive Privileged User Signed In** | Identifies a sign-in from an unexpected network. <br><br>Maintain privileged users in the [SAP - Privileged Users](#users) watchlist. | Sign in to the backend system using privileged users. <br><br>**Data sources**: SAPJAVAFilesLog | Initial Access | +| **SAP - (Preview) AS JAVA - Sign-In from Unexpected Network** | Identifies sign-ins from an unexpected network. <br><br>Maintain privileged users in the [SAP - Networks](#networks) watchlist. | Sign in to the backend system from an IP address that isn't assigned to one of the networks in the SAP - Networks watchlist <br><br>**Data sources**: SAPJAVAFilesLog | Initial Access, Defense Evasion | ### Data exfiltration The following tables list the built-in [analytics rules](deploy-sap-security-con | **SAP - Execution of Obsolete or Insecure Function Module** |Identifies the execution of an obsolete or insecure ABAP function module. <br><br>Maintain obsolete functions in the [SAP - Obsolete Function Modules](#modules) watchlist. Make sure to activate table logging changes for the `EUFUNC` table in the backend. (SE13)<br><br> **Note**: Relevant for production systems only. | Run an obsolete or insecure function module directly using SE37. <br><br>**Data sources**: SAPcon - Table Data Log | Discovery, Command and Control | | **SAP - Execution of Obsolete/Insecure Program** |Identifies the execution of an obsolete or insecure ABAP program. <br><br> Maintain obsolete programs in the [SAP - Obsolete Programs](#programs) watchlist.<br><br> **Note**: Relevant for production systems only. | Run a program directly using SE38/SA38/SE80, or by using a background job. <br><br>**Data sources**: SAPcon - Audit Log | Discovery, Command and Control | | **SAP - Multiple Password Changes by User** | Identifies multiple password changes by user. | Change user password <br><br>**Data sources**: SAPcon - Audit Log | Credential Access |+| **SAP - (Preview) AS JAVA - User Creates and Uses New User** | Identifies the creation or manipulation of users by admins within the SAP AS Java environment. | Sign in to the backend system using users that you have created or manipulated.<br><br>**Data sources**: SAPJAVAFilesLog | Persistence | ### Attempts to bypass SAP security mechanisms |
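The two new AS JAVA rules in the SAP row above both hinge on watchlist lookups (**SAP - Privileged Users** and **SAP - Networks**). As a rough, hedged sketch of that pattern only — the `SAPJAVAFilesLog` column names and the watchlist alias below are assumptions for illustration, not the solution's shipped schema or rule logic — a watchlist-driven "unexpected network" check can look like this:

```kusto
// Illustration only, not the shipped rule logic.
// ClientIp and UserName are assumed SAPJAVAFilesLog column names;
// 'SAP - Networks' is assumed to be the watchlist alias.
let ExpectedRanges = toscalar(
    _GetWatchlist('SAP - Networks')
    | summarize make_list(tostring(SearchKey)));
SAPJAVAFilesLog
| where TimeGenerated > ago(1d)
| where not(ipv4_is_in_any_range(tostring(ClientIp), ExpectedRanges))
| project TimeGenerated, UserName, ClientIp
```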
sentinel | Siem Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/siem-migration.md | +- Microsoft Sentinel in the Defender portal #customer intent: As an SOC administrator, I want to use the SIEM migration experience so I can migrate to Microsoft Sentinel. Migrate your SIEM to Microsoft Sentinel for all your security monitoring use cas These features are currently included in the SIEM Migration experience: **Splunk**-- The experience focuses on migrating Splunk security monitoring to Microsoft Sentinel.-- The experience only supports migration of Splunk detections to Microsoft Sentinel analytics rules.+- The experience focuses on migrating Splunk security monitoring to Microsoft Sentinel and mapping out-of-the-box (OOTB) analytics rules wherever possible. +- The experience supports migration of Splunk detections to Microsoft Sentinel analytics rules, including mapping Splunk data sources and lookups. ## Prerequisites You need the following from the source SIEM: **Splunk** - The migration experience is compatible with both Splunk Enterprise and Splunk Cloud editions. - A Splunk admin role is required to export all Splunk alerts. For more information, see [Splunk role-based user access](https://docs.splunk.com/Documentation/Splunk/9.1.3/Security/Aboutusersandroles).-- Export the historical data from Splunk to the relevant tables in the Log Analytics workspace. For more information, see [Export historical data from Splunk](migration-splunk-historical-data.md)+- Export the historical data from Splunk to the relevant tables in the Log Analytics workspace. For more information, see [Export historical data from Splunk](migration-splunk-historical-data.md). You need the following on the target, Microsoft Sentinel: - The SIEM migration experience deploys analytics rules. This capability requires the **Microsoft Sentinel Contributor** role. For more information, see [Permissions in Microsoft Sentinel](roles.md). -- Ingest security data previously used in your source SIEM into Microsoft Sentinel. Install and enable out-of-the-box (OOTB) data connectors to match your security monitoring estate from your source SIEM.- - If the data connectors aren't installed yet, find the relevant solutions in **Content hub**. - - If no data connector exists, create a custom ingestion pipeline.<br>For more information, see [Discover and manage Microsoft Sentinel out-of-the-box content](sentinel-solutions-deploy.md) or [Custom data ingestion and transformation](data-transformation.md). +- Ingest security data previously used in your source SIEM into Microsoft Sentinel. Before an analytics rule is translated and enabled, the rule's data source must be present in the Log Analytics workspace. Install and enable out-of-the-box (OOTB) data connectors in **Content hub** to match your security monitoring estate from your source SIEM. If no data connector exists, create a custom ingestion pipeline. + + For more information, see the following articles: + - [Discover and manage Microsoft Sentinel out-of-the-box content](sentinel-solutions-deploy.md) + - [Custom data ingestion and transformation](data-transformation.md) ++- Create Microsoft Sentinel watchlists from your Splunk lookups so the fields used are mapped for the translated analytics rules. ## Translate Splunk detection rules -At the core of Splunk detection rules is the Search Processing Language (SPL). The SIEM migration experience systematically translates SPL to Kusto query language (KQL) for each Splunk rule. 
Carefully review translations and make adjustments to ensure migrated rules function as intended in your Microsoft Sentinel workspace. For more information on the concepts important in translating detection rules, see [migrate Splunk detection rules](migration-splunk-detection-rules.md). +At the core of Splunk detection rules, is the Search Processing Language (SPL). The SIEM migration experience systematically translates SPL to Kusto query language (KQL) for each Splunk rule. Carefully review translations and make adjustments to ensure migrated rules function as intended in your Microsoft Sentinel workspace. For more information on the concepts important in translating detection rules, see [migrate Splunk detection rules](migration-splunk-detection-rules.md). Current capabilities: -- Translate simple queries with a single data source-- Direct translations listed in the article, [Splunk to Kusto cheat sheet](/azure/data-explorer/kusto/query/splunk-cheat-sheet)-- Review translated query error feedback with edit capability to save time in the detection rule translation process-- Translated queries feature a completeness status with translation states --Here are some of the priorities that are important to us as we continue to develop the translation technology: --- Splunk Common Information Model (CIM) to Microsoft Sentinel's Advanced Security Information Model (ASIM) translation support-- Support for Splunk macros-- Support for Splunk lookups-- Translation of complex correlation logic that queries and correlates events across multiple data sources+- Map Splunk detections to OOTB Microsoft Sentinel analytics rules. +- Translate simple queries with a single data source. +- Automatic translations of SPL to KQL for the mappings listed in the article, [Splunk to Kusto cheat sheet](/azure/data-explorer/kusto/query/splunk-cheat-sheet). +- **Schema Mapping (Preview)** creates logical links for the translated rules by mapping Splunk data sources to Microsoft Sentinel tables, and Splunk lookups to watchlists. +- Translated query review provides error feedback with edit capability to save time in the detection rule translation process. +- **Translation State** indicating how completely SPL syntax is translated to KQL at the grammatical level. +- Support for Splunk macros translation using inline replacement macro definition within SPL queries. +- Splunk Common Information Model (CIM) to Microsoft Sentinel's Advanced Security Information Model (ASIM) translation support. +- Downloadable pre-migration and post-migration summary. ## Start the SIEM migration experience -1. Navigate to Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Content management**, select **Content hub**. +1. Find the SIEM migration experience in Microsoft Sentinel from the [Azure portal](https://portal.azure.com) or the [Defender portal](https://security.microsoft.com/), under **Content management** > **Content hub**. 1. Select **SIEM Migration**. ## Upload Splunk detections Here are some of the priorities that are important to us as we continue to devel 1. 
Run the following query: - `| rest splunk_server=local count=0 /services/saved/searches | search disabled=0 | table title,search ,*` + ``` + |rest splunk_server=local count=0 /servicesNS/-/-/saved/searches + |search disabled=0 + |search alert_threshold != "" + |table title,search,description,cron_schedule,dispatch.earliest_time,alert.severity,alert_comparator,alert_threshold,alert.suppress.period,id + |tojson|table _raw + |rename _raw as alertrules|mvcombine delim=", " alertrules + |append [| rest splunk_server=local count=0 /servicesNS/-/-/admin/macros|table title,definition,args,iseval|tojson|table _raw |rename _raw as macros|mvcombine delim=", " macros] + |filldown alertrules + |tail 1 + ``` 1. Select the export button and choose JSON as the format. Here are some of the priorities that are important to us as we continue to devel :::image type="content" source="media/siem-migration/upload-file.png" alt-text="Screenshot showing the upload files tab."::: +## Schema mapping ++Use **Schema mapping** to precisely define how the data types and fields in the analytics rule logic are mapped based on the extracted sources from the SPL queries to the Microsoft Sentinel tables. ++### Data sources ++Known sources such as Splunk CIM schemas and data models are automatically mapped to ASIM schemas when applicable. Other sources used in the Splunk detection must be manually mapped to Microsoft Sentinel or Log Analytics tables. Mapping schemas are hierarchical so Splunk sources map 1:1 with Microsoft Sentinel tables and the fields within those sources. +++Once the schema mapping is complete, any manual updates are reflected in the **Mapping Status** as "Manually mapped". The changes are taken into account in the next step when the rules are translated. The mapping is saved per workspace, so you don't have to repeat it. ++### Lookups ++Splunk lookups compare to Microsoft Sentinel watchlists, which are lists of curated field-value combinations to correlate with the events in your Microsoft Sentinel environment. Since Splunk lookups are defined and available outside the boundaries of SPL queries, the equivalent Microsoft Sentinel watchlist must be created as a prerequisite. Schema mapping then takes lookups automatically identified from the uploaded Splunk queries and maps them to Sentinel Watchlists. ++For more information, see [Create watchlist](watchlists-create.md). +++SPL queries reference lookups with the `lookup`, `inputlookup`, and `outputlookup` keywords. The `outputlookup` operation writes data to a lookup and isn't supported in translation. The SIEM migration translation engine uses the `_GetWatchlist()` KQL function to map to the correct Sentinel watchlist along with other KQL functions to complete the rule logic. ++When a Splunk lookup doesn't have a corresponding watchlist mapped, the translation engine keeps the same name for both the watchlist and its fields as the Splunk lookup and fields. + ## Configure rules 1. Select **Configure Rules**. Here are some of the priorities that are important to us as we continue to devel - **Name** is the original Splunk detection rule name. 
- **Translation Type** indicates if a Sentinel OOTB analytics rule matches the Splunk detection logic.- - **Translation State** has the following values: - - **Fully Translated** queries in this rule were fully translated to KQL - - **Partially Translated** queries in this rule weren't fully translated to KQL - - **Not Translated** indicates an error in translation - - **Manually Translated** when any rule is reviewed and saved + - **Translation State** gives feedback about how completely the syntax of a Splunk detection was translated to KQL. The translation state doesn't test the rule or verify the data source. + - **Fully Translated** - Queries in this rule were fully translated to KQL but the rule logic and data source weren't validated. + - **Partially Translated** - Queries in this rule weren't fully translated to KQL. + - **Not Translated** - Indicates an error in translation. + - **Manually Translated** - This status is set when any rule is edited and saved. :::image type="content" source="media/siem-migration/configure-rules.png" alt-text="Screenshot showing the results of the automatic rule mapping." lightbox="media/siem-migration/configure-rules.png"::: - > [!NOTE] - > Check the schema of the data types and fields used in the rule logic. Microsoft Sentinel Analytics require that the data type be present in the Log Analytics Workspace before the rule is enabled. It's also important the fields used in the query are accurate for the defined data type schema. --1. Highlight a rule to resolve translation and select **Edit**. When you are satisfied with the results, select **Save Changes**. +1. Highlight a rule to resolve translation and select **Edit**. When you're satisfied with the results, select **Save Changes**. -1. Switch on the **Ready to deploy** toggle for Analytics rules you want to deploy. +1. Switch on the **Deploy** toggle for analytics rules you want to deploy. 1. When the review is complete, select **Review and migrate**. | Out of the box | The corresponding solutions from **Content hub** that contain the matched analytics rule templates are installed. The matched rules are deployed as active analytics rules in the disabled state. <br><br>For more information, see [Manage Analytics rule templates](manage-analytics-rule-templates.md). | | Custom | Rules are deployed as active analytics rules in the disabled state. | -1. (Optional) Choose Analytics rules and select **Export Templates** to download them as ARM templates for use in your CI/CD or custom deployment processes. +1. (Optional) Select **Export Templates** to download all the translated rules as ARM templates for use in your CI/CD or custom deployment processes. :::image type="content" source="media/siem-migration/export-templates.png" alt-text="Screenshot showing the Review and Migrate tab highlighting the Export Templates button."::: :::image type="content" source="media/siem-migration/enable-deployed-translated-rules.png" alt-text="Screenshot showing Analytics rules with deployed Splunk rules highlighted ready to be enabled."::: -## Next step +## Related content In this article, you learned how to use the SIEM migration experience. 
-> [!div class="nextstepaction"] -> [Migrate Splunk detection rules](migration-splunk-detection-rules.md) +For more information on the SIEM migration experience, see the following articles: +- [Become a Microsoft Sentinel ninja - migration section](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/become-a-microsoft-sentinel-ninja-the-complete-level-400/ba-p/1246310#toc-hId-111398316) +- [SIEM migration update - Microsoft Sentinel blog](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/siem-migration-update-now-migrate-with-contextual-depth-in/ba-p/4241234) +- [SIEM migration experience generally available - Microsoft Sentinel blog](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/what-s-new-easily-migrate-to-microsoft-sentinel-with-the-new/ba-p/4100351) |
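To make the lookup-to-watchlist behavior described in the Lookups subsection above more concrete, here's a minimal Python sketch that runs a translated-style KQL query outside the portal with the `azure-monitor-query` package. The workspace ID, the watchlist alias `blocked_ips`, and the query itself are illustrative placeholders, not output from the SIEM migration translation engine.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Placeholder workspace ID; the 'blocked_ips' watchlist is hypothetical and must
# already exist in the workspace (its items expose a SearchKey column).
WORKSPACE_ID = "<log-analytics-workspace-guid>"

# A translated rule might replace a Splunk `lookup`/`inputlookup` with _GetWatchlist().
KQL = """
CommonSecurityLog
| join kind=inner (
    _GetWatchlist('blocked_ips')
    | project DestinationIP = tostring(SearchKey)
  ) on DestinationIP
| summarize Hits = count() by DestinationIP
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(WORKSPACE_ID, KQL, timespan=timedelta(days=1))

# Print each result row as a column-name/value mapping.
for table in response.tables:
    for row in table.rows:
        print(dict(zip(table.columns, row)))
```

Swap the table name and watchlist alias for whatever your translated rule actually references; the point is only to show where `_GetWatchlist()` lands in the KQL.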
sentinel | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md | The listed features were released in the last three months. For information abou ## September 2024 +- [Schema mapping added to the SIEM migration experience](#schema-mapping-added-to-the-siem-migration-experience) - [Third-party enrichment widgets to be retired in February 2025](#third-party-enrichment-widgets-to-be-retired-in-february-2025) - [Azure reservations now have pre-purchase plans available for Microsoft Sentinel](#pre-purchase-plans-now-available-for-microsoft-sentinel) - [Import/export of automation rules now generally available (GA)](#importexport-of-automation-rules-now-generally-available-ga) - [Google Cloud Platform data connectors are now generally available (GA)](#google-cloud-platform-data-connectors-are-now-generally-available-ga) - [Microsoft Sentinel now generally available (GA) in Azure Israel Central](#microsoft-sentinel-now-generally-available-ga-in-azure-israel-central) +### Schema mapping added to the SIEM migration experience ++Since the SIEM migration experience became generally available in May 2024, steady improvements have been made to help migrate your security monitoring from Splunk. The following new features let customers provide more contextual details about their Splunk environment and usage to the Microsoft Sentinel SIEM Migration translation engine: ++- Schema Mapping +- Support for Splunk Macros in translation +- Support for Splunk Lookups in translation ++To learn more about these updates, see [SIEM migration experience](siem-migration.md). ++For more information about the SIEM migration experience, see the following articles: +- [Become a Microsoft Sentinel ninja - migration section](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/become-a-microsoft-sentinel-ninja-the-complete-level-400/ba-p/1246310#toc-hId-111398316) +- [SIEM migration update - Microsoft Sentinel blog](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/siem-migration-update-now-migrate-with-contextual-depth-in/ba-p/4241234) + ### Third-party enrichment widgets to be retired in February 2025 Effective immediately, you can no longer enable the feature to create enrichment widgets that retrieve data from external, third-party data sources. These widgets are displayed on Microsoft Sentinel entity pages and in other locations where entity information is presented. This change is happening because you can no longer create the Azure key vault required to access these external data sources. |
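As a conceptual illustration of the "inline replacement" of Splunk macros mentioned in this entry, the sketch below expands macro references inside an SPL string before it would be uploaded. The macro names, definitions, and regex are hypothetical; how the SIEM migration translation engine handles macros internally isn't documented here.

```python
import re

# Hypothetical macro catalog keyed by "name(argcount)", roughly mirroring what the
# Splunk export query pulls from /admin/macros: (argument names, definition).
MACROS = {
    "src_filter(1)": (["ip"], 'search src_ip="$ip$"'),
    "dest_ports(0)": ([], "search dest_port IN (22, 3389)"),
}

MACRO_REF = re.compile(r"`(\w+)(?:\(([^)]*)\))?`")


def expand_macros(spl: str) -> str:
    """Inline-expand `macro` and `macro(args)` references in an SPL query string."""

    def _replace(match: re.Match) -> str:
        name, raw_args = match.group(1), match.group(2)
        args = [a.strip() for a in raw_args.split(",")] if raw_args else []
        arg_names, definition = MACROS[f"{name}({len(args)})"]
        for arg_name, value in zip(arg_names, args):
            definition = definition.replace(f"${arg_name}$", value)
        return definition

    return MACRO_REF.sub(_replace, spl)


print(expand_macros('index=firewall `src_filter(10.1.2.3)` | `dest_ports` | stats count by src_ip'))
# index=firewall search src_ip="10.1.2.3" | search dest_port IN (22, 3389) | stats count by src_ip
```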
service-bus-messaging | Service Bus Authentication And Authorization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-authentication-and-authorization.md | Authorization rules configured at the namespace level can grant access to all en To access an entity, the client requires a SAS token generated using a specific shared access authorization rule. The SAS token is generated using the HMAC-SHA256 of a resource string that consists of the resource URI to which access is claimed, and an expiry with a cryptographic key associated with the authorization rule. -SAS authentication support for Service Bus is included in the Azure .NET SDK versions 2.0 and later. SAS includes support for a shared access authorization rule. All APIs that accept a connection string as a parameter include support for SAS connection strings. - For detailed information on using SAS for authentication, see [Authentication with Shared Access Signatures](service-bus-sas.md). |
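The token construction this entry's article describes (an HMAC-SHA256 over the URL-encoded resource URI plus an expiry, signed with the authorization rule's key) can be sketched in Python with only the standard library. The namespace, rule name, and key below are placeholders, and the token format follows the documented `SharedAccessSignature sr=...&sig=...&se=...&skn=...` shape.

```python
import base64
import hashlib
import hmac
import time
import urllib.parse


def generate_sas_token(resource_uri: str, key_name: str, key: str, ttl_seconds: int = 3600) -> str:
    """Build a Service Bus SAS token: HMAC-SHA256 over the encoded URI and expiry."""
    expiry = str(int(time.time()) + ttl_seconds)
    encoded_uri = urllib.parse.quote_plus(resource_uri)
    string_to_sign = f"{encoded_uri}\n{expiry}".encode("utf-8")
    signature = base64.b64encode(
        hmac.new(key.encode("utf-8"), string_to_sign, hashlib.sha256).digest()
    )
    return (
        "SharedAccessSignature "
        f"sr={encoded_uri}"
        f"&sig={urllib.parse.quote_plus(signature)}"
        f"&se={expiry}"
        f"&skn={key_name}"
    )


# Placeholder namespace, entity, rule name, and key.
print(generate_sas_token(
    "https://contoso.servicebus.windows.net/myqueue",
    "RootManageSharedAccessKey",
    "<shared-access-key>",
))
```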
service-bus-messaging | Service Bus Dead Letter Queues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-dead-letter-queues.md | SubscriptionClient.FormatDeadLetterPath(topicPath, subscriptionName) ## DLQ message count -Obtaining count of messages in the dead-letter queue at the topic level isn't applicable because messages don't sit at the topic level. Instead, when a sender sends a message to a topic, the message is forwarded to subscriptions for the topic within milliseconds and thus no longer resides at the topic level. So, you can see messages in the DLQ associated with the subscription for the topic. In the following example, **Service Bus Explorer** shows that there are 62 messages currently in the DLQ for the subscription "test1". +Obtaining count of messages in the dead-letter queue at the topic level isn't applicable because messages don't sit at the topic level. Instead, when a sender sends a message to a topic, the message is forwarded to subscriptions for the topic within milliseconds and thus no longer resides at the topic level. So, you can see messages in the DLQ associated with the subscription for the topic. In the following example, [Service Bus Explorer](https://github.com/paolosalvatori/ServiceBusExplorer) shows that there are 62 messages currently in the DLQ for the subscription "test1". :::image type="content" source="./media/service-bus-dead-letter-queues/dead-letter-queue-message-count.png" alt-text="Image showing 62 messages in the dead-letter queue."::: |
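Besides Service Bus Explorer, the subscription's dead-letter count can be read programmatically. A minimal sketch, assuming the `azure-servicebus` Python package and placeholder names that mirror the "test1" subscription in the entry above:

```python
from azure.servicebus.management import ServiceBusAdministrationClient

# Placeholders: supply your own namespace connection string, topic, and subscription.
CONN_STR = "<service-bus-namespace-connection-string>"

admin = ServiceBusAdministrationClient.from_connection_string(CONN_STR)

# Runtime properties expose per-sub-queue counters, including the dead-letter count.
props = admin.get_subscription_runtime_properties("mytopic", "test1")
print(f"Active: {props.active_message_count}, dead-lettered: {props.dead_letter_message_count}")
```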
site-recovery | Azure To Azure How To Enable Replication Private Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-replication-private-endpoints.md | domain names to private IPs. :::image type="content" source="./media/azure-to-azure-how-to-enable-replication-private-endpoints/add-record-set.png" alt-text="Shows the page to add a DNS A type record for the fully qualified domain name to the private endpoint in the Azure portal."::: > [!NOTE]- > After you enable replication, two more fully qualified domain names are created on the private - > endpoints in both regions. Ensure that you add the DNS records for these newly created - > fully qualified domain names as well. + > After you enable replication, two more fully qualified domain names are created on the private endpoints in both regions. Ensure that you add the DNS records for these newly created fully qualified domain names as well. + > Static IP for Azure Site Recovery private endpoint is not supported. ## Next steps |
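After adding the A records described above, it can help to confirm, from a VM inside the virtual network, that each fully qualified domain name resolves to its private IP. A small stdlib-only sketch; the FQDNs are placeholders, so substitute the names listed on your private endpoint's DNS configuration blade.

```python
import socket

# Placeholder FQDNs; copy the real ones from the private endpoint's DNS configuration.
fqdns = [
    "example-rcm1.westus2.privatelink.siterecovery.windowsazure.com",
    "example-rcm2.westus2.privatelink.siterecovery.windowsazure.com",
]

for name in fqdns:
    try:
        ips = sorted({info[4][0] for info in socket.getaddrinfo(name, 443)})
        # From inside the virtual network, expect private (RFC 1918) addresses.
        print(f"{name} -> {', '.join(ips)}")
    except socket.gaierror as err:
        print(f"{name}: not resolvable - add or check the DNS A record ({err})")
```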
site-recovery | Azure To Azure Troubleshoot Replication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-troubleshoot-replication.md | A spike in data change rate might come from an occasional data burst. If the dat 1. You might see a banner in **Overview** that says an SAS URL has been generated. Select this banner and cancel the export. Ignore this step if you don't see the banner. 1. As soon as the SAS URL is revoked, go to **Size + Performance** for the managed disk. Increase the size so that Site Recovery supports the observed churn rate on the source disk. +### Disk tier/SKU change considerations ++Whenever the disk tier or SKU is changed, all the snapshots (bookmarks) corresponding to the disk are deleted by the disk resource provider. As a result, you might have recovery points for which some of the underlying snapshots no longer exist on the disk resource provider side. ++If you trigger a failover with a recovery point created before the tier/SKU change, the failover eventually fails with a `BookmarkNotFound` error. Because pruning of recovery points is a scheduled job, you might still see such recovery points in the portal, although they're deleted over time. ++> **Recommendation**: Wait for a recovery point with a creation time later than the change made to the disk. + ## Network connectivity problems ### Network latency to a cache storage account |
synapse-analytics | Query Data Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/query-data-storage.md | See syntax fragment below: You can find query samples for accessing elements from repeated columns in the [Query Parquet nested types](query-parquet-nested-types.md#access-elements-from-repeated-columns) article. -## Query samples --You can learn more about querying various types of data using the sample queries. --### Tools --The tools you need to issue queries: - - Azure Synapse Studio - - Azure Data Studio - - SQL Server Management Studio --### Demo setup --Your first step is to **create a database** where you'll execute the queries. Then you'll initialize the objects by executing [setup script](https://github.com/Azure-Samples/Synapse/blob/master/SQL/Samples/LdwSample/SampleDB.sql) on that database. --This setup script will create the data sources, database scoped credentials, and external file formats that are used to read data in these samples. --> [!NOTE] -> Databases are only used for viewing metadata, not for actual data. Write down the database name that you use, you will need it later on. --```sql -CREATE DATABASE mydbname; -``` --### Provided demo data --Demo data contains the following data sets: --- NYC Taxi - Yellow Taxi Trip Records - part of public NYC data set in CSV and Parquet format-- Population data set in CSV format-- Sample Parquet files with nested columns-- Books in JSON format--| Folder path | Description | -| | | -| /csv/ | Parent folder for data in CSV format | -| /csv/population/<br />/csv/population-unix/<br />/csv/population-unix-hdr/<br />/csv/population-unix-hdr-escape<br />/csv/population-unix-hdr-quoted | Folders with Population data files in different CSV formats. | -| /csv/taxi/ | Folder with NYC public data files in CSV format | -| /parquet/ | Parent folder for data in Parquet format | -| /parquet/taxi | NYC public data files in Parquet format, partitioned by year, and month using Hive/Hadoop partitioning scheme. | -| /parquet/nested/ | Sample Parquet files with nested columns | -| /json/ | Parent folder for data in JSON format | -| /json/books/ | JSON files with books data | -- ## Next steps For more information on how to query different file types, and to create and use views, see the following articles: For more information on how to query different file types, and to create and use - [Query nested values](query-parquet-nested-types.md) - [Query folders and multiple CSV files](query-folders-multiple-csv-files.md) - [Use file metadata in queries](query-specific-files.md)-- [Create and use views](create-use-views.md)+- [Create and use views](create-use-views.md) |
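If you want to run the kinds of queries this entry references from Python rather than Synapse Studio, the sketch below connects to the serverless SQL endpoint with `pyodbc` and reads a Parquet folder with `OPENROWSET`. The workspace name, database, driver version, and storage path are placeholders; adjust them to your environment and authentication method.

```python
import pyodbc

# Placeholders: serverless SQL endpoint, the demo database, and an AAD interactive login.
conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<workspace-name>-ondemand.sql.azuresynapse.net;"
    "Database=mydbname;"
    "Authentication=ActiveDirectoryInteractive;"
    "Encrypt=yes;"
)

# Read a folder of Parquet files with nested columns; replace the storage path.
query = """
SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'https://<storage-account>.dfs.core.windows.net/<container>/parquet/nested/*.parquet',
    FORMAT = 'PARQUET'
) AS rows;
"""

cursor = conn.cursor()
for row in cursor.execute(query):
    print(row)
conn.close()
```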
update-manager | Troubleshooter Known Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/troubleshooter-known-issues.md | - Title: Azure Update Manager Troubleshooter -description: Identify common issues using the Troubleshooter in Azure Update Manager. - Previously updated : 09/17/2024------# Troubleshoot issues in Azure Update Manager --This article describes how to use the Troubleshooter in Azure Update Manager to identify common issues and how to resolve them. --The Troubleshooter option is enabled when checking for history of only **Failed** operations in Azure Update Manager. ---The Troubleshooter can also be seen when checking history of **Failed** operations in the machine's updates history tab. ----## Prerequisites --- For Azure Machines, ensure that the guest agent version on the VM is 2.4.0.2 or higher and agent status is **Ready**. --- For Arc Machines, ensure that the arc agent version on the machine is 1.39 or higher and agent status is **Connected**. --- The Troubleshooter isn't applicable if the operation type is Azure Managed Safe Deployment, that is, Auto Patching. --- Ensure that the machine is present and running. --- For executing RUN commands, you must have the following permissions:-- - Microsoft.Compute/virtualMachines/runCommand/write permission (for Azure) - The [Virtual Machine Contributor](/azure/role-based-access-control/built-in-roles#virtual-machine-contributor) role and higher levels have this permission. - - Microsoft.HybridCompute/machines/runCommands/write permission (for Arc) - The [Azure Connected Machine Resource Administrator](/azure/role-based-access-control/built-in-roles) role and higher levels have this permission. --- Ensure the machine can access [user content](https://raw.githubusercontent.com/) as it needs to retrieve the [scripts](https://github.com/Azure/AzureUpdateManager) during the execution of the Troubleshooter.---## What does the Troubleshooter do? --The troubleshooter performs two types of checks for both Azure and Arc machines. --- For the machine under troubleshooting, the Troubleshooter runs Resource Graph Queries to obtain details about the machine's current state, assessment mode settings, patch mode settings, and the status of various services running on it. For example, for Azure machines, it gets details about the guest agent while for Arc machines it gets details about arc agent and its status. -- The troubleshooter executes Managed RUN Commands on the machine to execute scripts that fetch information about the update related service and configurations for the machines. *The script doesn't make any modifications to your machine.* --You can find the [scripts](https://github.com/Azure/AzureUpdateManager/tree/main/Troubleshooter) here. --Post performing the checks, the troubleshooter suggests possible mitigations to test for checks that failed. Follow the mitigation links and take appropriate actions. ----## What are Managed RUN Commands? --- Managed RUN commands use the Guest agent for Azure machines and Arc Agent for Arc machines to remotely and securely executed commands or scripts inside your machine. --- Managed RUN commands don't require any additional extension to be installed on your machines. 
--- Managed RUN commands are generally available for Azure while it is in preview for Arc.--- Learn more about [Managed RUN command for Azure](/azure/virtual-machines/run-command-overview) and [Managed RUN command for Arc](/azure/azure-arc/servers/run-command).--## Next steps -* To learn more about Update Manager, see the [Overview](overview.md). -* To view logged results from all your machines, see [Querying logs and results from Update Manager](query-logs.md). - |
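The managed Run Command channel described in the (now removed) troubleshooter article can also be invoked directly against an Azure VM. A minimal sketch with the `azure-mgmt-compute` package; the subscription, resource group, VM name, and script are placeholders, and this is a stand-in command rather than the troubleshooter's own scripts (those live in the linked AzureUpdateManager repository). Arc-enabled servers use the Connected Machine agent's equivalent run command surface instead.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# Placeholders for subscription, resource group, and VM name.
client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.virtual_machines.begin_run_command(
    "<resource-group>",
    "<vm-name>",
    {
        "command_id": "RunShellScript",  # Use "RunPowerShellScript" for Windows VMs.
        "script": ["echo hello from managed run command"],
    },
)

# Block until the command finishes and print its console output, if any.
result = poller.result()
if result.value:
    print(result.value[0].message)
```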
virtual-desktop | Autoscale Scenarios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/autoscale-scenarios.md | Title: Autoscale scaling plans and example scenarios in Azure Virtual Desktop description: Information about autoscale and a collection of four example scenarios that illustrate how various parts of autoscale for Azure Virtual Desktop work. Previously updated : 11/01/2023 Last updated : 10/01/2024 Autoscale lets you scale your session host virtual machines (VMs) in a host pool > [!NOTE] > - Azure Virtual Desktop (classic) doesn't support autoscale. -> - Autoscale isn't supported on Azure Virtual Desktop for Azure Stack HCI. > - You can't use autoscale and [scale session hosts using Azure Automation](set-up-scaling-script.md) on the same host pool. You must use one or the other. > - Autoscale is available in Azure and Azure Government in the same regions you can [create host pools](create-host-pools-azure-marketplace.md) in.+> - Autoscale support for Azure Stack HCI with Azure Virtual Desktop is currently in PREVIEW. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. For best results, we recommend using autoscale with VMs you deployed with Azure Virtual Desktop Azure Resource Manager (ARM) templates or first-party tools from Microsoft. The following animation is a visual recap of what we just went over in Scenario - To learn how to create scaling plans for autoscale, see [Create autoscale scaling for Azure Virtual Desktop host pools](autoscale-scaling-plan.md). - To review terms associated with autoscale, see [the autoscale glossary](autoscale-glossary.md).-- For answers to commonly asked questions about autoscale, see [the autoscale FAQ](autoscale-faq.yml).+- For answers to commonly asked questions about autoscale, see [the autoscale FAQ](autoscale-faq.yml). |
virtual-desktop | Azure Stack Hci Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/azure-stack-hci-overview.md | description: Learn about using Azure Virtual Desktop on Azure Stack HCI, enablng Previously updated : 04/11/2024 Last updated : 09/17/2024 # Azure Virtual Desktop on Azure Stack HCI > [!IMPORTANT]-> Azure Virtual Desktop on Azure Stack HCI is currently in preview for Azure Government and Azure China. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. +>- Azure Virtual Desktop on Azure Stack HCI for Azure Government and Azure operated by 21Vianet (Azure in China) is currently in preview with HCI version 22H2. Portal provisioning isn't available. +>- See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. Using Azure Virtual Desktop on Azure Stack HCI, you can deploy session hosts for Azure Virtual Desktop where you need them. If you already have an existing on-premises virtual desktop infrastructure (VDI) deployment, Azure Virtual Desktop on Azure Stack HCI can improve your experience. If you're already using Azure Virtual Desktop with your session hosts in Azure, you can extend your deployment to your on-premises infrastructure to better meet your performance or data locality needs. Azure Virtual Desktop on Azure Stack HCI has the following limitations: - Azure Stack HCI supports many types of hardware and on-premises networking capabilities, so performance and user density might vary compared to session hosts running on Azure. Azure Virtual Desktop's [virtual machine sizing guidelines](/windows-server/remote/remote-desktop-services/virtual-machine-recs) are broad, so you should use them for initial performance estimates and monitor after deployment. -- You can only join session hosts on Azure Stack HCI to an Active Directory Domain Services domain.+- You can only join session hosts on Azure Stack HCI to an Active Directory Domain Services (AD DS) domain. This includes using [Microsoft Entra hybrid join](/entra/identity/devices/concept-hybrid-join), where you can benefit from some of the functionality provided by Microsoft Entra ID. ## Next step |
virtual-desktop | Deploy Azure Virtual Desktop | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/deploy-azure-virtual-desktop.md | Here's how to create a host pool by using the Azure portal: | **Network and security** | | | **Network dropdown** | Select an existing network to connect each session to. | | **Domain to join** | |- | **Select which directory you would like to join** | **Active Directory** is the only available option. | + | **Select which directory you would like to join** | **Active Directory** is the only available option. This includes using [Microsoft Entra hybrid join](/entra/identity/devices/concept-hybrid-join). | | **AD domain join UPN** | Enter the user principal name (UPN) of an Active Directory user who has permission to join the session hosts to your domain. | | **Password** | Enter the password for the Active Directory user. | | **Specify domain or unit** | Select **yes** if you want to join session hosts to a specific domain or be placed in a specific organizational unit (OU). If you select **no**, the suffix of the UPN is used as the domain. | |
virtual-desktop | Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/prerequisites.md | To access desktops and applications from your session hosts, your users need to You need to join session hosts that provide desktops and applications to the same Microsoft Entra tenant as your users, or an Active Directory domain (either AD DS or Microsoft Entra Domain Services). > [!NOTE]-> For Azure Stack HCI, you can only join session hosts to an Active Directory Domain Services domain. +> For Azure Stack HCI, you can only join session hosts to an Active Directory Domain Services (AD DS) domain. This includes using [Microsoft Entra hybrid join](/entra/identity/devices/concept-hybrid-join), where you can benefit from some of the functionality provided by Microsoft Entra ID. To join session hosts to Microsoft Entra ID or an Active Directory domain, you need the following permissions: |
virtual-desktop | Windows 11 Language Packs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/windows-11-language-packs.md | Title: Install language packs on Windows 11 Enterprise VMs in Azure Virtual Desk description: How to install language packs for Windows 11 Enterprise VMs in Azure Virtual Desktop. Previously updated : 10/20/2023 Last updated : 09/20/2024 Before you can add languages to a Windows 11 Enterprise VM, you'll need to have - Language and Optional Features ISO: - [Windows 11, version 21H2 Language and Optional Features ISO](https://software-download.microsoft.com/download/sg/22000.1.210604-1628.co_release_amd64fre_CLIENT_LOF_PACKAGES_OEM.iso) - [Windows 11, version 22H2 and 23H2 Language and Optional Features ISO](https://software-static.download.prss.microsoft.com/dbazure/988969d5-f34g-4e03-ac9d-1f9786c66749/22621.1.220506-1250.ni_release_amd64fre_CLIENT_LOF_PACKAGES_OEM.iso)+ - [Windows 11, version 24H2 Language and Optional Features ISO](https://software-static.download.prss.microsoft.com/dbazure/888969d5-f34g-4e03-ac9d-1f9786c66749/26100.1.240331-1435.ge_release_amd64fre_CLIENT_LOF_PACKAGES_OEM.iso) - Inbox Apps ISO: - [Windows 11, version 21H2 Inbox Apps ISO](https://software-static.download.prss.microsoft.com/dbazure/888969d5-f34g-4e03-ac9d-1f9786c66749/22000.2003.230512-1746.co_release_svc_prod3_amd64fre_InboxApps.iso) - [Windows 11, version 22H2 and 23H2 Inbox Apps ISO](https://software-static.download.prss.microsoft.com/dbazure/888969d5-f34g-4e03-ac9d-1f9786c66749/22621.2501.231009-1937.ni_release_svc_prod3_amd64fre_InboxApps.iso)+ - [Windows 11, version 24H2 Inbox Apps ISO](https://software-static.download.prss.microsoft.com/dbazure/888969d5-f34g-4e03-ac9d-1f9786c66749/26100.1742.240904-1906.ge_release_svc_prod1_amd64fre_InboxApps.iso) - An Azure Files share or a file share on a Windows File Server VM >[!NOTE] |
vpn-gateway | Gateway Sku Consolidation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/gateway-sku-consolidation.md | Yes. The new pricing timeline is: Yes, you can deploy AZ SKUs in all regions. If a region doesn't currently support availability zones, you can still create VPN Gateway AZ SKUs, but the deployment will remain regional. When the region supports availability zones, we'll enable zone redundancy for the gateways. +### Can I migrate my Gen 1 gateway to a Gen 2 gateway? ++* As part of the Basic IP to Standard IP migration, the gateways will be upgraded to Gen2. This upgrade occurs automatically when you initiate the migration. +* For gateways already using Standard IP, we'll migrate them to Gen2 separately before Sep 30, 2026. This is done seamlessly during regular updates, with no downtime involved. + ### Will there be downtime while migrating my non-AZ gateways? No. This migration is seamless and there's no expected downtime during migration. |
web-application-firewall | Waf Javascript Challenge | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/waf-javascript-challenge.md | The WAF policy setting defines the JavaScript challenge cookie validity lifetime - If the first call that receives a JavaScript challenge has a POST body size greater than 128 KB, it blocks it. Additionally, challenges for non-HTML resources embedded in a page aren't supported. For example, images, CSS, JS, and so on. However, if there's a prior successful JavaScript challenge request, then the previous limitations are removed. - The challenge isn't supported on Microsoft Internet Explorer. The challenge is supported on the latest versions of the Microsoft Edge, Chrome, Firefox, and Safari web browsers. - The JavaScript challenge action on Web Application Firewall on Application Gateway isn't supported for *Rate Limit* type custom rules during the public preview.++## Related content ++- [Azure WAF's Bot Manager 1.1 and JavaScript Challenge (Preview): Navigating the Bot Threat Terrain](https://techcommunity.microsoft.com/t5/azure-network-security-blog/azure-waf-s-bot-manager-1-1-and-javascript-challenge-preview/ba-p/4249652) |