Updates from: 08/11/2022 01:16:40
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Application Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/application-types.md
Previously updated : 06/14/2022 Last updated : 08/10/2022
Every application that uses Azure AD B2C must be registered in your [Azure AD B2
* An **Application ID** that uniquely identifies your application.
* A **Reply URL** that can be used to direct responses back to your application.
-Each request that is sent to Azure AD B2C specifies a **user flow** (a built-in policy) or a **custom policy** that controls the behavior of Azure AD B2C. Both policy types enable you to create a highly customizable set of user experiences.
+Each request that is sent to Azure AD B2C specifies a **[user flow](user-flow-overview.md)** (a built-in policy) or a **[custom policy](user-flow-overview.md)** that controls the behavior of Azure AD B2C. Both policy types enable you to create a highly customizable set of user experiences.
The interaction of every application follows a similar high-level pattern:
In addition to facilitating simple sign in, a web server application might also
## Single-page applications
-Many modern web applications are built as client-side single-page applications ("SPAs"). Developers write them by using JavaScript or a SPA framework such as Angular, Vue, and React. These applications run on a web browser and have different authentication characteristics than traditional server-side web applications.
+Many modern web applications are built as client-side single-page applications ("SPAs"). Developers write them by using JavaScript or a SPA framework such as Angular, Vue, or React. These applications run on a web browser and have different authentication characteristics than traditional server-side web applications.
Azure AD B2C provides **two** options to enable single-page applications to sign in users and get tokens to access back-end services or web APIs:
Azure AD B2C provides **two** options to enable single-page applications to sign
[OAuth 2.0 Authorization code flow (with PKCE)](./authorization-code-flow.md) allows the application to exchange an authorization code for **ID** tokens to represent the authenticated user and **Access** tokens needed to call protected APIs. In addition, it returns **Refresh** tokens that provide long-term access to resources on behalf of users without requiring interaction with those users.
-This is the **recommended** approach. Having limited-lifetime refresh tokens helps your application adapt to [modern browser cookie privacy limitations](../active-directory/develop/reference-third-party-cookies-spas.md), like Safari ITP.
+We **recommend** this approach. Having limited-lifetime refresh tokens helps your application adapt to [modern browser cookie privacy limitations](../active-directory/develop/reference-third-party-cookies-spas.md), like Safari ITP.
To take advantage of this flow, your application can use an authentication library that supports it, like [MSAL.js 2.x](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-browser).
To take advantage of this flow, your application can use an authentication libra
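As an illustration, a minimal MSAL.js 2.x (msal-browser) configuration for an Azure AD B2C single-page application might look like the following sketch. The tenant name `contoso`, the user flow name `B2C_1_signupsignin`, and the redirect URI are placeholder values, not taken from this article:

```javascript
// Sketch of an MSAL.js 2.x (msal-browser) configuration for an Azure AD B2C SPA.
// "contoso" and "B2C_1_signupsignin" are placeholder tenant and user-flow names.
const msalConfig = {
  auth: {
    clientId: "Enter_the_Application_Id_Here",
    // For B2C, the user flow (policy) is part of the authority URL.
    authority: "https://contoso.b2clogin.com/contoso.onmicrosoft.com/B2C_1_signupsignin",
    // B2C domains aren't in the default authority list, so declare them explicitly.
    knownAuthorities: ["contoso.b2clogin.com"],
    redirectUri: "https://localhost:3000",
  },
  cache: {
    cacheLocation: "sessionStorage",
  },
};
```

You would pass this configuration to `new PublicClientApplication(msalConfig)` and then call methods such as `loginPopup` or `acquireTokenSilent`; the library performs the authorization code exchange with PKCE for you.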
### Implicit grant flow
-Some libraries, like [MSAL.js 1.x](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-core), only support the implicit grant flow or your application is implemented to use implicit flow. In these cases, Azure AD B2C supports the [OAuth 2.0 implicit flow](implicit-flow-single-page-application.md). The implicit grant flow allows the application to get **ID** and **Access** tokens. Unlike the authorization code flow, implicit grant flow doesn't return a **Refresh token**.
+Some libraries, like [MSAL.js 1.x](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-core), support only the [implicit grant flow](implicit-flow-single-page-application.md), or your application might be implemented to use the implicit flow. In these cases, Azure AD B2C supports the [OAuth 2.0 implicit flow](implicit-flow-single-page-application.md). The implicit grant flow allows the application to get **ID** and **Access** tokens. Unlike the authorization code flow, the implicit grant flow doesn't return a **Refresh** token.
+
+We **don't recommend** this approach.
This authentication flow doesn't include application scenarios that use cross-platform JavaScript frameworks such as Electron and React-Native. Those scenarios require further capabilities for interaction with the native platforms.
active-directory-b2c Configure Authentication Sample Web App With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-web-app-with-api.md
A computer that's running either:
# [Visual Studio](#tab/visual-studio)
-* [Visual Studio 2022 17.0 or later](https://visualstudio.microsoft.com/downloads/?utm_medium=microsoft&utm_source=docs.microsoft.com&utm_campaign=inline+link&utm_content=download+vs2019) with the **ASP.NET and web development** workload
+* [Visual Studio 2022 17.0 or later](https://visualstudio.microsoft.com/downloads) with the **ASP.NET and web development** workload
* [.NET 6.0 SDK](https://dotnet.microsoft.com/download/dotnet)

# [Visual Studio Code](#tab/visual-studio-code)
active-directory-b2c Configure Authentication Sample Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-web-app.md
A computer that's running either of the following:
# [Visual Studio](#tab/visual-studio)
-* [Visual Studio 2022 17.0 or later](https://visualstudio.microsoft.com/downloads/?utm_medium=microsoft&utm_source=docs.microsoft.com&utm_campaign=inline+link&utm_content=download+vs2019), with the ASP.NET and web development workload
+* [Visual Studio 2022 17.0 or later](https://visualstudio.microsoft.com/downloads), with the ASP.NET and web development workload
* [.NET 6.0 SDK](https://dotnet.microsoft.com/download/dotnet)

# [Visual Studio Code](#tab/visual-studio-code)
active-directory-b2c Find Help Open Support Ticket https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/find-help-open-support-ticket.md
Microsoft provides global technical, pre-sales, billing, and subscription suppor
Before creating a support ticket, check out the following resources for answers and information.
-* For content such as how-to information or code samples for IT professionals and developers, see the [technical documentation for Azure AD B2C at docs.microsoft.com](../active-directory-b2c/index.yml).
+* For content such as how-to information or code samples for IT professionals and developers, see the [technical documentation for Azure AD B2C](../active-directory-b2c/index.yml).
* The [Microsoft Technical Community](https://techcommunity.microsoft.com/) is the place for our IT pro partners and customers to collaborate, share, and learn. The [Microsoft Technical Community Info Center](https://techcommunity.microsoft.com/t5/Community-Info-Center/ct-p/Community-Info-Center) is used for announcements, blog posts, ask-me-anything (AMA) interactions with experts, and more. You can also [join the community to submit your ideas](https://techcommunity.microsoft.com/t5/Communities/ct-p/communities).
If you're unable to find answers by using self-help resources, you can open an o
* [Microsoft Tech Community](https://techcommunity.microsoft.com/)
-* [Technical documentation for Azure AD B2C at docs.microsoft.com](../active-directory-b2c/index.yml)
-
+* [Technical documentation for Azure AD B2C](../active-directory-b2c/index.yml)
active-directory-b2c Json Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/json-transformations.md
Previously updated : 02/16/2022 Last updated : 08/10/2022
Check out the [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/c
| InputParameter | Any string following dot notation | string | The JsonPath of the JSON where the constant string value will be inserted. |
| OutputClaim | outputClaim | string | The generated JSON string. |
+### JSON Arrays
+
+To add JSON objects to a JSON array, reference the **array name** and the item's **index** in the array. The array index is zero-based and must be sequential, from zero to N without skipping any number. The items in the array can contain any object. For example, the first item contains two objects, *app* and *appId*. The second item contains a single object, *program*. The third item contains four objects: *color*, *language*, *logo*, and *background*.
+
+The following example demonstrates how to configure JSON arrays. The **emails** array is populated from the `InputClaims`, which have dynamic values. The **values** array is populated from the `InputParameters`, which have static values.
+
+```xml
+<ClaimsTransformation Id="GenerateJsonPayload" TransformationMethod="GenerateJson">
+ <InputClaims>
+ <InputClaim ClaimTypeReferenceId="mailToName1" TransformationClaimType="emails.0.name" />
+ <InputClaim ClaimTypeReferenceId="mailToAddress1" TransformationClaimType="emails.0.address" />
+ <InputClaim ClaimTypeReferenceId="mailToName2" TransformationClaimType="emails.1.name" />
+ <InputClaim ClaimTypeReferenceId="mailToAddress2" TransformationClaimType="emails.1.address" />
+ </InputClaims>
+ <InputParameters>
+ <InputParameter Id="values.0.app" DataType="string" Value="Mobile app" />
+ <InputParameter Id="values.0.appId" DataType="string" Value="123" />
+ <InputParameter Id="values.1.program" DataType="string" Value="Holidays" />
+ <InputParameter Id="values.2.color" DataType="string" Value="Yellow" />
+ <InputParameter Id="values.2.language" DataType="string" Value="Spanish" />
+ <InputParameter Id="values.2.logo" DataType="string" Value="contoso.png" />
+ <InputParameter Id="values.2.background" DataType="string" Value="White" />
+ </InputParameters>
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="result" TransformationClaimType="outputClaim" />
+ </OutputClaims>
+</ClaimsTransformation>
+```
+
+The result of this claims transformation:
+
+```json
+{
+ "values": [
+ {
+ "app": "Mobile app",
+ "appId": "123"
+ },
+ {
+ "program": "Holidays"
+ },
+ {
+ "color": "Yellow",
+ "language": "Spanish",
+ "logo": "contoso.png",
+ "background": "White"
+ }
+ ],
+ "emails": [
+ {
+ "name": "Joni",
+ "address": "joni@contoso.com"
+ },
+ {
+ "name": "Emily",
+ "address": "emily@contoso.com"
+ }
+ ]
+}
+```
+
+To specify a JSON array in both the input claims and the input parameters, you must start the array in the `InputClaims` element, indexed from zero. Then, in the `InputParameters` element, continue the numbering from the next index.
+
+The following example demonstrates an array that is defined in both the input claims and the input parameters. The first item of the *values* array, `values.0`, is defined in the input claims. The input parameters continue from index one, `values.1`, through index two, `values.2`.
+
+```xml
+<ClaimsTransformation Id="GenerateJsonPayload" TransformationMethod="GenerateJson">
+ <InputClaims>
+ <InputClaim ClaimTypeReferenceId="app" TransformationClaimType="values.0.app" />
+ <InputClaim ClaimTypeReferenceId="appId" TransformationClaimType="values.0.appId" />
+ </InputClaims>
+ <InputParameters>
+ <InputParameter Id="values.1.program" DataType="string" Value="Holidays" />
+ <InputParameter Id="values.2.color" DataType="string" Value="Yellow" />
+ <InputParameter Id="values.2.language" DataType="string" Value="Spanish" />
+ <InputParameter Id="values.2.logo" DataType="string" Value="contoso.png" />
+ <InputParameter Id="values.2.background" DataType="string" Value="White" />
+ </InputParameters>
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="result" TransformationClaimType="outputClaim" />
+ </OutputClaims>
+</ClaimsTransformation>
+```
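The way dot-notation keys assemble into nested JSON can be illustrated with a small standalone sketch. This is an illustration of the indexing behavior only, not the Azure AD B2C implementation of `GenerateJson`:

```javascript
// Illustration only: assemble dot-notation keys such as "values.0.app"
// into a nested JSON payload, mirroring how GenerateJson composes output.
function buildJson(pairs) {
  const result = {};
  for (const [path, value] of Object.entries(pairs)) {
    const parts = path.split(".");
    let node = result;
    parts.forEach((part, i) => {
      if (i === parts.length - 1) {
        node[part] = value; // leaf: assign the claim or parameter value
      } else {
        // Create an array when the next path segment is a numeric index.
        const nextIsIndex = /^\d+$/.test(parts[i + 1]);
        if (node[part] === undefined) node[part] = nextIsIndex ? [] : {};
        node = node[part];
      }
    });
  }
  return result;
}

const payload = buildJson({
  "values.0.app": "Mobile app",
  "values.0.appId": "123",
  "values.1.program": "Holidays",
});
console.log(JSON.stringify(payload));
// → {"values":[{"app":"Mobile app","appId":"123"},{"program":"Holidays"}]}
```

Because the indices must be sequential, skipping a number (for example, jumping from `values.0` to `values.2`) would leave a hole in the array, which is why the transformation requires zero to N without gaps.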
+
### Example of GenerateJson

The following example generates a JSON string based on the claim values of "email" and "OTP", and constant strings.
active-directory Application Proxy Application Gateway Waf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-application-gateway-waf.md
+
+ Title: Using Application Gateway WAF to protect your application
+description: How to add Web Application Firewall protection for apps published with Azure Active Directory Application Proxy.
+Last updated : 07/22/2022
+# Using Application Gateway WAF to protect your application
+
+When using Azure Active Directory (Azure AD) Application Proxy to expose applications deployed on-premises, on sealed Azure Virtual Networks, or in other public clouds, you can integrate a Web Application Firewall (WAF) in the data flow in order to protect your application from malicious attacks.
+
+## What is Azure Web Application Firewall?
+
+Azure Web Application Firewall (WAF) on Azure Application Gateway provides centralized protection of your web applications from common exploits and vulnerabilities. Web applications are increasingly targeted by malicious attacks that exploit commonly known vulnerabilities. SQL injection and cross-site scripting are among the most common attacks. For more information about Azure WAF on Application Gateway, see [What is Azure Web Application Firewall on Azure Application Gateway?][waf-overview].
+
+## Deployment steps
+
+This article guides you through the steps to securely expose a web application on the Internet by integrating the Azure AD Application Proxy with Azure WAF on Application Gateway. This guide uses the Azure portal. The following diagram shows the reference architecture for this deployment.
+
+![Diagram of deployment described.](./media/application-proxy-waf/application-proxy-waf.png)
+
+### Configure Azure Application Gateway to send traffic to your internal application.
+
+Some steps of the Application Gateway configuration will be omitted in this article. For a detailed guide on how to create and configure an Application Gateway, see [Quickstart: Direct web traffic with Azure Application Gateway - Azure portal][appgw_quick].
+
+##### 1. Create a private-facing HTTPS listener.
+
+This will allow users to access the web application privately when connected to the corporate network.
+
+![Screenshot of Application Gateway listener.](./media/application-proxy-waf/application-gateway-listener.png)
+
+##### 2. Create a backend pool with the web servers.
+
+In this example, the backend servers have Internet Information Services (IIS) installed.
+
+![Screenshot of Application Gateway backend.](./media/application-proxy-waf/application-gateway-backend.png)
+
+##### 3. Create a backend setting.
+
+This will determine how requests will reach the backend pool servers.
+
+![Screenshot of Application Gateway backend setting.](./media/application-proxy-waf/application-gateway-backend-settings.png)
+
+ ##### 4. Create a routing rule that ties the listener, the backend pool, and the backend setting created in the previous steps.
+
+ ![Screenshot of adding rule to Application Gateway 1.](./media/application-proxy-waf/application-gateway-add-rule-1.png)
+ ![Screenshot of adding rule to Application Gateway 2.](./media/application-proxy-waf/application-gateway-add-rule-2.png)
+
+ ##### 5. Enable the WAF in the Application Gateway and set it to Prevention mode.
+
+ ![Screenshot of enabling waf in Application Gateway.](./media/application-proxy-waf/application-gateway-enable-waf.png)
+
+ ### Configure your application to be remotely accessed through Application Proxy in Azure AD.
+
+As represented in the diagram above, the connector VMs, the Application Gateway, and the backend servers were all deployed in the same virtual network in Azure. This setup also applies to applications and connectors deployed on-premises.
+
+For a detailed guide on how to add your application to Application Proxy in Azure AD, see [Tutorial: Add an on-premises application for remote access through Application Proxy in Azure Active Directory][appproxy-add-app]. For more information about performance considerations concerning the Application Proxy connectors, see [Optimize traffic flow with Azure Active Directory Application Proxy][appproxy-optimize].
+
+![Screenshot of Application Proxy configuration.](./media/application-proxy-waf/application-proxy-configuration.png)
+
+In this example, the same URL was configured as the internal and external URL. Remote clients access the application over the Internet on port 443, through the Application Proxy, whereas clients connected to the corporate network access the application privately and directly through the Application Gateway, also on port 443. For detailed steps on how to configure custom domains in Application Proxy, see [Configure custom domains with Azure AD Application Proxy][appproxy-custom-domain].
+
+To ensure the connector VMs send requests to the Application Gateway, an [Azure Private DNS zone][private-dns] was created with an A record pointing www.fabrikam.one to the private frontend IP of the Application Gateway.
+
+### Test the application.
+
+After [adding a user for testing](/azure/active-directory/app-proxy/application-proxy-add-on-premises-application#add-a-user-for-testing), you can test the application by accessing https://www.fabrikam.one. The user will be prompted to authenticate in Azure AD, and upon successful authentication, will access the application.
+
+![Screenshot of authentication step.](./media/application-proxy-waf/sign-in-2.png)
+![Screenshot of server response.](./media/application-proxy-waf/application-gateway-response.png)
+
+### Simulate an attack.
+
+To test if the WAF is blocking malicious requests, you can simulate an attack using a basic SQL injection signature. For example, "https://www.fabrikam.one/api/sqlquery?query=x%22%20or%201%3D1%20--".
+
+![Screenshot of WAF response.](./media/application-proxy-waf/waf-response.png)
+
+An HTTP 403 response confirms that the request was blocked by the WAF.
+
+The Application Gateway [Firewall logs][waf-logs] provide more details about the request and why it was blocked by the WAF.
+
+![Screenshot of waf logs.](./media/application-proxy-waf/waf-log.png)
+
+## Next steps
+
+To prevent false positives, learn how to [Customize Web Application Firewall rules](/azure/web-application-firewall/ag/application-gateway-customize-waf-rules-portal), configure [Web Application Firewall exclusion lists](/azure/web-application-firewall/ag/application-gateway-waf-configuration?tabs=portal), or [Web Application Firewall custom rules](/azure/web-application-firewall/ag/create-custom-waf-rules).
+
+[waf-overview]: /azure/web-application-firewall/ag/ag-overview
+[appgw_quick]: /azure/application-gateway/quick-create-portal
+[appproxy-add-app]: /azure/active-directory/app-proxy/application-proxy-add-on-premises-application
+[appproxy-optimize]: /azure/active-directory/app-proxy/application-proxy-network-topology
+[appproxy-custom-domain]: /azure/active-directory/app-proxy/application-proxy-configure-custom-domain
+[private-dns]: /azure/dns/private-dns-getstarted-portal
+[waf-logs]: /azure/application-gateway/application-gateway-diagnostics#firewall-log
+
active-directory Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/faqs.md
Where xx-XX is one of the following available language parameters: 'cs-CZ', 'de-
- [Permissions Management web page](https://microsoft.com/security/business/identity-access-management/permissions-management)
- For more information about Microsoft's privacy and security terms, see [Commercial Licensing Terms](https://www.microsoft.com/licensing/terms/product/ForallOnlineServices/all).
- For more information about Microsoft's data processing and security terms when you subscribe to a product, see [Microsoft Products and Services Data Protection Addendum (DPA)](https://www.microsoft.com/licensing/docs/view/Microsoft-Products-and-Services-Data-Protection-Addendum-DPA).
-- For more information about Microsoft's policy and practices for Data Subject Requests for GDPR and CCPA: [https://docs.microsoft.com/en-us/compliance/regulatory/gdpr-dsr-azure](/compliance/regulatory/gdpr-dsr-azure).
+- For more information, see [Azure Data Subject Requests for the GDPR and CCPA](/compliance/regulatory/gdpr-dsr-azure).
## Next steps

- For an overview of Permissions Management, see [What's Permissions Management?](overview.md).
-- For information on how to onboard Permissions Management in your organization, see [Enable Permissions Management in your organization](onboard-enable-tenant.md).
+- For information on how to onboard Permissions Management in your organization, see [Enable Permissions Management in your organization](onboard-enable-tenant.md).
active-directory Concept Conditional Access Cloud Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-cloud-apps.md
Previously updated : 07/18/2022 Last updated : 08/09/2022
active-directory What If Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/what-if-tool.md
Previously updated : 06/22/2020 Last updated : 08/09/2022 -
-#Customer intent: As an IT admin, I want to know how to use the What If tool for my existing Conditional Access policies, so that I can understand the impact they have on my environment.
-# Troubleshoot using the What If tool in Conditional Access
+# Use the What If tool to troubleshoot Conditional Access policies
-[Conditional Access](./overview.md) is a capability of Azure Active Directory (Azure AD) that enables you to control how authorized users access your cloud apps. How do you know what to expect from the Conditional Access policies in your environment? To answer this question, you can use the **Conditional Access What If tool**.
+The **Conditional Access What If policy tool** allows you to understand the impact of [Conditional Access](./overview.md) policies in your environment. Instead of test driving your policies by performing multiple sign-ins manually, this tool enables you to evaluate a simulated sign-in of a user. The simulation estimates the impact this sign-in has on your policies and generates a simulation report.
-This article explains how you can use this tool to test your Conditional Access policies.
+The **What If** tool provides a way to quickly determine the policies that apply to a specific user. You can use the information, for example, if you need to troubleshoot an issue.
> [!VIDEO https://www.youtube.com/embed/M_iQVM-3C3E]
-## What it is
-
-The **Conditional Access What If policy tool** allows you to understand the impact of your Conditional Access policies on your environment. Instead of test driving your policies by performing multiple sign-ins manually, this tool enables you to evaluate a simulated sign-in of a user. The simulation estimates the impact this sign-in has on your policies and generates a simulation report. The report does not only list the applied Conditional Access policies but also [classic policies](policy-migration.md#classic-policies) if they exist.
-
-The **What If** tool provides a way to quickly determine the policies that apply to a specific user. You can use the information, for example, if you need to troubleshoot an issue.
- ## How it works
-In the **Conditional Access What If tool**, you first need to configure the settings of the sign-in scenario you want to simulate. These settings include:
+In the **Conditional Access What If tool**, you first need to configure the conditions of the sign-in scenario you want to simulate. These conditions may include:
- The user you want to test
- The cloud apps the user would attempt to access
- The conditions under which access to the configured cloud apps is performed
+
+The What If tool doesn't test for [Conditional Access service dependencies](service-dependencies.md). For example, if you're using What If to test a Conditional Access policy for Microsoft Teams, the result doesn't take into consideration any policy that would apply to Office 365 Exchange Online, a Conditional Access service dependency for Microsoft Teams.
As a next step, you can initiate a simulation run that evaluates your settings. Only policies that are enabled are part of an evaluation run.
-When the evaluation has finished, the tool generates a report of the affected policies. To gather more information about a Conditional Access policy, the [Conditional Access insights and reporting workbook](howto-conditional-access-insights-reporting.md) can provide additional details about policies in report-only mode and those policies currently enabled.
+When the evaluation has finished, the tool generates a report of the affected policies. To gather more information about a Conditional Access policy, the [Conditional Access insights and reporting workbook](howto-conditional-access-insights-reporting.md) can provide more details about policies in report-only mode and those policies currently enabled.
## Running the tool
-You can find the **What If** tool on the **[Conditional Access - Policies](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ConditionalAccessBlade/Policies)** page in the Azure portal.
-
-To start the tool, in the toolbar on top of the list of policies, click **What If**.
--
-Before you can run an evaluation, you must configure the settings.
-
-## Settings
-
-This section provides you with information about the settings of simulation run.
--
-### User
-
-You can only select one user. This is the only required field.
-
-### Cloud apps
-
-The default for this setting is **All cloud apps**. The default setting performs an evaluation of all available policies in your environment. You can narrow down the scope to policies affecting specific cloud apps.
-
-> [!NOTE]
-> When using the What If tool, it does not test for [Conditional Access service dependencies](service-dependencies.md). For example, if you are using What If to test a Conditional Access policy for Microsoft Teams, the result will not take into consideration any policy that would apply to Office 365 Exchange Online, a Conditional Access service dependency for Microsoft Teams.
-
-### IP address
-
-The IP address is a single IPv4 address to mimic the [location condition](location-condition.md). The address represents Internet facing address of the device used by your user to sign in. You can verify the IP address of a device by, for example, navigating to [What is my IP address](https://whatismyipaddress.com).
-
-### Device platforms
-
-This setting mimics the [device platforms condition](concept-conditional-access-conditions.md#device-platforms) and represents the equivalent of **All platforms (including unsupported)**.
+You can find the **What If** tool in the Azure portal under **Azure Active Directory** > **Security** > **Conditional Access** > **What If**.
-### Client apps
-This setting mimics the [client apps condition](concept-conditional-access-conditions.md#client-apps).
-By default, this setting causes an evaluation of all policies having **Browser** or **Mobile apps and desktop clients** either individually or both selected. It also detects policies that enforce **Exchange ActiveSync (EAS)**. You can narrow this setting down by selecting:
+Before you can run the What If tool, you must provide the conditions you want to evaluate.
-- **Browser** to evaluate all policies having at least **Browser** selected.
-- **Mobile apps and desktop clients** to evaluate all policies having at least **Mobile apps and desktop clients** selected.
+## Conditions
-### Sign-in risk
+The only condition you must set is the user or workload identity you select. All other conditions are optional. For a definition of these conditions, see the article [Building a Conditional Access policy](concept-conditional-access-policies.md).
-This setting mimics the [sign-in risk condition](concept-conditional-access-conditions.md#sign-in-risk).
-## Evaluation
+## Evaluation
You start an evaluation by clicking **What If**. The evaluation result provides you with a report that consists of:
+- An indicator whether classic policies exist in your environment.
+- Policies that will apply to your user or workload identity.
+- Policies that don't apply to your user or workload identity.
-- An indicator whether classic policies exist in your environment
-- Policies that apply to your user
-- Policies that don't apply to your user
+If [classic policies](policy-migration.md#classic-policies) exist for the selected cloud apps, an indicator is presented to you. By clicking the indicator, you're redirected to the classic policies page. On the classic policies page, you can migrate a classic policy or just disable it. You can return to your evaluation result by closing this page.
-If [classic policies](policy-migration.md#classic-policies) exist for the selected cloud apps, an indicator is presented to you. By clicking the indicator, you are redirected to the classic policies page. On the classic policies page, you can migrate a classic policy or just disable it. You can return to your evaluation result by closing this page.
-On the list of policies that apply to your selected user, you can also find a list of [grant controls](concept-conditional-access-grant.md) and [session controls](concept-conditional-access-session.md) your user must satisfy.
+On the list of policies that apply, you can also find a list of [grant controls](concept-conditional-access-grant.md) and [session controls](concept-conditional-access-session.md) that must be satisfied.
-On the list of policies that don't apply to your user, you can and also find the reasons why these policies don't apply. For each listed policy, the reason represents the first condition that was not satisfied. A possible reason for a policy that is not applied is a disabled policy because they are not further evaluated.
+On the list of policies that don't apply, you can find the reasons why these policies don't apply. For each listed policy, the reason represents the first condition that wasn't satisfied.
## Next steps

- More information about Conditional Access policy application can be found using the policies report-only mode using [Conditional Access insights and reporting](howto-conditional-access-insights-reporting.md).
-- If you are ready to configure Conditional Access policies for your environment, see the [Conditional Access common policies](concept-conditional-access-policy-common.md).
+- If you're ready to configure Conditional Access policies for your environment, see the [Conditional Access common policies](concept-conditional-access-policy-common.md).
active-directory App Resilience Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/app-resilience-continuous-access-evaluation.md
Your app would check for:
- an "error" parameter with the value "insufficient_claims" - a "claims" parameter
+# [.NET](#tab/dotnet)
When these conditions are met, the app can extract and decode the claims challenge using the MSAL.NET `WwwAuthenticateParameters` class.

```csharp
_clientApp = PublicClientApplicationBuilder.Create(App.ClientId)
You can test your application by signing in a user to the application then using the Azure portal to Revoke the user's sessions. The next time the app calls the CAE enabled API, the user will be asked to reauthenticate.
+# [JavaScript](#tab/JavaScript)
+
+When these conditions are met, the app can extract the claims challenge from the API response header as follows:
+
+```javascript
+const authenticateHeader = response.headers.get('www-authenticate');
+const claimsChallenge = authenticateHeader
+ .split(' ')
+ .find((entry) => entry.includes('claims='))
+ .split('claims="')[1]
+ .split('",')[0];
+```
+
+Your app would then use the claims challenge to acquire a new access token for the resource.
+
+```javascript
+let tokenResponse;
+
+try {
+
+ tokenResponse = await msalInstance.acquireTokenSilent({
+ claims: window.atob(claimsChallenge), // decode the base64 string
+ scopes: scopes, // e.g ['User.Read', 'Contacts.Read']
+ account: account, // current active account
+ });
+
+} catch (error) {
+
+ if (error instanceof InteractionRequiredAuthError) {
+
+ tokenResponse = await msalInstance.acquireTokenPopup({
+ claims: window.atob(claimsChallenge), // decode the base64 string
+ scopes: scopes, // e.g ['User.Read', 'Contacts.Read']
+ account: account, // current active account
+ });
+ }
+
+}
+```
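Note that `window.atob` used above is browser-only; in a Node.js environment the equivalent decode can be done with `Buffer`. A minimal sketch, where the encoded string is an illustrative encoding of an empty claims request:

```javascript
// Decode a base64-encoded claims challenge outside the browser.
const decodeClaimsChallenge = (encoded) =>
  Buffer.from(encoded, 'base64').toString('utf8');

// 'eyJhY2Nlc3NfdG9rZW4iOnt9fQ==' encodes {"access_token":{}}
console.log(decodeClaimsChallenge('eyJhY2Nlc3NfdG9rZW4iOnt9fQ=='));
// {"access_token":{}}
```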
+
+Once your application is ready to handle the claims challenge returned by a CAE-enabled resource, you can tell the Microsoft identity platform that your app is CAE-ready by adding a `clientCapabilities` property in the MSAL configuration.
+
+```javascript
+const msalConfig = {
+ auth: {
+ clientId: 'Enter_the_Application_Id_Here',
+ clientCapabilities: ["CP1"]
+ // the remaining settings
+ // ...
+ }
+}
+
+const msalInstance = new PublicClientApplication(msalConfig);
+
+```
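Declaring `clientCapabilities: ["CP1"]` corresponds to sending an **xms_cc** claims request with each token request. A minimal sketch of that claims-request shape (the helper name is hypothetical, not an MSAL API):

```javascript
// Hypothetical helper showing the xms_cc claims-request shape that a
// clientCapabilities setting such as ["CP1"] translates into.
const buildCapabilitiesClaimsRequest = (capabilities) =>
  JSON.stringify({ access_token: { xms_cc: { values: capabilities } } });

console.log(buildCapabilitiesClaimsRequest(['CP1']));
// {"access_token":{"xms_cc":{"values":["CP1"]}}}
```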
+++
+You can test your application by signing in a user and then using the Azure portal to revoke the user's session. The next time the app calls the CAE-enabled API, the user will be asked to reauthenticate.
+## Next steps

- [Continuous access evaluation](../conditional-access/concept-continuous-access-evaluation.md) conceptual overview
- [Claims challenges, claims requests, and client capabilities](claims-challenge.md)
+- [React single-page application using MSAL React to sign-in users against Azure Active Directory](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/2-Authorization-I/1-call-graph)
+- [Enable your ASP.NET Core web app to sign in users and call Microsoft Graph with the Microsoft identity platform](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/2-WebApp-graph-user/2-1-Call-MSGraph)
active-directory Claims Challenge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/claims-challenge.md
The following example claims parameter shows how a client application communicat
Claims: {"access_token":{"xms_cc":{"values":["cp1"]}}}
```
+#### [.NET](#tab/dotnet)
+ Those using the MSAL library will use the following code: ```c#
Those using Microsoft.Identity.Web can add the following code to the configurati
"ClientCapabilities": [ "cp1" ] },
```
+#### [JavaScript](#tab/JavaScript)
+
+Those using MSAL.js can add the `clientCapabilities` property to the configuration object.
+
+```javascript
+const msalConfig = {
+ auth: {
+ clientId: 'Enter_the_Application_Id_Here',
+ clientCapabilities: ["CP1"]
+ // the remaining settings
+ // ...
+ }
+}
+
+const msalInstance = new msal.PublicClientApplication(msalConfig);
+```
++ An example of what the request to Azure AD looks like:
The **xms_cc** claim with a value of "cp1" in the access token is the authoritat
The values are case-insensitive and unordered. If more than one value is specified in the **xms_cc** claim request, the **xms_cc** claim in the access token will contain those values as a multi-valued collection.
-A request of :
+A request of:
```json
{ "access_token": { "xms_cc": { "values": ["cp1", "foo", "bar"] } } }
```
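Because the values are case-insensitive, an API inspecting the resulting **xms_cc** claim should normalize case when testing membership. A minimal sketch (the helper name is illustrative):

```javascript
// Case-insensitive membership test over the multi-valued xms_cc claim.
const hasCapability = (xmsCc, capability) =>
  (xmsCc || []).some((value) => value.toLowerCase() === capability.toLowerCase());

console.log(hasCapability(['cp1', 'foo', 'bar'], 'CP1')); // true
console.log(hasCapability(undefined, 'cp1')); // false
```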
This is how the app's manifest looks like after the **xms_cc** [optional claim](
The API can then customize its responses based on whether the client is capable of handling the claims challenge.
-An example in C#
+### [.NET](#tab/dotnet)
```c#
Claim ccClaim = context.User.FindAll(clientCapabilitiesClaim).FirstOrDefault(x => x.Type == "xms_cc");
else
}
```
+### [JavaScript](#tab/JavaScript)
+
+```javascript
+const checkIsClientCapableOfClaimsChallenge = (req, res, next) => {
+    // req.authInfo contains the decoded access token payload
+    const capabilities = req.authInfo['xms_cc'] || [];
+
+    // xms_cc values are case-insensitive, so normalize before comparing
+    if (capabilities.some((cap) => cap.toLowerCase() === 'cp1')) {
+        // this client understands claims challenges; continue so the route
+        // handler can return a formatted claims challenge when needed
+        return next();
+    }
+
+    return res.status(403).json({ error: 'Client is not capable' });
+}
+```
+++

## Next steps

- [Microsoft identity platform and OAuth 2.0 authorization code flow](v2-oauth2-auth-code-flow.md#request-an-authorization-code)
- [How to use Continuous Access Evaluation enabled APIs in your applications](app-resilience-continuous-access-evaluation.md)
- [Granular Conditional Access for sensitive data and actions](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/granular-conditional-access-for-sensitive-data-and-actions/ba-p/1751775)
+- [React single-page application using MSAL React to sign-in users against Azure Active Directory](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/2-Authorization-I/1-call-graph)
+- [Enable your ASP.NET Core web app to sign in users and call Microsoft Graph with the Microsoft identity platform](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/2-WebApp-graph-user/2-1-Call-MSGraph)
active-directory Cross Tenant Access Settings B2b Collaboration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-settings-b2b-collaboration.md
With outbound settings, you select which of your users and groups will be able t
- When you're done selecting the users and groups you want to add, choose **Select**.

> [!NOTE]
- > When targeting your users and groups, you won't be able to select users who have configured [SMS-based authentication](https://docs.microsoft.com/azure/active-directory/authentication/howto-authentication-sms-signin). This is because users who have a "federated credential" on their user object are blocked to prevent external users from being added to outbound access settings. As a workaround, you can use the [Microsoft Graph API](https://docs.microsoft.com/graph/api/resources/crosstenantaccesspolicy-overview?view=graph-rest-1.0) to add the user's object ID directly or target a group the user belongs to.
+ > When targeting your users and groups, you won't be able to select users who have configured [SMS-based authentication](/azure/active-directory/authentication/howto-authentication-sms-signin). This is because users who have a "federated credential" on their user object are blocked to prevent external users from being added to outbound access settings. As a workaround, you can use the [Microsoft Graph API](/graph/api/resources/crosstenantaccesspolicy-overview?view=graph-rest-1.0) to add the user's object ID directly or target a group the user belongs to.
1. Select the **External applications** tab.
active-directory Active Directory Troubleshooting Support Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-troubleshooting-support-howto.md
Microsoft provides global technical, pre-sales, billing, and subscription suppor
Before creating a support ticket, check out the following resources for answers and information.
-* For content such as how-to information or code samples for IT professionals and developers, see the [technical documentation at docs.microsoft.com](../index.yml).
+* For content such as how-to information or code samples for IT professionals and developers, see the [technical documentation for Azure Active Directory](../index.yml).
* The [Microsoft Technical Community](https://techcommunity.microsoft.com/) is the place for our IT pro partners and customers to collaborate, share, and learn. The [Microsoft Technical Community Info Center](https://techcommunity.microsoft.com/t5/Community-Info-Center/ct-p/Community-Info-Center) is used for announcements, blog posts, ask-me-anything (AMA) interactions with experts, and more. You can also [join the community to submit your ideas](https://techcommunity.microsoft.com/t5/Communities/ct-p/communities).
See the [Contact Microsoft for support](https://portal.office.com/Support/Contac
* [Microsoft Tech Community](https://techcommunity.microsoft.com/)
-* [Technical documentation at docs.microsoft.com](../index.yml)
+* [Technical documentation for Azure Active Directory](../index.yml)
active-directory Scenario Azure First Sap Identity Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/scenario-azure-first-sap-identity-integration.md
While that could be a valid reason for using "User assignment required", it does
#### Summary of implementation
-On the Azure AD Enterprise Application representing the federation relation with IAS, disable "[User assignment required](../manage-apps/assign-user-or-group-access-portal.md)". This also means you can safely skip [assignment of users as detailed in Microsoft Docs](../saas-apps/sap-hana-cloud-platform-identity-authentication-tutorial.md#assign-the-azure-ad-test-user).
+On the Azure AD Enterprise Application representing the federation relation with IAS, disable "[User assignment required](../manage-apps/assign-user-or-group-access-portal.md)". This also means you can safely skip [assignment of users](../saas-apps/sap-hana-cloud-platform-identity-authentication-tutorial.md#assign-the-azure-ad-test-user).
### 3 - Use Azure AD groups for Authorization through Role Collections in IAS/BTP
active-directory Secure With Azure Ad Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-with-azure-ad-best-practices.md
When designing isolated environments, it's important to consider the following p
* **Use only modern authentication** - Applications deployed in isolated environments must use claims-based modern authentication (for example, SAML, OAuth2, and OpenID Connect) to use capabilities such as federation, Azure AD B2B collaboration, delegation, and the consent framework. This way, legacy applications that have a dependency on legacy authentication methods such as NT LAN Manager (NTLM) won't carry forward in isolated environments.
-* **Enforce strong authentication** - Strong authentication must always be used when accessing the isolated environment services and infrastructure. Whenever possible, [passwordless authentication](https://docs.microsoft.com/azure/active-directory/authentication/concept-authentication-passwordless) such as [Windows for Business Hello](https://docs.microsoft.com/windows/security/identity-protection/hello-for-business/hello-overview) or a [FIDO2 security keys](https://docs.microsoft.com/azure/active-directory/authentication/howto-authentication-passwordless-security-key)) should be used.
+* **Enforce strong authentication** - Strong authentication must always be used when accessing the isolated environment services and infrastructure. Whenever possible, use [passwordless authentication](/azure/active-directory/authentication/concept-authentication-passwordless) such as [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-overview) or [FIDO2 security keys](/azure/active-directory/authentication/howto-authentication-passwordless-security-key).
* **Deploy secure workstations** - [Secure workstations](/security/compass/privileged-access-devices) provide the mechanism to ensure that the platform and the identity that platform represents are properly attested and secured against exploitation. Two other approaches to consider are:
In addition to the guidance in the [Azure Active Directory general operations gu
### Privileged Accounts
-Provision accounts in the isolated environment for administrative personnel and IT teams who will be operating the environment. This will enable you to add stronger security policies such as device-based access control for [secure workstations](https://docs.microsoft.com/security/compass/privileged-access-deployment). As discussed in previous sections, non-production environments can potentially utilize Azure AD B2B collaboration to onboard privileged accounts to the non-production tenants using the same posture and security controls designed for privileged access in their production environment.
+Provision accounts in the isolated environment for administrative personnel and IT teams who will be operating the environment. This will enable you to add stronger security policies such as device-based access control for [secure workstations](/security/compass/privileged-access-deployment). As discussed in previous sections, non-production environments can potentially utilize Azure AD B2B collaboration to onboard privileged accounts to the non-production tenants using the same posture and security controls designed for privileged access in their production environment.
Cloud-only accounts are the simplest way to provision human identities in an Azure AD tenant and are a good fit for greenfield environments. However, if there's an existing on-premises infrastructure that corresponds to the isolated environment (for example, a pre-production or management Active Directory forest), you could consider synchronizing identities from there. This holds especially true if the on-premises infrastructure described above is also used for IaaS solutions that require server access to manage the solution data plane. For more information on this scenario, see [Protecting Microsoft 365 from on-premises attacks](../fundamentals/protect-m365-from-on-premises-attacks.md). Synchronizing from isolated on-premises environments might also be needed if there are specific regulatory compliance requirements such as smart-card only authentication.
Provision [emergency access accounts](../roles/security-emergency-access.md) for
Use [Azure managed identities](../managed-identities-azure-resources/overview.md) for Azure resources that require a service identity. Check the [list of services that support managed identities](../managed-identities-azure-resources/managed-identities-status.md) when designing your Azure solutions.
-If managed identities aren't supported or not possible, consider [provisioning service principal objects](https://docs.microsoft.com/azure/active-directory/develop/app-objects-and-service-principals).
+If managed identities aren't supported or possible, consider [provisioning service principal objects](/azure/active-directory/develop/app-objects-and-service-principals).
### Hybrid service accounts
All human identities (local accounts and external identities provisioned through
#### Passwordless credentials
-A [passwordless solution](../authentication/concept-authentication-passwordless.md) is the best solution for ensuring the most convenient and secure method of authentication. Passwordless credentials such as [FIDO security keys](../authentication/howto-authentication-passwordless-security-key.md) and [Windows Hello for Business](https://docs.microsoft.com/windows/security/identity-protection/hello-for-business/hello-overview) are recommended for human identities with privileged roles.
+A [passwordless solution](../authentication/concept-authentication-passwordless.md) is the best solution for ensuring the most convenient and secure method of authentication. Passwordless credentials such as [FIDO security keys](../authentication/howto-authentication-passwordless-security-key.md) and [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-overview) are recommended for human identities with privileged roles.
#### Password protection
Check this example to [create service principals with self-signed certificate](.
### Access policies
-Below are some specific recommendations for Azure solutions. For general guidance on Conditional Access policies for individual environments, check the [CA Best practices](../conditional-access/overview.md), [Azure AD Operations Guide](../fundamentals/active-directory-ops-guide-auth.md), and [Conditional Access for Zero Trust](https://docs.microsoft.com/azure/architecture/guide/security/conditional-access-zero-trust):
+Below are some specific recommendations for Azure solutions. For general guidance on Conditional Access policies for individual environments, check the [CA Best practices](../conditional-access/overview.md), [Azure AD Operations Guide](../fundamentals/active-directory-ops-guide-auth.md), and [Conditional Access for Zero Trust](/azure/architecture/guide/security/conditional-access-zero-trust):
* Define [Conditional Access policies](../conditional-access/workload-identity.md) for the [Microsoft Azure Management](../authentication/howto-password-smart-lockout.md) cloud app to enforce identity security posture when accessing Azure Resource Manager. This should include controls on MFA and device-based controls to enable access only through secure workstations (more on this in the Privileged Roles section under Identity Governance). Additionally, use [Conditional Access to filter for devices](../conditional-access/concept-condition-filters-for-devices.md).
Below are some specific recommendations for Azure solutions. For general guidanc
* Define Conditional Access policies for [security information registration](../conditional-access/howto-conditional-access-policy-registration.md) that reflects a secure root of trust process on-premises (for example, for workstations in physical locations, identifiable by IP addresses, that employees must visit in person for verification).
-* Consider managing Conditional Access policies at scale with automation using [MS Graph CA API](https://docs.microsoft.com/azure/active-directory/conditional-access/howto-conditional-access-apis)). For example, you can use the API to configure, manage, and monitor CA policies consistently across tenants.
+* Consider managing Conditional Access policies at scale with automation using the [MS Graph CA API](/azure/active-directory/conditional-access/howto-conditional-access-apis). For example, you can use the API to configure, manage, and monitor CA policies consistently across tenants.
* Consider using Conditional Access to restrict workload identities. Create a policy to limit or better control access based on location or other relevant circumstances.
Below are some considerations when designing a governed subscription lifecycle p
## Operations
-The following are additional operational considerations for Azure AD, specific to multiple isolated environments. Check the [Azure Cloud Adoption Framework](https://docs.microsoft.com/azure/cloud-adoption-framework/manage/), [Azure Security Benchmark](/security/benchmark/azure/) and [Azure AD Operations guide](https://docs.microsoft.com/azure/active-directory/fundamentals/active-directory-ops-guide-ops) for detailed guidance to operate individual environments.
+The following are additional operational considerations for Azure AD, specific to multiple isolated environments. Check the [Azure Cloud Adoption Framework](/azure/cloud-adoption-framework/manage/), [Azure Security Benchmark](/security/benchmark/azure/) and [Azure AD Operations guide](/azure/active-directory/fundamentals/active-directory-ops-guide-ops) for detailed guidance to operate individual environments.
### Cross-environment roles and responsibilities
The following scenarios must be explicitly monitored and investigated:
* Assignment to Azure resources using dedicated accounts for MCA billing tasks.
-* **Privileged role activity** - Configure and review security [alerts generated by Azure AD PIM](https://docs.microsoft.com/azure/active-directory/privileged-identity-management/pim-how-to-configure-security-alerts). If locking down direct RBAC assignments isn't fully enforceable with technical controls (for example, Owner role has to be granted to product teams to do their job), then monitor direct assignment of privileged roles outside PIM by generating alerts whenever a user is assigned directly to access the subscription with Azure RBAC.
+* **Privileged role activity** - Configure and review security [alerts generated by Azure AD PIM](/azure/active-directory/privileged-identity-management/pim-how-to-configure-security-alerts). If locking down direct RBAC assignments isn't fully enforceable with technical controls (for example, Owner role has to be granted to product teams to do their job), then monitor direct assignment of privileged roles outside PIM by generating alerts whenever a user is assigned directly to access the subscription with Azure RBAC.
* **Classic role assignments** - Organizations should use the modern Azure RBAC role infrastructure instead of the classic roles. As a result, the following events should be monitored:
active-directory Secure With Azure Ad Fundamentals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-with-azure-ad-fundamentals.md
Non-production environments are commonly referred to as sandbox environments.
* Devices
-**Human identities** are user objects that generally represent people in an organization. These identities are either created and managed directly in Azure AD or are synchronized from an on-premises Active Directory to Azure AD for a given organization. These types of identities are referred to as **local identities**. There can also be user objects invited from a partner organization or a social identity provider using [Azure AD B2B collaboration](https://docs.microsoft.com/azure/active-directory/external-identities/what-is-b2b). In this content, we refer to these types of identity as **external identities**.
+**Human identities** are user objects that generally represent people in an organization. These identities are either created and managed directly in Azure AD or are synchronized from an on-premises Active Directory to Azure AD for a given organization. These types of identities are referred to as **local identities**. There can also be user objects invited from a partner organization or a social identity provider using [Azure AD B2B collaboration](/azure/active-directory/external-identities/what-is-b2b). In this content, we refer to these types of identity as **external identities**.
-**Non-human identities** include any identity not associated with a human. This type of identity is an object such as an application that requires an identity to run. In this content, we refer to this type of identity as a **workload identity**. Various terms are used to describe this type of identity, including [application objects and service principals](https://docs.microsoft.com/azure/marketplace/manage-aad-apps).
+**Non-human identities** include any identity not associated with a human. This type of identity is an object such as an application that requires an identity to run. In this content, we refer to this type of identity as a **workload identity**. Various terms are used to describe this type of identity, including [application objects and service principals](/azure/marketplace/manage-aad-apps).
* **Application object**. An Azure AD application is defined by its one and only application object. The object resides in the Azure AD tenant where the application registered. The tenant is known as the application's "home" tenant.
Non-production environments are commonly referred to as sandbox environments.
* **Multi-tenant** applications allow identities from any Azure AD tenant to authenticate.
-* **Service principal object**. Although there are [exceptions](https://docs.microsoft.com/azure/marketplace/manage-aad-apps), application objects can be considered the *definition* of an application. Service principal objects can be considered an instance of an application. Service principals generally reference an application object, and one application object can be referenced by multiple service principals across directories.
+* **Service principal object**. Although there are [exceptions](/azure/marketplace/manage-aad-apps), application objects can be considered the *definition* of an application. Service principal objects can be considered an instance of an application. Service principals generally reference an application object, and one application object can be referenced by multiple service principals across directories.
**Service principal objects** are also directory identities that can perform tasks independently from human intervention. The service principal defines the access policy and permissions for a user or application in the Azure AD tenant. This mechanism enables core features such as authentication of the user or application during sign-in and authorization during resource access.
-Azure AD allows application and service principal objects to authenticate with a password (also known as an application secret), or with a certificate. The use of passwords for service principals is discouraged and [we recommend using a certificate](https://docs.microsoft.com/azure/active-directory/develop/howto-create-service-principal-portal) whenever possible.
+Azure AD allows application and service principal objects to authenticate with a password (also known as an application secret), or with a certificate. The use of passwords for service principals is discouraged and [we recommend using a certificate](/azure/active-directory/develop/howto-create-service-principal-portal) whenever possible.
-* **Managed identities for Azure resources**. Managed identities are special service principals in Azure AD. This type of service principal can be used to authenticate against services that support Azure AD authentication without needing to store credentials in your code or handle secrets management. For more information, see [What are managed identities for Azure resources?](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/overview)
+* **Managed identities for Azure resources**. Managed identities are special service principals in Azure AD. This type of service principal can be used to authenticate against services that support Azure AD authentication without needing to store credentials in your code or handle secrets management. For more information, see [What are managed identities for Azure resources?](/azure/active-directory/managed-identities-azure-resources/overview)
* **Device identity**: A device identity is an identity that verifies that the device being used in the authentication flow has undergone a process to attest that the device is legitimate and meets the technical requirements specified by the organization. Once the device has successfully completed this process, the associated identity can be used to further control access to an organization's resources. With Azure AD, devices can authenticate with a certificate.

Some legacy scenarios required a human identity to be used in *non-human* scenarios. For example, when service accounts used in on-premises applications such as scripts or batch jobs require access to Azure AD. This pattern isn't recommended and we recommend you use [certificates](../authentication/concept-certificate-based-authentication-technical-deep-dive.md). However, if you do use a human identity with a password for authentication, protect your Azure AD accounts with [Azure Active Directory Multi-Factor Authentication](../authentication/concept-mfa-howitworks.md).
-**Hybrid identity**. A hybrid identity is an identity that spans on-premises and cloud environments. This provides the benefit of being able to use the same identity to access on-premises and cloud resources. The source of authority in this scenario is typically an on-premises directory, and the identity lifecycle around provisioning, de-provisioning and resource assignment is also driven from on-premises. For more information, see [Hybrid identity documentation](https://docs.microsoft.com/azure/active-directory/hybrid/).
+**Hybrid identity**. A hybrid identity is an identity that spans on-premises and cloud environments. This provides the benefit of being able to use the same identity to access on-premises and cloud resources. The source of authority in this scenario is typically an on-premises directory, and the identity lifecycle around provisioning, de-provisioning and resource assignment is also driven from on-premises. For more information, see [Hybrid identity documentation](/azure/active-directory/hybrid/).
**Directory objects**. An Azure AD tenant contains the following common objects:
Some legacy scenarios required a human identity to be used in *non-human* scenar
* **Azure AD Domain Joined**. Devices that are owned by the organization and joined to the organization's Azure AD tenant. Typically a device purchased and managed by an organization that is joined to Azure AD and managed by a service such as [Microsoft Intune](https://www.microsoft.com/microsoft-365/enterprise-mobility-security/microsoft-intune).
- * **Azure AD Registered**. Devices not owned by the organization, for example, a personal device, used to access company resources. Organizations may require the device be enrolled via [Mobile Device Management (MDM)](https://www.microsoft.com/itshowcase/mobile-device-management-at-microsoft), or enforced through [Mobile Application Management (MAM)](https://docs.microsoft.com/office365/enterprise/office-365-client-support-mobile-application-management) without enrollment to access resources. This capability can be provided by a service such as Microsoft Intune.
+ * **Azure AD Registered**. Devices not owned by the organization, for example, a personal device, used to access company resources. Organizations may require the device be enrolled via [Mobile Device Management (MDM)](https://www.microsoft.com/itshowcase/mobile-device-management-at-microsoft), or enforced through [Mobile Application Management (MAM)](/office365/enterprise/office-365-client-support-mobile-application-management) without enrollment to access resources. This capability can be provided by a service such as Microsoft Intune.
* **Group objects** contain objects for the purposes of assigning resource access, applying controls, or configuration. Group objects contain attributes that have the required information about the group including the name, description, group members, group owners, and the group type. Groups in Azure AD take multiple forms based on an organization's requirements and can be mastered in Azure AD or synchronized from on-premises Active Directory Domain Services (AD DS).
Azure AD provides industry-leading strong authentication options that organizati
**Application access policies**. Azure AD provides capabilities to further control and secure access to your organization's applications.
-**Conditional Access**. Azure AD Conditional Access policies are tools to bring user and device context into the authorization flow when accessing Azure AD resources. Organizations should explore use of Conditional Access policies to allow, deny, or enhance authentication based on user, risk, device, and network context. For more information, see the [Azure AD Conditional Access documentation](https://docs.microsoft.com/azure/active-directory/conditional-access/).
+**Conditional Access**. Azure AD Conditional Access policies are tools to bring user and device context into the authorization flow when accessing Azure AD resources. Organizations should explore use of Conditional Access policies to allow, deny, or enhance authentication based on user, risk, device, and network context. For more information, see the [Azure AD Conditional Access documentation](/azure/active-directory/conditional-access/).
**Azure AD Identity Protection**. This feature enables organizations to automate the detection and remediation of identity-based risks, investigate risks, and export risk detection data to third-party utilities for further analysis. For more information, see [overview on Azure AD Identity Protection](../identity-protection/overview-identity-protection.md).
Azure AD provides industry-leading strong authentication options that organizati
Azure AD also provides a portal and the Microsoft Graph API to allow organizations to manage identities or integrate Azure AD identity management into existing workflows or automation. To learn more about Microsoft Graph, see [Use the Microsoft Graph API](/graph/use-the-api).
-**Device management**. Azure AD is used to manage the lifecycle and integration with cloud and on-premises device management infrastructures. It also is used to define policies to control access from cloud or on-premises devices to your organizational data. Azure AD provides the lifecycle services of devices in the directory and the credential provisioning to enable authentication. It also manages a key attribute of a device in the system that is the level of trust. This detail is important when designing a resource access policy. For more information, see [Azure AD Device Management documentation](https://docs.microsoft.com/azure/active-directory/devices/).
+**Device management**. Azure AD is used to manage the lifecycle and integration with cloud and on-premises device management infrastructures. It also is used to define policies to control access from cloud or on-premises devices to your organizational data. Azure AD provides the lifecycle services of devices in the directory and the credential provisioning to enable authentication. It also manages a key attribute of a device in the system that is the level of trust. This detail is important when designing a resource access policy. For more information, see [Azure AD Device Management documentation](/azure/active-directory/devices/).
-**Configuration management**. Azure AD has service elements that need to be configured and managed to ensure the service is configured to an organization's requirements. These elements include domain management, SSO configuration, and application management to name but a few. Azure AD provides a portal and the Microsoft Graph API to allow organizations to manage these elements or integrate into existing processes. To learn more about Microsoft Graph, see [Use the Microsoft Graph API](https://docs.microsoft.com/graph/use-the-api).
+**Configuration management**. Azure AD has service elements that need to be configured and managed to ensure the service is configured to an organization's requirements. These elements include domain management, SSO configuration, and application management to name but a few. Azure AD provides a portal and the Microsoft Graph API to allow organizations to manage these elements or integrate into existing processes. To learn more about Microsoft Graph, see [Use the Microsoft Graph API](/graph/use-the-api).
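The sections above mention using the Microsoft Graph API to manage identities and configuration. As a small sketch, the snippet below builds (but does not send) a paged `GET /v1.0/users` request URL; a real call additionally needs an OAuth bearer token, which is omitted here.

```python
# Construct a Microsoft Graph "list users" URL with $select and $top.
# Request is only built, not sent.
from urllib.parse import urlencode

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def list_users_url(select=("id", "displayName", "userPrincipalName"),
                   top=50) -> str:
    """Return the URL for a paged user listing with a narrow projection."""
    params = {"$select": ",".join(select), "$top": str(top)}
    return f"{GRAPH_BASE}/users?{urlencode(params)}"

url = list_users_url()
print(url)
```

Responses from this endpoint are paged; follow-up pages are fetched via the `@odata.nextLink` URL Graph returns.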
### Governance
Azure AD also provides a portal and the Microsoft Graph API to allow organizatio
* Applications used to access
-Azure AD also provides information on the actions that are being performed within Azure AD, and reports on security risks. For more information, see [Azure Active Directory reports and monitoring](https://docs.microsoft.com/azure/active-directory/reports-monitoring/).
+Azure AD also provides information on the actions that are being performed within Azure AD, and reports on security risks. For more information, see [Azure Active Directory reports and monitoring](/azure/active-directory/reports-monitoring/).
**Auditing**. Auditing provides traceability through logs for all changes done by specific features within Azure AD. Examples of activities found in audit logs include changes made to any resources within Azure AD like adding or removing users, apps, groups, roles, and policies. Reporting in Azure AD enables you to audit sign-in activities, risky sign-ins, and users flagged for risk. For more information, see [Audit activity reports in the Azure Active Directory portal](../reports-monitoring/concept-audit-logs.md).
active-directory Secure With Azure Ad Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-with-azure-ad-introduction.md
Having a set of directory objects in the Azure AD tenant boundary engenders the
## Administrative units for role management
-Administrative units restrict permissions in a role to any portion of your organization that you define. You could, for example, use administrative units to delegate the [Helpdesk Administrator](https://docs.microsoft.com/azure/active-directory/roles/permissions-reference) role to regional support specialists, so they can manage users only in the region that they support. An administrative unit is an Azure AD resource that can be a container for other Azure AD resources. An administrative unit can contain only:
+Administrative units restrict permissions in a role to any portion of your organization that you define. You could, for example, use administrative units to delegate the [Helpdesk Administrator](/azure/active-directory/roles/permissions-reference) role to regional support specialists, so they can manage users only in the region that they support. An administrative unit is an Azure AD resource that can be a container for other Azure AD resources. An administrative unit can contain only:
* Users
In the following diagram, administrative units are used to segment the Azure AD
![Diagram that shows Azure AD Administrative units.](media/secure-with-azure-ad-introduction/administrative-units.png)
-For more information on administrative units, see [Administrative units in Azure Active Directory](https://docs.microsoft.com/azure/active-directory/roles/administrative-units).
+For more information on administrative units, see [Administrative units in Azure Active Directory](/azure/active-directory/roles/administrative-units).
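To make the delegation described above concrete, here is a hedged sketch of the two Graph payloads involved: creating an administrative unit (`POST /v1.0/directory/administrativeUnits`) and adding a user as a member via an `@odata.id` reference. The unit name and user ID are made-up examples, and the requests are only constructed, not sent.

```python
# Build (but do not send) Graph payloads for an administrative unit
# and a member reference. The IDs below are placeholders.
GRAPH = "https://graph.microsoft.com/v1.0"

def new_admin_unit(name: str, description: str) -> dict:
    """Payload for POST /directory/administrativeUnits."""
    return {"displayName": name, "description": description}

def member_ref(user_id: str) -> dict:
    """Payload for POST /directory/administrativeUnits/{au-id}/members/$ref."""
    return {"@odata.id": f"{GRAPH}/users/{user_id}"}

au = new_admin_unit("Seattle Helpdesk",
                    "Users supported by the Seattle team")
ref = member_ref("33333333-3333-3333-3333-333333333333")
```

A scoped role assignment (for example, Helpdesk Administrator over this unit) would then be a separate request against the administrative unit's `scopedRoleMembers`.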
### Common reasons for resource isolation
Configuration settings in Azure AD can impact any resource in the Azure AD tenan
* Bypass security requirements
>[!NOTE]
->Using [Named Locations](https://docs.microsoft.com/azure/active-directory/conditional-access/location-condition) can present some challenges to your [zero-trust journey](https://www.microsoft.com/security/business/zero-trust). Verify that using Named Locations fits into your security strategy and principles.
+>Using [Named Locations](/azure/active-directory/conditional-access/location-condition) can present some challenges to your [zero-trust journey](https://www.microsoft.com/security/business/zero-trust). Verify that using Named Locations fits into your security strategy and principles.
* **Allowed authentication methods**. Global Administrators set the authentication methods allowed for the tenant.
* **Self-service options**. Global Administrators set self-service options such as self-service password reset and create Microsoft 365 groups at the tenant level.
Administrators manage how identity objects can access resources, and under what
* Security groups
- * [Microsoft 365 groups](https://docs.microsoft.com/microsoft-365/community/all-about-groups)
+ * [Microsoft 365 groups](/microsoft-365/community/all-about-groups)
* Dynamic Groups
Who should have the ability to administer the environment and its resources? The
Given the interdependence between an Azure AD tenant and its resources, it's critical to understand the security and operational risks of compromise or error. If you're operating in a federated environment with synchronized accounts, an on-premises compromise can lead to an Azure AD compromise.
-* **Identity compromise** - Within the boundary of a tenant, any identity can be assigned any role, given the one providing access has sufficient privileges. While the impact of compromised non-privileged identities is largely contained, compromised administrators can have broad impact. For example, if an Azure AD global administrator account is compromised, Azure resources can become compromised. To mitigate risk of identity compromise, or bad actors, implement [tiered administration](https://docs.microsoft.com/security/compass/privileged-access-access-model) and ensure that you follow principles of least privilege for [Azure AD Administrator Roles](https://docs.microsoft.com/azure/active-directory/roles/delegate-by-task). Similarly, ensure that you create CA policies that specifically exclude test accounts and test service principals from accessing resources outside of the test applications. For more information on privileged access strategy, see [Privileged access: Strategy](https://docs.microsoft.com/security/compass/privileged-access-strategy).
+* **Identity compromise** - Within the boundary of a tenant, any identity can be assigned any role, provided that the identity granting access has sufficient privileges. While the impact of compromised non-privileged identities is largely contained, compromised administrators can have broad impact. For example, if an Azure AD global administrator account is compromised, Azure resources can become compromised. To mitigate the risk of identity compromise, or bad actors, implement [tiered administration](/security/compass/privileged-access-access-model) and ensure that you follow principles of least privilege for [Azure AD Administrator Roles](/azure/active-directory/roles/delegate-by-task). Similarly, ensure that you create Conditional Access policies that specifically exclude test accounts and test service principals from accessing resources outside of the test applications. For more information on privileged access strategy, see [Privileged access: Strategy](/security/compass/privileged-access-strategy).
* **Federated environment compromise**
-* **Trusting resource compromise** - Human identities aren't the only security consideration. Any compromised component of the Azure AD tenant can impact trusting resources based on its level of permissions at the tenant and resource level. The impact of a compromised component of an Azure AD trusting resource is determined by the privileges of the resource; resources that are deeply integrated with the directory to perform write operations can have profound impact in the entire tenant. Following [guidance for zero trust](https://docs.microsoft.com/azure/architecture/guide/security/conditional-access-zero-trust) can help limit the impact of compromise.
+* **Trusting resource compromise** - Human identities aren't the only security consideration. Any compromised component of the Azure AD tenant can impact trusting resources based on its level of permissions at the tenant and resource level. The impact of a compromised component of an Azure AD trusting resource is determined by the privileges of the resource; resources that are deeply integrated with the directory to perform write operations can have profound impact in the entire tenant. Following [guidance for zero trust](/azure/architecture/guide/security/conditional-access-zero-trust) can help limit the impact of compromise.
* **Application development** - Early stages of the development lifecycle for applications with writing privileges to Azure AD, where bugs can unintentionally write changes to the Azure AD objects, present a risk. Follow [Microsoft Identity platform best practices](../develop/identity-platform-integration-checklist.md) during development to mitigate these risks.
active-directory Secure With Azure Ad Multiple Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-with-azure-ad-multiple-tenants.md
Another approach could have been to utilize the capabilities of Azure AD Connect
## Multi-tenant resource isolation
-A new tenant provides the ability to have a separate set of administrators. Organizations can choose to use corporate identities through [Azure AD B2B collaboration](https://docs.microsoft.com/azure/active-directory/external-identities/what-is-b2b). Similarly, organizations can implement [Azure Lighthouse](https://docs.microsoft.com/azure/lighthouse/overview) for cross-tenant management of Azure resources so that non-production Azure subscriptions can be managed by identities in the production counterpart. Azure Lighthouse can't be used to manage services outside of Azure, such as Intune or Microsoft Endpoint Manager. For Managed Service Providers (MSPs), [Microsoft 365 Lighthouse](/microsoft-365/lighthouse/m365-lighthouse-overview?view=o365-worldwide) is an admin portal that helps secure and manage devices, data, and users at scale for small- and medium-sized business (SMB) customers who are using Microsoft 365 Business Premium, Microsoft 365 E3, or Windows 365 Business.
+A new tenant provides the ability to have a separate set of administrators. Organizations can choose to use corporate identities through [Azure AD B2B collaboration](/azure/active-directory/external-identities/what-is-b2b). Similarly, organizations can implement [Azure Lighthouse](/azure/lighthouse/overview) for cross-tenant management of Azure resources so that non-production Azure subscriptions can be managed by identities in the production counterpart. Azure Lighthouse can't be used to manage services outside of Azure, such as Intune or Microsoft Endpoint Manager. For Managed Service Providers (MSPs), [Microsoft 365 Lighthouse](/microsoft-365/lighthouse/m365-lighthouse-overview?view=o365-worldwide) is an admin portal that helps secure and manage devices, data, and users at scale for small- and medium-sized business (SMB) customers who are using Microsoft 365 Business Premium, Microsoft 365 E3, or Windows 365 Business.
This approach allows users to continue using their corporate credentials while achieving the benefits of separation described above.
active-directory Secure With Azure Ad Resource Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-with-azure-ad-resource-management.md
Before any resource management request can be executed by Resource Manager, a se
* **Valid user check** - The user requesting to manage the resource must have an account in the Azure AD tenant associated with the subscription of the managed resource.
-* **User permission check** - Permissions are assigned to users using [role-based access control (RBAC)](https://docs.microsoft.com/azure/role-based-access-control/overview). An RBAC role specifies a set of permissions a user may take on a specific resource. RBAC helps you manage who has access to Azure resources, what they can do with those resources, and what areas they have access to.
+* **User permission check** - Permissions are assigned to users using [role-based access control (RBAC)](/azure/role-based-access-control/overview). An RBAC role specifies a set of permissions a user may take on a specific resource. RBAC helps you manage who has access to Azure resources, what they can do with those resources, and what areas they have access to.
* **Azure policy check** - [Azure policies](../../governance/policy/overview.md) specify the operations allowed or explicitly denied for a specific resource. For example, a policy can specify that users are only allowed (or not allowed) to deploy a specific type of virtual machine.
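The permission check described above maps to a concrete ARM REST call: a role assignment is a `PUT` to the `Microsoft.Authorization/roleAssignments` provider at the target scope. The sketch below only builds the URL and body (no call is made); the subscription and principal GUIDs are placeholders, and the role definition ID shown is the well-known built-in Reader role.

```python
# Illustrative sketch, not a definitive ARM client: build the PUT request
# for an Azure RBAC role assignment at subscription scope.
import uuid

def role_assignment_request(subscription_id: str, role_definition_id: str,
                            principal_id: str):
    """Return (url, body) for PUT .../roleAssignments/{new-guid}."""
    scope = f"/subscriptions/{subscription_id}"
    assignment_id = str(uuid.uuid4())  # caller-chosen name for the assignment
    url = (f"https://management.azure.com{scope}/providers/"
           f"Microsoft.Authorization/roleAssignments/{assignment_id}"
           f"?api-version=2022-04-01")
    body = {
        "properties": {
            "roleDefinitionId": (f"{scope}/providers/Microsoft.Authorization/"
                                 f"roleDefinitions/{role_definition_id}"),
            "principalId": principal_id,
        }
    }
    return url, body

url, body = role_assignment_request(
    "11111111-1111-1111-1111-111111111111",   # placeholder subscription
    "acdd72a7-3385-48ef-bd42-f606fba81ae7",   # built-in Reader role definition
    "22222222-2222-2222-2222-222222222222",   # placeholder principal (user/SP)
)
```

Scoping the same body to a resource group or a single resource is just a matter of extending the `scope` path, which is why least-privilege assignments at narrow scopes are practical.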
Subscriptions that enable [delegated resource management](../../lighthouse/conce
It's worth noting that Azure Lighthouse itself is modeled as an Azure resource provider, which means that aspects of the delegation across a tenant can be targeted through Azure Policies.
-**Microsoft 365 Lighthouse** - [Microsoft 365 Lighthouse](https://docs.microsoft.com/microsoft-365/lighthouse/m365-lighthouse-overview?view=o365-worldwide) is an admin portal that helps Managed Service Providers (MSPs) secure and manage devices, data, and users at scale for small- and medium-sized business (SMB) customers who are using Microsoft 365 Business Premium, Microsoft 365 E3, or Windows 365 Business.
+**Microsoft 365 Lighthouse** - [Microsoft 365 Lighthouse](/microsoft-365/lighthouse/m365-lighthouse-overview?view=o365-worldwide) is an admin portal that helps Managed Service Providers (MSPs) secure and manage devices, data, and users at scale for small- and medium-sized business (SMB) customers who are using Microsoft 365 Business Premium, Microsoft 365 E3, or Windows 365 Business.
## Azure resource management with Azure AD
Conditional Access: A key benefit of using Azure AD for signing into Azure virtu
**Challenges**: The list below highlights key challenges with using this option for identity isolation.
-* No central management or configuration of servers. For example, there's no Group Policy that can be applied to a group of servers. Organizations should consider deploying [Update Management in Azure](https://docs.microsoft.com/azure/automation/update-management/overview) to manage patching and updates of these servers.
+* No central management or configuration of servers. For example, there's no Group Policy that can be applied to a group of servers. Organizations should consider deploying [Update Management in Azure](/azure/automation/update-management/overview) to manage patching and updates of these servers.
* Not suitable for multi-tiered applications that have requirements to authenticate with on-premises mechanisms such as Windows Integrated Authentication across these servers or services. If this is a requirement for the organization, it's recommended that you explore the standalone Active Directory Domain Services or the Azure Active Directory Domain Services scenarios described in this section.
active-directory Secure With Azure Ad Single Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-with-azure-ad-single-tenant.md
Many separation scenarios can be achieved within a single tenant. If possible, w
If a set of resources requires unique tenant-wide settings, if there is minimal risk tolerance for unauthorized access by tenant members, or if critical impact could be caused by configuration changes, you must achieve isolation in multiple tenants.
-**Configuration separation** - In some cases, resources such as applications have dependencies on tenant-wide configurations like authentication methods or [named locations](https://docs.microsoft.com/azure/active-directory/conditional-access/location-condition#named-locations). You should consider these dependencies when isolating resources. Global administrators can configure the resource settings and tenant-wide settings that affect resources.
+**Configuration separation** - In some cases, resources such as applications have dependencies on tenant-wide configurations like authentication methods or [named locations](/azure/active-directory/conditional-access/location-condition#named-locations). You should consider these dependencies when isolating resources. Global administrators can configure the resource settings and tenant-wide settings that affect resources.
If a set of resources requires unique tenant-wide settings, or the tenant's settings must be administered by a different entity, you must achieve isolation with multiple tenants.
If you must ensure full isolation (including staging of organization-level confi
Azure RBAC allows you to design an administration model with granular scopes and surface area. Consider the management hierarchy in the following example:
>[!NOTE]
->There are multiple ways to define the management hierarchy based on an organization's individual requirements, constraints, and goals. For more information, consult the Cloud Adoption Framework guidance on how to [Organize Azure Resources](https://docs.microsoft.com/azure/cloud-adoption-framework/ready/azure-setup-guide/organize-resources)).
+>There are multiple ways to define the management hierarchy based on an organization's individual requirements, constraints, and goals. For more information, consult the Cloud Adoption Framework guidance on how to [Organize Azure Resources](/azure/cloud-adoption-framework/ready/azure-setup-guide/organize-resources).
![Diagram that shows resource isolation in a single tenant.](media/secure-with-azure-ad-single-tenant/azure-ad-resource-hierarchy.png)
This is a hierarchical structure, so the higher up in the hierarchy, the more sc
Both top-level scopes should be strictly monitored. It is important to plan for other dimensions of resource isolation such as networking. For general guidance on Azure networking, see [Azure best practices for network security](../../security/fundamentals/network-best-practices.md). Infrastructure as a Service (IaaS) workloads have special scenarios where both identity and resource isolation need to be part of the overall design and strategy.
-Consider isolating sensitive or test resources according to [Azure landing zone conceptual architecture](https://docs.microsoft.com/azure/cloud-adoption-framework/ready/landing-zone/). For example, Identity subscription should be assigned to separated management group and all subscriptions for development purposes could be separated in "Sandbox" management group. More details can be found in the [Enterprise-Scale documentation](https://docs.microsoft.com/azure/cloud-adoption-framework/ready/enterprise-scale/faq). Separation for testing purposes within a single tenant is also considered in the [management group hierarchy of the reference architecture](https://docs.microsoft.com/azure/cloud-adoption-framework/ready/enterprise-scale/testing-approach).
+Consider isolating sensitive or test resources according to the [Azure landing zone conceptual architecture](/azure/cloud-adoption-framework/ready/landing-zone/). For example, the Identity subscription should be assigned to a separate management group, and all subscriptions for development purposes could be separated into a "Sandbox" management group. More details can be found in the [Enterprise-Scale documentation](/azure/cloud-adoption-framework/ready/enterprise-scale/faq). Separation for testing purposes within a single tenant is also considered in the [management group hierarchy of the reference architecture](/azure/cloud-adoption-framework/ready/enterprise-scale/testing-approach).
### Scoped management for Azure AD trusting applications
active-directory Support Help Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/support-help-options.md
If you need an answer to a question or help in solving a problem not covered in
## Create an Azure support request
<div class='icon is-large'>
- <img alt='Azure support' src='https://docs.microsoft.com/media/logos/logo_azure.svg'>
+ <img alt='Azure support' src='/media/logos/logo_azure.svg'>
</div>
Explore the range of [Azure support options and choose the plan](https://azure.microsoft.com/support/plans) that best fits, whether you're an IT admin managing your organization's tenant, a developer just starting your cloud journey, or a large organization deploying business-critical, strategic applications. Azure customers can create and manage support requests in the Azure portal.
If you can't find an answer to your problem by searching Microsoft Q&A, submit a
## Stay informed of updates and new releases
<div class='icon is-large'>
- <img alt='Stay informed' src='https://docs.microsoft.com/media/common/i_blog.svg'>
+ <img alt='Stay informed' src='/media/common/i_blog.svg'>
</div>
- [Azure Updates](https://azure.microsoft.com/updates/?category=identity): Learn about important product updates, roadmap, and announcements.
If you can't find an answer to your problem by searching Microsoft Q&A, submit a
- [Azure Active Directory Identity Blog](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/bg-p/Identity): Get news and information about Azure AD.
-- [Tech Community](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/bg-p/Identity/): Share your experiences, engage and learn from experts.
+- [Tech Community](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/bg-p/Identity/): Share your experiences, engage and learn from experts.
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
With phone number sign-up and sign-in, developers and enterprises can allow thei
You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
-- [Promapp]( ../saas-apps/promapp-provisioning-tutorial.md)
+- [Promapp](../saas-apps/promapp-provisioning-tutorial.md)
- [Zscaler Private Access](../saas-apps/zscaler-private-access-provisioning-tutorial.md)
For more information about how to better secure your organization by using automated user account provisioning, see [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
A hotfix roll-up package (build 4.4.1642.0) is available as of September 25, 201
For more information, see [Hotfix rollup package (build 4.4.1642.0) is available for Identity Manager 2016 Service Pack 1](https://support.microsoft.com/help/4021562).
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
This page is updated monthly, so revisit it regularly. If you're looking for ite
**Service category:** Enterprise Apps
**Product capability:** SSO
-Users can now configure multiple instances of the same application within an Azure AD tenant. It's now supported for both IdP, and Service Provider (SP), initiated single sign-on requests. Multiple application accounts can now have a separate service principle to handle instance-specific claims mapping and roles assignment. For more information, see:
+Users can now configure multiple instances of the same application within an Azure AD tenant. It's now supported for both IdP-initiated and Service Provider (SP)-initiated single sign-on requests. Multiple application accounts can now have a separate service principal to handle instance-specific claims mapping and role assignment. For more information, see:
- [Configure SAML app multi-instancing for an application - Microsoft Entra | Microsoft Docs](../develop/reference-app-multi-instancing.md)
- [Customize app SAML token claims - Microsoft Entra | Microsoft Docs](../develop/active-directory-saml-claims-customization.md)
active-directory Identity Governance Applications Existing Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-applications-existing-users.md
Before you create new assignments, configure [provisioning of Azure AD users](..
* If the application uses an LDAP directory, follow the [guide for configuring Azure AD to provision users into LDAP directories](../app-provisioning/on-premises-ldap-connector-configure.md).
* If the application uses a SQL database, follow the [guide for configuring Azure AD to provision users into SQL-based applications](../app-provisioning/on-premises-sql-connector-configure.md).
+ * For other applications, follow steps 1-3 to [configure provisioning via Graph APIs](../app-provisioning/application-provisioning-configuration-api.md).
1. Check the [attribute mappings](../app-provisioning/customize-application-attributes.md) for provisioning to that application. Make sure that **Match objects using this attribute** is set for the Azure AD attribute and column that you used in the previous sections for matching.
When an application role assignment is created in Azure AD for a user to an appl
If any users aren't assigned to application roles, check the Azure AD audit log for an error from a previous step.
-1. If **Provisioning Status** for the application is **Off**, turn it to **On**.
+1. If **Provisioning Status** for the application is **Off**, turn it to **On**. You can also start provisioning [using Graph APIs](../app-provisioning/application-provisioning-configuration-api.md#step-4-start-the-provisioning-job).
1. Based on the guidance for [how long will it take to provision users](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md#how-long-will-it-take-to-provision-users), wait for Azure AD provisioning to match the existing users of the application to those users just assigned.
-1. Monitor the [provisioning status](../app-provisioning/check-status-user-account-provisioning.md) to ensure that all users were matched successfully.
+1. Monitor the [provisioning status](../app-provisioning/check-status-user-account-provisioning.md) through the Azure portal or [Graph APIs](../app-provisioning/application-provisioning-configuration-api.md#monitor-the-provisioning-job-status) to ensure that all users were matched successfully.
If you don't see users being provisioned, check the [troubleshooting guide for no users being provisioned](../app-provisioning/application-provisioning-config-problem-no-users-provisioned.md). If you see an error in the provisioning status and are provisioning to an on-premises application, check the [troubleshooting guide for on-premises application provisioning](../app-provisioning/on-premises-ecma-troubleshoot.md).
-1. Check the [provisioning log](../reports-monitoring/concept-provisioning-logs.md). Filter the log to the status **Failure**. If there are failures with an ErrorCode of **DuplicateTargetEntries**, this indicates an ambiguity in your provisioning matching rules, and you'll need to update the Azure AD users or the mappings that are used for matching to ensure each Azure AD user matches one application user. Then filter the log to the action **Create** and status **Skipped**. If users were skipped with the SkipReason code of **NotEffectivelyEntitled**, this may indicate that the user accounts in Azure AD were not matched because the user account status was **Disabled**.
+1. Check the provisioning log through the [Azure portal](../reports-monitoring/concept-provisioning-logs.md) or [Graph APIs](../app-provisioning/application-provisioning-configuration-api.md#monitor-provisioning-events-using-the-provisioning-logs). Filter the log to the status **Failure**. If there are failures with an ErrorCode of **DuplicateTargetEntries**, this indicates an ambiguity in your provisioning matching rules, and you'll need to update the Azure AD users or the mappings that are used for matching to ensure each Azure AD user matches one application user. Then filter the log to the action **Create** and status **Skipped**. If users were skipped with the SkipReason code of **NotEffectivelyEntitled**, this may indicate that the user accounts in Azure AD were not matched because the user account status was **Disabled**.
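The log-filtering steps above can be sketched in a few lines. This is a hypothetical triage helper: the entries are plain dicts shaped loosely like provisioning log records, and the field names (`status`, `errorCode`, `action`, `skipReason`) are illustrative rather than an exact schema.

```python
# Triage provisioning log entries for the two cases described above:
# matching ambiguities (DuplicateTargetEntries) and skipped disabled users
# (NotEffectivelyEntitled). Field names are illustrative.
def triage(entries):
    duplicates = [e for e in entries
                  if e.get("status") == "Failure"
                  and e.get("errorCode") == "DuplicateTargetEntries"]
    not_entitled = [e for e in entries
                    if e.get("action") == "Create"
                    and e.get("status") == "Skipped"
                    and e.get("skipReason") == "NotEffectivelyEntitled"]
    return duplicates, not_entitled

sample = [
    {"user": "a@contoso.com", "action": "Create", "status": "Failure",
     "errorCode": "DuplicateTargetEntries"},
    {"user": "b@contoso.com", "action": "Create", "status": "Skipped",
     "skipReason": "NotEffectivelyEntitled"},
    {"user": "c@contoso.com", "action": "Update", "status": "Success"},
]
dupes, skipped = triage(sample)
```

Entries in the first list call for fixing the matching attributes; entries in the second call for checking whether the Azure AD account is disabled.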
After the Azure AD provisioning service has matched the users based on the application role assignments you've created, subsequent changes will be sent to the application.
active-directory Identity Governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-overview.md
na Previously updated : 12/22/2021 Last updated : 8/10/2022
Organizations need a process to manage access beyond what was initially provisio
![Access lifecycle](./media/identity-governance-overview/access-lifecycle.png)
-Typically, IT delegates access approval decisions to business decision makers. Furthermore, IT can involve the users themselves. For example, users that access confidential customer data in a company's marketing application in Europe need to know the company's policies. Guest users may be unaware of the handling requirements for data in an organization to which they have been invited.
+Typically, IT delegates access approval decisions to business decision makers. Furthermore, IT can involve the users themselves. For example, users that access confidential customer data in a company's marketing application in Europe need to know the company's policies. Guest users may be unaware of the handling requirements for data in an organization to which they've been invited.
-Organizations can automate the access lifecycle process through technologies such as [dynamic groups](../enterprise-users/groups-dynamic-membership.md), coupled with user provisioning to [SaaS apps](../saas-apps/tutorial-list.md) or [apps integrated with SCIM](../app-provisioning/use-scim-to-provision-users-and-groups.md). Organizations can also control which [guest users have access to on-premises applications](../external-identities/hybrid-cloud-to-on-premises.md). These access rights can then be regularly reviewed using recurring [Azure AD access reviews](access-reviews-overview.md). [Azure AD entitlement management](entitlement-management-overview.md) also enables you to define how users request access across packages of group and team memberships, application roles, and SharePoint Online roles.
+Organizations can automate the access lifecycle process through technologies such as [dynamic groups](../enterprise-users/groups-dynamic-membership.md), coupled with user provisioning to [SaaS apps](../saas-apps/tutorial-list.md) or [apps integrated with SCIM](../app-provisioning/use-scim-to-provision-users-and-groups.md). Organizations can also control which [guest users have access to on-premises applications](../external-identities/hybrid-cloud-to-on-premises.md). These access rights can then be regularly reviewed using recurring [Azure AD access reviews](access-reviews-overview.md). [Azure AD entitlement management](entitlement-management-overview.md) also enables you to define how users request access across packages of group and team memberships, application roles, and SharePoint Online roles. For more information, see the [simplifying identity governance tasks with automation](#simplifying-identity-governance-tasks-with-automation) section below to select the appropriate Azure AD features for your access lifecycle automation scenarios.
When a user attempts to access applications, Azure AD enforces [Conditional Access](../conditional-access/index.yml) policies. For example, Conditional Access policies can include displaying a [terms of use](../conditional-access/terms-of-use.md) and [ensuring the user has agreed to those terms](../conditional-access/require-tou.md) prior to being able to access an application. For more information, see [govern access to applications in your environment](identity-governance-applications-prepare.md).
Check out the [Getting started tab](https://portal.azure.com/#view/Microsoft_AAD
![Identity Governance getting started](./media/identity-governance-overview/getting-started.png)
-There are also tutorials for [managing access to resources in entitlement management](entitlement-management-access-package-first.md), [onboarding external users to Azure AD through an approval process](entitlement-management-onboard-external-user.md), [governing access to existing applications](identity-governance-applications-prepare.md). You can also automate identity governance tasks through Microsoft Graph and PowerShell.
+There are also tutorials for [managing access to resources in entitlement management](entitlement-management-access-package-first.md), [onboarding external users to Azure AD through an approval process](entitlement-management-onboard-external-user.md), [governing access to your applications](identity-governance-applications-prepare.md) and the [application's existing users](identity-governance-applications-existing-users.md).
If you have any feedback about Identity Governance features, click **Got feedback?** in the Azure portal to submit your feedback. The team regularly reviews your feedback.
-While there is no perfect solution or recommendation for every customer, the following configuration guides also provide the baseline policies Microsoft recommends you follow to ensure a more secure and productive workforce.
+While there's no perfect solution or recommendation for every customer, the following configuration guides also provide the baseline policies Microsoft recommends you follow to ensure a more secure and productive workforce.
+- [Prerequisites for configuring Azure AD for identity governance](identity-governance-applications-prepare.md)
- [Plan an access reviews deployment to manage resource access lifecycle](deploy-access-reviews.md) - [Identity and device access configurations](/microsoft-365/enterprise/microsoft-365-policies-configurations) - [Securing privileged access](../roles/security-planning.md)
+## Simplifying identity governance tasks with automation
+
+Once you've started using these identity governance features, you can easily automate common identity governance scenarios. The following table shows how to get started for each scenario:
+
+| Scenario to automate | Automation guide |
+| - | |
+| Creating, updating, and deleting AD and Azure AD user accounts automatically for employees |[Plan cloud HR to Azure AD user provisioning](../app-provisioning/plan-cloud-hr-provision.md)|
+| Updating the membership of a group, based on changes to the member user's attributes | [Create a dynamic group](../enterprise-users/groups-create-rule.md)|
+| Assigning licenses | [group-based licensing](../enterprise-users/licensing-groups-assign.md) |
+| Adding and removing a user's group memberships, application roles, and SharePoint site roles, on a specific date | [Configure lifecycle settings for an access package in entitlement management](entitlement-management-access-package-lifecycle-policy.md)|
+| Running custom workflows when a user requests or receives access, or access is removed | [Trigger Logic Apps in entitlement management](entitlement-management-logic-apps-integration.md) (preview) |
+| Regularly having memberships of guests in Microsoft groups and Teams reviewed, and removing guest memberships that are denied |[Create an access review](create-access-review.md) |
+| Removing guest accounts that were denied by a reviewer |[Review and remove external users who no longer have resource access](access-reviews-external-users.md) |
+| Removing guest accounts that have no access package assignments |[Manage the lifecycle of external users](entitlement-management-external-users.md#manage-the-lifecycle-of-external-users) |
+| Provisioning users into on-premises and cloud applications that have their own directories or databases | [Configure automatic user provisioning](../app-provisioning/user-provisioning.md) with user assignments or [scoping filters](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md) |
+| Other scheduled tasks | [Automate identity governance tasks with Azure Automation](identity-governance-automation.md) and Microsoft Graph via the [Microsoft.Graph.Identity.Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) PowerShell module|
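+
+For the scheduled-task row above, a minimal sketch using the Microsoft.Graph.Identity.Governance PowerShell module might look like the following. The scope and output format are illustrative assumptions; adjust them to your tenant and scenario.
+
+```powershell
+# Sketch: enumerate access review definitions and report their status.
+# Assumes the Microsoft.Graph.Identity.Governance module is installed and the
+# signed-in account has consented to the AccessReview.Read.All scope.
+Install-Module Microsoft.Graph.Identity.Governance -Scope CurrentUser
+Connect-MgGraph -Scopes 'AccessReview.Read.All'
+
+$definitions = Get-MgIdentityGovernanceAccessReviewDefinition -All
+foreach ($definition in $definitions) {
+    Write-Output ('{0}: {1}' -f $definition.DisplayName, $definition.Status)
+}
+```
+
+The same module exposes cmdlets for entitlement management and Privileged Identity Management, so a single Azure Automation runbook can cover several of the scenarios in the table.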
+ ## Appendix - least privileged roles for managing Identity Governance features
-It's a best practice to use the least privileged role to perform administrative tasks in Identity Governance. We recommend that you use Azure AD PIM to activate a role as needed to perform these tasks. The following are the least privileged directory roles to configure Identity Governance features:
+It's a best practice to use the least privileged role to perform administrative tasks in Identity Governance. We recommend that you use Azure AD PIM to activate a role as needed to perform these tasks. The following are the least privileged [directory roles](../roles/permissions-reference.md) to configure Identity Governance features:
| Feature | Least privileged role | | - | | | Entitlement management | Identity Governance Administrator |
-| Access reviews | User administrator (with the exception of access reviews of Azure or Azure AD roles, which requires Privileged role administrator) |
-|Privileged Identity Management | Privileged role administrator |
-| Terms of use | Security administrator or Conditional access administrator |
+| Access reviews | User Administrator (with the exception of access reviews of Azure or Azure AD roles, which require Privileged Role Administrator) |
+| Privileged Identity Management | Privileged Role Administrator |
+| Terms of use | Security Administrator or Conditional Access Administrator |
>[!NOTE] >The least privileged role for Entitlement management has changed from the User Administrator role to the Identity Governance Administrator role.
active-directory How To Connect Health Agent Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-health-agent-install.md
The following table lists requirements for using Azure AD Connect Health.
> [!NOTE] > If you have a highly locked-down and restricted environment, you need to add more URLs than the ones the table lists for Internet Explorer enhanced security. Also add URLs that are listed in the table in the next section.
+### New versions of the agent and auto-upgrade
+If a new version of the Health agent is released, any existing installed agents are automatically updated.
### Outbound connectivity to the Azure service endpoints
active-directory How To Connect Health Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-health-sync.md
The following documentation is specific to monitoring Azure AD Connect (Sync) wi
![Screenshot of the Azure AD Connect Health for Sync page.](./media/how-to-connect-health-sync/syncsnapshot.png)
+> [!IMPORTANT]
+> Azure AD Connect Health for Sync requires Azure AD Connect Sync V2. If you are still using AADConnect V1, you must upgrade to the latest version.
+> AADConnect V1 retires on August 31, 2022. Azure AD Connect Health for Sync will no longer work with AADConnect V1 in December 2022.
+>
## Alerts for Azure AD Connect Health for sync The Azure AD Connect Health Alerts for sync section provides a list of active alerts. Each alert includes relevant information, resolution steps, and links to related documentation. When you select an active or resolved alert, a new blade opens with additional information, steps you can take to resolve the alert, and links to related documentation. You can also view historical data on alerts that were resolved in the past.
active-directory How To Connect Sync Staging Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-staging-server.md
# Azure AD Connect: Staging server and disaster recovery+ With a server in staging mode, you can make changes to the configuration and preview the changes before you make the server active. It also allows you to run full import and full synchronization to verify that all changes are expected before you apply these changes to your production environment. ## Staging mode+ Staging mode can be used for several scenarios, including: * High availability.
A server in staging mode continues to receive changes from Active Directory and
For those of you with knowledge of older sync technologies, the staging mode is different since the server has its own SQL database. This architecture allows the staging mode server to be located in a different datacenter. ### Verify the configuration of a server+ To apply this method, follow these steps: 1. [Prepare](#prepare)
To apply this method, follow these steps:
5. [Switch active server](#switch-active-server) #### Prepare+ 1. Install Azure AD Connect, select **staging mode**, and unselect **start synchronization** on the last page in the installation wizard. This mode allows you to run the sync engine manually. ![Screenshot shows the Ready to configure page in the Azure AD Connect dialog box.](./media/how-to-connect-sync-staging-server/readytoconfigure.png) 2. Sign off/sign in and from the start menu select **Synchronization Service**. #### Configuration+ If you have made custom changes to the primary server and want to compare the configuration with the staging server, then use [Azure AD Connect configuration documenter](https://github.com/Microsoft/AADConnectConfigDocumenter). #### Import and Synchronize+ 1. Select **Connectors**, and select the first Connector with the type **Active Directory Domain Services**. Click **Run**, select **Full import**, and **OK**. Do these steps for all Connectors of this type. 2. Select the Connector with type **Azure Active Directory (Microsoft)**. Click **Run**, select **Full import**, and **OK**. 3. Make sure the tab Connectors is still selected. For each Connector with type **Active Directory Domain Services**, click **Run**, select **Delta Synchronization**, and **OK**.
If you have made custom changes to the primary server and want to compare the co
You have now staged export changes to Azure AD and on-premises AD (if you are using Exchange hybrid deployment). The next steps allow you to inspect what is about to change before you actually start the export to the directories. #### Verify+ 1. Start a cmd prompt and go to `%ProgramFiles%\Microsoft Azure AD Sync\bin` 2. Run: `csexport "Name of Connector" %temp%\export.xml /f:x` The name of the Connector can be found in Synchronization Service. It has a name similar to "contoso.com – Azure AD" for Azure AD.
You have a file in %temp% named export.csv that can be examined in Microsoft Exc
4. Make necessary changes to the data or configuration and run these steps again (Import and Synchronize and Verify) until the changes that are about to be exported are expected. **Understanding the export.csv file**+ Most of the file is self-explanatory. Some abbreviations to understand the content: * OMODT – Object Modification Type. Indicates if the operation at an object level is an Add, Update, or Delete. * AMODT – Attribute Modification Type. Indicates if the operation at an attribute level is an Add, Update, or Delete. **Retrieve common identifiers**+ The export.csv file contains all changes that are about to be exported. Each row corresponds to a change for an object in the connector space and the object is identified by the DN attribute. The DN attribute is a unique identifier assigned to an object in the connector space. When you have many rows/changes in the export.csv to analyze, it may be difficult for you to figure out which objects the changes are for based on the DN attribute alone. To simplify the process of analyzing the changes, use the `csanalyzer.ps1` PowerShell script. The script retrieves common identifiers (for example, displayName, userPrincipalName) of the objects. To use the script: 1. Copy the PowerShell script from the section [CSAnalyzer](#appendix-csanalyzer) to a file named `csanalyzer.ps1`. 2. Open a PowerShell window and browse to the folder where you created the PowerShell script.
The export.csv file contains all changes that are about to be exported. Each row
4. You now have a file named **processedusers1.csv** that can be examined in Microsoft Excel. Note that the file provides a mapping from the DN attribute to common identifiers (for example, displayName and userPrincipalName). It currently does not include the actual attribute changes that are about to be exported. #### Switch active server+ Azure AD Connect can be set up in an Active-Passive High Availability setup, where one server will actively push changes to the synced AD objects to Azure AD and the passive server will stage these changes in the event it will need to take over.
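
The Verify procedure described earlier can be sketched as a short command sequence. The connector name below is an example, and the `csanalyzer.ps1` parameter name should be checked against the published script in the CSAnalyzer appendix before use.

```powershell
# Run on the staging-mode server after the import and synchronization runs.
Set-Location "$env:ProgramFiles\Microsoft Azure AD Sync\bin"

# Export the staged changes for a connector (the name is an example).
.\csexport.exe "contoso.com - Azure AD" "$env:TEMP\export.xml" /f:x

# Map DN values in the export to common identifiers such as displayName.
.\csanalyzer.ps1 -xmltoimport "$env:TEMP\export.xml"

# Review the resulting processedusers1.csv in Excel before
# taking the server out of staging mode.
```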
-Note: You cannot set up Azure AD Connect in an Active-Active setup. It must be Active-Passive
-Ensure that only 1 Azure AD Connect server is actively syncing changes.
+>[!Note]
+>
+>You cannot set up Azure AD Connect in an Active-Active setup. It must be Active-Passive. Ensure that only one Azure AD Connect server is actively syncing changes.
For more information on setting up an Azure AD Connect sync server in Staging Mode, see [staging mode](how-to-connect-sync-staging-server.md).
You may need to perform a failover of the Sync Servers for several reasons, such
- One currently active Azure AD Connect Sync Server - One staging Azure AD Connect Sync Server
-#### Changing Currently Active Sync Server to Staging Mode
+#### Change currently Active Sync Server to staging mode
We need to ensure that only one Sync Server is syncing changes at any given time throughout this process. If the currently Active Sync Server is reachable, you can perform the following steps to move it to Staging Mode. If it is not reachable, ensure that the server or VM does not regain access unexpectedly, either by shutting down the server or isolating it from outbound connections, and then proceed to the steps on how to change the currently Staging Sync Server to Active Mode.
-1. For the currently Active Azure AD Connect server, open the Azure AD Connect Console and click "Configure staging mode" then Next:
-[Insert Image: "active_server_menu.png"]
+1. For the currently Active Azure AD Connect server, open the Azure AD Connect Console and click "Configure staging mode" then Next:
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot shows Staging Mode highlighted in the Active Azure AD Connect dialog box.](media/how-to-connect-sync-staging-server/active-server-menu.png)
+
+2. You will need to sign into Azure AD with Global Admin or Hybrid Identity Admin credentials:
-2. You will need to sign into Azure AD with Global Admin or Hybrid Identity Admin credentials:
-[Insert Image: "active_server_sign_in.png"]
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot shows Sign in prompt in the Active Azure AD Connect dialog box.](media/how-to-connect-sync-staging-server/active-server-sign-in.png)
-3. Tick the box for Staging Mode and click Next:
-[Insert Image: "active_server_staging_mode.png"]
+3. Tick the box for Staging Mode and click Next:
-4. The Azure AD Connect server will check for installed components and then prompt you whether you want to start the sync process:
-[Insert Image: "active_server_config.png"]
-Since the server will be in staging mode, it will not write changes to Azure AD, but retain any changes to the AD in its Connector Space, ready to write them.
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot shows Staging Mode configuration in the Active Azure AD Connect dialog box.](media/how-to-connect-sync-staging-server/active-server-staging-mode.png)
+
+4. The Azure AD Connect server will check for installed components and then prompt you whether you want to start the sync process:
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot shows Ready to Configure screen in the Active Azure AD Connect dialog box.](media/how-to-connect-sync-staging-server/active-server-config.png)
+
+Since the server will be in staging mode, it will not write changes to Azure AD, but retain any changes to the AD in its Connector Space, ready to write them.
It is recommended to leave the sync process on for the server in Staging Mode, so if it becomes active, it will quickly take over and won't have to do a large sync to catch up to the current state of the AD/Azure AD sync.
-5. After selecting whether to start or stop the sync process and clicking Configure, the Azure AD Connect server will configure itself into Staging Mode.
-When this is completed, you will be prompted with a screen that confirms Staging Mode is enabled.
+5. After selecting whether to start or stop the sync process and clicking Configure, the Azure AD Connect server will configure itself into Staging Mode.
+When this is completed, you will be prompted with a screen that confirms Staging Mode is enabled.
You can click Exit to finish this.
-6. You can confirm that the server is successfully in Staging Mode by opening the Synchronization Service console.
-From here, there should be no more Export jobs since the change and Full & Delta Imports will be suffixed with "(Stage Only)" like below:
-[Insert Image "active_server_sync_server_mgmr.png"]
+6. You can confirm that the server is successfully in Staging Mode by opening the Synchronization Service console.
+From here, there should be no more Export jobs since the change and Full & Delta Imports will be suffixed with "(Stage Only)" like below:
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot shows Sync Service console on the Active Azure AD Connect dialog box.](media/how-to-connect-sync-staging-server/active-server-sync-server-mgmr.png)
-#### Changing Currently Staging Sync Server to Active Mode
+#### Change current Staging Sync server to active mode
At this point, all of our Azure AD Connect Sync Servers should be in Staging Mode and not exporting changes. We can now move our Staging Sync Server to Active mode and actively sync changes.
-1. Now move to the Azure AD Connect server that was originally in Staging Mode and open the Azure AD Connect console.
-Click on "Configure staging mode" and click Next:
-[Insert Image: "staging_server_menu.png"]
-Note the message at the bottom of the Console that indicates this server is in Staging Mode.
+1. Now move to the Azure AD Connect server that was originally in Staging Mode and open the Azure AD Connect console.
+
+ Click on "Configure staging mode" and click Next:
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot shows Staging Mode highlighted in the Staging Azure AD Connect dialog box.](media/how-to-connect-sync-staging-server/staging-server-menu.png)
+
+ The message at the bottom of the Console indicates this server is in Staging Mode.
2. Sign into Azure AD, then go to the Staging Mode screen.
-Untick the box for Staging Mode and click Next
-[Insert Image: "staging_server_staging_mode.png"]
-As per the warning on this page, it is important to ensure no other Azure AD Connect server is actively syncing.
-There should only be one active Azure AD Connect sync server at any time.
-3. When you are prompted to start the sync process, tick this box and click Configure:
-[Insert Image: "staging_server_config.png"]
+ Untick the box for Staging Mode and click Next
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot shows Staging Mode configuration in the Staging Azure AD Connect dialog box.](media/how-to-connect-sync-staging-server/staging-server-staging-mode.png)
+
+ As per the warning on this page, it is important to ensure no other Azure AD Connect server is actively syncing.
+
+ There should only be one active Azure AD Connect sync server at any time.
+
+3. When you are prompted to start the sync process, tick this box and click Configure:
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot shows Ready to Configure screen in the Staging Azure AD Connect dialog box.](media/how-to-connect-sync-staging-server/staging-server-config.png)
-4. Once the process is finished you should get the below confirmation screen where you can click Exit to finish:
-[Insert Image: "staging_server_confirmation.png"]
+4. Once the process is finished you should get the below confirmation screen where you can click Exit to finish:
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot shows Confirmation screen in the Staging Azure AD Connect dialog box.](media/how-to-connect-sync-staging-server/staging-server-confirmation.png)
5. You can again confirm that this is working by opening the Sync Service Console and checking if Export jobs are running:
-[Insert Image: "staging_server_sync_server_mgmr.png"]
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot shows Sync Service console on the Staging Azure AD Connect dialog box.](media/how-to-connect-sync-staging-server/staging-server-sync-server-mgmr.png)
## Disaster recovery+ Part of the implementation design is to plan for what to do in case there is a disaster where you lose the sync server. There are different models to use, and which one to use depends on several factors, including: * What is your tolerance for not being able to make changes to objects in Azure AD during the downtime?
Depending on the answers to these questions and your organization's policy, on
If you do not use the built-in SQL Express database, then you should also review the [SQL High Availability](#sql-high-availability) section. ### Rebuild when needed+ A viable strategy is to plan for a server rebuild when needed. Usually, installing the sync engine and doing the initial import and sync can be completed within a few hours. If there isn't a spare server available, it is possible to temporarily use a domain controller to host the sync engine. The sync engine server does not store any state about the objects so the database can be rebuilt from the data in Active Directory and Azure AD. The **sourceAnchor** attribute is used to join the objects from on-premises and the cloud. If you rebuild the server with existing objects on-premises and in the cloud, then the sync engine matches those objects together again on reinstallation. The things you need to document and save are the configuration changes made to the server, such as filtering and synchronization rules. These custom configurations must be reapplied before you start synchronizing. ### Have a spare standby server - staging mode+ If you have a more complex environment, then having one or more standby servers is recommended. During installation, you can enable a server to be in **staging mode**. For more information, see [staging mode](#staging-mode). ### Use virtual machines+ A common and supported method is to run the sync engine in a virtual machine. In case the host has an issue, the image with the sync engine server can be migrated to another server. ### SQL High Availability+ If you are not using the SQL Server Express that comes with Azure AD Connect, then high availability for SQL Server should also be considered. The high availability solutions supported include SQL clustering and AOA (Always On Availability Groups). Unsupported solutions include mirroring. Support for SQL AOA was added to Azure AD Connect in version 1.1.524.0. You must enable SQL AOA before installing Azure AD Connect.
During installation, Azure AD Connect detects whether the SQL instance provided is enabled for SQL AOA or not. If SQL AOA is enabled, Azure AD Connect further figures out if SQL AOA is configured to use synchronous replication or asynchronous replication. When setting up the Availability Group Listener, it is recommended that you set the RegisterAllProvidersIP property to 0. This is because Azure AD Connect currently uses SQL Native Client to connect to SQL, and SQL Native Client does not support the use of the MultiSubNetFailover property. ## Appendix CSAnalyzer+ See the section [verify](#verify) on how to use this script. ```powershell
else
``` ## Next steps+ **Overview topics** * [Azure AD Connect sync: Understand and customize synchronization](how-to-connect-sync-whatis.md)
active-directory Whatis Azure Ad Connect V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/whatis-azure-ad-connect-v2.md
Azure AD Connect was released several years ago. Since then, several of the components that Azure AD Connect uses have been scheduled for deprecation and updated to newer versions. Attempting to update all of these components individually would take time and planning. -- To address this, we've bundled as many of these newer components as possible into a new, single release, so you only have to update once. This release is Azure AD Connect V2. This release is a new version of the same software used to accomplish your hybrid identity goals, built using the latest foundational components. ## What are the major changes?
More details about PowerShell prerequisites can be found [here](/powershell/scri
## What else do I need to know? - **Why is this upgrade important for me?** </br> Next year, several of the components in your current Azure AD Connect server installations will go out of support. If you are using unsupported products, it will be harder for our support team to provide you with the support experience your organization requires. We recommend that all customers upgrade to this newer version as soon as they can.
Until one of the components being retired is actually deprecated, you
We expect TLS 1.0/1.1 to be deprecated in 2022, and you need to make sure you aren't using these protocols by that date, as your service may stop working unexpectedly. You can manually configure your server for TLS 1.2, though, and that doesn't require an update of Azure AD Connect to V2.
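
Enforcing TLS 1.2 on the Azure AD Connect server is done through the registry. A minimal sketch, based on Microsoft's published TLS 1.2 guidance, is shown below; the full guidance also covers the Client subkeys and the 32-bit .NET path, and a reboot is required.

```powershell
# Sketch: force .NET (used by Azure AD Connect) to use strong crypto (TLS 1.2)
# and enable the server-side TLS 1.2 SCHANNEL protocol. Test in a lab first.
New-Item 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319' -Force | Out-Null
Set-ItemProperty 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319' `
    -Name 'SchUseStrongCrypto' -Value 1 -Type DWord

$tls12 = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server'
New-Item $tls12 -Force | Out-Null
Set-ItemProperty $tls12 -Name 'Enabled' -Value 1 -Type DWord
Set-ItemProperty $tls12 -Name 'DisabledByDefault' -Value 0 -Type DWord

# SCHANNEL changes take effect only after a reboot.
Restart-Computer -Confirm
```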
+Azure AD Connect Health will stop working after December 2022. We will auto-upgrade all Health agents to a new version before the end of 2022, but we cannot auto-upgrade agents running AADConnect V1 due to compatibility issues with V1 versions.
+ After December 2022, ADAL is planned to go out of support. When ADAL goes out of support, authentication may stop working unexpectedly, and this will block the Azure AD Connect server from working properly. We strongly advise you to upgrade to Azure AD Connect V2 before December 2022. You can't upgrade to a supported authentication library with your current Azure AD Connect version. **After upgrading to V2, the ADSync PowerShell cmdlets don't work?** </br>
active-directory Whatis Azure Ad Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/whatis-azure-ad-connect.md
Azure AD Connect provides the following features:
![What is Azure AD Connect](./media/whatis-hybrid-identity/arch.png)
+> [!IMPORTANT]
+> Azure AD Connect Health for Sync requires Azure AD Connect Sync V2. If you are still using AADConnect V1, you must upgrade to the latest version.
+> AADConnect V1 retires on August 31, 2022. Azure AD Connect Health for Sync will no longer work with AADConnect V1 in December 2022.
+ ## What is Azure AD Connect Health?
active-directory Concept Identity Protection Risks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-identity-protection-risks.md
Premium detections are visible only to Azure AD Premium P2 customers. Customers
| Unfamiliar sign-in properties | Real-time | This risk detection type considers past sign-in history to look for anomalous sign-ins. The system stores information about previous sign-ins, and triggers a risk detection when a sign-in occurs with properties that are unfamiliar to the user. These properties can include IP, ASN, location, device, browser, and tenant IP subnet. Newly created users will be in a "learning mode" period where the unfamiliar sign-in properties risk detection is turned off while our algorithms learn the user's behavior. The learning mode duration is dynamic and depends on how much time it takes the algorithm to gather enough information about the user's sign-in patterns. The minimum duration is five days. A user can go back into learning mode after a long period of inactivity. <br><br> We also run this detection for basic authentication (or legacy protocols). Because these protocols don't have modern properties such as client ID, there's limited telemetry to reduce false positives. We recommend that customers move to modern authentication. <br><br> Unfamiliar sign-in properties can be detected on both interactive and non-interactive sign-ins. When this detection occurs on non-interactive sign-ins, it deserves increased scrutiny due to the risk of token replay attacks. | | Malicious IP address | Offline | This detection indicates sign-in from a malicious IP address. An IP address is considered malicious based on high failure rates because of invalid credentials received from the IP address or other IP reputation sources. | | Suspicious inbox manipulation rules | Offline | This detection is discovered by [Microsoft Defender for Cloud Apps](/cloud-app-security/anomaly-detection-policy#suspicious-inbox-manipulation-rules). This detection profiles your environment and triggers alerts when suspicious rules that delete or move messages or folders are set on a user's inbox.
This detection may indicate that the user's account is compromised, that messages are being intentionally hidden, and that the mailbox is being used to distribute spam or malware in your organization. |
-| Password spray | Offline | A password spray attack is where multiple usernames are attacked using common passwords in a unified brute force manner to gain unauthorized access. This risk detection is triggered when a password spray attack has been performed. |
+| Password spray | Offline | A password spray attack is where multiple usernames are attacked using common passwords in a unified brute force manner to gain unauthorized access. This risk detection is triggered when a password spray attack has been successfully performed; that is, in the detected instance the attacker successfully authenticated. |
| Impossible travel | Offline | This detection is discovered by [Microsoft Defender for Cloud Apps](/cloud-app-security/anomaly-detection-policy#impossible-travel). This detection identifies two user activities (in a single or multiple sessions) originating from geographically distant locations within a time period shorter than the time it would have taken the user to travel from the first location to the second, indicating that a different user is using the same credentials. | | New country | Offline | This detection is discovered by [Microsoft Defender for Cloud Apps](/cloud-app-security/anomaly-detection-policy#activity-from-infrequent-country). This detection considers past activity locations to determine new and infrequent locations. The anomaly detection engine stores information about previous locations used by users in the organization. | | Activity from anonymous IP address | Offline | This detection is discovered by [Microsoft Defender for Cloud Apps](/cloud-app-security/anomaly-detection-policy#activity-from-anonymous-ip-addresses). This detection identifies that users were active from an IP address that has been identified as an anonymous proxy IP address. |
Premium detections are visible only to Azure AD Premium P2 customers. Customers
| Risk detection | Detection type | Description |
| --- | --- | --- |
| Possible attempt to access Primary Refresh Token (PRT) | Offline | This risk detection type is detected by Microsoft Defender for Endpoint (MDE). A Primary Refresh Token (PRT) is a key artifact of Azure AD authentication on Windows 10, Windows Server 2016, and later versions, iOS, and Android devices. A PRT is a JSON Web Token (JWT) that's specially issued to Microsoft first-party token brokers to enable single sign-on (SSO) across the applications used on those devices. Attackers can attempt to access this resource to move laterally into an organization or perform credential theft. This detection will move users to high risk and will only fire in organizations that have deployed MDE. This detection is low-volume and will be seen infrequently by most organizations. However, when it does occur it's high risk and users should be remediated. |
+| Anomalous user activity | Offline | This risk detection indicates that suspicious patterns of activity have been identified for an authenticated user. The post-authentication behavior for users is assessed for anomalies based on an action or sequence of actions occurring for the account, along with any sign-in risk detected. |
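A post-authentication anomaly check of the kind the row above describes can be sketched as a simple baseline-deviation test. This is purely illustrative — the real Identity Protection detections use far richer signals than a per-action count, and the z-score threshold here is an assumption.

```python
from statistics import mean, stdev

def is_anomalous(baseline_counts, observed, threshold=3.0):
    """Flag an observed per-day action count that deviates from the
    account's historical baseline by more than `threshold` standard
    deviations. Illustrative sketch only; the threshold is assumed.
    """
    mu = mean(baseline_counts)
    sigma = stdev(baseline_counts)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

# An account that normally performs ~20 actions a day suddenly performs 500.
history = [18, 22, 19, 21, 20, 23, 17]
print(is_anomalous(history, 500))  # -> True
print(is_anomalous(history, 21))   # -> False
```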
#### Nonpremium user risk detections
active-directory Concept Workload Identity Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-workload-identity-risk.md
We detect risk on workload identities across sign-in behavior and offline indica
| Unusual addition of credentials to an OAuth app | Offline | This detection is discovered by [Microsoft Defender for Cloud Apps](/defender-cloud-apps/investigate-anomaly-alerts#unusual-addition-of-credentials-to-an-oauth-app). This detection identifies the suspicious addition of privileged credentials to an OAuth app. This can indicate that an attacker has compromised the app, and is using it for malicious activity. |
| Admin confirmed account compromised | Offline | This detection indicates an admin has selected 'Confirm compromised' in the Risky Workload Identities UI or using the riskyServicePrincipals API. To see which admin has confirmed this account compromised, check the account's risk history (via UI or API). |
| Leaked Credentials (public preview) | Offline | This risk detection indicates that the account's valid credentials have been leaked. This leak can occur when someone checks the credentials into a public code artifact on GitHub, or when the credentials are leaked through a data breach. <br><br> When the Microsoft leaked credentials service acquires credentials from GitHub, the dark web, paste sites, or other sources, they're checked against current valid credentials in Azure AD to find valid matches. |
+| Anomalous service principal activity (public preview) | Offline | This risk detection indicates that suspicious patterns of activity have been identified for an authenticated service principal. The post-authentication behavior for service principals is assessed for anomalies based on an action or sequence of actions occurring for the account, along with any sign-in risk detected. |
## Identify risky workload identities
active-directory Configure User Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-user-consent.md
Previously updated : 11/17/2021 Last updated : 08/10/2022
Before an application can access your organization's data, a user must grant the
To reduce the risk of malicious applications attempting to trick users into granting them access to your organization's data, we recommend that you allow user consent only for applications that have been published by a [verified publisher](../develop/publisher-verification-overview.md).
->[!IMPORTANT]
->Starting September 30, 2022, the new default consent setting for new tenants will be to Follow Microsoft's Recommendation. Microsoft's initial recommendation at that time will be that end users can't consent to multi-tenant applications without publisher verification if the application requests basic permissions like sign-in and read user profile permissions.
- ## Prerequisites To configure user consent, you need:
active-directory Datawiza Azure Ad Sso Oracle Jde https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/datawiza-azure-ad-sso-oracle-jde.md
This tutorial shows how to enable Azure Active Directory (Azure AD) single sign-
Benefits of integrating applications with Azure AD using DAB include: -- [Proactive security with Zero Trust](https://www.microsoft.com/security/business/zero-trust) through [Azure AD SSO](https://azure.microsoft.com/solutions/active-directory-sso/OCID=AIDcmm5edswduu_SEM_e13a1a1787ce1700761a78c235ae5906:G:s&ef_id=e13a1a1787ce1700761a78c235ae5906:G:s&msclkid=e13a1a1787ce1700761a78c235ae5906#features), [Azure AD Multi-Factor Authentication](https://docs.microsoft.com/azure/active-directory/authentication/concept-mfa-howitworks) and
- [Conditional Access](https://docs.microsoft.com/azure/active-directory/conditional-access/overview).
+- [Proactive security with Zero Trust](https://www.microsoft.com/security/business/zero-trust) through [Azure AD SSO](https://azure.microsoft.com/solutions/active-directory-sso/OCID=AIDcmm5edswduu_SEM_e13a1a1787ce1700761a78c235ae5906:G:s&ef_id=e13a1a1787ce1700761a78c235ae5906:G:s&msclkid=e13a1a1787ce1700761a78c235ae5906#features), [Azure AD Multi-Factor Authentication](/azure/active-directory/authentication/concept-mfa-howitworks) and
+ [Conditional Access](/azure/active-directory/conditional-access/overview).
- [Easy authentication and authorization in Azure AD with no-code Datawiza](https://www.microsoft.com/security/blog/2022/05/17/easy-authentication-and-authorization-in-azure-active-directory-with-no-code-datawiza/). Use of web applications such as: Oracle JDE, Oracle E-Business Suite, Oracle Siebel, Oracle PeopleSoft, and home-grown apps.
The scenario solution has the following components:
- **Datawiza Cloud Management Console (DCMC)**: A centralized console to manage DAB. DCMC has UI and RESTful APIs for administrators to configure Datawiza Access Broker and access control policies. Understand the SP initiated flow by following the steps mentioned in [Datawiza and Azure AD authentication
-architecture](https://docs.microsoft.com/azure/active-directory/manage-apps/datawiza-with-azure-ad#datawiza-with-azure-ad-authentication-architecture).
+architecture](/azure/active-directory/manage-apps/datawiza-with-azure-ad#datawiza-with-azure-ad-authentication-architecture).
## Prerequisites
Ensure the following prerequisites are met.
- An Azure subscription. If you don't have one, you can get an [Azure free account](https://azure.microsoft.com/free)
- An Azure AD tenant linked to the Azure subscription.
- - See, [Quickstart: Create a new tenant in Azure Active Directory.](https://docs.microsoft.com/azure/active-directory/fundamentals/active-directory-access-create-new-tenant)
+ - See, [Quickstart: Create a new tenant in Azure Active Directory.](/azure/active-directory/fundamentals/active-directory-access-create-new-tenant)
- Docker and Docker Compose
Ensure the following prerequisites are met.
- User identities synchronized from an on-premises directory to Azure AD, or created in Azure AD and flowed back to an on-premises directory.
- - See, [Azure AD Connect sync: Understand and customize synchronization](https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-sync-whatis).
+ - See, [Azure AD Connect sync: Understand and customize synchronization](/azure/active-directory/hybrid/how-to-connect-sync-whatis).
- An account with Azure AD and the Application administrator role
- - See, [Azure AD built-in roles, all roles](https://docs.microsoft.com/azure/active-directory/roles/permissions-reference#all-roles).
+ - See, [Azure AD built-in roles, all roles](/azure/active-directory/roles/permissions-reference#all-roles).
- An Oracle JDE environment
For the Oracle JDE application to recognize the user correctly, there's another
## Enable Azure AD Multi-Factor Authentication
-To provide an extra level of security for sign-ins, enforce multifactor authentication (MFA) for user sign-in. One way to achieve this is to [enable MFA on the Azure portal](https://docs.microsoft.com/azure/active-directory/authentication/tutorial-enable-azure-mfa).
+To provide an extra level of security for sign-ins, enforce multifactor authentication (MFA) for user sign-in. One way to achieve this is to [enable MFA on the Azure portal](/azure/active-directory/authentication/tutorial-enable-azure-mfa).
1. Sign in to the Azure portal as a **Global Administrator**.
To confirm Oracle JDE application access occurs correctly, a prompt appears to u
- [Watch the video - Enable SSO/MFA for Oracle JDE with Azure AD via Datawiza](https://www.youtube.com/watch?v=_gUGWHT5m90). -- [Configure Datawiza and Azure AD for secure hybrid access](https://docs.microsoft.com/azure/active-directory/manage-apps/datawiza-with-azure-ad)
+- [Configure Datawiza and Azure AD for secure hybrid access](/azure/active-directory/manage-apps/datawiza-with-azure-ad)
-- [Configure Datawiza with Azure AD B2C](https://docs.microsoft.com/azure/active-directory-b2c/partner-datawiza)
+- [Configure Datawiza with Azure AD B2C](/azure/active-directory-b2c/partner-datawiza)
- [Datawiza documentation](https://docs.datawiza.com/)
active-directory Secure Hybrid Access Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/secure-hybrid-access-integrations.md
Microsoft has partnerships with these application delivery controller (ADC) prov
| **ADC provider** | **Link** |
| --- | --- |
-| Akamai Enterprise Application Access | [https://docs.microsoft.com/azure/active-directory/saas-apps/akamai-tutorial](../saas-apps/akamai-tutorial.md) |
-| Citrix ADC | [https://docs.microsoft.com/azure/active-directory/saas-apps/citrix-netscaler-tutorial](../saas-apps/citrix-netscaler-tutorial.md) |
-| F5 BIG-IP Access Policy Manager | [https://docs.microsoft.com/azure/active-directory/manage-apps/f5-aad-integration](./f5-aad-integration.md) |
-| Kemp LoadMaster | [https://docs.microsoft.com/azure/active-directory/saas-apps/kemp-tutorial](../saas-apps/kemp-tutorial.md) |
-| Pulse Secure Virtual Traffic Manager | [https://docs.microsoft.com/azure/active-directory/saas-apps/pulse-secure-virtual-traffic-manager-tutorial](../saas-apps/pulse-secure-virtual-traffic-manager-tutorial.md) |
+| Akamai Enterprise Application Access | [Akamai Enterprise Application Access](../saas-apps/akamai-tutorial.md) |
+| Citrix ADC | [Citrix ADC](../saas-apps/citrix-netscaler-tutorial.md) |
+| F5 BIG-IP Access Policy Manager | [F5 BIG-IP Access Policy Manager](./f5-aad-integration.md) |
+| Kemp LoadMaster | [Kemp LoadMaster](../saas-apps/kemp-tutorial.md) |
+| Pulse Secure Virtual Traffic Manager | [Pulse Secure Virtual Traffic Manager](../saas-apps/pulse-secure-virtual-traffic-manager-tutorial.md) |
The following VPN solution providers connect with Azure AD to enable modern authentication and authorization methods like SSO and multifactor authentication.

| **VPN vendor** | **Link** |
| --- | --- |
-| Cisco AnyConnect | [https://docs.microsoft.com/azure/active-directory/saas-apps/cisco-anyconnect](../saas-apps/cisco-anyconnect.md) |
-| Fortinet FortiGate | [https://docs.microsoft.com/azure/active-directory/saas-apps/fortigate-ssl-vpn-tutorial](../saas-apps/fortigate-ssl-vpn-tutorial.md) |
-| F5 BIG-IP Access Policy Manager | [https://docs.microsoft.com/azure/active-directory/manage-apps/f5-aad-password-less-vpn](./f5-aad-password-less-vpn.md) |
-| Palo Alto Networks GlobalProtect | [https://docs.microsoft.com/azure/active-directory/saas-apps/paloaltoadmin-tutorial](../saas-apps/paloaltoadmin-tutorial.md) |
-| Pulse Connect Secure | [https://docs.microsoft.com/azure/active-directory/saas-apps/pulse-secure-pcs-tutorial](../saas-apps/pulse-secure-pcs-tutorial.md) |
+| Cisco AnyConnect | [Cisco AnyConnect](../saas-apps/cisco-anyconnect.md) |
+| Fortinet FortiGate | [Fortinet FortiGate](../saas-apps/fortigate-ssl-vpn-tutorial.md) |
+| F5 BIG-IP Access Policy Manager | [F5 BIG-IP Access Policy Manager](./f5-aad-password-less-vpn.md) |
+| Palo Alto Networks GlobalProtect | [Palo Alto Networks GlobalProtect](../saas-apps/paloaltoadmin-tutorial.md) |
+| Pulse Connect Secure | [Pulse Connect Secure](../saas-apps/pulse-secure-pcs-tutorial.md) |
The following providers of software-defined perimeter (SDP) solutions connect with Azure AD to enable modern authentication and authorization methods like SSO and multifactor authentication.

| **SDP vendor** | **Link** |
| --- | --- |
-| Datawiza Access Broker | [https://docs.microsoft.com/azure/active-directory/manage-apps/datawiza-with-azure-ad](./datawiza-with-azure-ad.md) |
-| Perimeter 81 | [https://docs.microsoft.com/azure/active-directory/saas-apps/perimeter-81-tutorial](../saas-apps/perimeter-81-tutorial.md) |
-| Silverfort Authentication Platform | [https://docs.microsoft.com/azure/active-directory/manage-apps/silverfort-azure-ad-integration](./silverfort-azure-ad-integration.md) |
-| Strata Maverics Identity Orchestrator | [https://docs.microsoft.com/azure/active-directory/saas-apps/maverics-identity-orchestrator-saml-connector-tutorial](../saas-apps/maverics-identity-orchestrator-saml-connector-tutorial.md) |
-| Zscaler Private Access | [https://docs.microsoft.com/azure/active-directory/saas-apps/zscalerprivateaccess-tutorial](../saas-apps/zscalerprivateaccess-tutorial.md) |
+| Datawiza Access Broker | [Datawiza Access Broker](./datawiza-with-azure-ad.md) |
+| Perimeter 81 | [Perimeter 81](../saas-apps/perimeter-81-tutorial.md) |
+| Silverfort Authentication Platform | [Silverfort Authentication Platform](./silverfort-azure-ad-integration.md) |
+| Strata Maverics Identity Orchestrator | [Strata Maverics Identity Orchestrator](../saas-apps/maverics-identity-orchestrator-saml-connector-tutorial.md) |
+| Zscaler Private Access | [Zscaler Private Access](../saas-apps/zscalerprivateaccess-tutorial.md) |
active-directory Managed Identities Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/managed-identities-status.md
Managed identities for Azure resources provide Azure services with an automatically managed identity in Azure Active Directory. Using a managed identity, you can authenticate to any [service that supports Azure AD authentication](services-azure-active-directory-support.md) without managing credentials. We are integrating managed identities for Azure resources and Azure AD authentication across Azure. This page provides links to services' content that can use managed identities to access other Azure resources. Each entry in the table includes a link to service documentation discussing managed identities.

>[!IMPORTANT]
-> New content is added to docs.microsoft.com every day. This list does not include every article that talks about managed identities. Please refer to each service's content set for details on their managed identities support. Resource provider namespace information is available in the article titled [Resource providers for Azure services](../../azure-resource-manager/management/azure-services-resource-providers.md).
+> New technical content is added daily. This list does not include every article that talks about managed identities. Please refer to each service's content set for details on their managed identities support. Resource provider namespace information is available in the article titled [Resource providers for Azure services](../../azure-resource-manager/management/azure-services-resource-providers.md).
The following Azure services support managed identities for Azure resources:
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md
Users with this role can manage role assignments in Azure Active Directory, as w
## Reports Reader
-Users with this role can view usage reporting data and the reports dashboard in Microsoft 365 admin center and the adoption context pack in Power BI. Additionally, the role provides access to sign-in reports and activity in Azure AD and data returned by the Microsoft Graph reporting API. A user assigned to the Reports Reader role can access only relevant usage and adoption metrics. They don't have any admin permissions to configure settings or access the product-specific admin centers like Exchange. This role has no access to view, create, or manage support tickets.
+Users with this role can view usage reporting data and the reports dashboard in Microsoft 365 admin center and the adoption context pack in Power BI. Additionally, the role provides access to all sign-in logs, audit logs, and activity reports in Azure AD and data returned by the Microsoft Graph reporting API. A user assigned to the Reports Reader role can access only relevant usage and adoption metrics. They don't have any admin permissions to configure settings or access the product-specific admin centers like Exchange. This role has no access to view, create, or manage support tickets.
> [!div class="mx-tableFixed"]
> | Actions | Description |
active-directory Blackboard Learn Shibboleth Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/blackboard-learn-shibboleth-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Blackboard Learn - Shibboleth | Microsoft Docs'
+ Title: 'Tutorial: Azure Active Directory integration with Blackboard Learn - Shibboleth'
description: Learn how to configure single sign-on between Azure Active Directory and Blackboard Learn - Shibboleth.
active-directory Blackboard Learn Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/blackboard-learn-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Blackboard Learn | Microsoft Docs'
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Blackboard Learn'
description: Learn how to configure single sign-on between Azure Active Directory and Blackboard Learn.
active-directory Palo Alto Networks Cloud Identity Engine Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/palo-alto-networks-cloud-identity-engine-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Palo Alto Networks Cloud Identity Engine - Cloud Authentication Service for automatic user provisioning with Azure Active Directory'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Palo Alto Networks Cloud Identity Engine - Cloud Authentication Service.
+
+documentationcenter: ''
+
+writer: Thwimmer
++
+ms.assetid: 48afc8f5-a030-42da-9ffa-14fe5f80e333
+++
+ms.devlang: na
+ Last updated : 08/08/2022+++
+# Tutorial: Configure Palo Alto Networks Cloud Identity Engine - Cloud Authentication Service for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Palo Alto Networks Cloud Identity Engine - Cloud Authentication Service and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Palo Alto Networks Cloud Identity Engine - Cloud Authentication Service](https://www.paloaltonetworks.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Capabilities supported
+> [!div class="checklist"]
+> * Create users in Palo Alto Networks Cloud Identity Engine - Cloud Authentication Service.
+> * Remove users in Palo Alto Networks Cloud Identity Engine - Cloud Authentication Service when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and Palo Alto Networks Cloud Identity Engine - Cloud Authentication Service.
+> * Provision groups and group memberships in Palo Alto Networks Cloud Identity Engine - Cloud Authentication Service.
+> * [Single sign-on](palo-alto-networks-cloud-identity-enginecloud-authentication-service-tutorial.md) to Palo Alto Networks Cloud Identity Engine - Cloud Authentication Service (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A user account in Palo Alto Networks with Admin rights.
++
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Palo Alto Networks Cloud Identity Engine - Cloud Authentication Service](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Palo Alto Networks Cloud Identity Engine - Cloud Authentication Service to support provisioning with Azure AD
+
+Contact [Palo Alto Networks Customer Support](https://support.paloaltonetworks.com/support) to obtain the **SCIM Url** and corresponding **Token**.
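Once you have the SCIM Url and Token, you can optionally sanity-check them outside the portal with a standard SCIM 2.0 request. A minimal sketch — the endpoint URL and token below are placeholders, not real values, and the `/Users` path with `startIndex`/`count` parameters comes from the generic SCIM 2.0 protocol (RFC 7644), not from Palo Alto Networks documentation:

```python
from urllib.request import Request

def build_scim_probe(scim_url, token):
    """Build a GET request that fetches a single user from a SCIM 2.0
    endpoint, authenticated with a bearer token. Both arguments are
    the values obtained from Palo Alto Networks support; the ones in
    the example below are placeholders.
    """
    return Request(
        f"{scim_url.rstrip('/')}/Users?startIndex=1&count=1",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/scim+json",
        },
    )

req = build_scim_probe("https://example.scim.endpoint/scim/v2", "PLACEHOLDER_TOKEN")
print(req.full_url)
# To actually probe the endpoint, pass req to urllib.request.urlopen
# and expect an HTTP 200 with a SCIM ListResponse body.
```

An HTTP 401 on such a probe generally means the token is wrong or expired, which is the same class of failure **Test Connection** in the portal surfaces.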
+
+## Step 3. Add Palo Alto Networks Cloud Identity Engine - Cloud Authentication Service from the Azure AD application gallery
+
+Add Palo Alto Networks Cloud Identity Engine - Cloud Authentication Service from the Azure AD application gallery to start managing provisioning to Palo Alto Networks Cloud Identity Engine - Cloud Authentication Service. If you have previously set up Palo Alto Networks Cloud Identity Engine - Cloud Authentication Service for SSO, you can use the same application. However, it is recommended that you create a separate app when testing the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
++
+## Step 5. Configure automatic user provisioning to Palo Alto Networks Cloud Identity Engine - Cloud Authentication Service
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Palo Alto Networks Cloud Identity Engine - Cloud Authentication Service based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Palo Alto Networks Cloud Identity Engine - Cloud Authentication Service in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **Palo Alto Networks Cloud Identity Engine - Cloud Authentication Service**.
+
+ ![Screenshot of the Palo Alto Networks Cloud Identity Engine - Cloud Authentication Service link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your Palo Alto Networks Cloud Identity Engine - Cloud Authentication Service Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Palo Alto Networks Cloud Identity Engine - Cloud Authentication Service. If the connection fails, ensure your Palo Alto Networks account has Admin permissions and try again.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Palo Alto Networks Cloud Identity Engine - Cloud Authentication Service**.
+
+1. Review the user attributes that are synchronized from Azure AD to Palo Alto Networks Cloud Identity Engine - Cloud Authentication Service in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Palo Alto Networks Cloud Identity Engine - Cloud Authentication Service for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Palo Alto Networks Cloud Identity Engine - Cloud Authentication Service API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Palo Alto Networks Cloud Identity Engine - Cloud Authentication Service|
+ |||||
+ |userName|String|&check;|&check;
+ |active|Boolean||&check;
+ |displayName|String||&check;
+ |title|String||
+ |emails[type eq "work"].value|String||
+ |emails[type eq "other"].value|String||
+ |preferredLanguage|String||
+ |name.givenName|String||&check;
+ |name.familyName|String||&check;
+ |name.formatted|String||&check;
+ |name.honorificSuffix|String||
+ |name.honorificPrefix|String||
+ |addresses[type eq "work"].formatted|String||
+ |addresses[type eq "work"].streetAddress|String||
+ |addresses[type eq "work"].locality|String||
+ |addresses[type eq "work"].region|String||
+ |addresses[type eq "work"].postalCode|String||
+ |addresses[type eq "work"].country|String||
+ |addresses[type eq "other"].formatted|String||
+ |addresses[type eq "other"].streetAddress|String||
+ |addresses[type eq "other"].locality|String||
+ |addresses[type eq "other"].region|String||
+ |addresses[type eq "other"].postalCode|String||
+ |addresses[type eq "other"].country|String||
+ |phoneNumbers[type eq "work"].value|String||
+ |phoneNumbers[type eq "mobile"].value|String||
+ |phoneNumbers[type eq "fax"].value|String||
+ |externalId|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|String||
+
+ > [!NOTE]
+ > **Schema Discovery** is enabled on this app, so you might see more attributes in the application than are mentioned in the table above.
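The required attributes in the table above amount to a small mapping from an Azure AD user record onto a SCIM 2.0 user resource. A minimal sketch — the Azure AD source field names used here (`userPrincipalName`, `accountDisabled`, and so on) are illustrative assumptions, not the exact attribute expressions of this integration's default mapping:

```python
def to_scim_user(aad_user):
    """Map a minimal Azure AD user record to a SCIM 2.0 user payload,
    covering the attributes the table marks as required. Illustrative
    only; the source field names are assumptions.
    """
    return {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        # userName is the matching attribute used for update operations.
        "userName": aad_user["userPrincipalName"],
        "active": not aad_user.get("accountDisabled", False),
        "displayName": aad_user["displayName"],
        "name": {
            "givenName": aad_user["givenName"],
            "familyName": aad_user["surname"],
            "formatted": f"{aad_user['givenName']} {aad_user['surname']}",
        },
    }

user = to_scim_user({
    "userPrincipalName": "adelev@contoso.example",
    "displayName": "Adele Vance",
    "givenName": "Adele",
    "surname": "Vance",
})
print(user["userName"])  # -> adelev@contoso.example
```

Because `userName` is the matching attribute, the target API must support filtering users on it — which is why changing the matching target attribute requires checking the API's filtering support, as noted above.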
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Palo Alto Networks Cloud Identity Engine - Cloud Authentication Service**.
+
+1. Review the group attributes that are synchronized from Azure AD to Palo Alto Networks Cloud Identity Engine - Cloud Authentication Service in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Palo Alto Networks Cloud Identity Engine - Cloud Authentication Service for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Palo Alto Networks Cloud Identity Engine - Cloud Authentication Service|
+ |||||
+ |displayName|String|&check;|&check;
+ |members|Reference||
+
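The group mapping is smaller still: `displayName` is the matching attribute, and `members` carries references to already-provisioned users. A hedged sketch of the resulting SCIM 2.0 Group payload (the member IDs below are placeholders):

```python
def to_scim_group(display_name, member_ids):
    """Build a SCIM 2.0 Group payload. displayName is the matching
    attribute; each member entry references a provisioned user's ID.
    Illustrative sketch, not a captured payload from this integration.
    """
    return {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:Group"],
        "displayName": display_name,
        "members": [{"value": member_id} for member_id in member_ids],
    }

group = to_scim_group("Sales", ["user-id-1", "user-id-2"])
print(group["displayName"])  # -> Sales
```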
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Palo Alto Networks Cloud Identity Engine - Cloud Authentication Service, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to Palo Alto Networks Cloud Identity Engine - Cloud Authentication Service by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you are ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
++
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
aks Azure Blob Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-blob-csi.md
Title: Use Container Storage Interface (CSI) driver for Azure Blob storage on Az
description: Learn how to use the Container Storage Interface (CSI) driver for Azure Blob storage (preview) in an Azure Kubernetes Service (AKS) cluster. Previously updated : 08/08/2022 Last updated : 08/10/2022
To have a storage volume persist for your workload, you can use a StatefulSet. T
[managed-disk-pricing-performance]: https://azure.microsoft.com/pricing/details/managed-disks/ [csi-specification]: https://github.com/container-storage-interface/spec/blob/master/spec.md [csi-blob-storage-open-source-driver]: https://github.com/kubernetes-sigs/blob-csi-driver
-[csi-blob-storage-open-source-driver-uninstall-steps]: https://github.com/kubernetes-sigs/blob-csi-driver](https://github.com/kubernetes-sigs/blob-csi-driver/blob/master/docs/install-csi-driver-master.md#clean-up-blob-csi-driver
+[csi-blob-storage-open-source-driver-uninstall-steps]: https://github.com/kubernetes-sigs/blob-csi-driver/blob/master/docs/install-csi-driver-master.md#clean-up-blob-csi-driver
<!-- LINKS - internal --> [install-azure-cli]: /cli/azure/install-azure-cli
aks Azure Disk Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-csi.md
In addition to in-tree driver features, Azure Disks CSI driver supports the foll
- Performance improvements during concurrent disk attach and detach - In-tree drivers attach or detach disks in serial, while CSI drivers attach or detach disks in batch. There's significant improvement when there are multiple disks attaching to one node. - Zone-redundant storage (ZRS) disk support
- - `Premium_ZRS`, `StandardSSD_ZRS` disk types are supported, ZRS disk could be scheduled on the zone or non-zone node, without the restriction that disk volume should be co-located in the same zone as a given node, check more details about [Zone-redundant storage for managed disks](../virtual-machines/disks-redundancy.md)
+ - `Premium_ZRS` and `StandardSSD_ZRS` disk types are supported. A ZRS disk can be scheduled on a zonal or non-zonal node, without the restriction that the disk volume be co-located in the same zone as a given node. For more information, including which regions are supported, see [Zone-redundant storage for managed disks](../virtual-machines/disks-redundancy.md).
- [Snapshot](#volume-snapshots) - [Volume clone](#clone-volumes) - [Resize disk PV without downtime(Preview)](#resize-a-persistent-volume-without-downtime-preview)
aks Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-storage.md
Title: Concepts - Storage in Azure Kubernetes Services (AKS)
description: Learn about storage in Azure Kubernetes Service (AKS), including volumes, persistent volumes, storage classes, and claims Previously updated : 03/08/2022 Last updated : 08/10/2022 # Storage options for applications in Azure Kubernetes Service (AKS)
-Applications running in Azure Kubernetes Service (AKS) may need to store and retrieve data. While some application workloads can use local, fast storage on unneeded, emptied nodes, others require storage that persists on more regular data volumes within the Azure platform.
+Applications running in Azure Kubernetes Service (AKS) may need to store and retrieve data. While some application workloads can use local, fast storage on unneeded, emptied nodes, others require storage that persists on more regular data volumes within the Azure platform.
Multiple pods may need to:
-* Share the same data volumes.
-* Reattach data volumes if the pod is rescheduled on a different node.
-Finally, you may need to inject sensitive data or application configuration information into pods.
+* Share the same data volumes.
+* Reattach data volumes if the pod is rescheduled on a different node.
+
+Finally, you might need to collect and store sensitive data or application configuration information in pods.
This article introduces the core concepts that provide storage to your applications in AKS:
This article introduces the core concepts that provide storage to your applicati
Kubernetes typically treats individual pods as ephemeral, disposable resources. Applications have different approaches available to them for using and persisting data. A *volume* represents a way to store, retrieve, and persist data across pods and through the application lifecycle.
-Traditional volumes are created as Kubernetes resources backed by Azure Storage. You can manually create data volumes to be assigned to pods directly, or have Kubernetes automatically create them. Data volumes can use: [Azure Disks][disks-types], [Azure Files][storage-files-planning], [Azure NetApp Files][azure-netapp-files-service-levels], or [Azure Blobs][storage-account-overview].
+Traditional volumes are created as Kubernetes resources backed by Azure Storage. You can manually create data volumes to be assigned to pods directly, or have Kubernetes automatically create them. Data volumes can use: [Azure Disks][disks-types], [Azure Files][storage-files-planning], [Azure NetApp Files][azure-netapp-files-service-levels], or [Azure Blobs][storage-account-overview].
### Azure Disks
-Use *Azure Disks* to create a Kubernetes *DataDisk* resource. Disks types include:
+Use *Azure Disks* to create a Kubernetes *DataDisk* resource. Disk types include:
+ * Ultra Disks * Premium SSDs * Standard SSDs * Standard HDDs > [!TIP]
->For most production and development workloads, use Premium SSD.
+>For most production and development workloads, use Premium SSD.
Since Azure Disks are mounted as *ReadWriteOnce*, they're only available to a single node. For storage volumes that can be accessed by pods on multiple nodes simultaneously, use Azure Files. ### Azure Files
-Use *Azure Files* to mount an SMB 3.1.1 share or NFS 4.1 share backed by an Azure storage accounts to pods. Files let you share data across multiple nodes and pods and can use:
+
+Use *Azure Files* to mount a Server Message Block (SMB) version 3.1.1 share or Network File System (NFS) version 4.1 share backed by an Azure storage account to pods. Files let you share data across multiple nodes and pods and can use:
+ * Azure Premium storage backed by high-performance SSDs * Azure Standard storage backed by regular HDDs ### Azure NetApp Files
-* Ultra Storage
+
+* Ultra Storage
* Premium Storage
-* Standard Storage
+* Standard Storage
### Azure Blob Storage
-* Block Blobs
+
+Use *Azure Blob Storage* to create a blob storage container and mount it using the NFS v3.0 protocol or BlobFuse.
+
+* Block Blobs
### Volume types
-Kubernetes volumes represent more than just a traditional disk for storing and retrieving information. Kubernetes volumes can also be used as a way to inject data into a pod for use by the containers.
+
+Kubernetes volumes represent more than just a traditional disk for storing and retrieving information. Kubernetes volumes can also be used as a way to inject data into a pod for use by the containers.
Common volume types in Kubernetes include:
Commonly used as temporary space for a pod. All containers within a pod can acce
#### secret
-You can use *secret* volumes to inject sensitive data into pods, such as passwords.
-1. Create a Secret using the Kubernetes API.
-1. Define your pod or deployment and request a specific Secret.
+You can use *secret* volumes to inject sensitive data, such as passwords, into pods.
+
+1. Create a Secret using the Kubernetes API.
+1. Define your pod or deployment and request a specific Secret.
* Secrets are only provided to nodes with a scheduled pod that requires them.
- * The Secret is stored in *tmpfs*, not written to disk.
+ * The Secret is stored in *tmpfs*, not written to disk.
1. When you delete the last pod on a node requiring a Secret, the Secret is deleted from the node's tmpfs. * Secrets are stored within a given namespace and can only be accessed by pods within the same namespace. #### configMap
-You can use *configMap* to inject key-value pair properties into pods, such as application configuration information. Define application configuration information as a Kubernetes resource, easily updated and applied to new instances of pods as they're deployed.
+You can use *configMap* to inject key-value pair properties, such as application configuration information, into pods. Defining application configuration information as a Kubernetes resource lets it be easily updated and applied to new instances of pods as they're deployed.
-Like using a Secret:
-1. Create a ConfigMap using the Kubernetes API.
-1. Request the ConfigMap when you define a pod or deployment.
+Like using a secret:
+
+1. Create a ConfigMap using the Kubernetes API.
+1. Request the ConfigMap when you define a pod or deployment.
* ConfigMaps are stored within a given namespace and can only be accessed by pods within the same namespace. ## Persistent volumes
For clusters using the [Container Storage Interface (CSI) drivers][csi-storage-d
| `managed-csi-premium` | Uses Azure Premium locally redundant storage (LRS) to create a Managed Disk. The reclaim policy again ensures that the underlying Azure Disk is deleted when the persistent volume that used it is deleted. Similarly, this storage class allows for persistent volumes to be expanded. | | `azurefile-csi` | Uses Azure Standard storage to create an Azure File Share. The reclaim policy ensures that the underlying Azure File Share is deleted when the persistent volume that used it is deleted. | | `azurefile-csi-premium` | Uses Azure Premium storage to create an Azure File Share. The reclaim policy ensures that the underlying Azure File Share is deleted when the persistent volume that used it is deleted.|
+| `azureblob-nfs-premium` | Uses Azure Premium storage to create an Azure Blob storage container and connect using the NFS v3 protocol. The reclaim policy ensures that the underlying Azure Blob storage container is deleted when the persistent volume that used it is deleted. |
+| `azureblob-fuse-premium` | Uses Azure Premium storage to create an Azure Blob storage container and connect using BlobFuse. The reclaim policy ensures that the underlying Azure Blob storage container is deleted when the persistent volume that used it is deleted. |
Unless you specify a StorageClass for a persistent volume, the default StorageClass will be used. Ensure volumes use the appropriate storage you need when requesting persistent volumes.
allowVolumeExpansion: true
## Persistent volume claims
-A PersistentVolumeClaim requests storage of a particular StorageClass, access mode, and size. The Kubernetes API server can dynamically provision the underlying Azure storage resource if no existing resource can fulfill the claim based on the defined StorageClass.
+A PersistentVolumeClaim requests storage of a particular StorageClass, access mode, and size. The Kubernetes API server can dynamically provision the underlying Azure storage resource if no existing resource can fulfill the claim based on the defined StorageClass.
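
As a minimal sketch of such a claim (the `managed-csi` class is one of the built-in AKS storage classes; the claim name and size here are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-managed-disk   # illustrative claim name
spec:
  accessModes:
  - ReadWriteOnce            # Azure Disks mount as ReadWriteOnce
  storageClassName: managed-csi
  resources:
    requests:
      storage: 5Gi
```

If no existing resource can satisfy the claim, the `managed-csi` class dynamically provisions an Azure Disk to back it.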
The pod definition includes the volume mount once the volume has been connected to the pod.
spec:
``` When you create a pod definition, you also specify:
-* The persistent volume claim to request the desired storage.
-* The *volumeMount* for your applications to read and write data.
+
+* The persistent volume claim to request the desired storage.
+* The *volumeMount* for your applications to read and write data.
The following example YAML manifest shows how the previous persistent volume claim can be used to mount a volume at */mnt/azure*:
For associated best practices, see [Best practices for storage and backups in AK
To see how to use CSI drivers, see the following how-to articles: -- [Enable Container Storage Interface(CSI) drivers for Azure disks and Azure Files on Azure Kubernetes Service(AKS)][csi-storage-drivers]-- [Use Azure disk Container Storage Interface(CSI) drivers in Azure Kubernetes Service(AKS)][azure-disk-csi]-- [Use Azure Files Container Storage Interface(CSI) drivers in Azure Kubernetes Service(AKS)][azure-files-csi]-- [Integrate Azure NetApp Files with Azure Kubernetes Service][azure-netapp-files]
+- [Enable Container Storage Interface (CSI) drivers for Azure Disks, Azure Files, and Azure Blob storage on Azure Kubernetes Service][csi-storage-drivers]
+- [Use Azure Disks CSI driver in Azure Kubernetes Service][azure-disk-csi]
+- [Use Azure Files CSI driver in Azure Kubernetes Service][azure-files-csi]
+- [Use Azure Blob storage CSI driver (preview) in Azure Kubernetes Service][azure-blob-csi]
+- [Integrate Azure NetApp Files with Azure Kubernetes Service][azure-netapp-files]
For more information on core Kubernetes and AKS concepts, see the following articles:
For more information on core Kubernetes and AKS concepts, see the following arti
[aks-concepts-network]: concepts-network.md [operator-best-practices-storage]: operator-best-practices-storage.md [csi-storage-drivers]: csi-storage-drivers.md
+[azure-blob-csi]: azure-blob-csi.md
aks Configure Azure Cni https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-azure-cni.md
az aks create \
The following screenshot from the Azure portal shows an example of configuring these settings during AKS cluster creation:
-![Advanced networking configuration in the Azure portal][portal-01-networking-advanced]
## Dynamic allocation of IPs and enhanced subnet support
aks Open Service Mesh Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-integrations.md
The Open Service Mesh (OSM) add-on integrates with features provided by Azure as
## Ingress
-Ingress allows for traffic external to the mesh to be routed to services within the mesh. With OSM, you can configure most ingress solutions to work with your mesh, but OSM works best with [Web Application Routing][web-app-routing], [NGINX ingress][osm-nginx], or [Contour ingress][osm-contour]. Open source projects integrating with OSM, including NGINX ingress and Contour ingress, aren't covered by the [AKS support policy][aks-support-policy].
+Ingress allows for traffic external to the mesh to be routed to services within the mesh. With OSM, you can configure most ingress solutions to work with your mesh, but OSM works best with [Web Application Routing][web-app-routing], [NGINX ingress][osm-nginx], or [Contour ingress][osm-contour]. Open source projects integrating with OSM are not covered by the [AKS support policy][aks-support-policy].
-Using [Azure Gateway Ingress Controller (AGIC)][agic] for ingress with OSM isn't supported and not recommended.
+At this time, [Azure Gateway Ingress Controller (AGIC)][agic] only works for HTTP backends. If you configure OSM to use AGIC, AGIC will not be used for other backends such as HTTPS and mTLS.
+
+### Using the Azure Gateway Ingress Controller (AGIC) with the OSM add-on for HTTP ingress
+
+> [!IMPORTANT]
+> You can't configure [Azure Gateway Ingress Controller (AGIC)][agic] for HTTPS ingress.
+
+After installing the AGIC ingress controller, create a namespace for the application service, add it to the mesh using the OSM CLI, and deploy the application service to that namespace:
+
+```console
+# Create a namespace
+kubectl create ns httpbin
+
+# Add the namespace to the mesh
+osm namespace add httpbin
+
+# Deploy the application
+
+export RELEASE_BRANCH=release-v1.2
+kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/$RELEASE_BRANCH/manifests/samples/httpbin/httpbin.yaml -n httpbin
+```
+
+Verify that the pods are up and running, and have the envoy sidecar injected:
+
+```console
+kubectl get pods -n httpbin
+```
+
+Example output:
+
+```console
+NAME READY STATUS RESTARTS AGE
+httpbin-7c6464475-9wrr8 2/2 Running 0 6d20h
+```
+
+```console
+kubectl get svc -n httpbin
+```
+
+Example output:
+
+```console
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+httpbin ClusterIP 10.0.92.135 <none> 14001/TCP 6d20h
+```
+
+Next, deploy the following `Ingress` and `IngressBackend` configurations to allow external clients to access the `httpbin` service on port `14001`.
+
+```console
+kubectl apply -f - <<EOF
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ name: httpbin
+ namespace: httpbin
+ annotations:
+ kubernetes.io/ingress.class: azure/application-gateway
+spec:
+ rules:
+ - http:
+ paths:
+ - path: /
+ pathType: Prefix
+ backend:
+ service:
+ name: httpbin
+ port:
+ number: 14001
+---
+kind: IngressBackend
+apiVersion: policy.openservicemesh.io/v1alpha1
+metadata:
+ name: httpbin
+ namespace: httpbin
+spec:
+ backends:
+ - name: httpbin
+ port:
+ number: 14001 # targetPort of httpbin service
+ protocol: http
+ sources:
+ - kind: IPRange
+ name: 10.0.0.0/8
+EOF
+```
+
+Ensure that both the Ingress and IngressBackend objects have been successfully deployed:
+
+```console
+kubectl get ingress -n httpbin
+```
+
+Example output:
+
+```console
+NAME CLASS HOSTS ADDRESS PORTS AGE
+httpbin <none> * 20.85.173.179 80 6d20h
+```
+
+```console
+kubectl get ingressbackend -n httpbin
+```
+
+Example output:
+
+```console
+NAME STATUS
+httpbin committed
+```
+
+Use `kubectl` to display the external IP address of the ingress service.
+
+```console
+kubectl get ingress -n httpbin
+```
+
+Use `curl` to verify you can access the `httpbin` service using the external IP address of the ingress service.
+
+```console
+curl -sI http://<external-ip>/get
+```
+
+Confirm you receive a response with `status 200`.
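
A hedged sketch of that status check — the `response` line below is a stand-in so the snippet is self-contained; against a live cluster you would capture the first header line from `curl -sI http://<external-ip>/get` instead:

```shell
# Stand-in for the first header line that curl -sI returns from httpbin.
response='HTTP/1.1 200 OK'

# The status code is the second whitespace-separated field of that line.
status=$(echo "$response" | awk '{print $2}')
echo "$status"
```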
## Metrics observability
aks Use Kms Etcd Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-kms-etcd-encryption.md
Title: Use KMS etcd encryption in Azure Kubernetes Service (AKS)
description: Learn how to use kms etcd encryption with Azure Kubernetes Service (AKS) Previously updated : 07/26/2022 Last updated : 08/10/2022
The above example stores the value of the Identity Resource ID in *IDENTITY_RESO
### Assign permissions (decrypt and encrypt) to access key vault
-Use `az keyvault set-policy` to create an Azure KeyVault policy.
+#### For non-RBAC key vault
+
+If your key vault is not enabled with `--enable-rbac-authorization`, you can use `az keyvault set-policy` to create an Azure Key Vault policy.
```azurecli-interactive az keyvault set-policy -n MyKeyVault --key-permissions decrypt encrypt --object-id $IDENTITY_OBJECT_ID ```
+#### For RBAC key vault
+
+If your key vault is enabled with `--enable-rbac-authorization`, you need to assign an RBAC role that includes the decrypt and encrypt permissions, such as the "Key Vault Crypto User" role.
+
+```azurecli-interactive
+az role assignment create --role "Key Vault Crypto User" --assignee-object-id $IDENTITY_OBJECT_ID --assignee-principal-type "ServicePrincipal" --scope $KEYVAULT_RESOURCE_ID
+```
+ ### Create an AKS cluster with KMS etcd encryption enabled Create an AKS cluster using the [az aks create][az-aks-create] command with the `--enable-azure-keyvault-kms`, `--azure-keyvault-kms-key-vault-network-access` and `--azure-keyvault-kms-key-id` parameters to enable KMS etcd encryption.
The above example stores the value of the Identity Resource ID in *IDENTITY_RESO
### Assign permissions (decrypt and encrypt) to access key vault
-Use `az keyvault set-policy` to create an Azure KeyVault policy.
+#### For non-RBAC key vault
+
+If your key vault is not enabled with `--enable-rbac-authorization`, you can use `az keyvault set-policy` to create an Azure Key Vault policy.
```azurecli-interactive az keyvault set-policy -n MyKeyVault --key-permissions decrypt encrypt --object-id $IDENTITY_OBJECT_ID ```
+#### For RBAC key vault
+
+If your key vault is enabled with `--enable-rbac-authorization`, you need to assign an RBAC role that includes at least the decrypt and encrypt permissions.
+
+```azurecli-interactive
+az role assignment create --role "Key Vault Crypto User" --assignee-object-id $IDENTITY_OBJECT_ID --assignee-principal-type "ServicePrincipal" --scope $KEYVAULT_RESOURCE_ID
+```
+
+### Assign permission for creating private link
+ For a private key vault, AKS needs the *Key Vault Contributor* role to create a private link between the private key vault and the cluster. ```azurecli-interactive
analysis-services Analysis Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-overview.md
Title: What is Azure Analysis Services | Microsoft Docs
+ Title: What is Azure Analysis Services?
description: Learn about Azure Analysis Services, a fully managed platform as a service (PaaS) that provides enterprise-grade data models in the cloud.
Things are changing rapidly. Get the latest information on the [Power BI blog](h
## Q&A
-Microsoft [Q&A](/answers/products/) is a technical community platform part of Microsoft Docs that provides a rich online experience in answering your technical questions. Join the conversation on [Q&A - Azure Analysis Services forum](/answers/topics/azure-analysis-services.html).
+Microsoft [Q&A](/answers/products/) is a technical community platform that provides a rich online experience in answering your technical questions. Join the conversation on [Q&A - Azure Analysis Services forum](/answers/topics/azure-analysis-services.html).
## Next steps
app-service App Service Sql Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-sql-github-actions.md
To run the create Azure resources workflow:
1. Open the `azuredeploy.yaml` file in `.github/workflows` within your repository.
-1. Update the value of `AZURE_RESOURCE_GROUP` to your resource group name.
+1. Update the value of `AZURE_RESOURCE_GROUP` to your resource group name.
+
+1. Update the values of `WEB_APP_NAME` and `SQL_SERVER_NAME` to your web app name and SQL server name.
1. Go to **Actions** and select **Run workflow**.
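
The values edited in the steps above typically live in an `env` block near the top of `azuredeploy.yaml`; a sketch with hypothetical values:

```yaml
env:
  AZURE_RESOURCE_GROUP: my-resource-group   # hypothetical values; use your own
  WEB_APP_NAME: my-web-app
  SQL_SERVER_NAME: my-sql-server
```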
To run the build, push, and deploy workflow:
## Next steps > [!div class="nextstepaction"]
-> [Learn about Azure and GitHub integration](/azure/developer/github/)
+> [Learn about Azure and GitHub integration](/azure/developer/github/)
app-service Configure Authentication Provider Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-provider-aad.md
At present, this allows _any_ client application in your Azure AD tenant to requ
You have now configured a daemon client application that can access your App Service app using its own identity. > [!NOTE]
-> The access tokens provided to your app via EasyAuth do not have scopes for other APIs, such as Graph, even if your application has permissions to access those APIs.
- To use these APIs, you will need to use Azure Resource Manager to configure the token returned so it can be used to authenticate to other services. You can see an example of [this tutorial](https://docs.microsoft.com/azure/app-service/scenario-secure-app-access-microsoft-graph-as-user?tabs=azure-resource-explorer)
+> The access tokens provided to your app via EasyAuth do not have scopes for other APIs, such as Graph, even if your application has permissions to access those APIs. To use these APIs, you will need to use Azure Resource Manager to configure the token returned so it can be used to authenticate to other services. For more information, see [Tutorial: Access Microsoft Graph from a secured .NET app as the user](/azure/app-service/scenario-secure-app-access-microsoft-graph-as-user?tabs=azure-resource-explorer).
## Best practices
app-service Configure Language Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-python.md
App Service ignores any errors that occur when processing a custom startup comma
gunicorn --bind=0.0.0.0 --timeout 600 --workers=4 --chdir <module_path> <module>.wsgi ```
- For more information, see [Running Gunicorn](https://docs.gunicorn.org/en/stable/run.html) (docs.gunicorn.org).
+ For more information, see [Running Gunicorn](https://docs.gunicorn.org/en/stable/run.html) (docs.gunicorn.org). If you are using scale rules to scale your web app up and down, you can dynamically set the number of workers using the `NUM_CORES` environment variable in your startup command, for example: `--workers $((($NUM_CORES*2)+1))`. For more information on setting the recommended number of gunicorn workers, see [the Gunicorn FAQ](https://docs.gunicorn.org/en/stable/design.html#how-many-workers).
- **Enable production logging for Django**: Add the `--access-logfile '-'` and `--error-logfile '-'` arguments to the command line:
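
The worker formula above expands like this in shell arithmetic (the core count here is a hypothetical example; at runtime App Service supplies `NUM_CORES` for you):

```shell
# Hypothetical core count; App Service sets NUM_CORES at runtime.
NUM_CORES=4

# Gunicorn's recommended worker count: (2 x cores) + 1.
WORKERS=$((($NUM_CORES*2)+1))
echo "$WORKERS"   # prints 9 for a 4-core instance
```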
app-service Configure Ssl Bindings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-bindings.md
If your app already has a certificate for the selected custom domain, go to [Cre
If your app has no certificate for the selected custom domain, then you have two options: - **Upload PFX Certificate** - Follow the workflow at [Upload a private certificate](configure-ssl-certificate.md#upload-a-private-certificate), then select this option here.-- **Import App Service Certificate** - Follow the workflow at [Import an App Service certificate](configure-ssl-certificate.md#import-an-app-service-certificate), then select this option here.
+- **Import App Service Certificate** - Follow the workflow at [Import an App Service certificate](configure-ssl-certificate.md#buy-and-import-app-service-certificate), then select this option here.
> [!NOTE] > You can also [Create a free certificate](configure-ssl-certificate.md#create-a-free-managed-certificate) or [Import a Key Vault certificate](configure-ssl-certificate.md#import-a-certificate-from-key-vault), but you must do it separately and then return to the **TLS/SSL Binding** dialog.
Your inbound IP address can change when you delete a binding, even if that bindi
## Enforce HTTPS
-By default, anyone can still access your app using HTTP. You can redirect all HTTP requests to the HTTPS port.
- In your app page, in the left navigation, select **TLS/SSL settings**. Then, in **HTTPS Only**, select **On**.
+If **HTTPS Only** is set to **Off**, anyone can still access your app using HTTP. To redirect all HTTP requests to the HTTPS port, select **On**.
+ ![Enforce HTTPS](./media/configure-ssl-bindings/enforce-https.png) When the operation is complete, navigate to any of the HTTP URLs that point to your app. For example:
Language specific configuration guides, such as the [Linux Node.js configuration
## More resources * [Use a TLS/SSL certificate in your code in Azure App Service](configure-ssl-certificate-in-code.md)
-* [FAQ : App Service Certificates](./faq-configuration-and-management.yml)
+* [FAQ : App Service Certificates](./faq-configuration-and-management.yml)
app-service Configure Ssl Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-certificate.md
description: Create a free certificate, import an App Service certificate, impor
tags: buy-ssl-certificates Previously updated : 04/27/2022 Last updated : 07/28/2022
-# Add a TLS/SSL certificate in Azure App Service
-[Azure App Service](overview.md) provides a highly scalable, self-patching web hosting service. This article shows you how to create, upload, or import a private certificate or a public certificate into App Service.
+# Secure connections by adding and managing TLS/SSL certificates in Azure App Service
-Once the certificate is added to your App Service app or [function app](../azure-functions/index.yml), you can [secure a custom DNS name with it](configure-ssl-bindings.md) or [use it in your application code](configure-ssl-certificate-in-code.md).
+You can add digital security certificates to [use in your application code](configure-ssl-certificate-in-code.md) or to [secure custom DNS names](configure-ssl-bindings.md) in [Azure App Service](overview.md), which provides a highly scalable, self-patching web hosting service. These private or public certificates, now called Transport Layer Security (TLS) certificates and previously known as Secure Sockets Layer (SSL) certificates, help you secure internet connections by encrypting data sent between your browser, the websites you visit, and the website server.
-> [!NOTE]
-> A certificate uploaded into an app is stored in a deployment unit that is bound to the app service plan's resource group, region and operating system combination (internally called a *webspace*). This makes the certificate accessible to other apps in the same resource group and region combination.
-
-The following table lists the options you have for adding certificates in App Service:
+The following table lists the options for you to add certificates in App Service:
|Option|Description| |-|-|
The following table lists the options you have for adding certificates in App Se
| Upload a private certificate | If you already have a private certificate from a third-party provider, you can upload it. See [Private certificate requirements](#private-certificate-requirements). | | Upload a public certificate | Public certificates are not used to secure custom domains, but you can load them into your code if you need them to access remote resources. |
+> [!NOTE]
+> After you upload a certificate to an app, the certificate is stored in a deployment unit that's bound to the App Service plan's resource group, region, and operating system combination, internally called a *webspace*. That way, the certificate is accessible to other apps in the same resource group and region combination.
+ ## Prerequisites - [Create an App Service app](./index.yml).+ - For a private certificate, make sure that it satisfies all [requirements from App Service](#private-certificate-requirements).+ - **Free certificate only**:
- - Map the domain you want a certificate for to App Service. For information, see [Tutorial: Map an existing custom DNS name to Azure App Service](app-service-web-tutorial-custom-domain.md).
- - For a root domain (like contoso.com), make sure your app doesn't have any [IP restrictions](app-service-ip-restrictions.md) configured. Both certificate creation and its periodic renewal for a root domain depends on your app being reachable from the internet.
+
+ - Map the domain where you want the certificate to App Service. For information, see [Tutorial: Map an existing custom DNS name to Azure App Service](app-service-web-tutorial-custom-domain.md).
+
+ - For a root domain (like contoso.com), make sure your app doesn't have any [IP restrictions](app-service-ip-restrictions.md) configured. Both certificate creation and its periodic renewal for a root domain depends on your app being reachable from the internet.
## Private certificate requirements
-The [free App Service managed certificate](#create-a-free-managed-certificate) and the [App Service certificate](#import-an-app-service-certificate) already satisfy the requirements of App Service. If you choose to upload or import a private certificate to App Service, your certificate must meet the following requirements:
+The [free App Service managed certificate](#create-a-free-managed-certificate) and the [App Service certificate](#buy-and-import-app-service-certificate) already satisfy the requirements of App Service. If you choose to upload or import a private certificate to App Service, your certificate must meet the following requirements:
* Exported as a [password-protected PFX file](https://en.wikipedia.org/w/index.php?title=X.509&section=4#Certificate_filename_extensions), encrypted using triple DES. * Contains a private key at least 2048 bits long. * Contains all intermediate certificates and the root certificate in the certificate chain.
-To secure a custom domain in a TLS binding, the certificate has additional requirements:
+To secure a custom domain in a TLS binding, the certificate has more requirements:
* Contains an [Extended Key Usage](https://en.wikipedia.org/w/index.php?title=X.509&section=4#Extensions_informing_a_specific_usage_of_a_certificate) for server authentication (OID = 1.3.6.1.5.5.7.3.1) * Signed by a trusted certificate authority > [!NOTE]
-> **Elliptic Curve Cryptography (ECC) certificates** can work with App Service but are not covered by this article. Work with your certificate authority on the exact steps to create ECC certificates.
+> **Elliptic Curve Cryptography (ECC) certificates** work with App Service but aren't covered by this article. For the exact steps to create ECC certificates, work with your certificate authority.
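
The upload requirements above can be exercised locally with OpenSSL; this is only a sketch using throwaway, self-signed material (a real certificate must be signed by a trusted CA and bundle its intermediate chain, for example via `-certfile`):

```shell
# Generate a throwaway 2048-bit key and self-signed certificate (30 days).
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem \
  -days 30 -nodes -subj "/CN=contoso.com"

# Export a password-protected PFX, forcing triple-DES encryption for both
# the key and the certificate, as App Service expects.
openssl pkcs12 -export -out appservice.pfx -inkey key.pem -in cert.pem \
  -keypbe PBE-SHA1-3DES -certpbe PBE-SHA1-3DES -passout pass:Example123
```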
[!INCLUDE [Prepare your web app](../../includes/app-service-ssl-prepare-app.md)]

## Create a free managed certificate
-> [!NOTE]
-> Before creating a free managed certificate, make sure you have [fulfilled the prerequisites](#prerequisites) for your app.
-
-The free App Service managed certificate is a turn-key solution for securing your custom DNS name in App Service. It's a TLS/SSL server certificate that's fully managed by App Service and renewed continuously and automatically in six-month increments, 45 days before expiration, as long as the prerequisites set-up remain the same without any action required from you. All the associated bindings will be updated with the renewed certificate. You create the certificate and bind it to a custom domain, and let App Service do the rest.
+The free App Service managed certificate is a turn-key solution for securing your custom DNS name in App Service. Without any action required from you, this TLS/SSL server certificate is fully managed by App Service and is renewed automatically in six-month increments, 45 days before expiration, as long as the prerequisites that you set up stay the same. All the associated bindings are updated with the renewed certificate. You create and bind the certificate to a custom domain, and let App Service do the rest.
> [!IMPORTANT]
-> Because Azure fully manages the certificates on your behalf, any aspect of the managed certificate, including the root issuer, can be changed at anytime. These changes are outside of your control. You should avoid having a hard dependency or practice certificate "pinning" to the managed certificate, or to any part of the certificate hierarchy. If you need the certificate pinning behavior, add a certificate to your custom domain using any other available method in this article.
+> Before you create a free managed certificate, make sure you have [met the prerequisites](#prerequisites) for your app.
+>
+> Free certificates are issued by DigiCert. For some domains, you must explicitly allow DigiCert as a certificate issuer by creating a [CAA domain record](https://wikipedia.org/wiki/DNS_Certification_Authority_Authorization) with the value: `0 issue digicert.com`.
+>
+> Azure fully manages the certificates on your behalf, so any aspect of the managed certificate, including the root issuer, can change at any time. These changes are outside your control. Avoid hard dependencies and certificate "pinning" to the managed certificate or to any part of the certificate hierarchy. If you need the certificate pinning behavior, add a certificate to your custom domain using any other available method in this article.
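If Azure DNS hosts your zone, you can add the CAA record for DigiCert with the Azure CLI. This is a sketch; the resource group and zone names are placeholders:

```shell
# Allow DigiCert to issue certificates for the zone (hypothetical names).
az network dns record-set caa add-record \
  --resource-group my-dns-resource-group \
  --zone-name contoso.com \
  --record-set-name "@" \
  --flags 0 \
  --tag issue \
  --value digicert.com

# Verify the record from any machine (propagation can take a few minutes).
dig +short CAA contoso.com
```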
The free certificate comes with the following limitations:
-- Does not support wildcard certificates.
-- Does not support usage as a client certificate by using certificate thumbprint (removal of certificate thumbprint is planned).
-- Does not support private DNS.
-- Is not exportable.
-- Is not supported on App Service Environment (ASE).
+- Doesn't support wildcard certificates.
+- Doesn't support usage as a client certificate by using certificate thumbprint, which is planned for deprecation and removal.
+- Doesn't support private DNS.
+- Isn't exportable.
+- Isn't supported in an App Service Environment (ASE).
- Only supports alphanumeric characters, dashes (-), and periods (.).
-# [Apex domain](#tab/apex)
+### [Apex domain](#tab/apex)
- Must have an A record pointing to your web app's IP address.
-- Is not supported on apps that are not publicly accessible.
-- Is not supported with root domains that are integrated with Traffic Manager.
-- All the above must be met for successful certificate issuances and renewals.
-
-# [Subdomain](#tab/subdomain)
-- Must have CNAME mapped _directly_ to `<app-name>.azurewebsites.net`. Mapping to an intermediate CNAME value will block certificate issuance and renewal.
-- All the above must be met for successful certificate issuance and renewals.
+- Isn't supported on apps that are not publicly accessible.
+- Isn't supported with root domains that are integrated with Traffic Manager.
+- Must meet all the above for successful certificate issuances and renewals.
+### [Subdomain](#tab/subdomain)
+- Must have CNAME mapped _directly_ to `<app-name>.azurewebsites.net`. Mapping to an intermediate CNAME value blocks certificate issuance and renewal.
+- Must meet all the above for successful certificate issuance and renewals.
-> [!NOTE]
-> The free certificate is issued by DigiCert. For some domains, you must explicitly allow DigiCert as a certificate issuer by creating a [CAA domain record](https://wikipedia.org/wiki/DNS_Certification_Authority_Authorization) with the value: `0 issue digicert.com`.
->
+
-In the <a href="https://portal.azure.com" target="_blank">Azure portal</a>, from the left menu, select **App Services** > **\<app-name>**.
+1. In the [Azure portal](https://portal.azure.com), from the left menu, select **App Services** > **\<app-name>**.
-From the left navigation of your app, select **TLS/SSL settings** > **Private Key Certificates (.pfx)** > **Create App Service Managed Certificate**.
+1. On your app's navigation menu, select **TLS/SSL settings**. On the pane that opens, select **Private Key Certificates (.pfx)** > **Create App Service Managed Certificate**.
-![Create free certificate in App Service](./media/configure-ssl-certificate/create-free-cert.png)
+ ![Screenshot of app menu with "TLS/SSL settings", "Private Key Certificates (.pfx)", and "Create App Service Managed Certificate" selected.](./media/configure-ssl-certificate/create-free-cert.png)
-Select the custom domain to create a free certificate for and select **Create**. You can create only one certificate for each supported custom domain.
+1. Select the custom domain for the free certificate, and then select **Create**. You can create only one certificate for each supported custom domain.
-When the operation completes, you see the certificate in the **Private Key Certificates** list.
+ When the operation completes, the certificate appears in the **Private Key Certificates** list.
-![Create free certificate finished](./media/configure-ssl-certificate/create-free-cert-finished.png)
+ ![Screenshot of "Private Key Certificates" pane with newly created certificate listed.](./media/configure-ssl-certificate/create-free-cert-finished.png)
-> [!IMPORTANT]
-> To secure a custom domain with this certificate, you still need to create a certificate binding. Follow the steps in [Create binding](configure-ssl-bindings.md#create-binding).
->
+1. To secure a custom domain with this certificate, you still have to create a certificate binding. Follow the steps in [Create binding](configure-ssl-bindings.md#create-binding).
-## Import an App Service Certificate
+## Buy and import App Service certificate
-If you purchase an App Service Certificate from Azure, Azure manages the following tasks:
+If you purchase an App Service certificate from Azure, Azure manages the following tasks:
-- Takes care of the purchase process from GoDaddy.
+- Handles the purchase process from GoDaddy.
- Performs domain verification of the certificate.
- Maintains the certificate in [Azure Key Vault](../key-vault/general/overview.md).
-- Manages certificate renewal (see [Renew certificate](#renew-an-app-service-certificate)).
-- Synchronize the certificate automatically with the imported copies in App Service apps.
+- Manages [certificate renewal](#renew-app-service-certificate).
+- Synchronizes the certificate automatically with the imported copies in App Service apps.
+
+To purchase an App Service certificate, go to [Start certificate order](#start-certificate-purchase).
-To purchase an App Service certificate, go to [Start certificate order](#start-certificate-order).
+> [!NOTE]
+> Currently, App Service certificates aren't supported in Azure National Clouds.
-If you already have a working App Service certificate, you can:
+If you already have a working App Service certificate, you can complete the following tasks:
- [Import the certificate into App Service](#import-certificate-into-app-service).
-- [Manage the certificate](#manage-app-service-certificates), such as renew, rekey, and export it.
-> [!NOTE]
-> App Service Certificates are not supported in Azure National Clouds at this time.
+- [Manage the App Service certificate](#manage-app-service-certificates), such as renew, rekey, and export.
-### Start certificate order
+### Start certificate purchase
-Start an App Service certificate order in the <a href="https://portal.azure.com/#create/Microsoft.SSL" target="_blank">App Service Certificate create page</a>.
+1. Go to the [App Service Certificate creation page](https://portal.azure.com/#create/Microsoft.SSL), and start your purchase for an App Service certificate.
-> [!NOTE]
-> All prices shown are for examples only.
+ > [!NOTE]
+ > In this article, all prices shown are for example purposes only.
+ >
+ > App Service Certificates purchased from Azure are issued by GoDaddy. For some domains, you must explicitly allow GoDaddy as a certificate issuer by creating a [CAA domain record](https://wikipedia.org/wiki/DNS_Certification_Authority_Authorization) with the value: `0 issue godaddy.com`.
+ :::image type="content" source="./media/configure-ssl-certificate/purchase-app-service-cert.png" alt-text="Screenshot of 'Create App Service Certificate' pane with purchase options.":::
-Use the following table to help you configure the certificate. When finished, click **Create**.
+1. To help you configure the certificate, use the following table. When you're done, select **Create**.
-| Setting | Description |
-|-|-|
-| Subscription | The subscription that will contain the certificate. |
-| Resource group | The resource group that will contain the certificate. You can use a new resource group or select the same resource group as your App Service app, for example. |
-| SKU | Determines the type of certificate to create, whether a standard certificate or a [wildcard certificate](https://wikipedia.org/wiki/Wildcard_certificate). |
-| Naked Domain Host Name | Specify the root domain here. The issued certificate secures *both* the root domain and the `www` subdomain. In the issued certificate, the Common Name field contains the root domain, and the Subject Alternative Name field contains the `www` domain. To secure any subdomain only, specify the fully qualified domain name of the subdomain here (for example, `mysubdomain.contoso.com`).|
-| Certificate name | A friendly name for your App Service certificate. |
-| Enable auto renewal | Select whether the certificate should be renewed automatically before it expires. Each renewal extends the certificate expiration by one year and the cost is charged to your subscription. |
+ | Setting | Description |
+ |-|-|
+ | **Subscription** | The Azure subscription to associate with the certificate. |
+ | **Resource group** | The resource group that will contain the certificate. You can either create a new resource group or select the same resource group as your App Service app. |
+ | **SKU** | Determines the type of certificate to create, either a standard certificate or a [wildcard certificate](https://wikipedia.org/wiki/Wildcard_certificate). |
+ | **Naked Domain Host Name** | Specify the root domain. The issued certificate secures *both* the root domain and the `www` subdomain. In the issued certificate, the **Common Name** field specifies the root domain, and the **Subject Alternative Name** field specifies the `www` domain. To secure any subdomain only, specify the fully qualified domain name for the subdomain, for example, `mysubdomain.contoso.com`.|
+ | **Certificate name** | The friendly name for your App Service certificate. |
+ | **Enable auto renewal** | Select whether to automatically renew the certificate before expiration. Each renewal extends the certificate expiration by one year and the cost is charged to your subscription. |
-> [!NOTE]
-> App Service Certificates purchased from Azure are issued by GoDaddy. For some domains, you must explicitly allow GoDaddy as a certificate issuer by creating a [CAA domain record](https://wikipedia.org/wiki/DNS_Certification_Authority_Authorization) with the value: `0 issue godaddy.com`
->
+### Store certificate in Azure Key Vault
-### Store in Azure Key Vault
+[Key Vault](../key-vault/general/overview.md) is an Azure service that helps safeguard cryptographic keys and secrets used by cloud applications and services. For App Service certificates, the storage of choice is Key Vault. After you finish the certificate purchase process, you have to complete a few more steps before you start using this certificate.
-Once the certificate purchase process is complete, there are few more steps you need to complete before you can start using this certificate.
+1. On the [App Service Certificates page](https://portal.azure.com/#blade/HubsExtension/Resources/resourceType/Microsoft.CertificateRegistration%2FcertificateOrders), select the certificate. On the certificate menu, select **Certificate Configuration** > **Step 1: Store**.
-Select the certificate in the [App Service Certificates](https://portal.azure.com/#blade/HubsExtension/Resources/resourceType/Microsoft.CertificateRegistration%2FcertificateOrders) page, then click **Certificate Configuration** > **Step 1: Store**.
+ ![Screenshot of "Certificate Configuration" pane with "Step 1: Store" selected.](./media/configure-ssl-certificate/configure-key-vault.png)
-![Configure Key Vault storage of App Service certificate](./media/configure-ssl-certificate/configure-key-vault.png)
+1. On the **Key Vault Status** page, to create a new vault or choose an existing vault, select **Key Vault Repository**.
-[Key Vault](../key-vault/general/overview.md) is an Azure service that helps safeguard cryptographic keys and secrets used by cloud applications and services. It's the storage of choice for App Service certificates.
+1. If you create a new vault, set up the vault based on the following table, and make sure to use the same subscription and resource group as your App Service app. When you're done, select **Create**.
-In the **Key Vault Status** page, click **Key Vault Repository** to create a new vault or choose an existing vault. If you choose to create a new vault, use the following table to help you configure the vault and click Create. Create the new Key Vault inside the same subscription and resource group as your App Service app.
+ | Setting | Description |
+ |-|-|
+ | **Name** | A unique name that uses only alphanumeric characters and dashes. |
+ | **Resource group** | Recommended: The same resource group as your App Service certificate. |
+ | **Location** | The same location as your App Service app. |
+ | **Pricing tier** | For information, see [Azure Key Vault pricing details](https://azure.microsoft.com/pricing/details/key-vault/). |
+ | **Access policies** | Defines the applications and the allowed access to the vault resources. You can set up these policies later by following the steps at [Assign a Key Vault access policy](../key-vault/general/assign-access-policy-portal.md). Currently, App Service Certificate supports only Key Vault access policies, not the RBAC model. |
+ | **Virtual Network Access** | Restrict vault access to certain Azure virtual networks. You can set up this restriction later by following the steps at [Configure Azure Key Vault Firewalls and Virtual Networks](../key-vault/general/network-security.md) |
-| Setting | Description |
-|-|-|
-| Name | A unique name that consists for alphanumeric characters and dashes. |
-| Resource group | As a recommendation, select the same resource group as your App Service certificate. |
-| Location | Select the same location as your App Service app. |
-| Pricing tier | For information, see [Azure Key Vault pricing details](https://azure.microsoft.com/pricing/details/key-vault/). |
-| Access policies| Defines the applications and the allowed access to the vault resources. You can configure it later, following the steps at [Assign a Key Vault access policy](../key-vault/general/assign-access-policy-portal.md). |
-| Virtual Network Access | Restrict vault access to certain Azure virtual networks. You can configure it later, following the steps at [Configure Azure Key Vault Firewalls and Virtual Networks](../key-vault/general/network-security.md) |
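The same vault setup can be sketched with the Azure CLI. The vault and resource group names here are placeholders; note that App Service Certificate requires the access-policy permission model rather than RBAC:

```shell
# Create the vault in the same subscription and resource group as your
# App Service app (hypothetical names). Keep RBAC authorization disabled
# because App Service Certificate supports only Key Vault access policies.
az keyvault create \
  --name my-appservice-vault \
  --resource-group my-app-resource-group \
  --location eastus \
  --enable-rbac-authorization false
```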
+1. After you select the vault, close the **Key Vault Repository** page. The **Step 1: Store** option should show a green check mark to indicate success. Keep the page open for the next step.
-Once you've selected the vault, close the **Key Vault Repository** page. The **Step 1: Store** option should show a green check mark for success. Keep the page open for the next step.
+### Confirm domain ownership
-> [!NOTE]
-> Currently, App Service Certificate only supports Key Vault access policy but not RBAC model.
->
+1. From the same **Certificate Configuration** page in the previous section, select **Step 2: Verify**.
-### Verify domain ownership
+ ![Screenshot of "Certificate Configuration" pane with "Step 2: Verify" selected.](./media/configure-ssl-certificate/verify-domain.png)
-From the same **Certificate Configuration** page you used in the last step, click **Step 2: Verify**.
+1. Select **App Service Verification**. Because you previously mapped the domain to your web app per the [Prerequisites](#prerequisites), the domain is already verified. To finish this step, select **Verify**, and then select **Refresh** until the message **Certificate is Domain Verified** appears.
-![Verify domain for App Service certificate](./media/configure-ssl-certificate/verify-domain.png)
+The following domain verification methods are supported:
-Select **App Service Verification**. Since you already mapped the domain to your web app (see [Prerequisites](#prerequisites)), it's already verified. Just click **Verify** to finish this step. Click the **Refresh** button until the message **Certificate is Domain Verified** appears.
+| Method | Description |
+|--|-|
+| **App Service** | The most convenient option when the domain is already mapped to an App Service app in the same subscription because the App Service app has already verified the domain ownership. Review the last step in [Confirm domain ownership](#confirm-domain-ownership). |
+| **Domain** | Confirm an [App Service domain that you purchased from Azure](manage-custom-dns-buy-domain.md). Azure automatically adds the verification TXT record for you and completes the process. |
+| **Mail** | Confirm the domain by sending an email to the domain administrator. Instructions are provided when you select the option. |
+| **Manual** | Confirm the domain by using either a DNS TXT record or an HTML page, which applies only to **Standard** certificates per the following note. The steps are provided after you select the option. The HTML page option doesn't work for web apps with **HTTPS Only** enabled. |
> [!IMPORTANT]
-> For a **Standard** certificate, the certificate provider gives you a certificate for the requested top-level domain *and* its `www` subdomain (for example, `contoso.com` and `www.contoso.com`). However, beginning on December 1, 2021, [a restriction is introduced](https://azure.github.io/AppService/2021/11/22/ASC-1130-Change.html) on the **App Service** and the **Manual** verification methods. Both of them use HTML page verification to verify domain ownership. With this method, the certificate provider is no longer allowed to include the `www` subdomain when issuing, rekeying, or renewing a certificate.
+> For a **Standard** certificate, the certificate provider gives you a certificate for the requested top-level domain *and* the `www` subdomain, for example, `contoso.com` and `www.contoso.com`. However, starting December 1, 2021, [a restriction is introduced](https://azure.github.io/AppService/2021/11/22/ASC-1130-Change.html) on **App Service** and the **Manual** verification methods. To confirm domain ownership, both use HTML page verification. This method doesn't allow the certificate provider to include the `www` subdomain when issuing, rekeying, or renewing a certificate.
>
-> The **Domain** and **Mail** verification methods continue to include the `www` subdomain with the requested top-level domain in the certificate.
-
-> [!NOTE]
-> Four types of domain verification methods are supported:
->
-> - **App Service** - The most convenient option when the domain is already mapped to an App Service app in the same subscription. It takes advantage of the fact that the App Service app has already verified the domain ownership (see previous note).
-> - **Domain** - Verify an [App Service domain that you purchased from Azure](manage-custom-dns-buy-domain.md). Azure automatically adds the verification TXT record for you and completes the process.
-> - **Mail** - Verify the domain by sending an email to the domain administrator. Instructions are provided when you select the option.
-> - **Manual** - Verify the domain using either an HTML page (**Standard** certificate only, see previous note) or a DNS TXT record. Instructions are provided when you select the option. The HTML page option doesn't work for web apps with "Https Only" enabled.
+> However, the **Domain** and **Mail** verification methods continue to include the `www` subdomain with the requested top-level domain in the certificate.
### Import certificate into App Service
-In the <a href="https://portal.azure.com" target="_blank">Azure portal</a>, from the left menu, select **App Services** > **\<app-name>**.
+1. In the [Azure portal](https://portal.azure.com), from the left menu, select **App Services** > **\<app-name>**.
-From the left navigation of your app, select **TLS/SSL settings** > **Private Key Certificates (.pfx)** > **Import App Service Certificate**.
+1. From your app's navigation menu, select **TLS/SSL settings** > **Private Key Certificates (.pfx)** > **Import App Service Certificate**.
-![Import App Service certificate in App Service](./media/configure-ssl-certificate/import-app-service-cert.png)
+ ![Screenshot of app menu with "TLS/SSL settings", "Private Key Certificates (.pfx)", and "Import App Service certificate" selected.](./media/configure-ssl-certificate/import-app-service-cert.png)
-Select the certificate that you just purchased and select **OK**.
+1. Select the certificate that you just purchased, and then select **OK**.
-When the operation completes, you see the certificate in the **Private Key Certificates** list.
+ When the operation completes, the certificate appears in the **Private Key Certificates** list.
-![Import App Service certificate finished](./media/configure-ssl-certificate/import-app-service-cert-finished.png)
+ ![Screenshot of "Private Key Certificates" pane with purchased certificate listed.](./media/configure-ssl-certificate/import-app-service-cert-finished.png)
-> [!IMPORTANT]
-> To secure a custom domain with this certificate, you still need to create a certificate binding. Follow the steps in [Create binding](configure-ssl-bindings.md#create-binding).
->
+1. To secure a custom domain with this certificate, you still have to create a certificate binding. Follow the steps in [Create binding](configure-ssl-bindings.md#create-binding).
## Import a certificate from Key Vault
-If you use Azure Key Vault to manage your certificates, you can import a PKCS12 certificate from Key Vault into App Service as long as it [satisfies the requirements](#private-certificate-requirements).
+If you use Azure Key Vault to manage your certificates, you can import a PKCS12 certificate into App Service from Key Vault, as long as it meets the [requirements](#private-certificate-requirements).
### Authorize App Service to read from the vault
-By default, the App Service resource provider doesn't have access to the Key Vault. In order to use a Key Vault for a certificate deployment, you need to [authorize the resource provider read access to the KeyVault](../key-vault/general/assign-access-policy-cli.md).
-| Resource Provider | Service Principal AppId | KeyVault secret permissions | KeyVault certificate permissions |
-|--|--|--|--|
-| `Microsoft Azure App Service` or `Microsoft.Azure.WebSites` | `abfa0a7c-a6b6-4736-8310-5855508787cd` (It's the same for all Azure subscriptions)<br/><br/>For Azure Government cloud environment, use `6a02c803-dafd-4136-b4c3-5a6f318b4714`. | Get | Get |
-| Microsoft.Azure.CertificateRegistration | | Get<br/>List<br/>Set<br/>Delete | Get<br/>List |
+By default, the App Service resource provider doesn't have access to your key vault. To use a key vault for a certificate deployment, you have to [authorize read access for the resource provider to the key vault](../key-vault/general/assign-access-policy-cli.md).
> [!NOTE]
-> Currently, Key Vault Certificate only supports Key Vault access policy but not RBAC model.
->
+> Currently, a Key Vault certificate supports only the Key Vault access policy, not the RBAC model.
+
+| Resource provider | Service principal AppId | Key vault secret permissions | Key vault certificate permissions |
+|--|--|--|--|
+| **Microsoft Azure App Service** or **Microsoft.Azure.WebSites** | - `abfa0a7c-a6b6-4736-8310-5855508787cd`, which is the same for all Azure subscriptions <br><br>- For Azure Government cloud environment, use `6a02c803-dafd-4136-b4c3-5a6f318b4714`. | Get | Get |
+| **Microsoft.Azure.CertificateRegistration** | | Get<br/>List<br/>Set<br/>Delete | Get<br/>List |
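Granting that read access with the Azure CLI looks like the following sketch. The vault name is a placeholder; the AppId is the global App Service resource provider principal from the table above:

```shell
# Let the App Service resource provider get certificates and secrets from the
# vault (hypothetical vault name). For Azure Government, use the AppId
# 6a02c803-dafd-4136-b4c3-5a6f318b4714 instead.
az keyvault set-policy \
  --name my-appservice-vault \
  --spn abfa0a7c-a6b6-4736-8310-5855508787cd \
  --secret-permissions get \
  --certificate-permissions get
```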
### Import a certificate from your vault to your app
-In the <a href="https://portal.azure.com" target="_blank">Azure portal</a>, from the left menu, select **App Services** > **\<app-name>**.
+1. In the [Azure portal](https://portal.azure.com), on the left menu, select **App Services** > **\<app-name>**.
-From the left navigation of your app, select **TLS/SSL settings** > **Private Key Certificates (.pfx)** > **Import Key Vault Certificate**.
+1. From your app's navigation menu, select **TLS/SSL settings** > **Private Key Certificates (.pfx)** > **Import Key Vault Certificate**.
-![Import Key Vault certificate in App Service](./media/configure-ssl-certificate/import-key-vault-cert.png)
+ ![Screenshot of "TLS/SSL settings", "Private Key Certificates (.pfx)", and "Import Key Vault Certificate" selected.](./media/configure-ssl-certificate/import-key-vault-cert.png)
-Use the following table to help you select the certificate.
+1. To help you select the certificate, use the following table:
-| Setting | Description |
-|-|-|
-| Subscription | The subscription that the Key Vault belongs to. |
-| Key Vault | The vault with the certificate you want to import. |
-| Certificate | Select from the list of PKCS12 certificates in the vault. All PKCS12 certificates in the vault are listed with their thumbprints, but not all are supported in App Service. |
+ | Setting | Description |
+ |-|-|
+ | **Subscription** | The subscription associated with the key vault. |
+ | **Key Vault** | The key vault that has the certificate you want to import. |
+ | **Certificate** | From this list, select a PKCS12 certificate that's in the vault. All PKCS12 certificates in the vault are listed with their thumbprints, but not all are supported in App Service. |
-When the operation completes, you see the certificate in the **Private Key Certificates** list. If the import fails with an error, the certificate doesn't meet the [requirements for App Service](#private-certificate-requirements).
+ When the operation completes, the certificate appears in the **Private Key Certificates** list. If the import fails with an error, the certificate doesn't meet the [requirements for App Service](#private-certificate-requirements).
-![Import Key Vault certificate finished](./media/configure-ssl-certificate/import-app-service-cert-finished.png)
+ ![Screenshot of "Private Key Certificates" pane with imported certificate listed.](./media/configure-ssl-certificate/import-app-service-cert-finished.png)
-> [!NOTE]
-> If you update your certificate in Key Vault with a new certificate, App Service automatically syncs your certificate within 24 hours.
+ > [!NOTE]
+ > If you update your certificate in Key Vault with a new certificate, App Service automatically syncs your certificate within 24 hours.
-> [!IMPORTANT]
-> To secure a custom domain with this certificate, you still need to create a certificate binding. Follow the steps in [Create binding](configure-ssl-bindings.md#create-binding).
->
+1. To secure a custom domain with this certificate, you still have to create a certificate binding. Follow the steps in [Create binding](configure-ssl-bindings.md#create-binding).
## Upload a private certificate
-Once you obtain a certificate from your certificate provider, follow the steps in this section to make it ready for App Service.
+After you get a certificate from your certificate provider, make the certificate ready for App Service by following the steps in this section.
### Merge intermediate certificates
-If your certificate authority gives you multiple certificates in the certificate chain, you need to merge the certificates in order.
+If your certificate authority gives you multiple certificates in the certificate chain, you have to merge the certificates in the same order.
-To do this, open each certificate you received in a text editor.
+1. In a text editor, open each received certificate.
-Create a file for the merged certificate, called _mergedcertificate.crt_. In a text editor, copy the content of each certificate into this file. The order of your certificates should follow the order in the certificate chain, beginning with your certificate and ending with the root certificate. It looks like the following example:
+1. To store the merged certificate, create a file named _mergedcertificate.crt_.
-```
------BEGIN CERTIFICATE-----
-<your entire Base64 encoded SSL certificate>
------END CERTIFICATE-----
------BEGIN CERTIFICATE-----
-<The entire Base64 encoded intermediate certificate 1>
------END CERTIFICATE-----
------BEGIN CERTIFICATE-----
-<The entire Base64 encoded intermediate certificate 2>
------END CERTIFICATE-----
------BEGIN CERTIFICATE-----
-<The entire Base64 encoded root certificate>
------END CERTIFICATE-----
-```
-### Export certificate to PFX
+1. Copy the content for each certificate into this file. Make sure to follow the certificate sequence specified by the certificate chain, starting with your certificate and ending with the root certificate, for example:
+
+   ```
+   -----BEGIN CERTIFICATE-----
+   <your entire Base64 encoded SSL certificate>
+   -----END CERTIFICATE-----
+
+   -----BEGIN CERTIFICATE-----
+   <The entire Base64 encoded intermediate certificate 1>
+   -----END CERTIFICATE-----
+
+   -----BEGIN CERTIFICATE-----
+   <The entire Base64 encoded intermediate certificate 2>
+   -----END CERTIFICATE-----
+
+   -----BEGIN CERTIFICATE-----
+   <The entire Base64 encoded root certificate>
+   -----END CERTIFICATE-----
+   ```
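Equivalently, when each certificate is already in its own PEM file, the merge is plain concatenation in chain order. A minimal sketch, using placeholder files in place of the real PEM files from your certificate authority:

```shell
# Placeholder files standing in for the PEM files from your certificate
# authority (contents are illustrative, not real certificates).
printf -- '-----BEGIN CERTIFICATE-----\nyour-cert\n-----END CERTIFICATE-----\n' > your-certificate.crt
printf -- '-----BEGIN CERTIFICATE-----\nintermediate-1\n-----END CERTIFICATE-----\n' > intermediate1.crt
printf -- '-----BEGIN CERTIFICATE-----\nroot\n-----END CERTIFICATE-----\n' > root.crt

# The merge itself: concatenate in chain order -- your certificate first,
# then the intermediates, then the root certificate.
cat your-certificate.crt intermediate1.crt root.crt > mergedcertificate.crt
```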
-Export your merged TLS/SSL certificate with the private key that your certificate request was generated with.
+### Export merged private certificate to PFX
-If you generated your certificate request using OpenSSL, then you have created a private key file. To export your certificate to PFX, run the following command. Replace the placeholders _&lt;private-key-file>_ and _&lt;merged-certificate-file>_ with the paths to your private key and your merged certificate file.
+Now, export your merged TLS/SSL certificate with the private key that was used to generate your certificate request. If you generated your certificate request using OpenSSL, then you created a private key file.
-```bash
-openssl pkcs12 -export -out myserver.pfx -inkey <private-key-file> -in <merged-certificate-file>
-```
+1. To export your certificate to a PFX file, run the following command, but replace the placeholders _&lt;private-key-file>_ and _&lt;merged-certificate-file>_ with the paths to your private key and your merged certificate file.
-When prompted, define an export password. You'll use this password when uploading your TLS/SSL certificate to App Service later.
+ ```bash
+ openssl pkcs12 -export -out myserver.pfx -inkey <private-key-file> -in <merged-certificate-file>
+ ```
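If you're scripting this step, you can supply the export password on the command line instead of waiting for the interactive prompt. A minimal sketch, assuming a throwaway key and certificate (the file names and password below are placeholders, not values from your environment):

```shell
# Throwaway key and certificate standing in for your real private key
# and merged certificate file.
openssl req -x509 -newkey rsa:2048 -keyout private.key -out merged.crt \
  -days 1 -nodes -subj "/CN=www.contoso.example" 2>/dev/null

# -passout supplies the export password non-interactively.
openssl pkcs12 -export -out myserver.pfx \
  -inkey private.key -in merged.crt -passout pass:MyExportPassword1

# Confirm the PFX opens with that password before uploading it.
openssl pkcs12 -info -in myserver.pfx -passin pass:MyExportPassword1 -noout
```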
-If you used IIS or _Certreq.exe_ to generate your certificate request, install the certificate to your local machine, and then [export the certificate to PFX](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc754329(v=ws.11)).
+1. When you're prompted, specify a password for the export operation. When you upload your TLS/SSL certificate to App Service later, you'll have to provide this password.
+
+1. If you used IIS or _Certreq.exe_ to generate your certificate request, install the certificate to your local computer, and then [export the certificate to a PFX file](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc754329(v=ws.11)).
### Upload certificate to App Service

You're now ready to upload the certificate to App Service.
-In the <a href="https://portal.azure.com" target="_blank">Azure portal</a>, from the left menu, select **App Services** > **\<app-name>**.
+1. In the [Azure portal](https://portal.azure.com), from the left menu, select **App Services** > **\<app-name>**.
-From the left navigation of your app, select **TLS/SSL settings** > **Private Key Certificates (.pfx)** > **Upload Certificate**.
+1. From your app's navigation menu, select **TLS/SSL settings** > **Private Key Certificates (.pfx)** > **Upload Certificate**.
-![Upload private certificate in App Service](./media/configure-ssl-certificate/upload-private-cert.png)
+ ![Screenshot of "TLS/SSL settings", "Private Key Certificates (.pfx)", "Upload Certificate" selected.](./media/configure-ssl-certificate/upload-private-cert.png)
-In **PFX Certificate File**, select your PFX file. In **Certificate password**, type the password that you created when you exported the PFX file. When finished, click **Upload**.
+1. In **PFX Certificate File**, select your PFX file. In **Certificate password**, enter the password that you created when you exported the PFX file. When you're done, select **Upload**.
-When the operation completes, you see the certificate in the **Private Key Certificates** list.
+ When the operation completes, the certificate appears in the **Private Key Certificates** list.
-![Upload certificate finished](./media/configure-ssl-certificate/create-free-cert-finished.png)
+ ![Screenshot of "Private Key Certificates" pane with uploaded certificate listed.](./media/configure-ssl-certificate/create-free-cert-finished.png)
-> [!IMPORTANT]
-> To secure a custom domain with this certificate, you still need to create a certificate binding. Follow the steps in [Create binding](configure-ssl-bindings.md#create-binding).
+1. To secure a custom domain with this certificate, you still have to create a certificate binding. Follow the steps in [Create binding](configure-ssl-bindings.md#create-binding).
## Upload a public certificate
-Public certificates are supported in the *.cer* format.
-
-In the <a href="https://portal.azure.com" target="_blank">Azure portal</a>, from the left menu, select **App Services** > **\<app-name>**.
+Public certificates are supported in the *.cer* format.
-From the left navigation of your app, click **TLS/SSL settings** > **Public Certificates (.cer)** > **Upload Public Key Certificate**.
+1. In the [Azure portal](https://portal.azure.com), from the left menu, select **App Services** > **\<app-name>**.
-In **Name**, type a name for the certificate. In **CER Certificate file**, select your CER file.
+1. From your app's navigation menu, select **TLS/SSL settings** > **Public Certificates (.cer)** > **Upload Public Key Certificate**.
-Click **Upload**.
+1. For **Name**, enter the name for the certificate. In **CER Certificate file**, select your CER file. When you're done, select **Upload**.
-![Upload public certificate in App Service](./media/configure-ssl-certificate/upload-public-cert.png)
+ ![Screenshot of name and public key certificate to upload.](./media/configure-ssl-certificate/upload-public-cert.png)
-Once the certificate is uploaded, copy the certificate thumbprint and see [Make the certificate accessible](configure-ssl-certificate-in-code.md#make-the-certificate-accessible).
+1. After the certificate is uploaded, copy the certificate thumbprint, and then review [Make the certificate accessible](configure-ssl-certificate-in-code.md#make-the-certificate-accessible).
## Renew an expiring certificate
-Before a certificate expires, you should add the renewed certificate into App Service and update any [TLS/SSL binding](configure-ssl-certificate.md). The process depends on the certificate type. For example, a [certificate imported from Key Vault](#import-a-certificate-from-key-vault), including an [App Service certificate](#import-an-app-service-certificate), automatically syncs to App Service every 24 hours and updates the TLS/SSL binding when you renew the certificate. For an [uploaded certificate](#upload-a-private-certificate), there's no automatic binding update. See one of the following sections depending on your scenario:
+Before a certificate expires, make sure to add the renewed certificate to App Service and update any [TLS/SSL bindings](configure-ssl-certificate.md). The process depends on the certificate type. For example, a [certificate imported from Key Vault](#import-a-certificate-from-key-vault), including an [App Service certificate](#buy-and-import-app-service-certificate), automatically syncs to App Service every 24 hours and updates the TLS/SSL binding when you renew the certificate. For an [uploaded certificate](#upload-a-private-certificate), there's no automatic binding update. Based on your scenario, review the corresponding section:
-- [Renew an uploaded certificate](#renew-an-uploaded-certificate)
-- [Renew an App Service certificate](#renew-an-app-service-certificate)
+- [Renew an uploaded certificate](#renew-uploaded-certificate)
+- [Renew an App Service certificate](#renew-app-service-certificate)
- [Renew a certificate imported from Key Vault](#renew-a-certificate-imported-from-key-vault)
-### Renew an uploaded certificate
+## Renew uploaded certificate
-To replace an expiring certificate, how you update the certificate binding with the new certificate can adversely affect user experience. For example, your inbound IP address can change when you delete a binding, even if that binding is IP-based. This is especially important when you renew a certificate that's already in an IP-based binding. To avoid a change in your app's IP address, and to avoid downtime for your app due to HTTPS errors, follow these steps in order:
+When you replace an expiring certificate, the way you update the certificate binding with the new certificate might adversely affect user experience. For example, your inbound IP address might change when you delete a binding, even if that binding is IP-based. This result is especially impactful when you renew a certificate that's already in an IP-based binding. To avoid a change in your app's IP address, and to avoid downtime for your app due to HTTPS errors, follow these steps in the specified sequence:
1. [Upload the new certificate](#upload-a-private-certificate).
-2. Bind the new certificate to the same custom domain without deleting the existing (expiring) certificate. This action replaces the binding instead of removing the existing certificate binding. To do this, navigate to the TLS/SSL settings blade of your App Service and select the Add Binding button.
-3. Delete the existing certificate.
-### Renew an App Service certificate
+1. Bind the new certificate to the same custom domain without deleting the existing, expiring certificate. For this task, go to your App Service app's TLS/SSL settings pane, and select **Add Binding**.
+
+ This action replaces the binding, rather than remove the existing certificate binding.
+
+1. Delete the existing certificate.
+
+## Renew App Service certificate
+
+By default, App Service certificates have a one-year validity period. Before the expiration date, you can automatically or manually renew App Service certificates in one-year increments. The renewal process effectively gives you a new App Service certificate with the expiration date extended to one year from the existing certificate's expiration date.
> [!NOTE]
-> Beginning September 23 2021, App Service certificates require domain verification during renew or rekey if you haven't verified domain in the last 395 days. The new certificate order remains in "pending issuance" during renew or rekey until you complete the domain verification.
+> Starting September 23, 2021, if you haven't verified the domain in the last 395 days, App Service certificates require domain verification during a renew or rekey process. The new certificate order remains in "pending issuance" mode during the renew or rekey process until you complete the domain verification.
>
-> Unlike App Service Managed Certificate, domain re-verification for App Service certificates is *not* automated, and failure to verify domain ownership will result in failed renewals. Refer to [verify domain ownership](#verify-domain-ownership) for more information on how to verify your App Service certificate.
+> Unlike an App Service managed certificate, domain re-verification for App Service certificates *isn't* automated. Failure to verify domain ownership results in failed renewals. For more information about how to verify your App Service certificate, review [Confirm domain ownership](#confirm-domain-ownership).
+>
+> The renewal process requires that the well-known [service principal for App Service has the required permissions on your key vault](deploy-resource-manager-template.md#deploy-web-app-certificate-from-key-vault). These permissions are set up for you when you import an App Service certificate through the Azure portal. Make sure that you don't remove these permissions from your key vault.
-> [!NOTE]
-> The renewal process requires that [the well-known service principal for App Service has the required permissions on your key vault](deploy-resource-manager-template.md#deploy-web-app-certificate-from-key-vault). This permission is configured for you when you import an App Service Certificate through the portal, and should not be removed from your key vault.
+1. To change the automatic renewal setting for your App Service certificate at any time, on the [App Service Certificates page](https://portal.azure.com/#blade/HubsExtension/Resources/resourceType/Microsoft.CertificateRegistration%2FcertificateOrders), select the certificate.
-By default, App Service certificates have a one-year validity period. Near the time of expiration, App Service certificates, can be renewed in one-year increments automatically or manually. In effect, th renewal process gives you a new App Service certificate with the expiration date extended to one year from the existing certificate's expiration date.
+1. On the left menu, select **Auto Renew Settings**.
-To toggle the automatic renewal setting of your App Service certificate at any time, select the certificate in the [App Service Certificates](https://portal.azure.com/#blade/HubsExtension/Resources/resourceType/Microsoft.CertificateRegistration%2FcertificateOrders) page, then click **Auto Renew Settings** in the left navigation.
+1. Select **On** or **Off**, and select **Save**.
-Select **On** or **Off** and click **Save**. Certificates can start automatically renewing 32 days before expiration if you have automatic renewal turned on.
+ If you turn on automatic renewal, certificates can start automatically renewing 32 days before expiration.
-![Renew App Service certificate automatically](./media/configure-ssl-certificate/auto-renew-app-service-cert.png)
+ ![Screenshot of specified certificate's auto renewal settings.](./media/configure-ssl-certificate/auto-renew-app-service-cert.png)
-To manually renew the certificate instead, click **Manual Renew**. You can request to manually renew your certificate 60 days before expiration.
+1. To manually renew the certificate instead, select **Manual Renew**. You can request to manually renew your certificate 60 days before expiration.
-Once the renew operation is complete, click **Sync**. The sync operation automatically updates the hostname bindings for the certificate in App Service without causing any downtime to your apps.
+1. After the renew operation completes, select **Sync**.
-> [!NOTE]
-> If you don't click **Sync**, App Service automatically syncs your certificate within 24 hours.
+ The sync operation automatically updates the hostname bindings for the certificate in App Service without causing any downtime to your apps.
-### Renew a certificate imported from Key Vault
+ > [!NOTE]
+ > If you don't select **Sync**, App Service automatically syncs your certificate within 24 hours.
-To renew a certificate you imported into App Service from Key Vault, see [Renew your Azure Key Vault certificate](../key-vault/certificates/overview-renew-certificate.md).
+## Renew a certificate imported from Key Vault
-Once the certificate is renewed in your key vault, App Service automatically syncs the new certificate and updates any applicable TLS/SSL binding within 24 hours. To sync manually:
+To renew a certificate that you imported into App Service from Key Vault, review [Renew your Azure Key Vault certificate](../key-vault/certificates/overview-renew-certificate.md).
+
+After the certificate renews inside your key vault, App Service automatically syncs the new certificate, and updates any applicable TLS/SSL binding within 24 hours. To sync manually, follow these steps:
1. Go to your app's **TLS/SSL settings** page.
-1. Select the imported certificate under **Private Key Certificates**.
-1. Click **Sync**.
+
+1. Under **Private Key Certificates**, select the imported certificate, and then select **Sync**.
## Manage App Service certificates
-This section shows you how to manage an [App Service certificate you purchased](#import-an-app-service-certificate).
+This section includes links to tasks that help you manage an [App Service certificate that you purchased](#buy-and-import-app-service-certificate):
-- [Rekey certificate](#rekey-certificate)
-- [Export certificate](#export-certificate)
-- [Delete certificate](#delete-certificate)
+- [Rekey an App Service certificate](#rekey-app-service-certificate)
+- [Export an App Service certificate](#export-app-service-certificate)
+- [Delete an App Service certificate](#delete-app-service-certificate)
+- [Renew an App Service certificate](#renew-app-service-certificate)
-Also, see [Renew an App Service certificate](#renew-an-app-service-certificate)
+### Rekey App Service certificate
-### Rekey certificate
+If you think your certificate's private key is compromised, you can rekey your certificate. This action rolls the certificate with a new certificate issued from the certificate authority.
-If you think your certificate's private key is compromised, you can rekey your certificate. Select the certificate in the [App Service Certificates](https://portal.azure.com/#blade/HubsExtension/Resources/resourceType/Microsoft.CertificateRegistration%2FcertificateOrders) page, then select **Rekey and Sync** from the left navigation.
+1. On the [App Service Certificates page](https://portal.azure.com/#blade/HubsExtension/Resources/resourceType/Microsoft.CertificateRegistration%2FcertificateOrders), select the certificate. From the left menu, select **Rekey and Sync**.
-Click **Rekey** to start the process. This process can take 1-10 minutes to complete.
+1. To start the process, select **Rekey**. This process can take 1-10 minutes to complete.
-![Rekey an App Service certificate](./media/configure-ssl-certificate/rekey-app-service-cert.png)
+ ![Screenshot of rekeying an App Service certificate.](./media/configure-ssl-certificate/rekey-app-service-cert.png)
-Rekeying your certificate rolls the certificate with a new certificate issued from the certificate authority.
+1. You might also be required to [reconfirm domain ownership](#confirm-domain-ownership).
-You may be required to [reverify domain ownership](#verify-domain-ownership).
+1. After the rekey operation completes, select **Sync**.
-Once the rekey operation is complete, click **Sync**. The sync operation automatically updates the hostname bindings for the certificate in App Service without causing any downtime to your apps.
+ The sync operation automatically updates the hostname bindings for the certificate in App Service without causing any downtime to your apps.
-> [!NOTE]
-> If you don't click **Sync**, App Service automatically syncs your certificate within 24 hours.
+ > [!NOTE]
+ > If you don't select **Sync**, App Service automatically syncs your certificate within 24 hours.
-### Export certificate
+### Export App Service certificate
-Because an App Service Certificate is a [Key Vault secret](../key-vault/general/about-keys-secrets-certificates.md), you can export a PFX copy of it and use it for other Azure services or outside of Azure.
+Because an App Service certificate is a [Key Vault secret](../key-vault/general/about-keys-secrets-certificates.md), you can export a copy as a PFX file, which you can use for other Azure services or outside of Azure.
-> [!NOTE]
-> The exported certificate is an unmanaged artifact. For example, it isn't synced when the App Service Certificate is [renewed](#renew-an-app-service-certificate). You must export the renewed certificate and install it where you need it.
+> [!IMPORTANT]
+> The exported certificate is an unmanaged artifact. App Service doesn't sync such artifacts when the App Service Certificate is [renewed](#renew-app-service-certificate). You must export and install the renewed certificate where necessary.
-# [Azure portal](#tab/portal)
+#### [Azure portal](#tab/portal)
-1. Select the certificate in the [App Service Certificates](https://portal.azure.com/#blade/HubsExtension/Resources/resourceType/Microsoft.CertificateRegistration%2FcertificateOrders) page, then select **Export Certificate** from the left navigation.
+1. On the [App Service Certificates page](https://portal.azure.com/#blade/HubsExtension/Resources/resourceType/Microsoft.CertificateRegistration%2FcertificateOrders), select the certificate.
+
+1. On the left menu, select **Export Certificate**.
1. Select **Open in Key Vault**.
-1. Select the current version of the certificate.
+1. Select the certificate's current version.
1. Select **Download as a certificate**.
-# [Azure CLI](#tab/cli)
+#### [Azure CLI](#tab/cli)
-To export the App Service Certificate as a PFX file, run the following commands in the [Cloud Shell](https://shell.azure.com). You can also run it locally if you [installed Azure CLI](/cli/azure/install-azure-cli). Replace the placeholders with the names you used when you [created the App Service certificate](#start-certificate-order).
+To export the App Service Certificate as a PFX file, run the following commands in [Azure Cloud Shell](https://shell.azure.com). Or, you can run these commands locally if you [installed Azure CLI](/cli/azure/install-azure-cli). Replace the placeholders with the names that you used when you [bought the App Service certificate](#start-certificate-purchase).
```azurecli-interactive
secretname=$(az resource show \
az keyvault secret download \
  --encoding base64
```
+The downloaded PFX file is a raw PKCS12 file that contains both the public and private certificates and has an import password that's an empty string. You can install the file locally by leaving the password field empty. You can't [upload the file as-is into App Service](#upload-a-private-certificate) because the file isn't [password protected](#private-certificate-requirements).
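Because App Service rejects a PFX that has no password, one way to make the exported file uploadable is to unpack it and repack it with an export password. The sketch below fabricates a passwordless PFX first so that it's self-contained; in practice, `exported.pfx` is the file you downloaded, and the new password is your choice.

```shell
# Fabricate a passwordless PFX to stand in for the Key Vault download.
openssl req -x509 -newkey rsa:2048 -keyout k.pem -out c.pem \
  -days 1 -nodes -subj "/CN=www.contoso.example" 2>/dev/null
openssl pkcs12 -export -inkey k.pem -in c.pem -out exported.pfx -passout pass:

# Unpack with the empty password, then repack with a real export password.
openssl pkcs12 -in exported.pfx -passin pass: -nodes -out unpacked.pem
openssl pkcs12 -export -in unpacked.pem -out protected.pfx \
  -passout pass:NewExportPassword1
```

The resulting `protected.pfx` meets the password-protection requirement for upload.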
+
+### Delete App Service certificate
+
+If you delete an App Service certificate, the delete operation is irreversible and final. The result is a revoked certificate, and any binding in App Service that uses this certificate becomes invalid.
-The downloaded PFX file is a raw PKCS12 file that contains both the public and private certificates, and its import password is an empty string. You can install it locally by leaving the password field empty. Notable is the fact that it can't be [uploaded into App Service](#upload-a-private-certificate) as-is because it's not [password protected](#private-certificate-requirements).
+To prevent accidental deletion, Azure puts a lock on the App Service certificate. So, to delete the certificate, you have to first remove the delete lock on the certificate.
-### Delete certificate
+1. On the [App Service Certificates page](https://portal.azure.com/#blade/HubsExtension/Resources/resourceType/Microsoft.CertificateRegistration%2FcertificateOrders), select the certificate.
-Deletion of an App Service certificate is final and irreversible. Deletion of an App Service Certificate resource results in the certificate being revoked. Any binding in App Service with this certificate becomes invalid. To prevent accidental deletion, Azure puts a lock on the certificate. To delete an App Service certificate, you must first remove the delete lock on the certificate.
+1. On the left menu, select **Locks**.
-Select the certificate in the [App Service Certificates](https://portal.azure.com/#blade/HubsExtension/Resources/resourceType/Microsoft.CertificateRegistration%2FcertificateOrders) page, then select **Locks** in the left navigation.
+1. On your certificate, find the lock with the lock type named **Delete**. To the right side, select **Delete**.
-Find the lock on your certificate with the lock type **Delete**. To the right of it, select **Delete**.
+ ![Screenshot of deleting the lock on an App Service certificate.](./media/configure-ssl-certificate/delete-lock-app-service-cert.png)
-![Delete lock for App Service certificate](./media/configure-ssl-certificate/delete-lock-app-service-cert.png)
+1. Now, you can delete the App Service certificate. From the left menu, select **Overview** > **Delete**.
-Now you can delete the App Service certificate. From the left navigation, select **Overview** > **Delete**. In the confirmation dialog, type the certificate name and select **OK**.
+1. When the confirmation box opens, enter the certificate name, and select **OK**.
## Automate with scripts
app-service Integrate With Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/integrate-with-application-gateway.md
With a public domain mapped to the application gateway, you don't need to config
### A valid public certificate
-For security enhancement, it's recommended to bind TLS/SSL certificate for session encryption. To bind TLS/SSL certificate to the application gateway, a valid public certificate with following information is required. With [App Service Certificates](../configure-ssl-certificate.md#start-certificate-order), you can buy a TLS/SSL certificate and export it in .pfx format.
+For enhanced security, we recommend that you bind a TLS/SSL certificate for session encryption. To bind a TLS/SSL certificate to the application gateway, you need a valid public certificate with the following information. With [App Service Certificates](../configure-ssl-certificate.md#start-certificate-purchase), you can buy a TLS/SSL certificate and export it in .pfx format.
| Name | Value | Description|
| -- | - ||
app-service Overview Local Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-local-cache.md
While many apps use one or all of these features, some apps just need a high-per
The Azure App Service Local Cache feature provides a web role view of your content. This content is a write-but-discard cache of your storage content that is created asynchronously on site startup. When the cache is ready, the site is switched to run against the cached content. Apps that run on Local Cache have the following benefits:

* They are immune to latencies that occur when they access content on Azure Storage.
-* They are immune to the planned upgrades or unplanned downtimes and any other disruptions with Azure Storage that occur on servers that serve the content share.
+* They are not affected by connection issues to the storage, since the read-only copy is cached on the worker.
* They have fewer app restarts due to storage share changes.
+> [!NOTE]
+> If you are using Java (Java SE, Tomcat, or JBoss EAP), then by default the Java artifacts--.jar, .war, and .ear files--are copied locally to the worker. If your Java application depends on read-only access to other files as well, set `JAVA_COPY_ALL` to `true` so that those files are also copied. If Local Cache is enabled, it takes precedence over this Java-specific enhancement.
+ ## How the local cache changes the behavior of App Service

* _D:\home_ points to the local cache, which is created on the VM instance when the app starts up. _D:\local_ continues to point to the temporary VM-specific storage.
* The local cache contains a one-time copy of the _/site_ and _/siteextensions_ folders of the shared content store, at _D:\home\site_ and _D:\home\siteextensions_, respectively. The files are copied to the local cache when the app starts. The size of the two folders for each app is limited to 1 GB by default, but can be increased to 2 GB. Note that as the cache size increases, it will take longer to load the cache. If you've increased the local cache limit to 2 GB and the copied files exceed the maximum size of 2 GB, App Service silently ignores the local cache and reads from the remote file share. If there is no limit defined or the limit is set to anything lower than 2 GB and the copied files exceed the limit, the deployment or swap may fail with an error.
You enable Local Cache on a per-web-app basis by using this app setting:
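The app setting itself isn't quoted in this excerpt. To the best of my knowledge, it's `WEBSITE_LOCAL_CACHE_OPTION`, with `WEBSITE_LOCAL_CACHE_SIZEINMB` controlling the cache size. Treat the following fragment as a hedged sketch and confirm the names against the current App Service app settings reference:

```json
{
  "WEBSITE_LOCAL_CACHE_OPTION": "Always",
  "WEBSITE_LOCAL_CACHE_SIZEINMB": "2000"
}
```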
### Configure Local Cache by using Azure Resource Manager <a name="Configure-Local-Cache-ARM"></a>
-```json
-
+```jsonc
... {
app-service Overview Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-manage-costs.md
When you create or use App Service resources, you're charged for the following m
Other cost resources for App Service are (see [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/) for details):

- [App Service domains](manage-custom-dns-buy-domain.md) Your subscription is charged for the domain registration on a yearly basis, if you enable automatic renewal.
-- [App Service certificates](configure-ssl-certificate.md#import-an-app-service-certificate) One-time charge at the time of purchase. If you have multiple subdomains to secure, you can reduce cost by purchasing one wildcard certificate instead of multiple standard certificates.
+- [App Service certificates](configure-ssl-certificate.md#buy-and-import-app-service-certificate) One-time charge at the time of purchase. If you have multiple subdomains to secure, you can reduce cost by purchasing one wildcard certificate instead of multiple standard certificates.
- [IP-based certificate bindings](configure-ssl-bindings.md#create-binding) The binding is configured on a certificate at the app level. Costs are accrued for each binding. For **Standard** tier and above, the first IP-based binding is not charged. At the end of your billing cycle, the charges for each VM instance. Your bill or invoice shows a section for all App Service costs. There's a separate line item for each meter.
You can also [export your cost data](../cost-management-billing/costs/tutorial-e
<!-- Insert links to other articles that might help users save and manage costs for you service here.
-Create a table of contents entry for the article in the How-to guides section where appropriate. -->
+Create a table of contents entry for the article in the How-to guides section where appropriate. -->
app-service Troubleshoot Domain Ssl Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/troubleshoot-domain-ssl-certificates.md
+ # Troubleshoot domain and TLS/SSL certificate problems in Azure App Service
-This article lists common problems that you might encounter when you configure a domain or TLS/SSL certificate for your web apps in Azure App Service. It also describes possible causes and solutions for these problems.
+When you set up a domain or TLS/SSL certificate for your web apps in Azure App Service, you might encounter the following common problems. This article also includes the possible causes and solutions for these problems.
-If you need more help at any point in this article, you can contact the Azure experts on [Microsoft Q & A and Stack Overflow forums](https://azure.microsoft.com/support/forums/). Alternatively, you can file an Azure support incident. Go to the [Azure Support site](https://azure.microsoft.com/support/options/) and select **Get Support**.
+At any point in this article, you can get more help by contacting Azure experts on the [Microsoft Q & A and Stack Overflow forums](https://azure.microsoft.com/support/forums/). Alternatively, to file an Azure support incident, go to the [Azure Support site](https://azure.microsoft.com/support/options/), and select **Get Support**.
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
When you add a TLS binding, you receive the following error message:
#### Cause
-This problem can occur if you have multiple IP-based TLS/SSL bindings for the same IP address across multiple apps. For example, app A has an IP-based TLS/SSL binding with an old certificate. App B has an IP-based TLS/SSL binding with a new certificate for the same IP address. When you update the app TLS binding with the new certificate, it fails with this error because the same IP address is being used for another app.
+This problem might happen if you have multiple IP-based TLS/SSL bindings for the same IP address across multiple apps. For example, app A has an IP-based TLS/SSL binding with an old certificate. App B has an IP-based TLS/SSL binding with a new certificate for the same IP address. When you update the app TLS binding with the new certificate, the update fails with this error because the same IP address is used for another app.
#### Solution
-To fix this problem, use one of the following methods:
+To resolve this problem, try one of the following methods:
- Delete the IP-based TLS/SSL binding on the app that uses the old certificate.
+
- Create a new IP-based TLS/SSL binding that uses the new certificate.

### You can't delete a certificate
When you try to delete a certificate, you receive the following error message:
#### Cause
-This problem might occur if another app uses the certificate.
+This problem might happen if another app uses the certificate.
#### Solution
-Remove the TLS binding for that certificate from the apps. Then try to delete the certificate. If you still can't delete the certificate, clear the internet browser cache and reopen the Azure portal in a new browser window. Then try to delete the certificate.
+1. Remove the TLS binding for that certificate from the apps.
+
+1. Try to delete the certificate.
+
+1. If you still can't delete the certificate, clear the internet browser cache, and reopen the Azure portal in a new browser window. Then try to delete the certificate.
### You can't purchase an App Service certificate

#### Symptom
-You can't purchase an [Azure App Service certificate](./configure-ssl-certificate.md#import-an-app-service-certificate) from the Azure portal.
+
+In the Azure portal, you can't purchase an [Azure App Service certificate](./configure-ssl-certificate.md#buy-and-import-app-service-certificate).
#### Cause and solution
-This problem can occur for any of the following reasons:
-- The App Service plan is Free or Shared. These pricing tiers don't support TLS.
+This problem can happen for any of the following reasons:
+
+- The App Service plan is a Free or Shared pricing tier. These tiers don't support TLS.
- **Solution**: Upgrade the App Service plan for app to Standard.
+ **Solution**: Upgrade the App Service plan to Standard.
- The subscription doesn't have a valid credit card.
- **Solution**: Add a valid credit card to your subscription.
+ **Solution**: Add a valid credit card to your subscription.
- The subscription offer, such as Microsoft Student, doesn't support purchasing an App Service certificate.
- **Solution**: Upgrade your subscription.
+ **Solution**: Upgrade your subscription.
-- The subscription reached the limit of purchases that are allowed on a subscription.
+- The subscription reached the limit on purchases that are allowed on a subscription.
+
+ **Solution**: App Service certificates have a limit of 10 certificate purchases for the Pay-As-You-Go and EA subscription types. For other subscription types, the limit is 3. To increase the limit, contact [Azure support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).
- **Solution**: App Service certificates have a limit of 10 certificate purchases for the Pay-As-You-Go and EA subscription types. For other subscription types, the limit is 3. To increase the limit, contact [Azure support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).
- The App Service certificate was marked as fraud. You received the following error message: "Your certificate has been flagged for possible fraud. The request is currently under review. If the certificate does not become usable within 24 hours, contact Azure Support."
- **Solution**: If the certificate is marked as fraud and isn't resolved after 24 hours, follow these steps:
+ **Solution**: If the certificate is marked as fraud and isn't resolved after 24 hours, follow these steps:
+
+ 1. Sign in to the [Azure portal](https://portal.azure.com).
- 1. Sign in to the [Azure portal](https://portal.azure.com).
- 2. Go to **App Service Certificates**, and select the certificate.
- 3. Select **Certificate Configuration** > **Step 2: Verify** > **Domain Verification**. This step sends an email notice to the Azure certificate provider to resolve the problem.
+ 1. Go to **App Service Certificates**, and select the certificate.
+
+ 1. Select **Certificate Configuration** > **Step 2: Verify** > **Domain Verification**. This step sends an email notice to the Azure certificate provider to resolve the problem.
## Custom domain problems
This problem can occur for any of the following reasons:
When you browse to the site by using the custom domain name, you receive the following error message:
-"Error 404-Web app not found."
+"Error 404 - Web app not found."
#### Cause and solution

**Cause 1**
-The custom domain that you configured is missing a CNAME or A record.
+Your configured custom domain is missing a "CNAME record" or an "A record".
**Solution for cause 1**

-- If you added an A record, make sure that a TXT record is also added. For more information, see [Create the A record](./app-service-web-tutorial-custom-domain.md#3-create-the-dns-records).
-- If you don't have to use the root domain for your app, we recommend that you use a CNAME record instead of an A record.
-- Don't use both a CNAME record and an A record for the same domain. This issue can cause a conflict and prevent the domain from being resolved.
+- If you added an "A record", make sure that a TXT record is also added. For more information, see [Create the "A record"](./app-service-web-tutorial-custom-domain.md#3-create-the-dns-records).
+
+- If you don't have to use the root domain for your app, we recommend that you use a "CNAME record" rather than an "A record".
+
+- Don't use both a "CNAME record" and an "A record" for the same domain. This issue can cause a conflict and prevent domain resolution.
**Cause 2**
-The internet browser might still be caching the old IP address for your domain.
+The internet browser might still be caching the old IP address for your domain.
**Solution for Cause 2**
-Clear the browser. For Windows devices, you can run the command `ipconfig /flushdns`. Use [WhatsmyDNS.net](https://www.whatsmydns.net/) to verify that your domain points to the app's IP address.
+Clear the browser. For Windows devices, you can run the command `ipconfig /flushdns`. To check that your domain points to the app's IP address, use [WhatsmyDNS.net](https://www.whatsmydns.net/).
### You can't add a subdomain
You can't add a new host name to an app to assign a subdomain.
#### Solution

-- Check with the subscription administrator to make sure that you have permissions to add a host name to the app.
+- Make sure that you have permissions to add a host name to an app by checking with the subscription administrator.
+- If you need more subdomains, we recommend that you change the domain hosting to Azure Domain Name Service (DNS). By using Azure DNS, you can add 500 host names to your app. For more information, see [Add a subdomain](/archive/blogs/waws/mapping-a-custom-subdomain-to-an-azure-website).

### DNS can't be resolved
You received the following error message:
"The DNS record could not be located."

#### Cause
-This problem occurs for one of the following reasons:
-- The time to live (TTL) period has not expired. Check the DNS configuration for your domain to determine the TTL value, and then wait for the period to expire.
+This problem happens for one of the following reasons:
+
+- The time to live (TTL) period hasn't expired. To determine the TTL value, check your domain's DNS configuration, and wait for the period to expire.
+- The DNS configuration is incorrect.

#### Solution

-- Wait for 48 hours for this problem to resolve itself.
-- If you can change the TTL setting in your DNS configuration, change the value to 5 minutes to see whether this resolves the problem.
-- Use [WhatsmyDNS.net](https://www.whatsmydns.net/) to verify that your domain points to the app's IP address. If it doesn't, configure the A record to the correct IP address of the app.
+
+- Wait for 48 hours for this problem to resolve by itself.
+
+- If you can change the TTL setting in your DNS configuration, try changing the value to 5 minutes, which might solve this problem.
+
+- To verify that your domain points to the app's IP address, use [WhatsmyDNS.net](https://www.whatsmydns.net/). If the domain doesn't point to the IP address, configure the "A record" to the app's correct IP address.
### You need to restore a deleted domain

#### Symptom
+
Your domain is no longer visible in the Azure portal.

#### Cause
-The owner of the subscription might have accidentally deleted the domain.
+
+The subscription owner might have accidentally deleted the domain.
#### Solution
-If your domain was deleted fewer than seven days ago, the domain has not yet started the deletion process. In this case, you can buy the same domain again on the Azure portal under the same subscription. (Be sure to type the exact domain name in the search box.) You won't be charged again for this domain. If the domain was deleted more than seven days ago, contact [Azure support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) for help with restoring the domain.
+
+If your domain was deleted fewer than seven days ago, the domain hasn't started the deletion process. In this case, you can buy the same domain again on the Azure portal under the same subscription. (Be sure to type the exact domain name in the search box.) You won't be charged again for this domain. If the domain was deleted more than seven days ago, contact [Azure support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) for help with restoring the domain.
## Domain problems
You purchased an App Service certificate for the wrong domain. You can't update
#### Solution
-Delete that certificate and then buy a new certificate.
+Delete that certificate, and then buy a new certificate.
-If the current certificate that uses the wrong domain is in the "Issued" state, you'll also be billed for that certificate. App Service certificates are not refundable, but you can contact [Azure support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) to see whether there are other options.
+If the current certificate that uses the wrong domain is in the "Issued" state, you'll also be billed for that certificate. App Service certificates aren't refundable, but you can contact [Azure support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) for other possible options.
### An App Service certificate was renewed, but the app shows the old certificate
If the current certificate that uses the wrong domain is in the "Issued" sta
The App Service certificate was renewed, but the app that uses the App Service certificate is still using the old certificate. Also, you received a warning that the HTTPS protocol is required.
-#### Cause
-App Service automatically syncs your certificate within 48 hours. When you rotate or update a certificate, sometimes the application is still retrieving the old certificate and not the newly updated certificate. The reason is that the job to sync the certificate resource hasn't run yet. Click Sync. The sync operation automatically updates the hostname bindings for the certificate in App Service without causing any downtime to your apps.
+#### Cause
+
+App Service automatically syncs your certificate within 48 hours. When you rotate or update a certificate, sometimes the application is still retrieving the old certificate and not the newly updated certificate. The reason is that the job to sync the certificate resource hasn't run yet. To resolve this problem, sync the certificate, which automatically updates the hostname bindings for the certificate in App Service without causing any downtime to your apps.
#### Solution
-You can force a sync of the certificate:
+You can force a sync for the certificate.
1. Sign in to the [Azure portal](https://portal.azure.com). Select **App Service Certificates**, and then select the certificate.
-2. Select **Rekey and Sync**, and then select **Sync**. The sync takes some time to finish.
-3. When the sync is completed, you see the following notification: "Successfully updated all the resources with the latest certificate."
+
+1. Select **Rekey and Sync**, and then select **Sync**. The sync takes some time to finish.
+
+1. When the sync completes, the following notification appears: "Successfully updated all the resources with the latest certificate."
### Domain verification is not working

#### Symptom
+
The App Service certificate requires domain verification before the certificate is ready to use. When you select **Verify**, the process fails.

#### Solution
+
Manually verify your domain by adding a TXT record:

1. Go to the Domain Name Service (DNS) provider that hosts your domain name.
+
1. Add a TXT record for your domain that uses the value of the domain token that's shown in the Azure portal.
-Wait a few minutes for DNS propagation to run, and then select the **Refresh** button to trigger the verification.
+1. Wait a few minutes for DNS propagation to run, and then select the **Refresh** button to trigger the verification.
-As an alternative, you can use the HTML webpage method to manually verify your domain. This method allows the certificate authority to confirm the domain ownership of the domain that the certificate is issued for.
+As an alternative, you can use the HTML webpage method to manually verify your domain. This method allows the certificate authority to confirm the domain ownership of the domain for which the certificate is issued.
+
+1. Create an HTML file that's named **{domain verification token}.html**. The file content should be the value of the domain verification token.
-1. Create an HTML file that's named {domain verification token}.html. The content of this file should be the value of domain verification token.
1. Upload this file at the root of the web server that's hosting your domain.
+
1. Select **Refresh** to check the certificate status. It might take a few minutes for verification to finish.

For example, if you're buying a standard certificate for azure.com with the domain verification token 1234abcd, a web request made to https://azure.com/1234abcd.html should return 1234abcd.

> [!IMPORTANT]
-> A certificate order has only 15 days to complete the domain verification operation. After 15 days, the certificate authority denies the certificate, and you are not charged for the certificate. In this situation, delete this certificate and try again.
->
->
+> A certificate purchase has only 15 days to complete the domain verification operation. After 15 days, the certificate authority denies the certificate, and you're not charged for the certificate. In this situation, delete this certificate and try again.
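As a rough sketch of the HTML verification method above, using the article's example token 1234abcd (replace it with the token shown in the Azure portal; the upload step depends on your web server):

```shell
# Sketch of the HTML verification method (assumed token value from the example).
# Replace "1234abcd" with the domain verification token shown in the Azure portal.
token="1234abcd"

# Create the verification file; its content must be the token itself.
printf '%s' "$token" > "${token}.html"

# After uploading ${token}.html to your web server root,
# https://<your-domain>/${token}.html should return the token.
cat "${token}.html"   # prints 1234abcd
```

The file name and file content are both the token, which is what lets the certificate authority confirm domain ownership.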
### You can't purchase a domain

#### Symptom
+
You can't buy an App Service domain in the Azure portal.

#### Cause and solution
-This problem occurs for one of the following reasons:
+This problem happens for one of the following reasons:
- There's no credit card on the Azure subscription, or the credit card is invalid.
- **Solution**: Add a valid credit card to your subscription.
+ **Solution**: Add a valid credit card to your subscription.
- You're not the subscription owner, so you don't have permission to purchase a domain.
- **Solution**: [Assign the Owner role](../role-based-access-control/role-assignments-portal.md) to your account. Or contact the subscription administrator to get permission to purchase a domain.
+ **Solution**: [Assign the Owner role](../role-based-access-control/role-assignments-portal.md) to your account. Or, contact the subscription administrator to get permission to purchase a domain.
- Your Azure subscription type does not support the purchase of an App Service domain.
- **Solution**: Upgrade your Azure subscription to another subscription type, such as a Pay-As-You-Go subscription.
+ **Solution**: Upgrade your Azure subscription to another subscription type, such as a Pay-As-You-Go subscription.
### You can't add a host name to an app
When you add a host name, the process fails to validate and verify the domain.
#### Cause
-This problem occurs for one of the following reasons:
+This problem happens for one of the following reasons:
- You don't have permission to add a host name.
- **Solution**: Ask the subscription administrator to give you permission to add a host name.
+ **Solution**: Ask the subscription administrator to give you permission to add a host name.
+ - Your domain ownership could not be verified.
- **Solution**: Verify that your CNAME or A record is configured correctly. To map a custom domain to an app, create either a CNAME record or an A record. If you want to use a root domain, you must use A and TXT records:
+ **Solution**: Verify that your "CNAME record" or "A record" is correctly set up. To map a custom domain to an app, create either a "CNAME record" or an "A record". If you want to use a root domain, you must use an "A record" and "TXT record":
- |Record type|Host|Point to|
- |||--|
- |A|@|IP address for an app|
- |TXT|@|`<app-name>.azurewebsites.net`|
- |CNAME|www|`<app-name>.azurewebsites.net`|
+ |Record type|Host|Point to|
+ |||--|
+ |A|@|IP address for an app|
+ |TXT|@|`<app-name>.azurewebsites.net`|
+ |CNAME|www|`<app-name>.azurewebsites.net`|
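For illustration only, the records in the table above correspond roughly to the following BIND-style zone entries. This is a hypothetical sketch: `<app-ip-address>` and `<app-name>` are placeholders, and your DNS provider's interface may use a different format.

```
; Hypothetical BIND-style zone entries for mapping a root domain to an app.
; <app-ip-address> and <app-name> are placeholders, not real values.
@    IN  A      <app-ip-address>
@    IN  TXT    "<app-name>.azurewebsites.net"
www  IN  CNAME  <app-name>.azurewebsites.net.
```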
## FAQ

**Do I have to configure my custom domain for my website once I buy it?**
-When you purchase a domain from the Azure portal, the App Service application is automatically configured to use that custom domain. You don't have to take any additional steps. For more information, watch Azure App Service Self Help: Add a Custom Domain Name on Channel9.
+When you purchase a domain from the Azure portal, the App Service app is automatically configured to use that custom domain. You don't have to take any further steps. For more information, watch Azure App Service Self Help: Add a Custom Domain Name on Channel9.
-**Can I use a domain purchased in the Azure portal to point to an Azure VM instead?**
+**Can I use a domain purchased in the Azure portal to point to an Azure virtual machine instead?**
-Yes, you can point the domain to a VM. For more information, see [Use Azure DNS to provide custom domain settings for an Azure service](../dns/dns-custom-domain.md).
+Yes, you can point the domain to a virtual machine. For more information, see [Use Azure DNS to provide custom domain settings for an Azure service](../dns/dns-custom-domain.md).
**Is my domain hosted by GoDaddy or Azure DNS?**

App Service Domains use GoDaddy for domain registration and Azure DNS to host the domains.
-**I have auto-renew enabled but still received a renewal notice for my domain via email. What should I do?**
+**I enabled auto-renew but still received a renewal notice for my domain via email. What should I do?**
-If you have auto-renew enabled, you do not need to take any action. The notice email is provided to inform you that the domain is close to expiring and to renew manually if auto-renew is not enabled.
+If you enabled auto-renew, you don't need to take any action. The renewal email only informs you that the domain is close to expiration; if auto-renew isn't enabled, you have to renew the domain manually.
-**Will I be charged for Azure DNS hosting my domain?**
+**Will I be charged for Azure DNS hosting my domain?**
-The initial cost of domain purchase applies to domain registration only. In addition to the registration cost, there are incurring charges for Azure DNS based on your usage. For more information, see [Azure DNS pricing](https://azure.microsoft.com/pricing/details/dns/) for more details.
+The initial cost of domain purchase applies to domain registration only. Along with the registration cost, Azure DNS incurs charges, based on your usage. For more information, see [Azure DNS pricing](https://azure.microsoft.com/pricing/details/dns/).
**I purchased my domain earlier from the Azure portal and want to move from GoDaddy hosting to Azure DNS hosting. How can I do this?**
-It is not mandatory to migrate to Azure DNS hosting. If you do want to migrate to Azure DNS, the domain management experience in the Azure portal about provides information on steps necessary to move to Azure DNS. If the domain was purchased through App Service, migration from GoDaddy hosting to Azure DNS is relatively seamless procedure.
+You're not required to migrate to Azure DNS hosting. If you want to migrate to Azure DNS, the domain management experience in the Azure portal provides information about the steps necessary to move to Azure DNS. If you bought the domain through App Service, migration from GoDaddy hosting to Azure DNS is a relatively seamless procedure.
**I would like to purchase my domain from App Service Domain but can I host my domain on GoDaddy instead of Azure DNS?**
-Beginning July 24, 2017, App Service domains purchased in the portal are hosted on Azure DNS. If you prefer to use a different hosting provider, you must go to their website to obtain a domain hosting solution.
+Starting July 24, 2017, Azure hosts App Service domains purchased from the Azure portal on Azure DNS. If you prefer to use a different hosting provider, you must go to their website to obtain a domain hosting solution.
**Do I have to pay for privacy protection for my domain?**
-When you purchase a domain through the Azure portal, you can choose to add privacy at no additional cost. This is one of the benefits of purchasing your domain through Azure App Service.
+When you purchase a domain through the Azure portal, you can choose to add privacy at no extra cost. This benefit is included with purchasing your domain through Azure App Service.
**If I decide I no longer want my domain, can I get my money back?**
-When you purchase a domain, you are not charged for a period of five days, during which time you can decide that you do not want the domain. If you do decide you donΓÇÖt want the domain within that five-day period, you are not charged. (.uk domains are an exception to this. If you purchase a .uk domain, you are charged immediately and you cannot be refunded.)
+When you purchase a domain, you're not charged for five days. During this time, you can decide whether to keep the domain. If you choose to not keep the domain within this duration, you're not charged. However, domains that end with `.uk` are the exception. If you purchase such a domain, you're immediately charged, and you can't get a refund.
**Can I use the domain in another Azure App Service app in my subscription?**
-Yes. When you access the Custom Domains and TLS blade in the Azure portal, you see the domains that you have purchased. You can configure your app to use any of those domains.
+Yes, when you access the Custom Domains and TLS blade in the Azure portal, you see the domains that you purchased. You can configure your app to use any of those domains.
**Can I transfer a domain from one subscription to another subscription?**
-You can move a domain to another subscription/resource group using the [Move-AzResource](/powershell/module/az.Resources/Move-azResource) PowerShell cmdlet.
+Yes, you can move a domain to another subscription or resource group using the [Move-AzResource](/powershell/module/az.Resources/Move-azResource) PowerShell cmdlet.
**How can I manage my custom domain if I don't currently have an Azure App Service app?**
-You can manage your domain even if you donΓÇÖt have an App Service Web App. Domain can be used for Azure services like Virtual machine, Storage etc. If you intend to use the domain for App Service Web Apps, then you need to include a Web App that is not on the Free App Service plan in order to bind the domain to your web app.
+You can manage your domain even if you don't have an App Service web app. You can use the domain for Azure services such as Virtual Machines, Azure Storage, and so on. If you plan to use the domain for App Service web apps, you must include a web app that's not on a free App Service plan so that you can bind the domain to your web app.
**Can I move a web app with a custom domain to another subscription or from App Service Environment v1 to V2?**
-Yes, you can move your web app across subscriptions. Follow the guidance in [How to move resources in Azure](../azure-resource-manager/management/move-resource-group-and-subscription.md). There are a few limitations when moving the web app. For more information, see [Limitations for moving App Service resources](../azure-resource-manager/management/move-limitations/app-service-move-limitations.md).
+Yes, you can move your web app across subscriptions. Follow the guidance in [How to move resources in Azure](../azure-resource-manager/management/move-resource-group-and-subscription.md). Some limitations apply when you move a web app. For more information, see [Limitations for moving App Service resources](../azure-resource-manager/management/move-limitations/app-service-move-limitations.md).
-After moving the web app, the host name bindings of the domains within the custom domains setting should remain the same. No additional steps are required to configure the host name bindings.
+After you move a web app, the host name bindings of the domains within the custom domains setting should stay the same. No extra steps are required to configure the host name bindings.
app-service Tutorial Python Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md
Title: 'Tutorial: Deploy a Python Django or Flask web app with PostgreSQL' description: Create a Python Django or Flask web app with a PostgreSQL database and deploy it to Azure. The tutorial uses either the Django or Flask framework and the app is hosted on Azure App Service on Linux.-+ ms.devlang: python
application-gateway Configuration Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-infrastructure.md
Azure also reserves five IP addresses in each subnet for internal use: the first
Consider a subnet that has 27 application gateway instances and an IP address for a private front-end IP. In this case, you need 33 IP addresses: 27 for the application gateway instances, one for the private front end, and five for internal use.
-Application Gateway (Standard or WAF) SKU can support up to 32 instances (32 instance IP addresses + 1 private front-end IP + 5 Azure reserved), so a minimum subnet size of /26 is recommended.
+Application Gateway (Standard or WAF) SKU can support up to 32 instances (32 instance IP addresses + 1 private frontend IP configuration + 5 Azure reserved), so a minimum subnet size of /26 is recommended.
-Application Gateway (Standard_v2 or WAF_v2 SKU) can support up to 125 instances (125 instance IP addresses + 1 private front-end IP + 5 Azure reserved). A minimum subnet size of /24 is recommended.
+Application Gateway (Standard_v2 or WAF_v2 SKU) can support up to 125 instances (125 instance IP addresses + 1 private frontend IP configuration + 5 Azure reserved). A minimum subnet size of /24 is recommended.
+
+To determine the available capacity of a subnet that has existing application gateways provisioned, take the size of the subnet and subtract the five IP addresses reserved by the platform. Next, subtract the maximum instance count of each gateway. For each gateway that has a private frontend IP configuration, subtract one more IP address per gateway as well.
+
+For example, here's how to calculate the available addressing for a subnet with three gateways of varying sizes:
+- Gateway 1: Maximum of 10 instances; utilizes a private frontend IP configuration
+- Gateway 2: Maximum of 2 instances; no private frontend IP configuration
+- Gateway 3: Maximum of 15 instances; utilizes a private frontend IP configuration
+- Subnet Size: /24
+
+Subnet size /24 = 256 IP addresses - 5 reserved by the platform = 251 available addresses.
+251 - Gateway 1 (10 instances) - 1 private frontend IP configuration = 240
+240 - Gateway 2 (2 instances) = 238
+238 - Gateway 3 (15 instances) - 1 private frontend IP configuration = 222
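The capacity arithmetic above can be sketched as a small script. This assumes a /24 subnet contains 256 addresses, of which Azure reserves 5 per subnet; the gateway sizes are the example values from the text.

```shell
# Sketch of the subnet-capacity arithmetic (example values from the text).
# A /24 subnet contains 256 addresses; Azure reserves 5 per subnet.
available=$((256 - 5))             # 251 usable addresses
available=$((available - 10 - 1))  # Gateway 1: 10 instances + 1 private frontend IP
available=$((available - 2))       # Gateway 2: 2 instances, no private frontend IP
available=$((available - 15 - 1))  # Gateway 3: 15 instances + 1 private frontend IP
echo "$available"                  # prints 222
```

The same subtraction pattern applies to any number of gateways: one address per instance at maximum scale, plus one per private frontend IP configuration.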
> [!IMPORTANT] > Although a /24 subnet is not required per Application Gateway v2 SKU deployment, it is highly recommended. This is to ensure that Application Gateway v2 has sufficient space for autoscaling expansion and maintenance upgrades. You should ensure that the Application Gateway v2 subnet has sufficient address space to accommodate the number of instances required to serve your maximum expected traffic. If you specify the maximum instance count, then the subnet should have capacity for at least that many addresses. For capacity planning around instance count, see [instance count details](understanding-pricing.md#instance-count). > [!TIP]
-> IP addresses are allocated from the beginning of the defined subnet space for gateway instances. As instances are created and removed due to creation of gateways or scaling events, it can become difficult to understand what the next available address is in the subnet. To be able to determine the next address to use for a future gateway and have a contiguous addressing theme for frontend IPs, consider assigning frontend IP addresses from the upper half of the defined subset space. For example, if my subnet address space is 10.5.5.0/24, consider setting the frontend IP of your gateways starting with 10.5.5.254 and then following with 10.5.5.253, 10.5.5.252, 10.5.5.251, and so forth for future gateways.
+> IP addresses are allocated from the beginning of the defined subnet space for gateway instances. As instances are created and removed due to creation of gateways or scaling events, it can become difficult to understand what the next available address is in the subnet. To be able to determine the next address to use for a future gateway and have a contiguous addressing theme for frontend IPs, consider assigning frontend IP addresses from the upper half of the defined subnet space. For example, if your subnet address space is 10.5.5.0/24, consider setting the private frontend IP configuration of your gateways starting with 10.5.5.254 and then following with 10.5.5.253, 10.5.5.252, 10.5.5.251, and so forth for future gateways.
> [!TIP] > It is possible to change the subnet of an existing Application Gateway within the same virtual network. You can do this using Azure PowerShell or Azure CLI. For more information, see [Frequently asked questions about Application Gateway](application-gateway-faq.yml#can-i-change-the-virtual-network-or-subnet-for-an-existing-application-gateway)
application-gateway Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-cli.md
Previously updated : 06/14/2021 Last updated : 07/22/2022
automanage Automanage Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/automanage-upgrade.md
All machines that need to be upgraded will have the status **Needs upgrade**. Yo
:::image type="content" source="media\automanage-upgrade\overview-blade.png" alt-text="Needs upgrade status.":::
-### Disable Automanage machines that need to be upgrade
+### Disable Automanage machines that need to be upgraded
Before a machine can upgrade to the new Automanage version, the machine must be disabled from the previous version of Automanage. To disable the machines, follow these steps:

1. Select the checkbox next to the virtual machine you want to disable.
automation Automation Scenario Using Watcher Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-scenario-using-watcher-task.md
To complete this article, the following are required:
* Azure subscription. If you don't have one yet, you can [activate your MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/) or sign up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* [Automation account](./index.yml) to hold the watcher and action runbooks and the Watcher Task.
* A [hybrid runbook worker](automation-hybrid-runbook-worker.md) where the watcher task runs.
-* PowerShell runbooks. PowerShell Workflow runbooks aren't supported by watcher tasks.
+* PowerShell runbooks. PowerShell Workflow runbooks and Graphical runbooks aren't supported by watcher tasks.
## Import a watcher runbook
automation Extension Based Hybrid Runbook Worker Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/extension-based-hybrid-runbook-worker-install.md
To install and use Hybrid Worker extension using REST API, follow these steps. T
```
-1. Follow the steps [here]( /azure/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm#enable-system-assigned-managed-identity-on-an-existing-vm) to enable the System-assigned managed identity on the VM.
+1. Follow the steps [here](/azure/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm#enable-system-assigned-managed-identity-on-an-existing-vm) to enable the System-assigned managed identity on the VM.
1. Get the automation account details using this API call.
Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookW
Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/move/action | Moves Hybrid Runbook Worker from one Worker Group to another.
Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/delete | Deletes a Hybrid Runbook Worker.
+
## Check version of Hybrid Worker

To check the version of the extension-based Hybrid Runbook Worker:
To check the version of the extension-based Hybrid Runbook Worker:
|**Windows** |`C:\Packages\Plugins\Microsoft.Azure.Automation.HybridWorker.HybridWorkerForWindows\`| The path has a *version* folder that has the version information. |
|**Linux** | `/var/lib/waagent/Microsoft.Azure.Automation.HybridWorker.HybridWorkerForLinux-<version>` | The folder name ends with *version* information. |
+
+## Monitor performance of Hybrid Workers using VM insights
+
+Using [VM insights](../azure-monitor/vm/vminsights-overview.md), you can monitor the performance of Azure VMs and Arc-enabled servers deployed as Hybrid Runbook Workers. VM insights monitors key operating system performance indicators related to processor, memory, network adapter, and disk utilization.
+- For Azure VMs, see [How to chart performance with VM insights](../azure-monitor/vm/vminsights-performance.md).
+- For Arc-enabled servers, see [Tutorial: Monitor a hybrid machine with VM insights](../azure-arc/servers/learn/tutorial-enable-vm-insights.md).
+ ## Next steps - To learn how to configure your runbooks to automate processes in your on-premises datacenter or other cloud environments, see [Run runbooks on a Hybrid Runbook Worker](automation-hrw-run-runbooks.md).
To check the version of the extension-based Hybrid Runbook Worker:
- To learn about Azure VM extensions, see [Azure VM extensions and features for Windows](../virtual-machines/extensions/features-windows.md) and [Azure VM extensions and features for Linux](../virtual-machines/extensions/features-linux.md). -- To learn about VM extensions for Arc-enabled servers, see [VM extension management with Azure Arc-enabled servers](../azure-arc/servers/manage-vm-extensions.md).
+- To learn about VM extensions for Arc-enabled servers, see [VM extension management with Azure Arc-enabled servers](../azure-arc/servers/manage-vm-extensions.md).
azure-app-configuration Howto Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-geo-replication.md
+
+ Title: Enable geo-replication (preview)
+description: Learn how to use Azure App Configuration geo replication to create, delete, and manage replicas of your configuration store.
++
+ms.assetid:
+
+ms.devlang: csharp
+ Last updated : 8/1/2022+++
+#Customer intent: I want to be able to list, create, and delete the replicas of my configuration store.
++
+# Enable geo-replication (Preview)
+
+This article covers replication of Azure App Configuration stores. You'll learn how to create and delete replicas of your configuration store.
+
+To learn more about the concept of geo-replication, see [Geo-replication in Azure App Configuration](./concept-soft-delete.md).
+
+## Prerequisites
+
+- An Azure subscription - [create one for free](https://azure.microsoft.com/free/dotnet)
+- An App Configuration store. If you don't have one yet, [create an App Configuration store](quickstart-aspnet-core-app.md).
++
+## Create and list a replica
+
+To create a replica of your configuration store in the portal, follow the steps below.
+
+<!-- ### [Portal](#tab/azure-portal) -->
+
+1. In your App Configuration store, under **Settings**, select **Geo-replication**.
+1. Under **Replica(s)**, select **Create**. Choose the location of your new replica in the dropdown, then assign the replica a name. This replica name must be unique.
+
+
+ :::image type="content" source="./media/how-to-geo-replication-create-flow.png" alt-text="Screenshot of the Geo Replication button being highlighted as well as the create button for a replica.":::
+
+
+1. Select **Create**.
+1. You should now see your new replica listed under Replica(s). Check that the status of the replica is "Succeeded", which indicates that it was created successfully.
+
+
+ :::image type="content" source="media/how-to-geo-replication-created-replica-successfully.png" alt-text="Screenshot of the list of replicas that have been created for the configuration store.":::
++
+<!-- ### [Azure CLI](#tab/azure-cli)
+
+1. In the CLI, run the following code to create a replica of your configuration store.
+
+ ```azurecli-interactive
+ az appconfig replica create --store-name MyConfigStoreName --name MyNewReplicaName --location MyNewReplicaLocation
+ ```
+
+1. Verify that the replica was created successfully by listing all replicas of your configuration store.
+
+ ```azurecli-interactive
+ az appconfig replica list --store-name MyConfigStoreName
+ ```
+ -->
++
+## Delete a replica
+
+To delete a replica in the portal, follow the steps below.
+
+<!-- ### [Portal](#tab/azure-portal) -->
+
+1. In your App Configuration store, under **Settings**, select **Geo-replication**.
+1. Under **Replica(s)**, select the **...** to the right of the replica you want to delete. Select **Delete** from the dropdown.
+
+ :::image type="content" source="./media/how-to-geo-replication-delete-flow.png" alt-text=" Screenshot showing the three dots on the right of the replica being selected, showing you the delete option.":::
+
+
+1. Verify the name of the replica to be deleted and select **OK** to confirm.
+1. Once the process is complete, check the list of replicas to confirm that the correct replica has been deleted.
+
+<!-- ### [Azure CLI](#tab/azure-cli)
+
+1. In the CLI, run the following code.
+
+ ```azurecli-interactive
+ az appconfig replica delete --store-name MyConfigStoreName --name MyNewReplicaName
+ ```
+1. Verify that the replica was deleted successfully by listing all replicas of your configuration store.
+
+ ```azurecli-interactive
+ az appconfig replica list --store-name MyConfigStoreName
+ ```
+
+ -->
++
+## Next steps
+> [!div class="nextstepaction"]
+> [Geo-replication concept](./concept-soft-delete.md)
azure-arc Deploy Active Directory Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-active-directory-sql-managed-instance.md
To deploy an Azure Arc-enabled SQL Managed Instance for Azure Arc Active Directo
To support Active Directory authentication on SQL, the deployment specification uses the following fields:
+### [Customer-managed keytab mode](#tab/customer-managed-keytab-mode)
+ - **Required** (For AD authentication) - `spec.security.activeDirectory.connector.name` Name of the pre-existing Active Directory connector custom resource to join for AD authentication. When provided, the system assumes that AD authentication is desired.-
-### [Customer-managed keytab mode](#tab/customer-managed-keytab-mode)
- - `spec.security.activeDirectory.accountName` Name of the Active Directory account for this managed instance. - `spec.security.activeDirectory.keytabSecret` Name of the Kubernetes secret hosting the pre-created keytab file by users. This secret must be in the same namespace as the managed instance. This parameter is only required for the AD deployment in customer-managed keytab mode.
+ - `spec.services.primary.dnsName`
+ You provide a DNS name for the primary SQL endpoint.
+ - `spec.services.primary.port`
+ You provide a port number for the primary SQL endpoint.
+
+- **Optional**
+ - `spec.security.activeDirectory.connector.namespace`
+    Kubernetes namespace of the pre-existing Active Directory connector to join for AD authentication. When not provided, the system assumes the same namespace as SQL.
### [System-managed keytab mode](#tab/system-managed-keytab-mode)
+- **Required** (For AD authentication)
+ - `spec.security.activeDirectory.connector.name`
+    Name of the pre-existing Active Directory connector custom resource to join for AD authentication. When provided, the system assumes that AD authentication is desired.
- `spec.security.activeDirectory.accountName` Name of the Active Directory (AD) account for this SQL. This account will be automatically generated for this SQL by the system and must not exist in the domain before deploying SQL. --- - `spec.services.primary.dnsName` You provide a DNS name for the primary SQL endpoint. - `spec.services.primary.port`
To support Active Directory authentication on SQL, the deployment specification
- **Optional** - `spec.security.activeDirectory.connector.namespace` Kubernetes namespace of the pre-existing Active Directory connector to join for AD authentication. When not provided, the system assumes the same namespace as SQL.
+ - `spec.security.activeDirectory.encryptionTypes`
+    List of Kerberos encryption types to allow for the automatically generated AD account provided in `spec.security.activeDirectory.accountName`. Accepted values are RC4, AES128, and AES256. When no value is provided, all encryption types are allowed. You can disable RC4 by providing only AES128 and AES256 as encryption types.
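Putting these fields together, a system-managed keytab deployment fragment might look like the following sketch. All names and values below (the `apiVersion`, metadata, connector name, account name, DNS name, and port) are illustrative placeholders, not values from this article:

```yaml
# Illustrative sketch only -- every name and value here is a placeholder.
apiVersion: sql.arcdata.microsoft.com/v5
kind: SqlManagedInstance
metadata:
  name: my-sql-mi
spec:
  security:
    activeDirectory:
      connector:
        name: my-ad-connector       # required: pre-existing AD connector
        namespace: arc               # optional: defaults to the SQL namespace
      accountName: my-sql-mi-account # generated by the system; must not pre-exist
      encryptionTypes:               # optional: omit RC4 to disable it
        - AES128
        - AES256
  services:
    primary:
      dnsName: my-sql-mi.contoso.local
      port: 31433
```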
++ ### Prepare deployment specification for SQL Managed Instance for Azure Arc
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md
Previously updated : 06/14/2022 Last updated : 08/02/2022 #Customer intent: As a data professional, I want to understand why my solutions would benefit from running with Azure Arc-enabled data services so that I can leverage the capability of the feature.
This article highlights capabilities, features, and enhancements recently released or improved for Azure Arc-enabled data services.
+## August 9, 2022
+
+This release is published August 9, 2022.
+
+### Image tag
+
+`v1.10.0_2022-08-09`
+
+For complete release version information, see [Version log](version-log.md#august-9-2022).
+
+### Arc-enabled SQL Managed Instance
+
+- AES encryption can now be enabled for AD authentication.
+
+### `arcdata` Azure CLI extension
+
+- The Azure CLI help text for the Arc data controller, Arc-enabled SQL Managed Instance, and Active Directory connector command groups has been updated to reflect new naming conventions. Indirect mode arguments are now referred to as _Kubernetes API - targeted_ arguments, and direct mode arguments are now referred to as _Azure Resource Manager - targeted_ arguments.
+ ## July 12, 2022 This release is published July 12, 2022
azure-arc Version Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/version-log.md
Previously updated : 6/14/2022 Last updated : 08/02/2022 #Customer intent: As a data professional, I want to understand what versions of components align with specific releases.
This article identifies the component versions with each release of Azure Arc-enabled data services.
+## August 9, 2022
+
+|Component|Value|
+|--|--|
+|Container images tag |`v1.10.0_2022-08-09`|
+|CRD names and version|`datacontrollers.arcdata.microsoft.com`: v1beta1, v1 through v6<br/>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1, v2<br/>`kafkas.arcdata.microsoft.com`: v1beta1<br/>`monitors.arcdata.microsoft.com`: v1beta1, v1, v2<br/>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1 through v6<br/>`postgresqls.arcdata.microsoft.com`: v1beta1, v1beta2<br/>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1, v1<br/>`failovergroups.sql.arcdata.microsoft.com`: v1beta1, v1beta2, v1<br/>`activedirectoryconnectors.arcdata.microsoft.com`: v1beta1, v1beta2<br/>|
+|Azure Resource Manager (ARM) API version|2022-03-01-preview (No change)|
+|`arcdata` Azure CLI extension version|1.4.5 ([Download](https://arcdataazurecliextension.blob.core.windows.net/stage/arcdata-1.4.5-py2.py3-none-any.whl))|
+|Arc enabled Kubernetes helm chart extension version|1.2.20381002|
+|Arc Data extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.5.0 ([Download](https://azuredatastudioarcext.blob.core.windows.net/stage/arc-1.5.0.vsix))</br>1.5.0 ([Download](https://azuredatastudioarcext.blob.core.windows.net/stage/azcli-1.5.0.vsix))|
+ ## July 12, 2022 |Component|Value|
This article identifies the component versions with each release of Azure Arc-en
|Container images tag |`v1.9.0_2022-07-12`| |CRD names and version|`datacontrollers.arcdata.microsoft.com`: v1beta1, v1 through v6<br/>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1, v2<br/>`kafkas.arcdata.microsoft.com`: v1beta1<br/>`monitors.arcdata.microsoft.com`: v1beta1, v1, v2<br/>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1 through v5<br/>`postgresqls.arcdata.microsoft.com`: v1beta1, v1beta2<br/>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1, v1<br/>`failovergroups.sql.arcdata.microsoft.com`: v1beta1, v1beta2, v1<br/>`activedirectoryconnectors.arcdata.microsoft.com`: v1beta1, v1beta2<br/>| |Azure Resource Manager (ARM) API version|2022-03-01-preview (No change)|
-|`arcdata` Azure CLI extension version|1.4.3 ([Download](https://aka.ms/az-cli-arcdata-ext))|
+|`arcdata` Azure CLI extension version|1.4.3 ([Download](https://arcdataazurecliextension.blob.core.windows.net/stage/arcdata-1.4.3-py2.py3-none-any.whl))|
|Arc enabled Kubernetes helm chart extension version|1.2.20031002| |Arc Data extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.3.0 ([Download](https://azuredatastudioarcext.blob.core.windows.net/stage/arc-1.3.0.vsix))</br>1.3.0 ([Download](https://azuredatastudioarcext.blob.core.windows.net/stage/azcli-1.3.0.vsix))|
This article identifies the component versions with each release of Azure Arc-en
|Container images tag |`v1.8.0_2022-06-14`| |CRD names and version|`datacontrollers.arcdata.microsoft.com`: v1beta1, v1 through v6<br/>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1, v2<br/>`kafkas.arcdata.microsoft.com`: v1beta1<br/>`monitors.arcdata.microsoft.com`: v1beta1, v1, v2<br/>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1 through v5<br/>`postgresqls.arcdata.microsoft.com`: v1beta1, v1beta2<br/>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1, v1<br/>`failovergroups.sql.arcdata.microsoft.com`: v1beta1, v1beta2, v1<br/>`activedirectoryconnectors.arcdata.microsoft.com`: v1beta1, v1beta2<br/>| |ARM API version|2022-03-01-preview (No change)|
-|`arcdata` Azure CLI extension version|1.4.2 ([Download](https://aka.ms/az-cli-arcdata-ext))|
+|`arcdata` Azure CLI extension version|1.4.2 ([Download](https://arcdataazurecliextension.blob.core.windows.net/stage/arcdata-1.4.2-py2.py3-none-any.whl))|
|Arc enabled Kubernetes helm chart extension version|1.2.19831003| |Arc Data extension for Azure Data Studio|1.3.0 (No change)([Download](https://aka.ms/ads-arcdata-ext))|
azure-cache-for-redis Cache Best Practices Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-best-practices-connection.md
Test your system's resiliency to connection breaks using a [reboot](cache-admini
## TCP settings for Linux-hosted client applications
-Some Linux versions use TCP settings that are too high by default. The higher TCP settings can create a situation where a client connection to a cache cannot be reestablished for a long time when a Redis server stops responding. The client waits too long before closing the connection gracefully.
+The default TCP settings in some Linux versions can cause Redis server connections to fail for 13 minutes or more. The default settings can prevent the client application from detecting closed connections and restoring them automatically if the connection was not closed gracefully.
-The failure to reestablish a connection can happen if the primary node of your Azure Cache For Redis becomes unavailable, for example, for unplanned maintenance.
+The failure to reestablish a connection can occur in situations where the network connection is disrupted or the Redis server goes offline for unplanned maintenance.
We recommend these TCP settings:
For more information about the scenario, see [Connection does not re-establish f
## Using ForceReconnect with StackExchange.Redis
-In rare cases, StackExchange.Redis fails to reconnect after a connection is dropped. In these cases, restarting the client or creating a new `ConnectionMultiplexer` fixes the issue. We recommend using a singleton `ConnectionMultiplexer` pattern while allowing apps to force a reconnection periodically. Take a look at the quickstart sample project that best matches the framework and platform your application uses. You can see an examples of this code pattern in our [quickstarts](https://github.com/Azure-Samples/azure-cache-redis-samples).
+In rare cases, StackExchange.Redis fails to reconnect after a connection is dropped. In these cases, restarting the client or creating a new `ConnectionMultiplexer` fixes the issue. We recommend using a singleton `ConnectionMultiplexer` pattern while allowing apps to force a reconnection periodically. Take a look at the quickstart sample project that best matches the framework and platform your application uses. You can see an example of this code pattern in our [quickstarts](https://github.com/Azure-Samples/azure-cache-redis-samples).
Users of the `ConnectionMultiplexer` must handle any `ObjectDisposedException` errors that might occur as a result of disposing the old one.
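The force-reconnect pattern is language-agnostic. The following Python sketch uses a stand-in client class (`FakeRedisClient` is hypothetical, not part of StackExchange.Redis or any real Redis library) to show the core idea: keep one shared connection, and replace it through a throttled `force_reconnect` so that a burst of errors doesn't trigger a reconnect storm.

```python
import threading
import time

class FakeRedisClient:
    """Hypothetical stand-in for a real Redis connection object."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True


class ReconnectingClient:
    """Holds a single shared connection and can force-recreate it."""

    def __init__(self, min_reconnect_interval=5.0):
        self._lock = threading.Lock()
        self._min_interval = min_reconnect_interval
        self._last_reconnect = 0.0
        self._conn = FakeRedisClient()

    @property
    def connection(self):
        return self._conn

    def force_reconnect(self):
        """Replace the connection, at most once per min_reconnect_interval."""
        now = time.monotonic()
        with self._lock:
            if now - self._last_reconnect < self._min_interval:
                return False  # throttled: skip this reconnect attempt
            old, self._conn = self._conn, FakeRedisClient()
            self._last_reconnect = now
        # Dispose the old connection; callers must tolerate errors from
        # in-flight operations on it (ObjectDisposedException in .NET).
        old.close()
        return True


client = ReconnectingClient(min_reconnect_interval=0.0)
first = client.connection
client.force_reconnect()
print(first.closed, client.connection is not first)  # True True
```

The throttle interval mirrors the reconnect-frequency guard in the quickstart samples; any new code obtains the connection through the property each time rather than caching the object it returns.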
azure-cache-for-redis Cache Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-ml.md
Azure Cache for Redis is performant and scalable. When paired with an Azure Mach
> * `model` - The registered model that will be deployed. > * `inference_config` - The inference configuration for the model. >
-> For more information on setting these variables, see [Deploy models with Azure Machine Learning](../machine-learning/how-to-deploy-and-where.md).
+> For more information on setting these variables, see [Deploy models with Azure Machine Learning](/azure/machine-learning/how-to-deploy-managed-online-endpoints).
## Create an Azure Cache for Redis instance
def run(data):
return error ```
-For more information on entry script, see [Define scoring code.](../machine-learning/how-to-deploy-and-where.md?tabs=python#define-an-entry-script)
+For more information on the entry script, see [Define scoring code](/azure/machine-learning/how-to-deploy-managed-online-endpoints).
* **Dependencies**, such as helper scripts or Python/Conda packages required to run the entry script or model
These entities are encapsulated into an **inference configuration**. The inferen
For more information on environments, see [Create and manage environments for training and deployment](../machine-learning/how-to-use-environments.md).
-For more information on inference configuration, see [Deploy models with Azure Machine Learning](../machine-learning/how-to-deploy-and-where.md?tabs=python#define-an-inference-configuration).
+For more information on inference configuration, see [Deploy models with Azure Machine Learning](/azure/machine-learning/how-to-deploy-managed-online-endpoints).
> [!IMPORTANT] > When deploying to Functions, you do not need to create a **deployment configuration**.
pip install azureml-contrib-functions
To create the Docker image that is deployed to Azure Functions, use [azureml.contrib.functions.package](/python/api/azureml-contrib-functions/azureml.contrib.functions) or the specific package function for the trigger you want to use. The following code snippet demonstrates how to create a new package with an HTTP trigger from the model and inference configuration: > [!NOTE]
-> The code snippet assumes that `model` contains a registered model, and that `inference_config` contains the configuration for the inference environment. For more information, see [Deploy models with Azure Machine Learning](../machine-learning/how-to-deploy-and-where.md).
+> The code snippet assumes that `model` contains a registered model, and that `inference_config` contains the configuration for the inference environment. For more information, see [Deploy models with Azure Machine Learning](/azure/machine-learning/how-to-deploy-managed-online-endpoints).
```python from azureml.contrib.functions import package
azure-maps Data Driven Style Expressions Web Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/data-driven-style-expressions-web-sdk.md
Learn more about the layer options that support expressions:
> [PolygonLayerOptions](/javascript/api/azure-maps-control/atlas.polygonlayeroptions) > [!div class="nextstepaction"]
-> [SymbolLayerOptions](/javascript/api/azure-maps-control/atlas.symbollayeroptions)
+> [SymbolLayerOptions](/javascript/api/azure-maps-control/atlas.symbollayeroptions)
azure-maps How To Use Spatial Io Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-spatial-io-module.md
The feature we demonstrated here is only one of the many features available in t
Refer to the Azure Maps Spatial IO documentation: > [!div class="nextstepaction"]
-> [Azure Maps Spatial IO package](/javascript/api/azure-maps-spatial-io/)
+> [Azure Maps Spatial IO package](/javascript/api/azure-maps-spatial-io/)
azure-monitor Create Workspace Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/create-workspace-resource.md
The `New-AzApplicationInsights` PowerShell command does not currently support cr
``` > [!NOTE]
-> * For more information on resource properties, see [Property values](https://docs.microsoft.com/azure/templates/microsoft.insights/components?tabs=bicep#property-values)
+> * For more information on resource properties, see [Property values](/azure/templates/microsoft.insights/components?tabs=bicep#property-values)
> * Flow_Type and Request_Source are not used, but are included in this sample for completeness. #### Parameters file
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
If cloud role name is not set, the Application Insights resource's name will be
You can also set the cloud role name using the environment variable `APPLICATIONINSIGHTS_ROLE_NAME` (which will then take precedence over cloud role name specified in the json configuration).
+Or you can set the cloud role name using the Java system property `applicationinsights.role.name`
+(which will also take precedence over cloud role name specified in the json configuration).
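For example, assuming hypothetical jar names (the agent jar path and application jar are placeholders), the system property is passed on the JVM command line:

```
# Placeholder jar names; -Dapplicationinsights.role.name overrides the json configuration.
java -javaagent:applicationinsights-agent.jar \
     -Dapplicationinsights.role.name=my-web-frontend \
     -jar my-app.jar
```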
+ If you have multiple applications deployed in the same JVM and want them to send telemetry to different cloud role names, see [Cloud role name overrides (preview)](#cloud-role-name-overrides-preview).
If you want to set the cloud role instance to something different rather than th
You can also set the cloud role instance using the environment variable `APPLICATIONINSIGHTS_ROLE_INSTANCE` (which will then take precedence over cloud role instance specified in the json configuration).
+Or you can set the cloud role instance using the Java system property `applicationinsights.role.instance`
+(which will also take precedence over cloud role instance specified in the json configuration).
+ ## Sampling Sampling is helpful if you need to reduce cost.
azure-monitor Container Insights Metric Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-metric-alerts.md
Before you start, confirm the following:
To verify your cluster is running the newer version of the agent, you can either:
- * Run the command: `kubectl describe <omsagent-pod-name> --namespace=kube-system`. In the status returned, note the value under **Image** for omsagent in the *Containers* section of the output.
+ * Run the command: `kubectl describe pod <omsagent-pod-name> --namespace=kube-system`. In the status returned, note the value under **Image** for omsagent in the *Containers* section of the output.
* On the **Nodes** tab, select the cluster node and on the **Properties** pane to the right, note the value under **Agent Image Tag**. The value shown for AKS should be version **ciprod05262020** or later. The value shown for Azure Arc-enabled Kubernetes cluster should be version **ciprod09252020** or later. If your cluster has an older version, see [How to upgrade the Container insights agent](container-insights-manage-agent.md#upgrade-agent-on-aks-cluster) for steps to get the latest version.
azure-monitor Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/cache.md
Title: Caching
description: To improve performance, responses can be served from a cache. By default, responses are stored for 2 minutes. Previously updated : 08/18/2021 Last updated : 08/06/2022 # Caching
azure-monitor Cross Workspace Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/cross-workspace-queries.md
Title: Cross workspace queries
description: The API supports the ability to query across multiple workspaces. Previously updated : 08/18/2021 Last updated : 08/06/2022 # Cross workspace queries
azure-monitor App Expression https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/app-expression.md
description: The app expression is used in an Azure Monitor log query to retriev
Previously updated : 08/11/2021 Last updated : 08/06/2022
azure-monitor Resource Expression https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/resource-expression.md
description: The resource expression is used in a resource-centric Azure Monitor
Previously updated : 08/19/2021 Last updated : 08/06/2022
azure-monitor Tutorial Logs Ingestion Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-logs-ingestion-portal.md
Instead of directly configuring the schema of the table, the portal allows you t
```kusto source | extend TimeGenerated = todatetime(Time)
- | parse RawData with
+ | parse RawData.value with
ClientIP:string ' ' * ' ' *
Instead of directly configuring the schema of the table, the portal allows you t
```kusto source | extend TimeGenerated = todatetime(Time)
- | parse RawData with
+ | parse RawData.value with
ClientIP:string ' ' * ' ' *
azure-monitor Workspace Design Service Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/workspace-design-service-providers.md
- Title: Azure Monitor Logs for Service Providers | Microsoft Docs
-description: Azure Monitor Logs can help Managed Service Providers (MSPs), large enterprises, Independent Software Vendors (ISVs) and hosting service providers manage and monitor servers in customer's on-premises or cloud infrastructure.
--- Previously updated : 02/03/2020---
-# Log Analytics workspace design for service providers
-
-Log Analytics workspaces in Azure Monitor can help managed service providers (MSPs), large enterprises, independent software vendors (ISVs), and hosting service providers manage and monitor servers in customer's on-premises or cloud infrastructure.
-
-Large enterprises share many similarities with service providers, particularly when there is a centralized IT team that is responsible for managing IT for many different business units. For simplicity, this document uses the term *service provider* but the same functionality is also available for enterprises and other customers.
-
-For partners and service providers who are part of the [Cloud Solution Provider (CSP)](https://partner.microsoft.com/membership/cloud-solution-provider) program, Log Analytics in Azure Monitor is one of the Azure services available in Azure CSP subscriptions.
-
-Log Analytics in Azure Monitor can also be used by a service provider managing customer resources through the Azure delegated resource management capability in [Azure Lighthouse](../../lighthouse/overview.md).
-
-## Architectures for Service Providers
-
-Log Analytics workspaces provide a method for the administrator to control the flow and isolation of [log](../logs/data-platform-logs.md) data and create an architecture that addresses its specific business needs. [This article](../logs/workspace-design.md) explains the design, deployment, and migration considerations for a workspace, and the [manage access](../logs/manage-access.md) article discusses how to apply and manage permissions to log data. Service providers have additional considerations.
-
-There are three possible architectures for service providers regarding Log Analytics workspaces:
-
-### 1. Distributed - Logs are stored in workspaces located in the customer's tenant
-
-In this architecture, a workspace is deployed in the customer's tenant that is used for all the logs of that customer.
-
-There are two ways that service provider administrators can gain access to a Log Analytics workspace in a customer tenant:
-- A customer can add individual users from the service provider as [Azure Active Directory guest users (B2B)](../../active-directory/external-identities/what-is-b2b.md). The service provider administrators will have to sign in to each customer's directory in the Azure portal to be able to access these workspaces. This also requires the customers to manage individual access for each service provider administrator.-- For greater scalability and flexibility, service providers can use [Azure Lighthouse](../../lighthouse/overview.md) to access the customer's tenant. With this method, the service provider administrators are included in an Azure AD user group in the service provider's tenant, and this group is granted access during the onboarding process for each customer. These administrators can then access each customer's workspaces from within their own service provider tenant, rather than having to log into each customer's tenant individually. Accessing your customers' Log Analytics workspaces resources in this way reduces the work required on the customer side, and can make it easier to gather and analyze data across multiple customers managed by the same service provider via tools such as [Azure Monitor Workbooks](../visualize/workbooks-overview.md). For more info, see [Monitor customer resources at scale](../../lighthouse/how-to/monitor-at-scale.md).-
-The advantages of the distributed architecture are:
-
-* The customer can confirm specific levels of permissions via [Azure delegated resource management](../../lighthouse/concepts/architecture.md), or can manage access to the logs using their own [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md).
-* Logs can be collected from all types of resources, not just agent-based VM data. For example, Azure Audit Logs.
-* Each customer can have different settings for their workspace such as retention and data capping.
-* Isolation between customers for regulatory and compliancy.
-* The charge for each workspace will be rolled into the customer's subscription.
-
-The disadvantages of the distributed architecture are:
-
-* Centrally visualizing and analyzing data [across customer tenants](cross-workspace-query.md) with tools such as Azure Monitor Workbooks can result in slower experiences, especially when analyzing data across more than 50+ workspaces.
-* If customers are not onboarded for Azure delegated resource management, service provider administrators must be provisioned in the customer directory, and it is harder for the service provider to manage a large number of customer tenants at once.
-
-### 2. Central - Logs are stored in a workspace located in the service provider tenant
-
-In this architecture, the logs are not stored in the customer's tenants but only in a central location within one of the service provider's subscriptions. The agents that are installed on the customer's VMs are configured to send their logs to this workspace using the workspace ID and secret key.
-
-The advantages of the centralized architecture are:
-
-* It is easy to manage a large number of customers and integrate them to various backend systems.
-* The service provider has full ownership over the logs and the various artifacts such as functions and saved queries.
-* The service provider can perform analytics across all of its customers.
-
-The disadvantages of the centralized architecture are:
-
-* This architecture is applicable only for agent-based VM data, it will not cover PaaS, SaaS and Azure fabric data sources.
-* It might be hard to separate the data between the customers when they are merged into a single workspace. The only good method to do so is to use the computer's fully qualified domain name (FQDN) or via the Azure subscription ID.
-* All data from all customers will be stored in the same region with a single bill and same retention and configuration settings.
-* Azure fabric and PaaS services such as Azure Diagnostics and Azure Audit Logs require the workspace to be in the same tenant as the resource, so they cannot send their logs to the central workspace.
-* All VM agents from all customers will be authenticated to the central workspace using the same workspace ID and key. There is no method to block logs from a specific customer without interrupting other customers.
-
-### 3. Hybrid - Logs are stored in a workspace located in the customer's tenant and some of them are pulled to a central location
-
-The third architecture is a mix of the two options. It is based on the first, distributed architecture, where the logs are local to each customer, but uses a mechanism to create a central repository of logs. A portion of the logs is pulled into a central location for reporting and analytics. This portion could be a small number of data types or a summary of the activity, such as daily statistics.
-
-There are two options to implement logs in a central location:
-
-1. Central workspace: The service provider can create a workspace in its tenant and use a script that utilizes the [Query API](https://dev.loganalytics.io/) with the [Data Collection API](../logs/data-collector-api.md) to bring the data from the various workspaces to this central location. Another option, other than a script, is to use [Azure Logic Apps](../../logic-apps/logic-apps-overview.md).
-
-2. Power BI as a central location: Power BI can act as the central location when the various workspaces export data to it using the integration between the Log Analytics workspace and [Power BI](./log-powerbi.md).
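As a concrete illustration of the central-workspace option, a pull script ultimately posts queried records to the Data Collector API, which authenticates with a `SharedKey` header built from the central workspace's ID and key. The following Python sketch (workspace ID and key are placeholders; see the linked Data Collection API article for the full request format) shows how that header is computed:

```python
import base64
import hashlib
import hmac


def build_data_collector_signature(workspace_id, shared_key,
                                   content_length, date_rfc1123):
    """Build the SharedKey Authorization header value for the Log Analytics
    HTTP Data Collector API (POST .../api/logs)."""
    # Canonical string the service expects to be signed.
    string_to_sign = ("POST\n" + str(content_length) + "\napplication/json\n"
                      "x-ms-date:" + date_rfc1123 + "\n/api/logs")
    # The workspace key is base64; decode it before computing the HMAC.
    decoded_key = base64.b64decode(shared_key)
    digest = hmac.new(decoded_key, string_to_sign.encode("utf-8"),
                      hashlib.sha256).digest()
    return "SharedKey " + workspace_id + ":" + base64.b64encode(digest).decode()
```

The resulting header accompanies a POST to `https://<workspace-id>.ods.opinsights.azure.com/api/logs?api-version=2016-04-01`, together with `Log-Type` and `x-ms-date` headers.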
-
-## Next steps
-
-* Automate creation and configuration of workspaces using [Resource Manager templates](../logs/resource-manager-workspace.md)
-
-* Automate creation of workspaces using [PowerShell](../logs/powershell-workspace-configuration.md)
-
-* Use [Alerts](../alerts/alerts-overview.md) to integrate with existing systems
-
-* Generate summary reports using [Power BI](./log-powerbi.md)
-
-* Onboard customers to [Azure delegated resource management](../../lighthouse/concepts/architecture.md).
azure-monitor Workspace Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/workspace-design.md
The exception is if combining data in the same workspace helps you reach a [comm
If you create separate workspaces for other criteria then you'll usually create additional workspace pairs. For example, if you have two Azure tenants, you may create four workspaces - an operational and security workspace in each tenant. -- **If you use both Azure Monitor and Microsoft Sentinal**, create a separate workspace for each. Consider combining the two if it helps you reach a commitment tier.
+- **If you use both Azure Monitor and Microsoft Sentinel**, create a separate workspace for each. Consider combining the two if it helps you reach a commitment tier.
+- **If you use both Microsoft Sentinel and Microsoft Defender for Cloud**, consider using the same workspace for both solutions to keep security data in one place.
### Azure tenants
azure-monitor Workspace Expression https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/workspace-expression.md
description: The workspace expression is used in an Azure Monitor log query to r
Previously updated : 08/19/2021 Last updated : 08/06/2022
azure-monitor Profiler Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-settings.md
Title: Configure Application Insights Profiler | Microsoft Docs
description: Use the Azure Application Insights Profiler settings pane to see Profiler status and start profiling sessions ms.contributor: Charles.Weininger Previously updated : 07/18/2022 Last updated : 08/09/2022 # Configure Application Insights Profiler
+Once you've enabled the Application Insights Profiler, you can:
+
+- Start a new profiling session
+- Configure Profiler triggers
+- View recent profiling sessions
+ To open the Azure Application Insights Profiler settings pane, select **Performance** from the left menu within your Application Insights page. :::image type="content" source="./media/profiler-settings/performance-blade-inline.png" alt-text="Screenshot of the link to open performance blade." lightbox="media/profiler-settings/performance-blade.png":::
azure-monitor Profiler Trackrequests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-trackrequests.md
Title: Write code to track requests with Azure Application Insights | Microsoft
description: Write code to track requests with Application Insights so you can get profiles for your requests. Previously updated : 08/06/2018- Last updated : 08/09/2022+ # Write code to track requests with Application Insights
-To view profiles for your application on the Performance page, Azure Application Insights needs to track requests for your application. Application Insights can automatically track requests for applications that are built on already-instrumented frameworks. Two examples are ASP.NET and ASP.NET Core.
+Azure Application Insights needs to track requests for your application in order to provide profiles on the Performance page in the Azure portal.
-For other applications, such as Azure Cloud Services worker roles and Service Fabric stateless APIs, you need to write code to tell Application Insights where your requests begin and end. After you've written this code, requests telemetry is sent to Application Insights. You can view the telemetry on the Performance page, and profiles are collected for those requests.
+For applications built on already-instrumented frameworks (like ASP.NET and ASP.NET Core), Application Insights can automatically track requests.
+But for other applications (like Azure Cloud Services worker roles and Service Fabric stateless APIs), you need to track requests with code that tells Application Insights where your requests begin and end. Requests telemetry is then sent to Application Insights, which you can view on the Performance page. Profiles are collected for those requests.
-To manually track requests, do the following:
+To manually track requests:
1. Early in the application lifetime, add the following code:
To manually track requests, do the following:
// Replace with your own Application Insights instrumentation key. TelemetryConfiguration.Active.InstrumentationKey = "00000000-0000-0000-0000-000000000000"; ```
+
For more information about this global instrumentation key configuration, see [Use Service Fabric with Application Insights](https://github.com/Azure-Samples/service-fabric-dotnet-getting-started/blob/dev/appinsights/ApplicationInsights.md). 1. For any piece of code that you want to instrument, add a `StartOperation<RequestTelemetry>` **using** statement around it, as shown in the following example:
To manually track requests, do the following:
} ```
- Calling `StartOperation<RequestTelemetry>` within another `StartOperation<RequestTelemetry>` scope isn't supported. You can use `StartOperation<DependencyTelemetry>` in the nested scope instead. For example:
+ Calling `StartOperation<RequestTelemetry>` within another `StartOperation<RequestTelemetry>` scope isn't supported. You can use `StartOperation<DependencyTelemetry>` in the nested scope instead. For example:
```csharp using (var getDetailsOperation = client.StartOperation<RequestTelemetry>("GetProductDetails"))
To manually track requests, do the following:
} } ```++
+## Next steps
+
+Troubleshoot the [Application Insights Profiler](./profiler-troubleshooting.md).
azure-monitor Profiler Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-vm.md
In this article, you learn how to run Application Insights Profiler on your Azur
## Pre-requisites -- A functioning [ASP.NET Core application](https://docs.microsoft.com/aspnet/core/getting-started)
+- A functioning [ASP.NET Core application](/aspnet/core/getting-started)
- An [Application Insights resource](../app/create-workspace-resource.md). - Review the Azure Resource Manager templates for the Azure Diagnostics extension: - [VM](https://github.com/Azure/azure-docs-json-samples/blob/master/application-insights/WindowsVirtualMachine.json)
Currently, Application Insights Profiler is not supported for on-premises server
Learn how to... > [!div class="nextstepaction"]
-> [Generate load and view Profiler traces](./profiler-data.md)
+> [Generate load and view Profiler traces](./profiler-data.md)
azure-netapp-files Understand Guidelines Active Directory Domain Service Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/understand-guidelines-active-directory-domain-service-site.md
You should use Active Directory Domain Services (AD DS) in the following scenari
> [!NOTE] > Azure NetApp Files doesn't support the use of AD DS Read-only Domain Controllers (RODC).
-If you choose to use AD DS with Azure NetApp Files, follow the guidance in [Extend AD DS into Azure Architecture Guide](https://docs.microsoft.com/azure/architecture/reference-architectures/identity/adds-extend-domain) and ensure that you meet the Azure NetApp Files [network](#network-requirements) and [DNS requirements](#ad-ds-requirements) for AD DS.
+If you choose to use AD DS with Azure NetApp Files, follow the guidance in [Extend AD DS into Azure Architecture Guide](/azure/architecture/reference-architectures/identity/adds-extend-domain) and ensure that you meet the Azure NetApp Files [network](#network-requirements) and [DNS requirements](#ad-ds-requirements) for AD DS.
### Azure Active Directory Domain Services considerations
azure-resource-manager Compare Template Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/compare-template-syntax.md
workloadSetting: description
``` ```json
-"workloadSetting": "[variables('demoVar'))]"
+"workloadSetting": "[variables('description')]"
``` ## Strings
azure-resource-manager Contribute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/contribute.md
Bicep is an open-source project. That means you can contribute to Bicep's develo
## Contribution types - **Azure Quickstart Templates.** You can contribute example Bicep files and ARM templates to the Azure Quickstart Templates repository. For more information, see the [Azure Quickstart Templates contribution guide](https://github.com/Azure/azure-quickstart-templates/blob/master/1-CONTRIBUTION-GUIDE/README.md#contribution-guide).-- **Documentation.** Bicep's documentation is open to contributions, too. For more information, see [Microsoft Docs contributor guide overview](/contribute/).
+- **Documentation.** Bicep's documentation is open to contributions, too. For more information, see the [Microsoft contributor guide overview](/contribute/).
- **Snippets.** Do you have a favorite snippet you think the community would benefit from? You can add it to the Visual Studio Code extension's collection of snippets. For more information, see [Contributing to Bicep](https://github.com/Azure/bicep/blob/main/CONTRIBUTING.md#snippets). - **Code changes.** If you're a developer and you have ideas you'd like to see in the Bicep language or tooling, you can contribute a pull request. For more information, see [Contributing to Bicep](https://github.com/Azure/bicep/blob/main/CONTRIBUTING.md).
azure-resource-manager Deploy To Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-to-subscription.md
For creating new resource groups, use:
For managing your subscription, use: * [budgets](/azure/templates/microsoft.consumption/budgets)
-* [configurations - Advisor ](/azure/templates/microsoft.advisor/configurations)
+* [configurations - Advisor](/azure/templates/microsoft.advisor/configurations)
* [lineOfCredit](/azure/templates/microsoft.billing/billingaccounts/lineofcredit) * [locks](/azure/templates/microsoft.authorization/locks)
-* [profile - Change Analysis ](/azure/templates/microsoft.changeanalysis/profile)
+* [profile - Change Analysis](/azure/templates/microsoft.changeanalysis/profile)
* [supportPlanTypes](/azure/templates/microsoft.addons/supportproviders/supportplantypes) * [tags](/azure/templates/microsoft.resources/tags)
azure-resource-manager Deployment Script Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deployment-script-bicep.md
Property value details:
- `arguments`: Specify the parameter values. The values are separated by spaces.
- Deployment Scripts splits the arguments into an array of strings by invoking the [CommandLineToArgvW ](/windows/win32/api/shellapi/nf-shellapi-commandlinetoargvw) system call. This step is necessary because the arguments are passed as a [command property](/rest/api/container-instances/containergroups/createorupdate#containerexec)
+ Deployment Scripts splits the arguments into an array of strings by invoking the [CommandLineToArgvW](/windows/win32/api/shellapi/nf-shellapi-commandlinetoargvw) system call. This step is necessary because the arguments are passed as a [command property](/rest/api/container-instances/containergroups/createorupdate#containerexec)
to Azure Container Instance, and the command property is an array of strings. If the arguments contain escaped characters, double-escape the characters. For example, in the previous sample Bicep, the argument is `-name \"John Dole\"`. The escaped string is `-name \\"John Dole\\"`.
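To see why escaping matters, the splitting rules can be emulated. The following Python sketch is a simplified emulation of the CommandLineToArgvW quoting rules (it ignores the special handling of the program-name token): 2n backslashes before a quote yield n backslashes and a mode-toggling quote; 2n+1 backslashes yield n backslashes plus a literal quote character.

```python
def command_line_to_argv(cmdline):
    """Simplified emulation of CommandLineToArgvW argument splitting."""
    args, current, in_quotes, started = [], [], False, False
    i, n = 0, len(cmdline)
    while i < n:
        c = cmdline[i]
        if c == '\\':
            j = i
            while j < n and cmdline[j] == '\\':
                j += 1
            num = j - i
            if j < n and cmdline[j] == '"':
                # 2n backslashes + quote -> n backslashes; odd adds a literal quote.
                current.append('\\' * (num // 2))
                if num % 2 == 1:
                    current.append('"')
                    i = j + 1   # quote consumed as a literal character
                else:
                    i = j       # quote handled next iteration (toggles mode)
            else:
                current.append('\\' * num)  # backslashes not before a quote are literal
                i = j
            started = True
        elif c == '"':
            in_quotes = not in_quotes       # unescaped quote toggles quoting mode
            started = True
            i += 1
        elif c in ' \t' and not in_quotes:
            if started:
                args.append(''.join(current))
                current, started = [], False
            i += 1
        else:
            current.append(c)
            started = True
            i += 1
    if started:
        args.append(''.join(current))
    return args
```

For example, `-name "John Dole"` splits into two arguments with the quotes removed, while escaped quotes (`\"`) survive as literal characters in the resulting strings.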
azure-resource-manager Azure Services Resource Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-services-resource-providers.md
The resources providers that are marked with **- registered** are registered by
| Microsoft.Cache | [Azure Cache for Redis](../../azure-cache-for-redis/index.yml) | | Microsoft.Capacity | core | | Microsoft.Cdn | [Content Delivery Network](../../cdn/index.yml) |
-| Microsoft.CertificateRegistration | [App Service Certificates](../../app-service/configure-ssl-certificate.md#import-an-app-service-certificate) |
+| Microsoft.CertificateRegistration | [App Service Certificates](../../app-service/configure-ssl-certificate.md#import-certificate-into-app-service) |
| Microsoft.ChangeAnalysis | [Azure Monitor](../../azure-monitor/index.yml) | | Microsoft.ClassicCompute | Classic deployment model virtual machine | | Microsoft.ClassicInfrastructureMigrate | Classic deployment model migration |
The resources providers that are marked with **- registered** are registered by
| Microsoft.Microservices4Spring | [Azure Spring Apps](../../spring-apps/overview.md) | | Microsoft.Migrate | [Azure Migrate](../../migrate/migrate-services-overview.md) | | Microsoft.MixedReality | [Azure Spatial Anchors](../../spatial-anchors/index.yml) |
+| Microsoft.MobileNetwork | [Azure Private 5G Core](../../private-5g-core/index.yml) |
| Microsoft.NetApp | [Azure NetApp Files](../../azure-netapp-files/index.yml) | | Microsoft.Network | [Application Gateway](../../application-gateway/index.yml)<br />[Azure Bastion](../../bastion/index.yml)<br />[Azure DDoS Protection](../../ddos-protection/ddos-protection-overview.md)<br />[Azure DNS](../../dns/index.yml)<br />[Azure ExpressRoute](../../expressroute/index.yml)<br />[Azure Firewall](../../firewall/index.yml)<br />[Azure Front Door Service](../../frontdoor/index.yml)<br />[Azure Private Link](../../private-link/index.yml)<br />[Azure Route Server](../../route-server/index.yml)<br />[Load Balancer](../../load-balancer/index.yml)<br />[Network Watcher](../../network-watcher/index.yml)<br />[Traffic Manager](../../traffic-manager/index.yml)<br />[Virtual Network](../../virtual-network/index.yml)<br />[Virtual Network NAT](../../virtual-network/nat-gateway/nat-overview.md)<br />[Virtual WAN](../../virtual-wan/index.yml)<br />[VPN Gateway](../../vpn-gateway/index.yml)<br /> | | Microsoft.Notebooks | [Azure Notebooks](https://notebooks.azure.com/help/introduction) |
azure-resource-manager Extension Resource Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/extension-resource-types.md
Title: Extension resource types description: Lists the Azure resource types are used to extend the capabilities of other resource types. Previously updated : 06/03/2022 Last updated : 08/10/2022 # Resource types that extend capabilities of other resources
An extension resource is a resource that adds to another resource's capabilities
* advisorScore * configurations
+* predict
* recommendations * suppressions
An extension resource is a resource that adds to another resource's capabilities
* serviceAssociationLinks
+## Microsoft.ContainerService
+
+* fleetMemberships
+ ## Microsoft.CostManagement * Alerts
An extension resource is a resource that adds to another resource's capabilities
* Exports * ExternalSubscriptions * Forecast
+* GenerateCostDetailsReport
* GenerateDetailedCostReport * Insights * Pricesheets
An extension resource is a resource that adds to another resource's capabilities
## Microsoft.Network
+* cloudServiceSlots
* networkManagerConnections ## Microsoft.PolicyInsights
An extension resource is a resource that adds to another resource's capabilities
* adaptiveNetworkHardenings * advancedThreatProtectionSettings * antiMalwareSettings
+* applications
* assessmentMetadata * assessments * Compliances * dataCollectionAgents
+* dataSensitivitySettings
* deviceSecurityGroups * governanceRules * InformationProtectionPolicies
An extension resource is a resource that adds to another resource's capabilities
* entities * entityQueryTemplates * fileImports
+* huntsessions
* incidents * metadata * MitreCoverageRecords * onboardingStates * overview
+* recommendations
* securityMLAnalyticsSettings * settings * sourceControls
azure-resource-manager Preview Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/preview-features.md
Title: Set up preview features in Azure subscription description: Describes how to list, register, or unregister preview features in your Azure subscription for a resource provider. Previously updated : 07/08/2022 Last updated : 08/10/2022 # Customer intent: As an Azure user, I want to use preview features in my subscription so that I can expose a resource provider's preview functionality.
Register a preview feature in your Azure subscription to expose more functionali
After a preview feature is registered in your subscription, you'll see one of two states: **Registered** or **Pending**. - For a preview feature that doesn't require approval, the state is **Registered**.-- If a preview feature requires approval, the registration state is **Pending**.
+- If a preview feature requires approval, the registration state is **Pending**. You must request approval from the Azure service offering the preview feature. Usually, you request access through a support ticket.
- To request approval, submit an [Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). - After the registration is approved, the preview feature's state changes to **Registered**.
+Some services require other methods, such as email, to get approval for a pending request. Check announcements about the preview feature for information about how to get access.
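For reference, registration can also be checked and requested from the Azure CLI with the `az feature` commands; in this sketch, `Microsoft.Compute` and the feature name are placeholders for whichever preview you're enabling:

```azurecli
# List preview features for a resource provider (placeholder namespace).
az feature list --namespace Microsoft.Compute --output table

# Register a preview feature; the state stays "Registering" or "Pending" until approved.
az feature register --namespace Microsoft.Compute --name SomePreviewFeature

# Check the registration state.
az feature show --namespace Microsoft.Compute --name SomePreviewFeature --query properties.state
```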
+ # [Portal](#tab/azure-portal) 1. Sign in to the [Azure portal](https://portal.azure.com/).
azure-resource-manager Resources Without Resource Group Limit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resources-without-resource-group-limit.md
Title: Resources without 800 count limit description: Lists the Azure resource types that can have more than 800 instances in a resource group. Previously updated : 06/27/2022 Last updated : 08/10/2022 # Resources not limited to 800 instances per resource group
Some resources have a limit on the number instances per region. This limit is di
## Microsoft.AlertsManagement
+* actionRules
* smartDetectorAlertRules ## Microsoft.Automation
Some resources have a limit on the number instances per region. This limit is di
## Microsoft.AzureStack
+* generateDeploymentLicense
* linkedSubscriptions * registrations * registrations/customerSubscriptions
Some resources have a limit on the number instances per region. This limit is di
* clusters * namespaces
-## Microsoft.Experimentation
-
-* experimentWorkspaces
- ## Microsoft.GuestConfiguration * guestConfigurationAssignments
Some resources have a limit on the number instances per region. This limit is di
* virtualNetworks/privateDnsZoneLinks * virtualNetworkTaps
+## Microsoft.NotificationHubs
+
+* namespaces - By default, limited to 800 instances. That limit can be increased by contacting support.
+* namespaces/notificationHubs - By default, limited to 800 instances. That limit can be increased by contacting support.
+ ## Microsoft.PowerBI * workspaceCollections - By default, limited to 800 instances. That limit can be increased by contacting support.
Some resources have a limit on the number instances per region. This limit is di
* namespaces
+## Microsoft.Security
+
+* assignments
+ ## Microsoft.ServiceBus * namespaces
azure-resource-manager Tag Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-support.md
Title: Tag support for resources description: Shows which Azure resource types support tags. Provides details for all Azure services. Previously updated : 06/27/2022 Last updated : 08/10/2022 # Tag support for Azure resources
To get the same data as a file of comma-separated values, download [tag-support.
> | configurations | No | No | > | generateRecommendations | No | No | > | metadata | No | No |
+> | predict | No | No |
> | recommendations | No | No | > | suppressions | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
+> | connectedEnvironments | Yes | Yes |
+> | connectedEnvironments / certificates | Yes | Yes |
> | containerApps | Yes | Yes | > | managedEnvironments | Yes | Yes | > | managedEnvironments / certificates | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
+> | runtimeVersions | No | No |
> | Spring | Yes | Yes | > | Spring / apps | No | No | > | Spring / apps / deployments | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | - | -- | -- | > | b2cDirectories | Yes | No | > | b2ctenants | No | No |
+> | ciamDirectories | Yes | Yes |
> | guestUsages | Yes | Yes | ## Microsoft.AzureArcData
To get the same data as a file of comma-separated values, download [tag-support.
> | PostgresInstances | Yes | Yes | > | SqlManagedInstances | Yes | Yes | > | SqlServerInstances | Yes | Yes |
+> | SqlServerInstances / Databases | Yes | Yes |
## Microsoft.AzureCIS
To get the same data as a file of comma-separated values, download [tag-support.
> | accounts | Yes | Yes | > | accounts / devices | No | No | > | accounts / devices / sensors | No | No |
+> | accounts / sensors | No | No |
> | accounts / solutioninstances | No | No | > | accounts / solutions | No | No | > | accounts / targets | No | No |
+## Microsoft.AzureScan
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | scanningAccounts | Yes | Yes |
+ ## Microsoft.AzureSphere > [!div class="mx-tableFixed"]
To get the same data as a file of comma-separated values, download [tag-support.
> | catalogs / products | No | No | > | catalogs / products / devicegroups | No | No |
+## Microsoft.AzureSphereGen2
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | catalogs | Yes | Yes |
+> | catalogs / certificates | No | No |
+> | catalogs / deviceRegistrations | Yes | Yes |
+> | catalogs / provisioningPackages | Yes | Yes |
+ ## Microsoft.AzureStack > [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | cloudManifestFiles | No | No |
+> | generateDeploymentLicense | No | No |
> | linkedSubscriptions | Yes | Yes | > | registrations | Yes | Yes | > | registrations / customerSubscriptions | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | clusters / offers | No | No | > | clusters / publishers | No | No | > | clusters / publishers / offers | No | No |
-> | galleryImages | Yes | Yes |
+> | galleryimages | Yes | Yes |
+> | marketplacegalleryimages | Yes | Yes |
> | networkinterfaces | Yes | Yes |
+> | storagecontainers | Yes | Yes |
> | virtualharddisks | Yes | Yes | > | virtualmachines | Yes | Yes | > | virtualmachines / extensions | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> | billingAccounts | No | No | > | billingAccounts / agreements | No | No | > | billingAccounts / appliedReservationOrders | No | No |
+> | billingAccounts / associatedTenants | No | No |
> | billingAccounts / billingPermissions | No | No | > | billingAccounts / billingProfiles | No | No | > | billingAccounts / billingProfiles / billingPermissions | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | billingAccounts / invoiceSections / transactions | No | No | > | billingAccounts / invoiceSections / transfers | No | No | > | billingAccounts / lineOfCredit | No | No |
+> | billingAccounts / notificationContacts | No | No |
> | billingAccounts / payableOverage | No | No | > | billingAccounts / paymentMethods | No | No | > | billingAccounts / payNow | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
+> | calculateMigrationCost | No | No |
> | savingsPlanOrderAliases | No | No | > | savingsPlanOrders | No | No | > | savingsPlanOrders / savingsPlans | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
+> | canMigrate | No | No |
> | CdnWebApplicationFirewallManagedRuleSets | No | No | > | CdnWebApplicationFirewallPolicies | Yes | Yes | > | edgenodes | No | No |
+> | migrate | No | No |
> | profiles | Yes | Yes | > | profiles / afdendpoints | Yes | Yes | > | profiles / afdendpoints / routes | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | VCenters | Yes | Yes | > | VCenters / InventoryItems | No | No | > | VirtualMachines | Yes | Yes |
+> | VirtualMachines / AssessPatches | No | No |
> | VirtualMachines / Extensions | Yes | Yes | > | VirtualMachines / GuestAgents | No | No | > | VirtualMachines / HybridIdentityMetadata | No | No |
+> | VirtualMachines / InstallPatches | No | No |
> | VirtualMachineTemplates | Yes | Yes | > | VirtualNetworks | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | containerServices | Yes | Yes |
+> | fleetMemberships | No | No |
+> | fleets | Yes | Yes |
+> | fleets / members | No | No |
> | managedClusters | Yes | Yes | > | ManagedClusters / eventGridFilters | No | No | > | managedclustersnapshots | Yes | Yes |
-> | openShiftManagedClusters | Yes | Yes |
> | snapshots | Yes | Yes | ## Microsoft.CostManagement
To get the same data as a file of comma-separated values, download [tag-support.
> | fetchMarketplacePrices | No | No | > | fetchPrices | No | No | > | Forecast | No | No |
+> | GenerateCostDetailsReport | No | No |
> | GenerateDetailedCostReport | No | No | > | Insights | No | No | > | Pricesheets | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | workspaces / proposals / entitlements / policies | No | No | > | workspaces / proposals / invitations | No | No | > | workspaces / proposals / scriptReferences | No | No |
+> | workspaces / proposals / virtualOutputReferences | No | No |
> | workspaces / resourceReferences | No | No | > | workspaces / scripts | No | No | > | workspaces / scripts / scriptrevisions | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | dataFactorySchema | No | No | > | factories | Yes | Yes | > | factories / integrationRuntimes | No | No |
+> | factories / pipelines | No | No |
> [!NOTE] > If you have Azure-SSIS integration runtimes in your data factory, their running cost will be tagged with data factory tags. Running Azure-SSIS integration runtimes must be stopped and restarted for new data factory tags to be applied to their running cost.
To get the same data as a file of comma-separated values, download [tag-support.
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
+> | assessForMigration | No | No |
> | flexibleServers | Yes | Yes | > | getPrivateDnsZoneSuffix | No | No | > | servers | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> | instances / sandboxes | Yes | Yes | > | instances / sandboxes / experiments | Yes | Yes |
+## Microsoft.DevCenter
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | devcenters | Yes | Yes |
+> | devcenters / attachednetworks | No | No |
+> | devcenters / catalogs | No | No |
+> | devcenters / devboxdefinitions | Yes | Yes |
+> | devcenters / environmentTypes | No | No |
+> | devcenters / galleries | No | No |
+> | devcenters / galleries / images | No | No |
+> | devcenters / galleries / images / versions | No | No |
+> | devcenters / images | No | No |
+> | networkconnections | Yes | Yes |
+> | projects | Yes | Yes |
+> | projects / attachednetworks | No | No |
+> | projects / devboxdefinitions | No | No |
+> | projects / environmentTypes | No | No |
+> | projects / pools | Yes | Yes |
+> | projects / pools / schedules | No | No |
+> | registeredSubscriptions | No | No |
+
+## Microsoft.DevHub
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | workflows | Yes | Yes |
+ ## Microsoft.Devices > [!div class="mx-tableFixed"]
To get the same data as a file of comma-separated values, download [tag-support.
> | namespaces / networkrulesets | No | No | > | namespaces / privateEndpointConnections | No | No |
-## Microsoft.Experimentation
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Supports tags | Tag in cost report |
-> | - | -- | -- |
-> | experimentWorkspaces | Yes | Yes |
- ## Microsoft.Falcon > [!div class="mx-tableFixed"]
To get the same data as a file of comma-separated values, download [tag-support.
> | projects / environments / deployments | No | No | > | projects / environmentTypes | No | No | > | projects / pools | Yes | Yes |
+> | registeredSubscriptions | No | No |
## Microsoft.FluidRelay
To get the same data as a file of comma-separated values, download [tag-support.
> | services / privateEndpointConnectionProxies | No | No | > | services / privateEndpointConnections | No | No | > | services / privateLinkResources | No | No |
+> | validateMedtechMappings | No | No |
> | workspaces | Yes | Yes | > | workspaces / dicomservices | Yes | Yes | > | workspaces / eventGridFilters | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | provisionedClusters | Yes | Yes | > | provisionedClusters / agentPools | Yes | Yes | > | provisionedClusters / hybridIdentityMetadata | No | No |
+> | storageSpaces | Yes | Yes |
+> | virtualNetworks | Yes | Yes |
## Microsoft.HybridData
To get the same data as a file of comma-separated values, download [tag-support.
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
+> | configurationGroupValues | Yes | Yes |
> | devices | Yes | Yes |
-> | networkFunctions | Yes | Yes |
+> | networkFunctionPublishers | No | No |
+> | networkFunctionPublishers / networkFunctionDefinitionGroups | No | No |
+> | networkFunctionPublishers / networkFunctionDefinitionGroups / publisherNetworkFunctionDefinitionVersions | No | No |
+> | networkfunctions | Yes | Yes |
> | networkFunctionVendors | No | No |
+> | publishers | Yes | Yes |
+> | publishers / artifactStores | Yes | Yes |
+> | publishers / artifactStores / artifactManifests | Yes | Yes |
+> | publishers / configurationGroupSchemas | Yes | Yes |
+> | publishers / networkFunctionDefinitionGroups | Yes | Yes |
+> | publishers / networkFunctionDefinitionGroups / networkFunctionDefinitionVersions | Yes | Yes |
+> | publishers / networkFunctionDefinitionGroups / previewSubscriptions | Yes | Yes |
+> | publishers / networkServiceDesignGroups | Yes | Yes |
+> | publishers / networkServiceDesignGroups / networkServiceDesignVersions | Yes | Yes |
> | registeredSubscriptions | No | No |
+> | siteNetworkServices | Yes | Yes |
+> | sites | Yes | Yes |
> | vendors | No | No | ## Microsoft.ImportExport
To get the same data as a file of comma-separated values, download [tag-support.
> | workspaces / onlineEndpoints | Yes | Yes | > | workspaces / onlineEndpoints / deployments | Yes | Yes | > | workspaces / registries | Yes | Yes |
+> | workspaces / schedules | No | No |
> | workspaces / services | No | No | > [!NOTE]
To get the same data as a file of comma-separated values, download [tag-support.
> | publishers / offers | No | No | > | publishers / offers / amendments | No | No | > | register | No | No |
+> | search | No | No |
## Microsoft.MarketplaceNotifications
To get the same data as a file of comma-separated values, download [tag-support.
> | - | -- | -- | > | assessmentProjects | Yes | Yes | > | migrateprojects | Yes | Yes |
+> | modernizeProjects | Yes | Yes |
> | moveCollections | Yes | Yes | > | projects | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> | packetCoreControlPlanes | Yes | Yes | > | packetCoreControlPlanes / packetCoreDataPlanes | Yes | Yes | > | packetCoreControlPlanes / packetCoreDataPlanes / attachedDataNetworks | Yes | Yes |
+> | packetCoreControlPlaneVersions | No | No |
> | packetCores | Yes | Yes |
+> | simGroups | Yes | Yes |
+> | simGroups / sims | No | No |
> | sims | Yes | Yes | > | sims / simProfiles | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> | azureWebCategories | No | No | > | bastionHosts | Yes | Yes | > | bgpServiceCommunities | No | No |
+> | cloudServiceSlots | No | No |
> | connections | Yes | Yes | > | customIpPrefixes | Yes | Yes | > | ddosCustomPolicies | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> | serviceEndpointPolicies | Yes | Yes | > | trafficManagerGeographicHierarchies | No | No | > | trafficmanagerprofiles | Yes, see [note below](#network-limitations) | Yes |
+> | trafficmanagerprofiles / azureendpoints | No | No |
+> | trafficmanagerprofiles / externalendpoints | No | No |
> | trafficmanagerprofiles / heatMaps | No | No |
+> | trafficmanagerprofiles / nestedendpoints | No | No |
> | trafficManagerUserMetricsKeys | No | No | > | virtualHubs | Yes | Yes | > | virtualNetworkGateways | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | bareMetalMachines | Yes | Yes |
+> | cloudServicesNetworks | Yes | Yes |
> | clusterManagers | Yes | Yes | > | clusters | Yes | Yes |
+> | defaultCniNetworks | Yes | Yes |
+> | disks | Yes | Yes |
> | hybridAksClusters | Yes | Yes | > | hybridAksManagementDomains | Yes | Yes | > | hybridAksVirtualMachines | Yes | Yes |
+> | l2Networks | Yes | Yes |
+> | l3Networks | Yes | Yes |
> | rackManifests | Yes | Yes | > | racks | Yes | Yes |
+> | storageAppliances | Yes | Yes |
+> | trunkedNetworks | Yes | Yes |
> | virtualMachines | Yes | Yes | > | workloadNetworks | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> | peeringServiceProviders | No | No | > | peeringServices | Yes | Yes |
+## Microsoft.Pki
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | Pki | Yes | Yes |
+> | Pkis | Yes | Yes |
+> | Pkis / certificateAuthorities | Yes | Yes |
+> | Pkis / enrollmentPolicies | Yes | Yes |
+ ## Microsoft.PlayFab > [!div class="mx-tableFixed"]
To get the same data as a file of comma-separated values, download [tag-support.
> | accounts | Yes | Yes | > | enterprisePolicies | Yes | Yes |
-## Microsoft.ProjectBabylon
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Supports tags | Tag in cost report |
-> | - | -- | -- |
-> | accounts | Yes | Yes |
-> | deletedAccounts | No | No |
-> | getDefaultAccount | No | No |
-> | removeDefaultAccount | No | No |
-> | setDefaultAccount | No | No |
- ## Microsoft.ProviderHub > [!div class="mx-tableFixed"]
To get the same data as a file of comma-separated values, download [tag-support.
> | AvailabilitySets | Yes | Yes | > | Clouds | Yes | Yes | > | VirtualMachines | Yes | Yes |
+> | VirtualMachines / HybridIdentityMetadata | No | No |
> | VirtualMachineTemplates | Yes | Yes | > | VirtualNetworks | Yes | Yes | > | VMMServers | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> | alertsSuppressionRules | No | No | > | allowedConnections | No | No | > | antiMalwareSettings | No | No |
+> | applications | No | No |
> | assessmentMetadata | No | No | > | assessments | No | No | > | assessments / governanceAssignments | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | customEntityStoreAssignments | Yes | Yes | > | dataCollectionAgents | No | No | > | dataScanners | Yes | Yes |
+> | dataSensitivitySettings | No | No |
> | deviceSecurityGroups | No | No | > | discoveredSecuritySolutions | No | No | > | externalSecuritySolutions | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | subAssessments | No | No | > | tasks | No | No | > | topologies | No | No |
-> | vmScanners | Yes | Yes |
+> | vmScanners | No | No |
> | workspaceSettings | No | No | ## Microsoft.SecurityDetonation
To get the same data as a file of comma-separated values, download [tag-support.
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
+> | azureDevOpsConnectors | Yes | Yes |
+> | azureDevOpsConnectors / orgs | No | No |
+> | azureDevOpsConnectors / orgs / projects | No | No |
+> | azureDevOpsConnectors / orgs / projects / repos | No | No |
> | gitHubConnectors | Yes | Yes | > | gitHubConnectors / gitHubRepos | No | No |
+> | gitHubConnectors / owners | No | No |
+> | gitHubConnectors / owners / repos | No | No |
## Microsoft.SecurityInsights
To get the same data as a file of comma-separated values, download [tag-support.
> | entities | No | No | > | entityQueryTemplates | No | No | > | fileImports | No | No |
+> | huntsessions | No | No |
> | incidents | No | No | > | metadata | No | No | > | MitreCoverageRecords | No | No | > | onboardingStates | No | No | > | overview | No | No |
+> | recommendations | No | No |
> | securityMLAnalyticsSettings | No | No | > | settings | No | No | > | sourceControls | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | instancePools | Yes | Yes | > | managedInstances | Yes | Yes | > | managedInstances / administrators | No | No |
+> | managedInstances / advancedThreatProtectionSettings | No | No |
> | managedInstances / databases | Yes | Yes |
+> | managedInstances / databases / advancedThreatProtectionSettings | No | No |
> | managedInstances / databases / backupLongTermRetentionPolicies | No | No | > | managedInstances / databases / vulnerabilityAssessments | No | No | > | managedInstances / dnsAliases | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | servers / databases / metrics | No | No | > | servers / databases / recommendedSensitivityLabels | No | No | > | servers / databases / securityAlertPolicies | No | No |
+> | servers / databases / sqlvulnerabilityassessments | No | No |
> | servers / databases / syncGroups | No | No | > | servers / databases / syncGroups / syncMembers | No | No | > | servers / databases / topQueries | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | servers / restorableDroppedDatabases | No | No | > | servers / securityAlertPolicies | No | No | > | servers / serviceObjectives | No | No |
+> | servers / sqlvulnerabilityassessments | No | No |
> | servers / syncAgents | No | No | > | servers / tdeCertificates | No | No | > | servers / usages | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | caches / storageTargets | No | No | > | usageModels | No | No |
+## Microsoft.StorageMover
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | storageMovers | Yes | Yes |
+> | storageMovers / agents | No | No |
+> | storageMovers / endpoints | No | No |
+> | storageMovers / projects | No | No |
+> | storageMovers / projects / jobDefinitions | No | No |
+> | storageMovers / projects / jobDefinitions / jobRuns | No | No |
+ ## Microsoft.StoragePool > [!div class="mx-tableFixed"]
To get the same data as a file of comma-separated values, download [tag-support.
> | workspaces / sqlDatabases | Yes | Yes | > | workspaces / sqlPools | Yes | Yes |
-<a id="synapsenote"></a>
- > [!NOTE] > The Master database doesn't support tags, but other databases, including Azure Synapse Analytics databases, support tags. Azure Synapse Analytics databases must be in Active (not Paused) state. + ## Microsoft.TestBase > [!div class="mx-tableFixed"]
To get the same data as a file of comma-separated values, download [tag-support.
> | testBaseAccounts | Yes | Yes | > | testBaseAccounts / customerEvents | No | No | > | testBaseAccounts / emailEvents | No | No |
+> | testBaseAccounts / externalTestTools | No | No |
+> | testBaseAccounts / externalTestTools / testCases | No | No |
> | testBaseAccounts / flightingRings | No | No | > | testBaseAccounts / packages | Yes | Yes | > | testBaseAccounts / packages / favoriteProcesses | No | No |
azure-resource-manager Deployment Complete Mode Deletion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-complete-mode-deletion.md
Title: Complete mode deletion description: Shows how resource types handle complete mode deletion in Azure Resource Manager templates. Previously updated : 06/27/2022 Last updated : 08/10/2022 # Deletion of Azure resources for complete mode deployments
The resources are listed by resource provider namespace. To match a resource pro
> | configurations | No | > | generateRecommendations | No | > | metadata | No |
+> | predict | No |
> | recommendations | No | > | suppressions | No |
The resources are listed by resource provider namespace. To match a resource pro
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
+> | connectedEnvironments | Yes |
+> | connectedEnvironments / certificates | Yes |
> | containerApps | Yes | > | managedEnvironments | Yes | > | managedEnvironments / certificates | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
+> | runtimeVersions | No |
> | Spring | Yes | > | Spring / apps | No | > | Spring / apps / deployments | No |
The resources are listed by resource provider namespace. To match a resource pro
> | - | -- | > | b2cDirectories | Yes | > | b2ctenants | No |
+> | ciamDirectories | Yes |
> | guestUsages | Yes | ## Microsoft.AzureArcData
The resources are listed by resource provider namespace. To match a resource pro
> | PostgresInstances | Yes | > | SqlManagedInstances | Yes | > | SqlServerInstances | Yes |
+> | SqlServerInstances / Databases | Yes |
## Microsoft.AzureCIS
The resources are listed by resource provider namespace. To match a resource pro
> | accounts | Yes | > | accounts / devices | No | > | accounts / devices / sensors | No |
+> | accounts / sensors | No |
> | accounts / solutioninstances | No | > | accounts / solutions | No | > | accounts / targets | No |
+## Microsoft.AzureScan
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | scanningAccounts | Yes |
+ ## Microsoft.AzureSphere > [!div class="mx-tableFixed"]
The resources are listed by resource provider namespace. To match a resource pro
> | catalogs / products | No | > | catalogs / products / devicegroups | No |
+## Microsoft.AzureSphereGen2
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | catalogs | Yes |
+> | catalogs / certificates | No |
+> | catalogs / deviceRegistrations | Yes |
+> | catalogs / provisioningPackages | Yes |
+ ## Microsoft.AzureStack > [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- | > | cloudManifestFiles | No |
+> | generateDeploymentLicense | No |
> | linkedSubscriptions | Yes | > | registrations | Yes | > | registrations / customerSubscriptions | No |
The resources are listed by resource provider namespace. To match a resource pro
> | clusters / offers | No | > | clusters / publishers | No | > | clusters / publishers / offers | No |
-> | galleryImages | Yes |
+> | galleryimages | Yes |
+> | marketplacegalleryimages | Yes |
> | networkinterfaces | Yes |
+> | storagecontainers | Yes |
> | virtualharddisks | Yes | > | virtualmachines | Yes | > | virtualmachines / extensions | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> | billingAccounts | No | > | billingAccounts / agreements | No | > | billingAccounts / appliedReservationOrders | No |
+> | billingAccounts / associatedTenants | No |
> | billingAccounts / billingPermissions | No | > | billingAccounts / billingProfiles | No | > | billingAccounts / billingProfiles / billingPermissions | No |
The resources are listed by resource provider namespace. To match a resource pro
> | billingAccounts / invoiceSections / transactions | No | > | billingAccounts / invoiceSections / transfers | No | > | billingAccounts / lineOfCredit | No |
+> | billingAccounts / notificationContacts | No |
> | billingAccounts / payableOverage | No | > | billingAccounts / paymentMethods | No | > | billingAccounts / payNow | No |
The resources are listed by resource provider namespace. To match a resource pro
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
+> | calculateMigrationCost | No |
> | savingsPlanOrderAliases | No | > | savingsPlanOrders | No | > | savingsPlanOrders / savingsPlans | No |
The resources are listed by resource provider namespace. To match a resource pro
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
+> | canMigrate | No |
> | CdnWebApplicationFirewallManagedRuleSets | No | > | CdnWebApplicationFirewallPolicies | Yes | > | edgenodes | No |
+> | migrate | No |
> | profiles | Yes | > | profiles / afdendpoints | Yes | > | profiles / afdendpoints / routes | No |
The resources are listed by resource provider namespace. To match a resource pro
> | VCenters | Yes | > | VCenters / InventoryItems | No | > | VirtualMachines | Yes |
+> | VirtualMachines / AssessPatches | No |
> | VirtualMachines / Extensions | Yes | > | VirtualMachines / GuestAgents | No | > | VirtualMachines / HybridIdentityMetadata | No |
+> | VirtualMachines / InstallPatches | No |
> | VirtualMachineTemplates | Yes | > | VirtualNetworks | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> | Resource type | Complete mode deletion | > | - | -- | > | containerServices | Yes |
+> | fleetMemberships | No |
+> | fleets | Yes |
+> | fleets / members | No |
> | managedClusters | Yes | > | ManagedClusters / eventGridFilters | No | > | managedclustersnapshots | Yes |
-> | openShiftManagedClusters | Yes |
> | snapshots | Yes | ## Microsoft.CostManagement
The resources are listed by resource provider namespace. To match a resource pro
> | fetchMarketplacePrices | No | > | fetchPrices | No | > | Forecast | No |
+> | GenerateCostDetailsReport | No |
> | GenerateDetailedCostReport | No | > | Insights | No | > | Pricesheets | No |
The resources are listed by resource provider namespace. To match a resource pro
> | workspaces / proposals / entitlements / policies | No | > | workspaces / proposals / invitations | No | > | workspaces / proposals / scriptReferences | No |
+> | workspaces / proposals / virtualOutputReferences | No |
> | workspaces / resourceReferences | No | > | workspaces / scripts | No | > | workspaces / scripts / scriptrevisions | No |
The resources are listed by resource provider namespace. To match a resource pro
> | dataFactorySchema | No | > | factories | Yes | > | factories / integrationRuntimes | No |
+> | factories / pipelines | No |
## Microsoft.DataLakeAnalytics
The resources are listed by resource provider namespace. To match a resource pro
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
+> | assessForMigration | No |
> | flexibleServers | Yes | > | getPrivateDnsZoneSuffix | No | > | servers | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> | instances / sandboxes | Yes | > | instances / sandboxes / experiments | Yes |
+## Microsoft.DevCenter
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | devcenters | Yes |
+> | devcenters / attachednetworks | No |
+> | devcenters / catalogs | No |
+> | devcenters / devboxdefinitions | Yes |
+> | devcenters / environmentTypes | No |
+> | devcenters / galleries | No |
+> | devcenters / galleries / images | No |
+> | devcenters / galleries / images / versions | No |
+> | devcenters / images | No |
+> | networkconnections | Yes |
+> | projects | Yes |
+> | projects / attachednetworks | No |
+> | projects / devboxdefinitions | No |
+> | projects / environmentTypes | No |
+> | projects / pools | Yes |
+> | projects / pools / schedules | No |
+> | registeredSubscriptions | No |
+
+## Microsoft.DevHub
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | workflows | Yes |
+ ## Microsoft.Devices > [!div class="mx-tableFixed"]
The resources are listed by resource provider namespace. To match a resource pro
> | namespaces / networkrulesets | No | > | namespaces / privateEndpointConnections | No |
-## Microsoft.Experimentation
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Complete mode deletion |
-> | - | -- |
-> | experimentWorkspaces | Yes |
- ## Microsoft.Falcon > [!div class="mx-tableFixed"]
The resources are listed by resource provider namespace. To match a resource pro
> | projects / environments / deployments | No | > | projects / environmentTypes | No | > | projects / pools | Yes |
+> | registeredSubscriptions | No |
## Microsoft.FluidRelay
The resources are listed by resource provider namespace. To match a resource pro
> | services / privateEndpointConnectionProxies | No | > | services / privateEndpointConnections | No | > | services / privateLinkResources | No |
+> | validateMedtechMappings | No |
> | workspaces | Yes | > | workspaces / dicomservices | Yes | > | workspaces / eventGridFilters | No |
The resources are listed by resource provider namespace. To match a resource pro
> | provisionedClusters | Yes | > | provisionedClusters / agentPools | Yes | > | provisionedClusters / hybridIdentityMetadata | No |
+> | storageSpaces | Yes |
+> | virtualNetworks | Yes |
## Microsoft.HybridData
The resources are listed by resource provider namespace. To match a resource pro
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
+> | configurationGroupValues | Yes |
> | devices | Yes |
-> | networkFunctions | Yes |
+> | networkFunctionPublishers | No |
+> | networkFunctionPublishers / networkFunctionDefinitionGroups | No |
+> | networkFunctionPublishers / networkFunctionDefinitionGroups / publisherNetworkFunctionDefinitionVersions | No |
+> | networkfunctions | Yes |
> | networkFunctionVendors | No |
+> | publishers | Yes |
+> | publishers / artifactStores | Yes |
+> | publishers / artifactStores / artifactManifests | Yes |
+> | publishers / configurationGroupSchemas | Yes |
+> | publishers / networkFunctionDefinitionGroups | Yes |
+> | publishers / networkFunctionDefinitionGroups / networkFunctionDefinitionVersions | Yes |
+> | publishers / networkFunctionDefinitionGroups / previewSubscriptions | Yes |
+> | publishers / networkServiceDesignGroups | Yes |
+> | publishers / networkServiceDesignGroups / networkServiceDesignVersions | Yes |
> | registeredSubscriptions | No |
+> | siteNetworkServices | Yes |
+> | sites | Yes |
> | vendors | No | ## Microsoft.ImportExport
The resources are listed by resource provider namespace. To match a resource pro
> | workspaces / onlineEndpoints | Yes | > | workspaces / onlineEndpoints / deployments | Yes | > | workspaces / registries | Yes |
+> | workspaces / schedules | No |
> | workspaces / services | No | ## Microsoft.Maintenance
The resources are listed by resource provider namespace. To match a resource pro
> | publishers / offers | No | > | publishers / offers / amendments | No | > | register | No |
+> | search | No |
## Microsoft.MarketplaceNotifications
The resources are listed by resource provider namespace. To match a resource pro
> | - | -- | > | assessmentProjects | Yes | > | migrateprojects | Yes |
+> | modernizeProjects | Yes |
> | moveCollections | Yes | > | projects | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> | packetCoreControlPlanes | Yes | > | packetCoreControlPlanes / packetCoreDataPlanes | Yes | > | packetCoreControlPlanes / packetCoreDataPlanes / attachedDataNetworks | Yes |
+> | packetCoreControlPlaneVersions | No |
> | packetCores | Yes |
+> | simGroups | Yes |
+> | simGroups / sims | No |
> | sims | Yes | > | sims / simProfiles | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> | azureWebCategories | No | > | bastionHosts | Yes | > | bgpServiceCommunities | No |
+> | cloudServiceSlots | No |
> | connections | Yes | > | customIpPrefixes | Yes | > | ddosCustomPolicies | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> | serviceEndpointPolicies | Yes | > | trafficManagerGeographicHierarchies | No | > | trafficmanagerprofiles | Yes |
+> | trafficmanagerprofiles / azureendpoints | No |
+> | trafficmanagerprofiles / externalendpoints | No |
> | trafficmanagerprofiles / heatMaps | No |
+> | trafficmanagerprofiles / nestedendpoints | No |
> | trafficManagerUserMetricsKeys | No | > | virtualHubs | Yes | > | virtualNetworkGateways | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> | Resource type | Complete mode deletion | > | - | -- | > | bareMetalMachines | Yes |
+> | cloudServicesNetworks | Yes |
> | clusterManagers | Yes | > | clusters | Yes |
+> | defaultCniNetworks | Yes |
+> | disks | Yes |
> | hybridAksClusters | Yes | > | hybridAksManagementDomains | Yes | > | hybridAksVirtualMachines | Yes |
+> | l2Networks | Yes |
+> | l3Networks | Yes |
> | rackManifests | Yes | > | racks | Yes |
+> | storageAppliances | Yes |
+> | trunkedNetworks | Yes |
> | virtualMachines | Yes | > | workloadNetworks | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> | peeringServiceProviders | No | > | peeringServices | Yes |
+## Microsoft.Pki
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | Pki | Yes |
+> | Pkis | Yes |
+> | Pkis / certificateAuthorities | Yes |
+> | Pkis / enrollmentPolicies | Yes |
+ ## Microsoft.PlayFab > [!div class="mx-tableFixed"]
The resources are listed by resource provider namespace. To match a resource pro
> | accounts | Yes | > | enterprisePolicies | Yes |
-## Microsoft.ProjectBabylon
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Complete mode deletion |
-> | - | -- |
-> | accounts | Yes |
-> | deletedAccounts | No |
-> | getDefaultAccount | No |
-> | removeDefaultAccount | No |
-> | setDefaultAccount | No |
- ## Microsoft.ProviderHub > [!div class="mx-tableFixed"]
The resources are listed by resource provider namespace. To match a resource pro
> | AvailabilitySets | Yes | > | Clouds | Yes | > | VirtualMachines | Yes |
+> | VirtualMachines / HybridIdentityMetadata | No |
> | VirtualMachineTemplates | Yes | > | VirtualNetworks | Yes | > | VMMServers | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> | alertsSuppressionRules | No | > | allowedConnections | No | > | antiMalwareSettings | No |
+> | applications | No |
> | assessmentMetadata | No | > | assessments | No | > | assessments / governanceAssignments | No |
The resources are listed by resource provider namespace. To match a resource pro
> | customEntityStoreAssignments | Yes | > | dataCollectionAgents | No | > | dataScanners | Yes |
+> | dataSensitivitySettings | No |
> | deviceSecurityGroups | No | > | discoveredSecuritySolutions | No | > | externalSecuritySolutions | No |
The resources are listed by resource provider namespace. To match a resource pro
> | subAssessments | No | > | tasks | No | > | topologies | No |
-> | vmScanners | Yes |
+> | vmScanners | No |
> | workspaceSettings | No | ## Microsoft.SecurityDetonation
The resources are listed by resource provider namespace. To match a resource pro
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
+> | azureDevOpsConnectors | Yes |
+> | azureDevOpsConnectors / orgs | No |
+> | azureDevOpsConnectors / orgs / projects | No |
+> | azureDevOpsConnectors / orgs / projects / repos | No |
> | gitHubConnectors | Yes | > | gitHubConnectors / gitHubRepos | No |
+> | gitHubConnectors / owners | No |
+> | gitHubConnectors / owners / repos | No |
## Microsoft.SecurityInsights
The resources are listed by resource provider namespace. To match a resource pro
> | entities | No | > | entityQueryTemplates | No | > | fileImports | No |
+> | huntsessions | No |
> | incidents | No | > | metadata | No | > | MitreCoverageRecords | No | > | onboardingStates | No | > | overview | No |
+> | recommendations | No |
> | securityMLAnalyticsSettings | No | > | settings | No | > | sourceControls | No |
The resources are listed by resource provider namespace. To match a resource pro
> | instancePools | Yes | > | managedInstances | Yes | > | managedInstances / administrators | No |
+> | managedInstances / advancedThreatProtectionSettings | No |
> | managedInstances / databases | Yes |
+> | managedInstances / databases / advancedThreatProtectionSettings | No |
> | managedInstances / databases / backupLongTermRetentionPolicies | No | > | managedInstances / databases / vulnerabilityAssessments | No | > | managedInstances / dnsAliases | No |
The resources are listed by resource provider namespace. To match a resource pro
> | servers / databases / metrics | No | > | servers / databases / recommendedSensitivityLabels | No | > | servers / databases / securityAlertPolicies | No |
+> | servers / databases / sqlvulnerabilityassessments | No |
> | servers / databases / syncGroups | No | > | servers / databases / syncGroups / syncMembers | No | > | servers / databases / topQueries | No |
The resources are listed by resource provider namespace. To match a resource pro
> | servers / restorableDroppedDatabases | No | > | servers / securityAlertPolicies | No | > | servers / serviceObjectives | No |
+> | servers / sqlvulnerabilityassessments | No |
> | servers / syncAgents | No | > | servers / tdeCertificates | No | > | servers / usages | No |
The resources are listed by resource provider namespace. To match a resource pro
> | caches / storageTargets | No | > | usageModels | No |
+## Microsoft.StorageMover
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | storageMovers | Yes |
+> | storageMovers / agents | No |
+> | storageMovers / endpoints | No |
+> | storageMovers / projects | No |
+> | storageMovers / projects / jobDefinitions | No |
+> | storageMovers / projects / jobDefinitions / jobRuns | No |
+ ## Microsoft.StoragePool > [!div class="mx-tableFixed"]
The resources are listed by resource provider namespace. To match a resource pro
> | testBaseAccounts | Yes | > | testBaseAccounts / customerEvents | No | > | testBaseAccounts / emailEvents | No |
+> | testBaseAccounts / externalTestTools | No |
+> | testBaseAccounts / externalTestTools / testCases | No |
> | testBaseAccounts / flightingRings | No | > | testBaseAccounts / packages | Yes | > | testBaseAccounts / packages / favoriteProcesses | No |
azure-resource-manager Deployment Script Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-script-template.md
Property value details:
- `arguments`: Specify the parameter values. The values are separated by spaces.
- Deployment Scripts splits the arguments into an array of strings by invoking the [CommandLineToArgvW ](/windows/win32/api/shellapi/nf-shellapi-commandlinetoargvw) system call. This step is necessary because the arguments are passed as a [command property](/rest/api/container-instances/containergroups/createorupdate#containerexec)
+ Deployment Scripts splits the arguments into an array of strings by invoking the [CommandLineToArgvW](/windows/win32/api/shellapi/nf-shellapi-commandlinetoargvw) system call. This step is necessary because the arguments are passed as a [command property](/rest/api/container-instances/containergroups/createorupdate#containerexec)
to Azure Container Instances, and the command property is an array of strings. If the arguments contain escaped characters, use [JsonEscaper](https://www.jsonescaper.com/) to double-escape the characters. Paste your original escaped string into the tool, and then select **Escape**. The tool outputs a double-escaped string. For example, in the previous sample template, the argument is `-name \"John Dole\"`. The escaped string is `-name \\\"John Dole\\\"`.
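The double-escaping step can be sketched as a simple string transformation. This is a rough illustration, not the actual JsonEscaper implementation: the hypothetical `double_escape` helper below handles only backslashes and double quotes, not the full set of JSON escape sequences.

```python
def double_escape(arg: str) -> str:
    # Escape backslashes first, then quotes, so an existing
    # backslash-quote pair like \" becomes \\\" (double-escaped).
    return arg.replace('\\', '\\\\').replace('"', '\\"')

# The argument from the sample template:
original = r'-name \"John Dole\"'
print(double_escape(original))  # -name \\\"John Dole\\\"
```

Escaping backslashes before quotes matters: reversing the order would also escape the backslashes that were just inserted in front of the quotes.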
azure-sql-edge Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/overview.md
Azure SQL Edge makes developing and maintaining applications easier and more pro
- [Deploy SQL Edge through Azure portal](deploy-portal.md) - [Machine Learning and Artificial Intelligence with SQL Edge](onnx-overview.md)-- [Building an end-to-end IoT solution with SQL Edge](tutorial-deploy-azure-resources.md)
+- [Building an end-to-end IoT solution with SQL Edge](tutorial-deploy-azure-resources.md)
azure-sql-edge Tutorial Renewable Energy Demo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/tutorial-renewable-energy-demo.md
This Azure SQL Edge demo is based on Contoso Renewable Energy, a wind turbine
This demo walks you through resolving an alert that is raised because wind turbulence is detected at the device. You will train a model and deploy it to SQL DB Edge to correct the detected wind wake and ultimately optimize power output. Azure SQL Edge - Renewable Energy demo video on Channel 9:
-> [!VIDEO /shows/Data-Exposed/Azure-SQL-Edge-Demo-Renewable-Energy/player]
+> [!VIDEO https://docs.microsoft.com/shows/Data-Exposed/Azure-SQL-Edge-Demo-Renewable-Energy/player]
## Setting up the demo on your local computer Git will be used to copy all files from the demo to your local computer.
azure-vmware Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-storage.md
description: Learn about storage capacity, storage policies, fault tolerance, an
Previously updated : 07/27/2022 Last updated : 08/09/2022 # Azure VMware Solution storage concepts
vSAN datastores use data-at-rest encryption by default using keys stored in Azur
## Azure storage integration
-You can use Azure storage services in workloads running in your private cloud. The Azure storage services include Storage Accounts, Table Storage, and Blob Storage. The connection of workloads to Azure storage services doesn't traverse the internet. This connectivity provides more security and enables you to use SLA-based Azure storage services in your private cloud workloads. You can also connect Azure disk pools or [Azure NetApp Files datastores](attach-azure-netapp-files-to-azure-vmware-solution-hosts.md) to expand the storage capacity. This functionality is in preview.
+You can use Azure storage services in workloads running in your private cloud. The Azure storage services include Storage Accounts, Table Storage, and Blob Storage. The connection of workloads to Azure storage services doesn't traverse the internet. This connectivity provides more security and enables you to use SLA-based Azure storage services in your private cloud workloads. You can also connect Azure disk pools or [Azure NetApp Files datastores](attach-azure-netapp-files-to-azure-vmware-solution-hosts.md) to expand the storage capacity.
## Alerts and monitoring
backup Automation Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/automation-backup.md
Once you assign an Azure Policy to a scope, all VMs that meet your criteria are
The following video illustrates how Azure Policy works for backup: <br><br>
-> [!VIDEO /shows/IT-Ops-Talk/Configure-backups-at-scale-using-Azure-Policy/player]
+> [!VIDEO https://docs.microsoft.com/shows/IT-Ops-Talk/Configure-backups-at-scale-using-Azure-Policy/player]
### Export backup-operational data
For more information on how to set up this runbook, see [Automatic retry of fail
The following video provides an end-to-end walk-through of the scenario: <br><br>
- > [!VIDEO /shows/IT-Ops-Talk/Automatically-retry-failed-backup-jobs-using-Azure-Resource-Graph-and-Azure-Automation-Runbooks/player]
+ > [!VIDEO https://docs.microsoft.com/shows/IT-Ops-Talk/Automatically-retry-failed-backup-jobs-using-Azure-Resource-Graph-and-Azure-Automation-Runbooks/player]
## Additional resources
backup Microsoft Azure Backup Server Protection V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/microsoft-azure-backup-server-protection-v3.md
Azure Backup Server can protect data in the following clustered applications:
- If you run Hyper-V on Windows Server 2008 R2, make sure to install the update described in KB [975354](https://support.microsoft.com/kb/975354). - If you run Hyper-V on Windows Server 2008 R2 in a cluster configuration, make sure you install SP2 and KB [971394](https://support.microsoft.com/kb/971394).
- Note that Windows Server 2008 R2 is at end of support and we recommend you to upgrade it soon.fff
+ Note that Windows Server 2008 R2 is at end of support, and we recommend that you upgrade it soon.
* Exchange Server - Azure Backup Server can protect non-shared disk clusters for supported Exchange Server versions (cluster-continuous replication), and can also protect Exchange Server configured for local continuous replication.
backup Soft Delete Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/soft-delete-virtual-machines.md
Title: Soft delete for virtual machines description: Learn how soft delete for virtual machines makes backups more secure. Previously updated : 04/30/2020 Last updated : 08/10/2022 +++ # Soft delete for virtual machines
Soft delete for VMs protects the backups of your VMs from unintended deletion. E
## Supported regions
-Soft delete is currently supported in the West Central US, East Asia, Canada Central, Canada East, France Central, France South, Korea Central, Korea South, UK South, UK West, Australia East, Australia South East, North Europe, West US, West US2, Central US, South East Asia, North Central US, South Central US, Japan East, Japan West, India South, India Central, India West, East US 2, Switzerland North, Switzerland West, Norway West, Norway East, and all National regions.
+Soft delete is available in all Azure Public and National regions.
## Soft delete for VMs using Azure portal
chaos-studio Chaos Studio Tutorial Aks Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-aks-cli.md
The Azure Cloud Shell is a free interactive shell that you can use to run the st
To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also open Cloud Shell in a separate browser tab by going to [https://shell.azure.com/bash](https://shell.azure.com/bash). Select **Copy** to copy a block of code, paste it into the Cloud Shell, and select **Enter** to run it.
-If you prefer to install and use the CLI locally, this tutorial requires Azure CLI version 2.0.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli).
+If you prefer to install and use the CLI locally, this tutorial requires Azure CLI version 2.0.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
> [!NOTE] > These instructions use a Bash terminal in Azure Cloud Shell. Some commands may not work as described if running the CLI locally or in a PowerShell terminal.
cloud-services Cloud Services Dotnet Install Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-dotnet-install-dotnet.md
Title: Install .NET on Azure Cloud Services (classic) roles | Microsoft Docs
-description: This article describes how to manually install the .NET Framework on your cloud service web and worker roles
+ Title: Install .NET on Azure Cloud Services (classic) roles
+description: This article describes how to manually install the .NET Framework on your cloud service web and worker roles.
Last updated 10/14/2020 - # Install .NET on Azure Cloud Services (classic) roles
This article describes how to install versions of .NET Framework that don't come
For example, you can install .NET Framework 4.6.2 on the Guest OS family 4, which doesn't come with any release of .NET Framework 4.6. (The Guest OS family 5 does come with .NET Framework 4.6.) For the latest information on the Azure Guest OS releases, see the [Azure Guest OS release news](cloud-services-guestos-update-matrix.md). >[!IMPORTANT]
->The Azure SDK 2.9 contains a restriction on deploying .NET Framework 4.6 on the Guest OS family 4 or earlier. A fix for the restriction is available on the [Microsoft Docs](https://github.com/MicrosoftDocs/azure-cloud-services-files/tree/master/Azure%20Targets%20SDK%202.9) site.
+>The Azure SDK 2.9 contains a restriction on deploying .NET Framework 4.6 on the Guest OS family 4 or earlier. A fix for the restriction is available in the [`azure-cloud-services-files` GitHub repo](https://github.com/MicrosoftDocs/azure-cloud-services-files/tree/master/Azure%20Targets%20SDK%202.9).
To install .NET on your web and worker roles, include the .NET web installer as part of your cloud service project. Start the installer as part of the role's startup tasks.
When you deploy your cloud service, the startup tasks install the .NET Framework
<!--Image references--> [1]: ./media/cloud-services-dotnet-install-dotnet/rolecontentwithinstallerfiles.png [2]: ./media/cloud-services-dotnet-install-dotnet/rolecontentwithallfiles.png---
cloud-shell Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/overview.md
You can access the Cloud Shell in three ways:
![Icon to launch the Cloud Shell from the Azure portal](media/overview/portal-launch-icon.png) -- **Code snippets**: On [docs.microsoft.com]() and [Microsoft Learn](/learn/), select the **Try It** button that appears with Azure CLI and Azure PowerShell code snippets:
+- **Code snippets**: In Microsoft [technical documentation](/) and [training resources](/learn), select the **Try It** button that appears with Azure CLI and Azure PowerShell code snippets:
```azurecli-interactive az account show
cloud-shell Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/troubleshooting.md
Known resolutions for troubleshooting issues in Azure Cloud Shell include:
### Disabling Cloud Shell in a locked down network environment -- **Details**: Administrators may wish to disable access to Cloud Shell for their users. Cloud Shell utilizes access to the `ux.console.azure.com` domain, which can be denied, stopping any access to Cloud Shell's entrypoints including portal.azure.com, shell.azure.com, Visual Studio Code Azure Account extension, and docs.microsoft.com. In the US Government cloud, the entrypoint is `ux.console.azure.us`; there is no corresponding shell.azure.us.
+- **Details**: Administrators may wish to disable access to Cloud Shell for their users. Cloud Shell utilizes access to the `ux.console.azure.com` domain, which can be denied, stopping any access to Cloud Shell's entrypoints including `portal.azure.com`, `shell.azure.com`, Visual Studio Code Azure Account extension, and `docs.microsoft.com`. In the US Government cloud, the entrypoint is `ux.console.azure.us`; there is no corresponding `shell.azure.us`.
- **Resolution**: Restrict access to `ux.console.azure.com` or `ux.console.azure.us` via network settings to your environment. The Cloud Shell icon will still exist in the Azure portal, but will not successfully connect to the service. ### Storage Dialog - Error: 403 RequestDisallowedByPolicy
cognitive-services Spatial Analysis Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/spatial-analysis-container.md
In this article, you'll download and install the following software packages. Th
* [Azure IoT Edge](../../iot-edge/how-to-provision-single-device-linux-symmetric.md) runtime. #### [Azure VM with GPU](#tab/virtual-machine)
-In our example, we'll utilize an [NC series VM](../../virtual-machines/nc-series.md?bc=%2fazure%2fvirtual-machines%2flinux%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json) that has one K80 GPU.
+In our example, we'll utilize an [NCv3 series VM](../../virtual-machines/ncv3-series.md) that has one v100 GPU.
Use the following bash script to install the required Nvidia graphics drivers, a
```bash wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-ubuntu1804.pin sudo mv cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600
-sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub
+sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/3bf863cc.pub
sudo add-apt-repository "deb http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/ /" sudo apt-get update sudo apt-get -y install cuda
Use the below steps to deploy the container using the Azure CLI.
#### [Azure VM with GPU](#tab/virtual-machine)
-An Azure Virtual Machine with a GPU can also be used to run Spatial Analysis. The example below will use an [NC series](../../virtual-machines/nc-series.md?bc=%2fazure%2fvirtual-machines%2flinux%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json) VM that has one K80 GPU.
+An Azure Virtual Machine with a GPU can also be used to run Spatial Analysis. The example below will use a [NCv3 series VM](../../virtual-machines/ncv3-series.md) that has one v100 GPU.
#### Create the VM
Give your VM a name and select the region to be (US) West US 2.
:::image type="content" source="media/spatial-analysis/virtual-machine-instance-details.jpg" alt-text="Virtual machine configuration details." lightbox="media/spatial-analysis/virtual-machine-instance-details.jpg":::
-To locate the VM size, select "See all sizes" and then view the list for "Non-premium storage VM sizes", shown below.
+To locate the VM size, select "See all sizes" and then view the list for "N-Series" and select **NC6s_v3**, shown below.
:::image type="content" source="media/spatial-analysis/virtual-machine-sizes.png" alt-text="Virtual machine sizes." lightbox="media/spatial-analysis/virtual-machine-sizes.png":::
-Then, select either **NC6** or **NC6_Promo**.
-- Next, create the VM. Once created, navigate to the VM resource in the Azure portal and select `Extensions` from the left pane. Select **Add** to bring up the extensions window with all available extensions. Search for and select `NVIDIA GPU Driver Extension`, select **Create**, and complete the wizard. Once the extension is successfully applied, navigate to the VM main page in the Azure portal and select `Connect`. The VM can be accessed either through SSH or RDP. RDP will be helpful as it will enable viewing of the visualizer window (explained later). Configure the RDP access by following [these steps](../../virtual-machines/linux/use-remote-desktop.md) and opening a remote desktop connection to the VM.
cognitive-services Migrate V2 To V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/migrate-v2-to-v3.md
Previously updated : 02/12/2022 Last updated : 08/09/2022
Compared to v2, the v3 version of the Speech services REST API for speech-to-text is more reliable, easier to use, and more consistent with APIs for similar services. Most teams can migrate from v2 to v3 in a day or two. > [!IMPORTANT]
-> The Speech-to-text REST API v2.0 is deprecated. Please migrate your applications to the [Speech-to-text REST API v3.0](rest-speech-to-text.md).
+> The Speech-to-text REST API v2.0 is deprecated and will be retired by February 29, 2024. Please migrate your applications to the [Speech-to-text REST API v3.0](rest-speech-to-text.md).
## Forward compatibility
cognitive-services Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/authentication.md
curl -X POST 'https://api.cognitive.microsofttranslator.com/translate?api-versio
Some Azure Cognitive Services accept, and in some cases require, an access token. Currently, these services support access tokens: * Text Translation API
-* Speech
-* Speech
+* Speech
+* Speech
>[!NOTE] > QnA Maker also uses the Authorization header, but requires an endpoint key. For more information, see [QnA Maker: Get answer from knowledge base](./qnamaker/quickstarts/get-answer-from-knowledge-base-using-url-tool.md).
cognitive-services Evaluation Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/concepts/evaluation-metrics.md
Previously updated : 05/24/2022 Last updated : 08/08/2022
So what does it actually mean to have high precision or high recall for a certai
| High | Low | The model extracts this entity well, but with low confidence, as it is sometimes extracted as another type. |
| Low | Low | This entity type is poorly handled by the model, because it is not usually extracted. When it is, it is not extracted with high confidence. |
+## Guidance
+
+After you train your model, you'll see guidance and recommendations on how to improve it. It's recommended to have a model that covers all the points in the guidance section.
+
+* Training set has enough data: When an entity type has fewer than 15 labeled instances in the training data, it can lead to lower accuracy due to the model not being adequately trained on these cases. In this case, consider adding more labeled data in the training set. You can check the *data distribution* tab for more guidance.
+
+* All entity types are present in test set: When the testing data lacks labeled instances for an entity type, the model's test performance may become less comprehensive due to untested scenarios. You can check the *test set data distribution* tab for more guidance.
+
+* Entity types are balanced within training and test sets: When sampling bias causes an inaccurate representation of an entity type's frequency, it can lead to lower accuracy due to the model expecting that entity type to occur too often or too little. You can check the *data distribution* tab for more guidance.
+
+* Entity types are evenly distributed between training and test sets: When the mix of entity types doesn't match between training and test sets, it can lead to lower testing accuracy due to the model being trained differently from how it's being tested. You can check the *data distribution* tab for more guidance.
+
+* Unclear distinction between entity types in training set: When the training data is similar for multiple entity types, it can lead to lower accuracy because the entity types may be frequently misclassified as each other. Review the following entity types and consider merging them if they're similar. Otherwise, add more examples to better distinguish them from each other. You can check the *confusion matrix* tab for more guidance.
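The first check above (at least 15 labeled instances per entity type in the training set) amounts to a simple count over your labels. A minimal sketch, assuming a flat list of labels (the entity names and data shape are hypothetical, not the Language Studio export format):

```python
# Hypothetical sketch: flag entity types with fewer than 15 labeled
# training instances, per the guidance above. The label list is made up;
# it is not the actual Language Studio export schema.
from collections import Counter

MIN_INSTANCES = 15
training_labels = ["Person"] * 40 + ["Location"] * 20 + ["ContractDate"] * 6

counts = Counter(training_labels)
underrepresented = sorted(t for t, n in counts.items() if n < MIN_INSTANCES)

print(underrepresented)  # ['ContractDate']
```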
++ ## Confusion matrix A Confusion matrix is an N x N matrix used for model performance evaluation, where N is the number of entities.
You can use the Confusion matrix to identify entities that are too close to each
The highlighted diagonal in the image below is the correctly predicted entities, where the predicted tag is the same as the actual tag. You can calculate the entity-level and model-level evaluation metrics from the confusion matrix:
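As a minimal sketch of those calculations (entity names and counts are hypothetical; rows are assumed to hold actual entities and columns predicted entities), each entity's precision is its diagonal cell over the column sum, and its recall is the diagonal cell over the row sum:

```python
# Minimal sketch: entity-level precision, recall, and F1 from an N x N
# confusion matrix. Assumed convention: rows = actual, columns = predicted.
matrix = [
    [50,  5],   # actual "Person":   50 correct, 5 predicted as "Location"
    [10, 35],   # actual "Location": 10 predicted as "Person", 35 correct
]

def entity_metrics(m, i):
    tp = m[i][i]                                # correctly predicted entity i
    precision = tp / sum(row[i] for row in m)   # diagonal over column sum
    recall = tp / sum(m[i])                     # diagonal over row sum
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

precision, recall, f1 = entity_metrics(matrix, 0)
```

Model-level (macro-averaged) scores then follow by averaging these per-entity values.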
cognitive-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/faq.md
Previously updated : 05/09/2022 Last updated : 08/08/2022
See the [data selection and schema design](how-to/design-schema.md) article for
## How do I improve model performance?
-* View the model [confusion matrix](how-to/view-model-evaluation.md). If you notice that a certain entity type is frequently not predicted correctly, consider adding more tagged instances for this class. If you notice that two entity types are frequently predicted as each other, this means the schema is ambiguous and you should consider merging them both into one entity type for better performance.
+* View the model [confusion matrix](how-to/view-model-evaluation.md). If you notice that a certain entity type is frequently not predicted correctly, consider adding more tagged instances for this class. If you notice that two entity types are frequently predicted as each other, this means the schema is ambiguous, and you should consider merging them both into one entity type for better performance.
-* [Review test set predictions](how-to/improve-model.md#review-test-set-predictions). If one of the entity types has a lot more tagged instances than the others, your model may be biased towards this type. Add more data to the other entity types or remove examples from the dominating type.
+* [Review test set predictions](how-to/view-model-evaluation.md). If one of the entity types has a lot more tagged instances than the others, your model may be biased towards this type. Add more data to the other entity types or remove examples from the dominating type.
* Learn more about [data selection and schema design](how-to/design-schema.md).
-* [Review your test set](how-to/improve-model.md) to see predicted and tagged entities side-by-side so you can get a better idea of your model performance, and decide if any changes in the schema or the tags are necessary.
+* [Review your test set](how-to/view-model-evaluation.md) to see predicted and tagged entities side-by-side so you can get a better idea of your model performance, and decide if any changes in the schema or the tags are necessary.
## Why do I get different results when I retrain my model? * When you [train your model](how-to/train-model.md), you can determine if you want your data to be split randomly into train and test sets. If you do, there is no guarantee that the model evaluation reflects the same test set, so results are not comparable.
-* If you're retraining the same model, your test set will be the same but you might notice a slight change in predictions made by the model. This is because the trained model is not robust enough and this is a factor of how representative and distinct your data is and the quality of your tagged data.
+* If you're retraining the same model, your test set will be the same, but you might notice a slight change in predictions made by the model. This is because the trained model is not robust enough, which is a factor of how representative and distinct your data is, and of the quality of your tagged data.
## How do I get predictions in different languages?
cognitive-services Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/glossary.md
Within your project you can do the following actions:
* **Label your data**: The process of labeling your data so that when you train your model it learns what you want to extract. * **Build and train your model**: The core step of your project, where your model starts learning from your labeled data. * **View model evaluation details**: Review your model performance to decide if there is room for improvement, or you are satisfied with the results.
-* **Improve model**: When you know what went wrong with your model, and how to improve performance.
* **Deployment**: After you have reviewed the model's performance and decided it can be used in your environment, you need to assign it to a deployment to use it. Assigning the model to a deployment makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger). * **Test model**: After deploying your model, test your deployment in [Language Studio](https://aka.ms/LanguageStudio) to see how it would perform in production.
cognitive-services Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/deploy-model.md
Once you are satisfied with how your model performs, it is ready to be deployed
* Text data that has [been uploaded](design-schema.md#data-preparation) to your storage account. * [Labeled data](tag-data.md) and successfully [trained model](train-model.md) * Reviewed the [model evaluation details](view-model-evaluation.md) to determine how your model is performing.
-* (optional) [Made improvements](improve-model.md) to your model if its performance isn't satisfactory.
See [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
cognitive-services Improve Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/improve-model.md
- Title: How to improve Custom Named Entity Recognition (NER) model performance-
-description: Learn about improving a model for Custom Named Entity Recognition (NER).
------ Previously updated : 05/06/2022----
-# Improve model performance
-
-In some cases, the model is expected to extract entities that are inconsistent with your labeled ones. In this page you can observe these inconsistencies and decide on the needed changes needed to improve your model performance.
-
-## Prerequisites
-
-* A successfully [created project](create-project.md) with a configured Azure blob storage account
-* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
-* [Labeled data](tag-data.md)
-* A [successfully trained model](train-model.md)
-* Reviewed the [model evaluation details](view-model-evaluation.md) to determine how your model is performing.
-* Familiarized yourself with the [evaluation metrics](../concepts/evaluation-metrics.md).
-
-See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
--
-## Review test set predictions
-
-After you have viewed your [model's evaluation](view-model-evaluation.md), you'll have formed an idea on your model performance. In this page, you can view how your model performs vs how it's expected to perform. You can view predicted and labeled entities side by side for each document in your test set. You can review entities that were extracted differently than they were originally labeled.
--
-To review inconsistent predictions in the [test set](train-model.md) from within the [Language Studio](https://aka.ms/LanguageStudio):
-
-1. Select **Improve model** from the left side menu.
-
-2. Choose your trained model from **Model** drop-down menu.
-
-3. For easier analysis, you can toggle **Show incorrect predictions only** to view entities that were incorrectly predicted only. You should see all documents that include entities that were incorrectly predicted.
-
-5. You can expand each document to see more details about predicted and labeled entities.
-
- Use the following information to help guide model improvements.
-
- * If entity `X` is constantly identified as entity `Y`, it means that there is ambiguity between these entity types and you need to reconsider your schema. Learn more about [data selection and schema design](design-schema.md#schema-design). Another solution is to consider labeling more instances of these entities, to help the model improve and differentiate between them.
-
- * If a complex entity is repeatedly not predicted, consider [breaking it down to simpler entities](design-schema.md#schema-design) for easier extraction.
-
- * If an entity is predicted while it was not labeled in your data, this means to you need to review your labels. Be sure that all instances of an entity are properly labeled in all documents.
-
-
- :::image type="content" source="../media/review-predictions.png" alt-text="A screenshot showing model predictions in Language Studio." lightbox="../media/review-predictions.png":::
----
-## Next steps
-
-Once you're satisfied with how your model performs, you can [deploy your model](deploy-model.md).
cognitive-services Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/train-model.md
# Train your custom named entity recognition model
-Training is the process where the model learns from your [labeled data](tag-data.md). After training is completed, you'll be able to [view model performance](view-model-evaluation.md) to determine if you need to [improve your model](improve-model.md).
+Training is the process where the model learns from your [labeled data](tag-data.md). After training is completed, you'll be able to view the [model's performance](view-model-evaluation.md) to determine if you need to improve your model.
To train a model, you start a training job and only successfully completed jobs create a model. Training jobs expire after seven days, which means you won't be able to retrieve the job details after this time. If your training job completed successfully and a model was created, the model won't be affected. You can only have one training job running at a time, and you can't start other jobs in the same project.
Training could take some time depending on the size of your training data and com
## Next steps
-After training is completed, you'll be able to [view model performance](view-model-evaluation.md) to optionally [improve your model](improve-model.md) if needed. Once you're satisfied with your model, you can deploy it, making it available to use for [extracting entities](call-api.md) from text.
+After training is completed, you'll be able to view [model performance](view-model-evaluation.md) to optionally improve your model if needed. Once you're satisfied with your model, you can deploy it, making it available to use for [extracting entities](call-api.md) from text.
cognitive-services View Model Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/view-model-evaluation.md
See the [project development lifecycle](../overview.md#project-development-lifec
## Next steps
-* After reviewing your model's evaluation, you can start [improving your model](improve-model.md).
+* [Deploy your model](deploy-model.md)
* Learn about the [metrics used in evaluation](../concepts/evaluation-metrics.md).
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/overview.md
Using custom NER typically involves several different steps.
3. **Train the model**: Your model starts learning from your labeled data.
-4. **View the model's performance**: After training is completed, view the model's evaluation details and its performance.
-
-5. **Improve the model**: After reviewing model's performance, you can then learn how you can improve the model.
+4. **View the model's performance**: After training is completed, view the model's evaluation details, its performance, and guidance on how to improve it.
6. **Deploy the model**: Deploying a model makes it available for use via the [Analyze API](https://aka.ms/ct-runtime-swagger).
-8. **Extract entities**: Use your custom models for entity extraction tasks.
+7. **Extract entities**: Use your custom models for entity extraction tasks.
## Reference documentation and code samples
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/quickstart.md
When you start to create your own custom NER projects, use the how-to articles t
* [Data selection and schema design](how-to/design-schema.md) * [Tag data](how-to/tag-data.md) * [Train a model](how-to/train-model.md)
-* [View model evaluation](how-to/view-model-evaluation.md)
-* [Improve a model](how-to/improve-model.md)
+* [Model evaluation](how-to/view-model-evaluation.md)
+
cognitive-services Evaluation Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/concepts/evaluation-metrics.md
Previously updated : 05/06/2022 Last updated : 08/08/2022
As another example, if your scenario involves categorizing email as "*important
If you want to optimize for general purpose scenarios or when precision and recall are both important, you can utilize the F1 score. Evaluation scores are subjective depending on your scenario and acceptance criteria. There is no absolute metric that works for every scenario.
+## Guidance
+
+After you train your model, you'll see guidance and recommendations on how to improve it. It's recommended to have a model that covers all the points in the guidance section.
+
+* Training set has enough data: When a class type has fewer than 15 labeled instances in the training data, it can lead to lower accuracy due to the model not being adequately trained on these cases.
+
+* All class types are present in test set: When the testing data lacks labeled instances for a class type, the modelΓÇÖs test performance may become less comprehensive due to untested scenarios.
+
+* Class types are balanced within training and test sets: When sampling bias causes an inaccurate representation of a class typeΓÇÖs frequency, it can lead to lower accuracy due to the model expecting that class type to occur too often or too little.
+
+* Class types are evenly distributed between training and test sets: When the mix of class types doesn't match between training and test sets, it can lead to lower testing accuracy due to the model being trained differently from how it's being tested.
+
+* Class types in training set are clearly distinct: When the training data is similar for multiple class types, it can lead to lower accuracy because the class types may be frequently misclassified as each other.
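The data-quantity and test-coverage points above can be approximated locally before training with a quick script over your labeled classes. This is a rough sketch using hypothetical class labels, not part of the Language service:

```python
from collections import Counter

def check_training_data(train_labels, test_labels, min_instances=15):
    """Return warnings for class types that violate the guidance points above."""
    train_counts = Counter(train_labels)
    test_counts = Counter(test_labels)
    warnings = []
    for cls, n in train_counts.items():
        if n < min_instances:
            warnings.append(f"'{cls}': only {n} training instances (fewer than {min_instances})")
        if test_counts[cls] == 0:
            warnings.append(f"'{cls}': no labeled instances in the test set")
    return warnings

# Hypothetical labels: 'complaint' is both under-represented and untested.
train = ["refund"] * 20 + ["complaint"] * 5
test = ["refund"] * 4
for warning in check_training_data(train, test):
    print(warning)
```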
+ ## Confusion matrix > [!Important]
You can use the Confusion matrix to identify classes that are too close to each
All correct predictions are located in the diagonal of the table, so it is easy to visually inspect the table for prediction errors, as they will be represented by values outside the diagonal. You can calculate the class-level and model-level evaluation metrics from the confusion matrix:
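As a concrete illustration of that calculation, class-level precision and recall fall out of the matrix's column and row sums, with the diagonal holding the correct predictions (rows = actual class, columns = predicted class). A generic sketch, not the service's code:

```python
def class_metrics(cm, class_index):
    """Precision and recall for one class from a square confusion matrix.

    Rows are actual classes, columns are predicted classes, so the
    diagonal entry is the true-positive count for that class."""
    tp = cm[class_index][class_index]
    predicted = sum(row[class_index] for row in cm)  # column sum
    actual = sum(cm[class_index])                    # row sum
    precision = tp / predicted if predicted else 0.0
    recall = tp / actual if actual else 0.0
    return precision, recall

# Two classes: 8 of 10 class-0 documents correct, 9 of 10 class-1 correct.
cm = [[8, 2],
      [1, 9]]
print(class_metrics(cm, 0))  # precision 8/9, recall 8/10
```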
cognitive-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/faq.md
See the [data selection and schema design](how-to/design-schema.md) article for
* Review the [data selection and schema design](how-to/design-schema.md) article for more information.
-* [Review your test set](how-to/improve-model.md) to see predicted and tagged classes side-by-side so you can get a better idea of your model performance, and decide if any changes in the schema or the tags are necessary.
+* [Review your test set](how-to/view-model-evaluation.md) to see predicted and tagged classes side-by-side so you can get a better idea of your model performance, and decide if any changes in the schema or the tags are necessary.
## When I retrain my model I get different results, why is this?
cognitive-services Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/glossary.md
Within your project you can do the following:
* **Label your data**: The process of labeling your data so that when you train your model it learns what you want to extract. * **Build and train your model**: The core step of your project, where your model starts learning from your labeled data. * **View model evaluation details**: Review your model performance to decide if there is room for improvement, or you are satisfied with the results.
-* **Improve model**: When you know what went wrong with your model, and how to improve performance.
* **Deployment**: After you have reviewed model performance and decide it's fit to be used in your environment; you need to assign it to a deployment to be able to query it. Assigning the model to a deployment makes it available for use through the [prediction API](https://aka.ms/ct-runtime-swagger). * **Test model**: After deploying your model, you can use this operation in [Language Studio](https://aka.ms/LanguageStudio) to try it out your deployment and see how it would perform in production.
cognitive-services Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/how-to/deploy-model.md
Once you are satisfied with how your model performs, it is ready to be deployed;
* Text data that has [been uploaded](design-schema.md#data-preparation) to your storage account. * [Labeled data](tag-data.md) and successfully [trained model](train-model.md) * Reviewed the [model evaluation details](view-model-evaluation.md) to determine how your model is performing.
-* (optional) [Made improvements](improve-model.md) to your model if its performance isn't satisfactory.
See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
cognitive-services Improve Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/how-to/improve-model.md
- Title: How to improve custom text classification model performance-
-description: Learn about improving a model for Custom Text Classification.
------ Previously updated : 05/05/2022----
-# Improve custom text classification model performance
-
-In some cases, the model is expected to make predictions that are inconsistent with your labeled classes. Use this article to learn how to observe these inconsistencies and decide on the needed changes needed to improve your model performance.
--
-## Prerequisites
-
-To optionally improve a model, you'll need to have:
-
-* [A custom text classification project](create-project.md) with a configured Azure blob storage account,
-* Text data that has [been uploaded](design-schema.md#data-preparation) to your storage account.
-* [Labeled data](tag-data.md) to successfully [train a model](train-model.md).
-* Reviewed the [model evaluation details](view-model-evaluation.md) to determine how your model is performing.
-* Familiarized yourself with the [evaluation metrics](../concepts/evaluation-metrics.md).
-
-See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.
-
-## Review test set predictions
-
-After you have viewed your [model's evaluation](view-model-evaluation.md), you'll have formed an idea on your model performance. In this page, you can view how your model performs vs how it's expected to perform. You can view predicted and labeled classes side by side for each document in your test set. You can review documents that were predicted differently than they were originally labeled.
--
-To review inconsistent predictions in the [test set](train-model.md#data-splitting) from within the [Language Studio](https://aka.ms/LanguageStudio):
-
-1. Select **Improve model** from the left side menu.
-
-2. Choose your trained model from **Model** drop-down menu.
-
-3. For easier analysis, you can toggle **Show incorrect predictions only** to view documents that were incorrectly predicted only.
-
-Use the following information to help guide model improvements.
-
-* If a file that should belong to class `X` is constantly classified as class `Y`, it means that there is ambiguity between these classes and you need to reconsider your schema. Learn more about [data selection and schema design](design-schema.md#schema-design).
-
-* Another solution is to consider adding more data to these classes, to help the model improve and differentiate between them.
-
-* Consider adding more data, to help the model differentiate between different classes.
-
- :::image type="content" source="../media/review-validation-set.png" alt-text="A screenshot showing model predictions in Language Studio." lightbox="../media/review-validation-set.png":::
--
-## Next steps
-
-* Once you're satisfied with how your model performs, you can [deploy your model](call-api.md).
cognitive-services Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/how-to/train-model.md
# How to train a custom text classification model
-Training is the process where the model learns from your [labeled data](tag-data.md). After training is completed, you will be able to [view the model's performance](view-model-evaluation.md) to determine if you need to [improve your model](improve-model.md).
+Training is the process where the model learns from your [labeled data](tag-data.md). After training is completed, you will be able to [view the model's performance](view-model-evaluation.md) to determine if you need to improve your model.
To train a model, start a training job. Only successfully completed jobs create a usable model. Training jobs expire after seven days. After this period, you won't be able to retrieve the job details. If your training job completed successfully and a model was created, it won't be affected by the job expiration. You can only have one training job running at a time, and you can't start other jobs in the same project.
Training could take sometime depending on the size of your training data and com
## Next steps
-After training is completed, you will be able to [view the model's performance](view-model-evaluation.md) to optionally [improve your model](improve-model.md) if needed. Once you're satisfied with your model, you can deploy it, making it available to use for [classifying text](call-api.md).
+After training is completed, you will be able to [view the model's performance](view-model-evaluation.md) to optionally improve your model if needed. Once you're satisfied with your model, you can deploy it, making it available to use for [classifying text](call-api.md).
cognitive-services View Model Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/how-to/view-model-evaluation.md
Previously updated : 05/24/2022 Last updated : 08/09/2022
See the [project development lifecycle](../overview.md#project-development-lifec
## Next steps
-As you review your how your model performs, learn about the [evaluation metrics](../concepts/evaluation-metrics.md) that are used. Once you know whether your model performance needs to improve, you can begin [improving the model](improve-model.md).
+As you review how your model performs, learn about the [evaluation metrics](../concepts/evaluation-metrics.md) that are used. Once you know whether your model's performance needs to improve, you can begin improving the model.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/overview.md
Follow these steps to get the most out of your model:
4. **View the model's performance**: View the evaluation details for your model to determine how well it performs when introduced to new data.
-5. **Improve the model**: Work on improving your model performance by examining the incorrect model predictions and examining data distribution.
+5. **Deploy the model**: Deploying a model makes it available for use via the [Analyze API](https://aka.ms/ct-runtime-swagger).
-6. **Deploy the model**: Deploying a model makes it available for use via the [Analyze API](https://aka.ms/ct-runtime-swagger).
-
-7. **Classify text**: Use your custom model for custom text classification tasks.
+6. **Classify text**: Use your custom model for custom text classification tasks.
## Reference documentation and code samples
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/quickstart.md
When you start to create your own custom text classification projects, use the h
* [Tag data](how-to/tag-data.md) * [Train a model](how-to/train-model.md) * [View model evaluation](how-to/view-model-evaluation.md)
-* [Improve a model](how-to/improve-model.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/overview.md
An AI system includes not only the technology, but also the people who will use
[!INCLUDE [Responsible AI links](../includes/overview-responsible-ai-links.md)]
+## Scenarios
+
+* Enhance search capabilities and search indexing - Customers can build knowledge graphs based on entities detected in documents to enhance document search as tags.
+* Automate business processes - For example, when reviewing insurance claims, recognized entities like name and location could be highlighted to facilitate the review. Or a support ticket could be generated with a customer's name and company automatically from an email.
+* Customer analysis - Identify the most common topics customers bring up in reviews, emails, and calls, and track how those topics trend over time.
+ ## Next steps There are two ways to get started using the Named Entity Recognition (NER) feature:
cognitive-services How To Call For Conversations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/how-to-call-for-conversations.md
Currently the conversational PII preview API only supports English language.
### Region support
-Currently the conversational PII preview API supports the following regions: East US, North Europe and UK south.
+Currently the conversational PII preview API supports all Azure regions supported by the Language service.
## Submitting data
When you get results from PII detection, you can stream the results to an applic
Use the following example if you have conversations transcribed using the Speech service's [speech-to-text](../../Speech-Service/speech-to-text.md) feature: ```bash
-curl -i -X POST https://your-language-endpoint-here/language/analyze-conversations?api-version=2022-05-15-preview \
+curl -i -X POST https://your-language-endpoint-here/language/analyze-conversations/jobs?api-version=2022-05-15-preview \
-H "Content-Type: application/json" \ -H "Ocp-Apim-Subscription-Key: your-key-here" \ -d \
curl -i -X POST https://your-language-endpoint-here/language/analyze-conversatio
Use the following example if you have conversations that originated in text. For example, conversations through a text-based chat client. ```bash
-curl -i -X POST https://your-language-endpoint-here/language/analyze-conversations?api-version=2022-05-15-preview \
+curl -i -X POST https://your-language-endpoint-here/language/analyze-conversations/jobs?api-version=2022-05-15-preview \
-H "Content-Type: application/json" \ -H "Ocp-Apim-Subscription-Key: your-key-here" \ -d \
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/overview.md
Title: What is Azure OpenAI?
+ Title: What is Azure OpenAI? (Preview)
description: Apply advanced language models to variety of use cases with the Azure OpenAI service
recommendations: false
keywords:
-# What is Azure OpenAI?
+# What is Azure OpenAI? (Preview)
The Azure OpenAI service provides REST API access to OpenAI's powerful language models including the GPT-3, Codex and Embeddings model series. These models can be easily adapted to your specific task including but not limited to content generation, summarization, semantic search, and natural language to code translation. Users can access the service through REST APIs, Python SDK, or our web-based interface in the Azure OpenAI Studio.
communication-services Advisor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/advisor-overview.md
Title: Leverage Azure Advisor for Azure Communication Services description: Learn about Azure Advisor offerings for Azure Communication Services.-+ -+ Last updated 09/30/2021
communication-services Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/authentication.md
Title: Authenticate to Azure Communication Services description: Learn about the various ways an app or service can authenticate to Communication Services.-+ -+ Last updated 06/30/2021
communication-services Call Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-flows.md
Title: Call flows in Azure Communication Services description: Learn about call flows in Azure Communication Services.-+ -+ Last updated 06/30/2021
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/chat/concepts.md
Title: Chat concepts in Azure Communication Services description: Learn about Communication Services Chat concepts.-+ -+ Last updated 06/30/2021
Some SDKs (like the JavaScript Chat SDK) support real-time notifications. This f
- `chatThreadPropertiesUpdated` - when chat thread properties are updated; currently, only updating the topic for the thread is supported. - `participantsAdded` - when a user is added as a chat thread participant. - `participantsRemoved` - when an existing participant is removed from the chat thread.
+ - `realTimeNotificationConnected` - when real time notification is connected.
+ - `realTimeNotificationDisconnected` -when real time notification is disconnected.
## Push notifications To send push notifications for messages missed by your users while they were away, Communication Services provides two different ways to integrate: - Use an Event Grid resource to subscribe to chat related events (post operation) which can be plugged into your custom app notification service. For more details, see [Server Events](../../../event-grid/event-schema-communication-services.md?bc=https%3a%2f%2fdocs.microsoft.com%2fen-us%2fazure%2fbread%2ftoc.json&toc=https%3a%2f%2fdocs.microsoft.com%2fen-us%2fazure%2fcommunication-services%2ftoc.json).
- - `chatMessageReceived` - when a new message is sent to a chat thread by a participant.
+ - Connect a Notification Hub resource with Communication Services resource to send push notifications and notify your application users about incoming chats and messages when the mobile app is not running in the foreground.
+
+ The iOS and Android SDKs support the following event:
+ - `chatMessageReceived` - when a new message is sent to a chat thread by a participant.
+
+ The Android SDK supports the following additional events:
- `chatMessageEdited` - when a message is edited in a chat thread. - `chatMessageDeleted` - when a message is deleted in a chat thread. - `chatThreadCreated` - when a chat thread is created by a Communication Services user.
To send push notifications for messages missed by your users while they were awa
For more details, see [Push Notifications](../notifications.md). > [!NOTE]
-> Currently sending chat push notifications with Notification Hub is only supported for Android SDK in version 1.1.0-beta.4.
+> Currently sending chat push notifications with Notification Hub is generally available in Android version 1.1.0 and in public preview for iOS version 1.3.0-beta.1.
## Build intelligent, AI powered chat experiences
communication-services Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/chat/sdk-features.md
Title: Chat SDK overview for Azure Communication Services description: Learn about the Azure Communication Services Chat SDK.-+ -+ Last updated 06/30/2021
The following list presents the set of features which are currently available in
| | Add metadata to chat messages | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | | | Add display name to typing indicator notification | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | |Real-time notifications (enabled by proprietary signaling package**)| Chat clients can subscribe to get real-time updates for incoming messages and other operations occurring in a chat thread. To see a list of supported updates for real-time notifications, see [Chat concepts](concepts.md#real-time-notifications) | ✔️ | ❌ | ❌ | ❌ | ✔️ | ✔️ |
-|Mobile push notifications with Notification Hub | The Chat SDK provides APIs allowing clients to be notified for incoming messages and other operations occurring in a chat thread by connecting an Azure Notification Hub to your Communication Services resource. In situations where your mobile app is not running in the foreground, patterns are available to [fire pop-up notifications](../notifications.md) ("toasts") to inform end-users, see [Chat concepts](concepts.md#push-notifications). | ❌ | ❌ | ❌ | ❌ | ❌ | ✔️ |
+|Mobile push notifications with Notification Hub | The Chat SDK provides APIs allowing clients to be notified for incoming messages and other operations occurring in a chat thread by connecting an Azure Notification Hub to your Communication Services resource. In situations where your mobile app is not running in the foreground, patterns are available to [fire pop-up notifications](../notifications.md) ("toasts") to inform end-users, see [Chat concepts](concepts.md#push-notifications). | ❌ | ❌ | ❌ | ❌ | ✔️ | ✔️ |
| Server Events with Event Grid | Use the chat events available in Azure Event Grid to plug custom notification services or post that event to a webhook to execute business logic like updating CRM records after a chat is finished | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | | Reporting </br>(This info is available under Monitoring tab for your Communication Services resource on Azure portal) | Understand API traffic from your chat app by monitoring the published metrics in Azure Metrics Explorer and set alerts to detect abnormalities | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | | | Monitor and debug your Communication Services solution by enabling diagnostic logging for your resource | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
communication-services Client And Server Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/client-and-server-architecture.md
Title: Client and server architecture description: Learn about Communication Services' architecture.-+ -+ Last updated 06/30/2021
communication-services Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/email/sdk-features.md
The following list presents the set of features that are currently available in
| Feature | Capability | JS | Java | .NET | Python | | -- | - | | - | - | |
-| Sendmail | Send Email messages </br> *Attachments are supported* | ✔️ | ❌ | ✔️ | ❌ |
-| Get Status | Receive Delivery Reports for messages sent | ✔️ | ❌ | ✔️ | ❌ |
+| Sendmail | Send Email messages </br> *Attachments are supported* | ✔️ | ✔️ | ✔️ | ✔️ |
+| Get Status | Receive Delivery Reports for messages sent | ✔️ | ✔️ | ✔️ | ✔️ |
## API Throttling and Timeouts
communication-services Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/known-issues.md
Title: Azure Communication Services - known issues description: Learn more about Azure Communication Services-+ -+ Last updated 06/30/2021
communication-services Logging And Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/logging-and-diagnostics.md
Title: Communication Services Logs description: Learn about logging in Azure Communication Services-+ -+ Last updated 06/30/2021
communication-services Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/metrics.md
Title: Metric definitions for Azure Communication Service description: This document covers definitions of metrics available in the Azure portal.-+ -+ Last updated 06/30/2021
communication-services Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/notifications.md
Title: Notifications in Azure Communication Services description: Send notifications to users of apps built on Azure Communication Services.-+ -+ Last updated 06/30/2021
communication-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/reference.md
Title: Reference documentation overview for Azure Communication Services description: Learn about Communication Services' reference documentation.-+ -+ Last updated 05/09/2022
communication-services Sdk Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sdk-options.md
Title: SDKs and REST APIs for Azure Communication Services description: Learn more about Azure Communication Services SDKs and REST APIs.-+ -+ Last updated 06/30/2021
The Calling package supports UWP apps build with .NET Native or C++/WinRT on:
## REST APIs
-Communication Services APIs are documented alongside other [Azure REST APIs in docs.microsoft.com](/rest/api/azure/). This documentation will tell you how to structure your HTTP messages and offers guidance for using [Postman](../tutorials/postman-tutorial.md). REST interface documentation is also published in Swagger format on [GitHub](https://github.com/Azure/azure-rest-api-specs). You can find throttling limits for individual APIs on [service limits page](./service-limits.md).
+Communication Services APIs are documented alongside other [Azure REST APIs](/rest/api/azure/). This documentation will tell you how to structure your HTTP messages and offers guidance for using [Postman](../tutorials/postman-tutorial.md). REST interface documentation is also published in Swagger format on [GitHub](https://github.com/Azure/azure-rest-api-specs). You can find throttling limits for individual APIs on [service limits page](./service-limits.md).
### REST API Throttles
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sms/concepts.md
Title: SMS concepts in Azure Communication Services description: Learn about Communication Services SMS concepts.-+ -+ Last updated 06/30/2021
communication-services About Call Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/about-call-types.md
Title: Voice and video concepts in Azure Communication Services description: Learn about Communication Services call types.-+ -+ Last updated 06/30/2021
communication-services Calling Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md
Title: Azure Communication Services Calling SDK overview description: Provides an overview of the Calling SDK.-+ -+ Last updated 06/30/2021
communication-services User Facing Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/user-facing-diagnostics.md
Title: Azure Communication Services User Facing Diagnostics description: Provides an overview of the User Facing Diagnostics feature.--++
communication-services Call Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/call-transcription.md
Title: Display call transcription state on the client description: Use Azure Communication Services SDKs to display the call transcription state--++
communication-services Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/events.md
Title: Subscribe to SDK events description: Use Azure Communication Services SDKs to subscribe to SDK events.--++
communication-services Manage Calls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/manage-calls.md
Title: Manage calls description: Use Azure Communication Services SDKs to manage calls.--++
communication-services Manage Video https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/manage-video.md
Title: Manage video during calls description: Use Azure Communication Services SDKs to manage video calls.--++
communication-services Push Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/push-notifications.md
Title: Enable push notifications for calls. description: Use Azure Communication Services SDKs to enable push notifications for calls.--++
communication-services Record Calls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/record-calls.md
Title: Manage call recording on the client description: Use Azure Communication Services SDKs to manage call recording on the client.--++
communication-services Teams Interoperability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/teams-interoperability.md
Title: Join a Teams meeting description: Use Azure Communication Services SDKs to join a Teams meeting.--++
communication-services Transfer Calls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/transfer-calls.md
Title: Transfer calls description: Use Azure Communication Services SDKs to transfer calls.--++
communication-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/overview.md
Title: What is Azure Communication Services? description: Learn how Azure Communication Services helps you develop rich user experiences with real-time communications.-+ -+ Last updated 06/30/2021
communication-services Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/chat/get-started.md
Title: Quickstart - Add chat to your app description: This quickstart shows you how to add Communication Services chat to your app.-+ -+ Last updated 06/30/2021
communication-services Create Communication Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/create-communication-resource.md
Title: Quickstart - Create and manage resources in Azure Communication Services description: In this quickstart, you'll learn how to create and manage your first Azure Communication Services resource.-+ -+ Last updated 06/30/2021
communication-services Handle Sms Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/sms/handle-sms-events.md
Title: Quickstart - Handle SMS and delivery report events description: "In this quickstart, you'll learn how to handle Azure Communication Services events. See how to create, receive, and subscribe to SMS and delivery report events."-+ -+ Last updated 05/25/2022
communication-services Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/sms/send.md
Title: Quickstart - Send an SMS message description: "In this quickstart, you'll learn how to send an SMS message by using Azure Communication Services. See code examples in C#, JavaScript, Java, and Python."-+ -+ Last updated 05/25/2022
communication-services Port Phone Number https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/telephony/port-phone-number.md
Title: Quickstart - Port a phone number into Azure Communication Services description: Learn how to port a phone number into your Communication Services resource-+ -+ Last updated 06/30/2021
communication-services Get Started With Video Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-with-video-calling.md
Title: Quickstart - Add video calling to your app (JavaScript) description: In this quickstart, you'll learn how to add video calling capabilities to your app using Azure Communication Services.--++ Last updated 06/30/2021
communication-services Getting Started With Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/getting-started-with-calling.md
Title: Quickstart - Add voice calling to your app description: In this quickstart, you'll learn how to add calling capabilities to your app using Azure Communication Services.--++ Last updated 06/30/2021
communication-services Learn Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/learn-modules.md
Title: Microsoft Learn modules for Azure Communication Services description: Learn about the available Learn modules for Azure Communication Services.-+ -+ Last updated 06/30/2021
communication-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/samples/overview.md
Title: Samples overview page description: Overview of available sample projects for Azure Communication Services.-+ -+ Last updated 06/30/2021
communication-services Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/support.md
Title: Support and help options for Azure Communication Services description: Learn about the various help and support options available for Azure Communication Services.-+ -+ Last updated 06/30/2021
communication-services Add Chat Push Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/add-chat-push-notifications.md
+
+ Title: Enable push notifications in your chat app
+
+description: Learn how to enable push notifications in an iOS app by using the Azure Communication Services Chat SDK
++++ Last updated : 08/09/2022++++
+# Enable Push Notifications in your chat app
+
+>[!IMPORTANT]
+>This Push Notification feature is currently in public preview. Preview APIs and SDKs are provided without a service-level agreement, and aren't recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+
+This tutorial shows you how to enable push notifications in your iOS app by using the Azure Communication Services Chat SDK.
+Push notifications alert clients of incoming messages in a chat thread in situations where the mobile app isn't running in the foreground. Azure Communication Services supports two versions of push notifications.
+
+- `Basic version`: The user sees a badge number of 1 on the app's icon, receives a notification sound, and sees a pop-up alert banner.
+- `Advanced version`: In addition to the features of the basic version, you can customize the title and the message preview section of the alert banner.
+
+ <img src="./media/add-chat-push-notification/basic-version.png" width="340" height="270" alt="Screenshot of basic version of push notification.">
+
+ [Basic Version]
+
+ <img src="./media/add-chat-push-notification/advanced-version.png" width="340" height="270" alt="Screenshot of advanced version of push notification.">
+
+ [Advanced Version]
++
+## Download code
+
+Access the sample code for this tutorial on [GitHub](https://github.com/Azure-Samples/communication-services-ios-quickstarts/tree/main/add-chat-push-notifications).
+
+## Prerequisites
+
+1. Finish all the prerequisite steps in [Chat Quickstart](https://docs.microsoft.com/azure/communication-services/quickstarts/chat/get-started?pivots=programming-language-swift)
+
+2. Azure Notification Hub (ANH) setup
+Create an Azure Notification Hub within the same subscription as your Communication Services resource and link the Notification Hub to your Communication Services resource. See [Notification Hub provisioning](../concepts/notifications.md#notification-hub-provisioning).
+
+3. APNS Cert Configuration
+We recommend creating a .p12 APNS certificate and setting it in the Notification Hub.
+
+ **If you aren't a Microsoft internal client**, follow steps 1 through 9.
+ **If you're a Microsoft internal client**, submit a ticket [here](https://aka.ms/mapsupport) and provide the bundle ID of your app to get a .p12 certificate. After you get a valid certificate, complete step 9.
+
+* Step 1: Log in to the Apple Developer Portal. Navigate to `Certificates, IDs & Profiles > Identifiers > App IDs` and click the App ID associated with your app.
+
+ <img src="./media/add-chat-push-notification/cert-1.png" width="700" height="230" alt="Screenshot of APNS Cert Configuration step 1.">
+
+* Step 2: On the screen for your App ID, check `Capabilities > Push Notifications`. Click Save and respond "Confirm" to the Modify App Capabilities dialog box that appears.
+
+ <img src="./media/add-chat-push-notification/cert-2.png" width="700" height="350" alt="Screenshot of APNS Cert Configuration step 2-1.">
+
+ <img src="./media/add-chat-push-notification/cert-3.png" width="700" height="210" alt="Screenshot of APNS Cert Configuration step 2-2.">
+
+* Step 3: On the same page, click `Capabilities > Push Notifications > Configure`. Click one of the following buttons:
+ * Development SSL Certificate > Create Certificate (for testing push notifications while developing an iOS app)
+ * Production SSL Certificate > Create Certificate (for sending push notifications in production)
+
+ <img src="./media/add-chat-push-notification/cert-4.png" width="700" height="320" alt="Screenshot of APNS Cert Configuration step 3.">
+
+* Step 4: You're then taken to the following page, where you'll upload a Certificate Signing Request (CSR). Follow the next step to create a CSR.
+
+ <img src="./media/add-chat-push-notification/cert-5.png" width="700" height="400" alt="Screenshot of APNS Cert Configuration step 4.">
+
+* Step 5: In a new browser tab, follow this [help page](https://help.apple.com/developer-account/#/devbfa00fef7) to create a CSR and save the file as "App name.cer".
+
+ <img src="./media/add-chat-push-notification/cert-6.png" width="700" height="360" alt="Screenshot of APNS Cert Configuration step 5 - 1.">
+ <img src="./media/add-chat-push-notification/cert-7.png" width="700" height="500" alt="Screenshot of APNS Cert Configuration step 5 - 2.">
+
+* Step 6: Drag the .cer file to the "Choose File" area, then select "Continue" in the upper-right corner.
+
+ <img src="./media/add-chat-push-notification/cert-8.png" width="880" height="400" alt="Screenshot of APNS Cert Configuration step 6.">
+
+* Step 7: Click "Download" and save the file to your local disk.
+
+ <img src="./media/add-chat-push-notification/cert-9.png" width="700" height="220" alt="Screenshot of APNS Cert Configuration step 7.">
+
+* Step 8: Open the .cer file you downloaded; it will open Keychain Access. Select your certificate, right-click, and export your certificate in .p12 format.
+
+ <img src="./media/add-chat-push-notification/cert-11.png" width="700" height="400" alt="Screenshot of APNS Cert Configuration step 8.">
+
+* Step 9: Go to your Notification Hub, click "Apple (APNS)" under Settings, and select "Certificate" under Authentication Mode. Also select the Application Mode based on your needs. Then upload the .p12 file you just created.
+
+ <img src="./media/add-chat-push-notification/cert-12.png" width="700" height="360" alt="Screenshot of APNS Cert Configuration step 9.">
+
+4. Xcode configuration
+* In Xcode, go to `Signing & Capabilities`. Add a capability by selecting "+ Capability", and then select "Push Notifications".
+
+* Add another capability by selecting "+ Capability", and then select "Background Modes". Also select "Remote Notifications" under Background Modes.
+
+<img src="./media/add-chat-push-notification/xcode-config.png" width="730" height="500" alt="Screenshot of Enable Push Notifications and Background modes in Xcode.">
+
+## Implementation
++
+### 1 - Basic Version
+If you want to implement the basic version of push notifications, you need to register for remote notifications with the Apple Push Notification service (APNs). Refer to the [sample code](https://github.com/Azure-Samples/communication-services-ios-quickstarts/blob/main/add-chat-push-notifications/SwiftPushTest/AppDelegate.swift) to see the related implementation in `AppDelegate.swift`.
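As a minimal sketch, the registration flow in `AppDelegate.swift` might look like the following. This uses only standard UIKit/UserNotifications APIs; the `chatClient` object and the exact shape of the `startPushNotifications(deviceToken:)` call are assumptions here, set up as in the Chat quickstart and sample code:

```swift
import UIKit
import UserNotifications

class AppDelegate: UIResponder, UIApplicationDelegate {

    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // Ask the user for permission to show alerts, play sounds, and update the badge.
        UNUserNotificationCenter.current().requestAuthorization(options: [.alert, .sound, .badge]) { granted, _ in
            guard granted else { return }
            DispatchQueue.main.async {
                // Register with APNs; the device token arrives in the callback below.
                application.registerForRemoteNotifications()
            }
        }
        return true
    }

    func application(_ application: UIApplication,
                     didRegisterForRemoteNotificationsWithDeviceToken deviceToken: Data) {
        // Convert the raw APNs token to a hex string.
        let token = deviceToken.map { String(format: "%02x", $0) }.joined()
        // Hand the token to the Chat SDK (chatClient is assumed to be a ChatClient
        // created as in the Chat quickstart):
        // chatClient.startPushNotifications(deviceToken: token) { _ in }
    }
}
```

Consult the linked sample for the exact SDK call signatures; the UIKit registration boilerplate above is the standard Apple pattern.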
+
+### 2 - Advanced Version
+If you want to implement the advanced version of push notifications, you need to include the following items in your app. The reason is that customer content (for example, chat message content and sender display name) is encrypted in the push notification payload, which requires extra handling on your side.
+
+* Item 1: Data Storage for encryption keys
+
+First, create persistent data storage on the iOS device. This data storage must be able to share data between the main app and app extensions. (Refer to Item 2 for more information about the app extension, a Notification Service Extension.)
+
+In our sample code, we'll use an "App Group" as the data storage. Below are the suggested steps to create and use an "App Group":
+
+Follow the steps in [Add a capability](https://developer.apple.com/documentation/xcode/adding-capabilities-to-your-app?changes=latest_minor#Add-a-capability) to add the App Groups capability to your app's targets, both the main app and the Notification Service Extension. (Refer to Item 2 on how to create a Notification Service Extension.)
+
+Also follow the steps in this [Apple official doc](https://developer.apple.com/documentation/xcode/configuring-app-groups?changes=latest_minor) to configure the App Group. Make sure your main app and app extension have the same container name.
+
+* Item 2: Notification Service Extension
+
+Second, implement a Notification Service Extension bundled with the main app. The extension decrypts the push notification payload when it's received.
+
+Go to this [Apple official doc](https://developer.apple.com/documentation/usernotifications/modifying_content_in_newly_delivered_notifications). Follow the steps "Add a Service App Extension to Your Project" and "Implement Your Extension's Handler Methods".
+
+Notice that in the step "Implement Your Extension's Handler Methods," Apple provides sample code to decrypt data, and we follow its overall structure. However, because we use the Chat SDK for decryption, we need to replace the part starting from `// Try to decode the encrypted message data.` with our customized logic. Refer to the [sample code](https://github.com/Azure-Samples/communication-services-ios-quickstarts/blob/main/add-chat-push-notifications/SwiftPushTestNotificationExtension/NotificationService.swift) to see the related implementation in `NotificationService.swift`.
+
+* Item 3: Implementation of PushNotificationKeyHandler Protocol
+
+Third, a `PushNotificationKeyHandler` is required for the advanced version. As the SDK user, you can use the default `AppGroupPushNotificationKeyHandler` class provided by the Chat SDK to generate a key handler. If you don't use an App Group as the key storage or would like to customize key handling, create your own class that conforms to the `PushNotificationKeyHandler` protocol.
+
+The `PushNotificationKeyHandler` protocol defines two methods: `onPersistKey(encryptionKey:expiryTime)` and `onRetrieveKeys() -> [String]`.
+
+The first method persists the encryption key in the storage of the user's iOS device. The Chat SDK sets 45 minutes as the expiry time of the encryption key. If you want push notifications to remain in effect for more than 45 minutes, schedule calls to `chatClient.startPushNotifications(deviceToken:)` on a comparatively frequent basis (for example, every 15 minutes) so that a new encryption key is registered before the old key expires.
+
+The second method retrieves the valid keys previously stored. You have the flexibility to customize it based on the data storage (Item 1) you choose.
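For illustration only, a custom key handler conforming to the protocol might look like the sketch below. The parameter types (`String` key, `Date` expiry), the App Group ID, and the `chatEncryptionKeys` storage key are assumptions for this example; check the Chat SDK for the real protocol signatures:

```swift
import Foundation
import AzureCommunicationChat

// Hypothetical key handler that shares keys through App Group UserDefaults.
class MyPushNotificationKeyHandler: PushNotificationKeyHandler {
    // The suite name must match the App Group container configured in Item 1.
    private let defaults = UserDefaults(suiteName: "group.com.contoso.chatapp")!

    func onPersistKey(encryptionKey: String, expiryTime: Date) {
        // Store each key together with its expiry so stale keys can be filtered out later.
        var keys = defaults.dictionary(forKey: "chatEncryptionKeys") as? [String: Date] ?? [:]
        keys[encryptionKey] = expiryTime
        defaults.set(keys, forKey: "chatEncryptionKeys")
    }

    func onRetrieveKeys() -> [String] {
        // Return only the keys that haven't expired yet.
        let keys = defaults.dictionary(forKey: "chatEncryptionKeys") as? [String: Date] ?? [:]
        return keys.filter { $0.value > Date() }.map { $0.key }
    }
}
```

Because both the main app and the Notification Service Extension read the same App Group container, a key persisted by the app is visible to the extension when it decrypts an incoming payload.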
+
+In a protocol extension, the Chat SDK provides the implementation of the `decryptPayload(notification:) -> PushNotificationEvent` method, which you can take advantage of. Refer to the [sample code](https://github.com/Azure-Samples/communication-services-ios-quickstarts/blob/main/add-chat-push-notifications/SwiftPushTestNotificationExtension/NotificationService.swift) to see the related implementation in `NotificationService.swift`.
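A rough sketch of the extension's handler, following Apple's standard `UNNotificationServiceExtension` template with the SDK-specific decryption left as comments (the exact `decryptPayload(notification:)` usage and the fields available on the decrypted event are assumptions here; check the sample code for the precise API):

```swift
import UserNotifications

class NotificationService: UNNotificationServiceExtension {
    var contentHandler: ((UNNotificationContent) -> Void)?
    var bestAttemptContent: UNMutableNotificationContent?

    override func didReceive(_ request: UNNotificationRequest,
                             withContentHandler contentHandler: @escaping (UNNotificationContent) -> Void) {
        self.contentHandler = contentHandler
        bestAttemptContent = (request.content.mutableCopy() as? UNMutableNotificationContent)

        if let content = bestAttemptContent {
            // A key handler conforming to PushNotificationKeyHandler gains
            // decryptPayload(notification:) from the SDK's protocol extension, e.g.:
            //   let event = try? keyHandler.decryptPayload(notification: request.content)
            // Use the decrypted event to fill in the visible title and body
            // (for example, the sender display name and a message preview).
            contentHandler(content)
        }
    }

    override func serviceExtensionTimeWillExpire() {
        // Called just before the system terminates the extension; deliver
        // the best attempt so the user still sees a notification.
        if let contentHandler = contentHandler, let content = bestAttemptContent {
            contentHandler(content)
        }
    }
}
```

The two overridden methods are the standard Apple extension points; only the decryption step in the middle is SDK-specific.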
+
+## Testing
+1. Create a Chat Thread with User A and User B.
+
+2. Download the sample app repo and follow the steps above in the Prerequisites and Implementation sections.
+
+3. Put User A's `<ACCESS_TOKEN>` and `<ACS_RESOURCE_ENDPOINT>` into `AppSettings.plist`.
+
+4. Set "Enable Bitcode" to "No" for the two Pods targets, AzureCommunicationChat and Trouter.
+
+5. Plug the iOS device into your Mac, run the program, and click "Allow" when asked to authorize push notifications on the device.
+
+6. As User B, send a chat message. You (User A) should receive a push notification on your iOS device.
++
communication-services Postman Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/postman-tutorial.md
Title: Tutorial - Sign and make requests to Azure Communication Services' SMS API with Postman description: Learn how to sign and makes requests for Azure Communication Services with Postman to send an SMS Message.-+ -+ Last updated 06/30/2021
confidential-computing Virtual Machine Solutions Amd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/virtual-machine-solutions-amd.md
Last updated 11/15/2021
# Azure Confidential VM options on AMD
-> [!IMPORTANT]
-> Confidential virtual machines (confidential VMs) in Azure Confidential Computing is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- Azure Confidential Computing offers multiple options for confidential VMs that run on AMD processors backed by [AMD Secure Encrypted Virtualization-Secure Nested Paging (SEV-SNP)](https://www.amd.com/system/files/TechDocs/SEV-SNP-strengthening-vm-isolation-with-integrity-protection-and-more.pdf) technology. ## Sizes
container-apps Github Actions Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/github-actions-cli.md
az containerapp github-action add \
```azurecli az containerapp github-action add ` --repo-url "https://github.com/<OWNER>/<REPOSITORY_NAME>" `
- --content-path "./dockerfile" `
+ --context-path "./dockerfile" `
--branch <BRANCH_NAME> ` --name <CONTAINER_APP_NAME> ` --resource-group <RESOURCE_GROUP> `
container-apps Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quotas.md
Previously updated : 05/03/2022 Last updated : 08/10/2022
The following quotas are on a per subscription basis for Azure Container Apps.
-| Feature | Quantity |
-|||
-| Environments per region | 5 per subscription |
-| Container apps per environment | 20 per subscription |
-| Replicas per container app | 30 per subscription |
-| Cores | 2 per replica |
-| Cores | 20 per environment |
+| Feature | Quantity | Scope | Remarks |
+|--|--|--|--|
+| Environments | 5 | Per region, per subscription | |
+| Container Apps | 20 | Environment | |
+| Revisions | 100 | Container app | |
+| Replicas | 30 | Revision | |
+| Cores | 2 | Replica | Maximum number of cores that can be requested by a revision replica. |
+| Cores | 20 | Environment | Maximum total cores an environment can accommodate; that is, the sum of cores requested by each active replica of all revisions in the environment. |
+
+## Considerations
+
+* Pay-as-you-go and trial subscriptions are limited to 1 environment per region per subscription.
+* If an environment runs out of allowed cores:
+ * Provisioning times out with a failure
+ * The app silently refuses to scale out
To request an increase in quota amounts for your container app, [submit a support ticket](https://azure.microsoft.com/support/create-ticket/).
container-apps Revisions Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/revisions-manage.md
Example: (Replace the \<placeholders\> with your values.)
```azurecli az containerapp revision show \
- --name <REVISION_NAME> \
+ --name <APPLICATION_NAME> \
+ --revision <REVISION_NAME> \
--resource-group <RESOURCE_GROUP_NAME> ``` # [PowerShell](#tab/powershell) ```azurecli
-az containerapp revision show `
- --name <REVISION_NAME> `
+ --name <APPLICATION_NAME> `
+ --revision <REVISION_NAME> `
--resource-group <RESOURCE_GROUP_NAME> ```
cosmos-db Cassandra Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/cassandra-support.md
export SSL_VERSION=TLSv1_2
export SSL_VALIDATE=false # Connect to Azure Cosmos DB API for Cassandra:
-cqlsh <YOUR_ACCOUNT_NAME>.cassandra.cosmosdb.azure.com 10350 -u <YOUR_ACCOUNT_NAME> -p <YOUR_ACCOUNT_PASSWORD> --ssl
+cqlsh <YOUR_ACCOUNT_NAME>.cassandra.cosmosdb.azure.com 10350 -u <YOUR_ACCOUNT_NAME> -p <YOUR_ACCOUNT_PASSWORD> --ssl --protocol-version=4
``` **Connect with Docker:** ```bash
cosmos-db Linux Emulator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/linux-emulator.md
Use the following steps to run the emulator on Linux:
| Ports: `-p` | | Currently, only ports 8081 and 10251-10255 are needed by the emulator endpoint. | | `AZURE_COSMOS_EMULATOR_PARTITION_COUNT` | 10 | Controls the total number of physical partitions, which in turn controls the number of containers that can be created and can exist at a given point in time. We recommend starting small to improve the emulator startup time, for example, 3. | | Memory: `-m` | | On memory, 3 GB or more is required. |
-| Cores: `--cpus` | | Make sure to allocate enough memory and CPU cores; while the emulator can run with as little as 0.5 cores, but at least two cores are recommended. |
+| Cores: `--cpus` | | Make sure to allocate enough memory and CPU cores. At least two cores are recommended. |
|`AZURE_COSMOS_EMULATOR_ENABLE_DATA_PERSISTENCE` | false | This setting used by itself will help persist the data between container restarts. | |`AZURE_COSMOS_EMULATOR_ENABLE_MONGODB_ENDPOINT` | | This setting enables the MongoDB API endpoint for the emulator and configures the MongoDB server version. (Valid server version values include ``3.2``, ``3.6``, and ``4.0``) |
cosmos-db Local Emulator Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/local-emulator-release-notes.md
Previously updated : 09/21/2020 Last updated : 08/10/2022+
-# Azure Cosmos DB Emulator - Release notes and download information
+# Azure Cosmos DB Emulator - release notes and download information
+ [!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]
-This article shows the Azure Cosmos DB Emulator released versions and it details the updates that were made. Only the latest version is made available to download and use and previous versions aren't actively supported by the Azure Cosmos DB Emulator developers.
+This article shows the released versions of the Azure Cosmos DB Emulator and details the latest updates. The download center only has the latest version of the emulator available to download.
+
+> [!IMPORTANT]
+> Earlier versions of the Azure Cosmos DB Emulator aren't actively supported by the developer team.
## Download | | Link |
-|||
-|**MSI download**|[Microsoft Download Center](https://aka.ms/cosmosdb-emulator)|
-|**Get started**|[Develop locally with Azure Cosmos DB Emulator](local-emulator.md)|
+| | |
+| **Download** | [Microsoft Download Center](https://aka.ms/cosmosdb-emulator) |
+| **Get started** | [Develop locally with Azure Cosmos DB Emulator](local-emulator.md) |
## Release notes
-### 2.14.7 (May 9, 2022)
+### `2.14.9` (July 7, 2022)
+
+- This release updates the Azure Cosmos DB Emulator background services to match the latest online functionality of Azure Cosmos DB.
- * Update Data Explorer to the latest content and fix a broken link for the quick start sample documentation.
- * Add option to enable the Mongo API version for the Linux Cosmos DB emulator by setting the environment variable: "AZURE_COSMOS_EMULATOR_ENABLE_MONGODB_ENDPOINT" in the Docker container setting. Valid setting are: "3.2", "3.6", "4.0" and "4.2"
+### `2.14.8`
-### 2.14.6 (March 7, 2022)
+- This release updates the Azure Cosmos DB Emulator background services to match the latest online functionality of Azure Cosmos DB.
- * Fix for an issue related to high CPU usage when the emulator is running.
- * Add PowerShell option to set the Mongo API version: "-MongoApiVersion". Valid setting are: "3.2", "3.6" and "4.0"
+### `2.14.7` (May 9, 2022)
-### 2.14.5 (January 18, 2022)
+- This release updates the Azure Cosmos DB Emulator background services to match the latest online functionality of Azure Cosmos DB. In addition to this update, a couple of issues are addressed in this release:
+ - Update Data Explorer to the latest content and fix a broken link for the quick start sample documentation.
+ - Add option to enable the Mongo API version for the Linux Cosmos DB emulator by setting the environment variable: `AZURE_COSMOS_EMULATOR_ENABLE_MONGODB_ENDPOINT` in the Docker container. Valid settings are: `3.2`, `3.6`, `4.0` and `4.2`
+### `2.14.6` (March 7, 2022)
-### 2.14.4 (October 25, 2021)
+- This release updates the Azure Cosmos DB Emulator background services to match the latest online functionality of Azure Cosmos DB. In addition to this update, a couple of issues are addressed in this release:
+ - Fix for an issue related to high CPU usage when the emulator is running.
+ - Add PowerShell option to set the Mongo API version: `-MongoApiVersion`. Valid settings are: `3.2`, `3.6` and `4.0`
+### `2.14.5` (January 18, 2022)
-### 2.14.3 (September 8, 2021)
+- This release updates the Azure Cosmos DB Emulator background services to match the latest online functionality of Azure Cosmos DB. Another important update in this release reduces the number of services executed in the background, starting them only as needed.
+### `2.14.4` (October 25, 2021)
-### 2.14.2 (August 12, 2021)
+- This release updates the Azure Cosmos DB Emulator background services to match the latest online functionality of Azure Cosmos DB.
+### `2.14.3` (September 8, 2021)
-### 2.14.1 (June 18, 2021)
+- This release updates the Azure Cosmos DB Emulator background services to match the latest online functionality of Azure Cosmos DB. It also addresses issues with the performance data that's collected, and resets the base image for the Linux Cosmos emulator Docker image.
+### `2.14.2` (August 12, 2021)
-### 2.14.0 (June 15, 2021)
+- This release updates the local Data Explorer content to the latest Microsoft Azure version and resets the base for the Linux Cosmos emulator Docker image.
+### `2.14.1` (June 18, 2021)
-### 2.11.13 (April 21, 2021)
+- This release improves the start-up time for the emulator while reducing the footprint of its data on the disk. Activate this new optimization by using the `/EnablePreview` argument.
+### `2.14.0` (June 15, 2021)
-### 2.11.11 (February 22, 2021)
+- This release updates the local Data Explorer content to the latest Microsoft Azure version. It also fixes an issue when importing many items by using the JSON file upload feature.
+### `2.11.13` (April 21, 2021)
+- This release updates the local Data Explorer content to the latest Microsoft Azure version and adds a new MongoDB endpoint configuration, `4.0`.
-### 2.11.10 (January 5, 2021)
+### `2.11.11` (February 22, 2021)
+- This release updates the local Data Explorer content to the latest Microsoft Azure version.
-### 2.11.9 (December 3, 2020)
+### `2.11.10` (January 5, 2021)
- * Fix for an issue where large document payload requests fail when using Direct mode and Java client applications.
- * Fix for a connectivity issue with MongoDB endpoint version 3.6 when targeted by .NET based applications.
+- This release updates the local Data Explorer content to the latest Microsoft Azure version. It also adds a new public option, `/ExportPemCert`, which enables the emulator user to directly export the emulator's public certificate as a `.PEM` file.
-### 2.11.8 (November 6, 2020)
+### `2.11.9` (December 3, 2020)
+- This release updates the Azure Cosmos DB Emulator background services to match the latest online functionality of Azure Cosmos DB. It also addresses a couple of issues with the Azure Cosmos DB Emulator functionality:
+ - Fix for an issue where large document payload requests fail when using Direct mode and Java client applications.
+ - Fix for a connectivity issue with MongoDB endpoint version 3.6 when targeted by .NET based applications.
-### 2.11.6 (October 6, 2020)
+### `2.11.8` (November 6, 2020)
+- This release includes an update for the Azure Cosmos DB Emulator Data Explorer and fixes an issue where **transport layer security (TLS) 1.3** clients try to open the Data Explorer.
-### 2.11.5 (August 23, 2020)
+### `2.11.6` (October 6, 2020)
-This release adds two new Azure Cosmos DB Emulator startup options:
+- This release addresses a concurrency-related issue when more than one container is created at the same time. The issue can leave the emulator in a corrupted state, causing future API requests to the emulator's endpoint to fail with *service unavailable* errors. The workaround is to stop the emulator, reset the emulator's local data, and restart it.
-* "/EnablePreview" - it enables preview features for the Azure Cosmos DB Emulator. The preview features that are still under development and they can be accessed via CI and sample writing.
-* "/EnableAadAuthentication" - it enables the emulator to accept custom Azure Active Directory tokens as an alternative to the Azure Cosmos primary keys. This feature is still under development; specific role assignments and other permission-related settings aren't currently supported.
+### `2.11.5` (August 23, 2020)
-### 2.11.2 (July 7, 2020)
+- This release adds two new Azure Cosmos DB Emulator startup options:
+ - `/EnablePreview` - Enables preview features for the Azure Cosmos DB Emulator. These preview features are still under development and can be accessed via CI and sample writing.
+ - `/EnableAadAuthentication` - Enables the emulator to accept custom Azure Active Directory tokens as an alternative to the Azure Cosmos primary keys. This feature is still under development; specific role assignments and other permission-related settings aren't currently supported.
-- This release changes how ETL traces required when troubleshooting the Azure Cosmos DB Emulator are collected. WPR (Windows Performance Runtime tools) is now the default tools for capturing ETL-based traces while old LOGMAN based capturing has been deprecated. With the latest Windows security update, LOGMAN stopped working as expected when executed through the Azure Cosmos DB Emulator.
+### `2.11.2` (July 7, 2020)
-### 2.11.1 (June 10, 2020)
+- This release changes how the Azure Cosmos DB Emulator collects traces. Windows Performance Runtime (WPR) is now the default tool for capturing event trace log-based traces, deprecating logman-based capturing. With the latest Windows security update, logman stopped working as expected when executed through the Azure Cosmos DB Emulator.
-This release fixes couple bugs related to Azure Cosmos DB Emulator Data Explorer:
+### `2.11.1` (June 10, 2020)
-* Data Explorer fails to connect to the Azure Cosmos DB Emulator endpoint when hosted in some Web browser versions. Emulator users might not be able to create a database or a container through the Web page.
-* Address an issue that prevented emulator users from creating an item from a JSON file using Data Explorer upload action.
+- This release fixes a couple of bugs related to the Azure Cosmos DB Emulator Data Explorer:
+ - Data Explorer fails to connect to the Azure Cosmos DB Emulator endpoint when hosted in some web browser versions. Emulator users might not be able to create a database or a container through the web page.
+ - Resolved a bug that prevented emulator users from creating an item from a JSON file by using the Data Explorer upload action.
-### 2.11.0
+### `2.11.0`
- This release introduces support for autoscale provisioned throughput. The added features include the option to set a custom maximum provisioned throughput level in request units (RU/s), enable autoscale on existing databases and containers, and API support through the Azure Cosmos DB SDK. - Fixes an issue where the emulator would fail with an internal error (status code 500) when querying a large number of documents (over 1 GB).
-### 2.9.2
+### `2.9.2`
-- This release fixes a bug while enabling support for MongoDb endpoint version 3.2. It also adds support for generating ETL traces for troubleshooting purposes using WPR instead of LOGMAN.
+- This release fixes a bug while enabling support for MongoDB endpoint version 3.2. It also adds support for generating trace messages for troubleshooting purposes using [Windows Performance Recorder (WPR)](/windows-hardware/test/wpt/wpr-command-line-options) instead of [logman](/windows-server/administration/windows-commands/logman).
-### 2.9.1
+### `2.9.1`
- This release fixes a couple of issues in the query API support and restores compatibility with older operating systems such as Windows Server 2012.
-### 2.9.0
+### `2.9.0`
- This release adds the option to set the consistency to consistent prefix and increase the maximum limits for users and permissions.
-### 2.7.2
+### `2.7.2`
-- This release adds MongoDB version 3.6 server support to the Azure Cosmos DB Emulator. To start a MongoDB endpoint that target version 3.6 of the service, start the emulator from an Administrator command line with "/EnableMongoDBEndpoint=3.6" option.
+- This release adds MongoDB version 3.6 server support to the Azure Cosmos DB Emulator. To start a MongoDB endpoint that targets version 3.6 of the service, start the emulator from an administrator command line with the `/EnableMongoDBEndpoint=3.6` option.
-### 2.7.0
+### `2.7.0`
- This release fixes a regression in the Azure Cosmos DB Emulator that prevented users from executing SQL-related queries. This issue impacts emulator users who configured the SQL API endpoint and use .NET Core or x86 .NET-based client applications.
-### 2.4.6
+### `2.4.6`
- This release provides parity with the features in the Azure Cosmos service as of July 2019, with the exceptions noted in [Develop locally with Azure Cosmos DB Emulator](local-emulator.md). It also fixes several bugs related to emulator shut down when invoked via command line and internal IP address overrides for SDK clients using direct mode connectivity.
-### 2.4.3
+### `2.4.3`
-- Disabled starting the MongoDB service by default. Only the SQL endpoint is enabled as default. The user must start the endpoint manually using the emulator's "/EnableMongoDbEndpoint" command-line option. Now, it's like all the other service endpoints, such as Gremlin, Cassandra, and Table.-- Fixed a bug in the emulator when starting with ΓÇ£/AllowNetworkAccessΓÇ¥ where the Gremlin, Cassandra, and Table endpoints weren't properly handling requests from external clients.
+- MongoDB service is no longer started by default. By default, the emulator enables the SQL endpoint. The user must start the endpoint manually using the emulator's `/EnableMongoDbEndpoint` command-line option. Now, it's like all the other service endpoints, such as Gremlin, Cassandra, and Table.
+- Fixes a bug in the emulator when starting with the `/AllowNetworkAccess` option, where the Gremlin, Cassandra, and Table endpoints weren't correctly handling requests from external clients.
- Add direct connection ports to the Firewall Rules settings.
-### 2.4.0
+### `2.4.0`
- Fixed an issue with emulator failing to start when network monitoring apps, such as Pulse Client, are present on the host computer.+
+## Next steps
+
+- [Install and use the Azure Cosmos DB Emulator for local development and testing](local-emulator.md)
cosmos-db Change Feed Design Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/change-feed-design-patterns.md
A materialized view of current shopping cart contents is maintained for each cus
## Examples
-Here are some real-world change feed code examples that extend beyond the scope of the samples provided in Microsoft docs:
+Here are some real-world change feed code examples that extend beyond the scope of the provided samples:
- [Introduction to the change feed](https://azurecosmosdb.github.io/labs/dotnet/labs/08-change_feed_with_azure_functions.html) - [IoT use case centered around the change feed](https://github.com/AzureCosmosDB/scenario-based-labs)
Here are some real-world change feed code examples that extend beyond the scope
* [Using change feed with Azure Functions](change-feed-functions.md) * Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. * If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
- * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+ * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
data-factory Connector Snowflake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-snowflake.md
Previously updated : 06/17/2022 Last updated : 07/28/2022 # Copy and transform data in Snowflake using Azure Data Factory or Azure Synapse Analytics
To copy data from Snowflake, the following properties are supported in the Copy
| additionalCopyOptions | Additional copy options, provided as a dictionary of key-value pairs. Examples: MAX_FILE_SIZE, OVERWRITE. For more information, see [Snowflake Copy Options](https://docs.snowflake.com/en/sql-reference/sql/copy-into-location.html#copy-options-copyoptions). | No | | additionalFormatOptions | Additional file format options that are provided to COPY command as a dictionary of key-value pairs. Examples: DATE_FORMAT, TIME_FORMAT, TIMESTAMP_FORMAT. For more information, see [Snowflake Format Type Options](https://docs.snowflake.com/en/sql-reference/sql/copy-into-location.html#format-type-options-formattypeoptions). | No |
+>[!Note]
+> Make sure you have permission to execute the following command and access the schema *INFORMATION_SCHEMA* and the table *COLUMNS*.
+>
+>- `COPY INTO <location>`
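For illustration, the `additionalCopyOptions` and `additionalFormatOptions` properties described above are set inside the Copy activity source, as in the following sketch (the query and option values are placeholder assumptions, not recommendations):

```json
"source": {
    "type": "SnowflakeSource",
    "query": "SELECT * FROM MYTABLE",
    "exportSettings": {
        "type": "SnowflakeExportCopyCommand",
        "additionalCopyOptions": {
            "MAX_FILE_SIZE": "64000000",
            "OVERWRITE": true
        },
        "additionalFormatOptions": {
            "DATE_FORMAT": "'MM/DD/YYYY'"
        }
    }
}
```

Each key-value pair is passed through to the generated `COPY INTO <location>` statement as a copy or format option.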
+ #### Direct copy from Snowflake If your sink data store and format meet the criteria described in this section, you can use the Copy activity to directly copy from Snowflake to sink. The service checks the settings and fails the Copy activity run if the following criteria aren't met:
To copy data to Snowflake, the following properties are supported in the Copy ac
| additionalCopyOptions | Additional copy options, provided as a dictionary of key-value pairs. Examples: ON_ERROR, FORCE, LOAD_UNCERTAIN_FILES. For more information, see [Snowflake Copy Options](https://docs.snowflake.com/en/sql-reference/sql/copy-into-table.html#copy-options-copyoptions). | No | | additionalFormatOptions | Additional file format options provided to the COPY command, provided as a dictionary of key-value pairs. Examples: DATE_FORMAT, TIME_FORMAT, TIMESTAMP_FORMAT. For more information, see [Snowflake Format Type Options](https://docs.snowflake.com/en/sql-reference/sql/copy-into-table.html#format-type-options-formattypeoptions). | No |
+>[!Note]
+> Make sure you have permission to execute the following command and access the schema *INFORMATION_SCHEMA* and the table *COLUMNS*.
+>
+>- `SELECT CURRENT_REGION()`
+>- `COPY INTO <table>`
+>- `SHOW REGIONS`
+>- `CREATE OR REPLACE STAGE`
+>- `DROP STAGE`
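As a sketch of how these settings fit into the Copy activity sink (the option values here are illustrative placeholders, not values from this article):

```json
"sink": {
    "type": "SnowflakeSink",
    "importSettings": {
        "type": "SnowflakeImportCopyCommand",
        "additionalCopyOptions": {
            "ON_ERROR": "SKIP_FILE",
            "FORCE": "TRUE"
        },
        "additionalFormatOptions": {
            "TIME_FORMAT": "'HH24:MI:SS.FF'"
        }
    }
}
```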
+ #### Direct copy to Snowflake If your source data store and format meet the criteria described in this section, you can use the Copy activity to directly copy from source to Snowflake. The service checks the settings and fails the Copy activity run if the following criteria aren't met:
data-factory Parameterize Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/parameterize-linked-services.md
Previously updated : 07/25/2022 Last updated : 08/09/2022
All the linked service types are supported for parameterization.
- SAP ODP
- SFTP
- SharePoint Online List
+- Snowflake
- SQL Server

**Advanced authoring:** For other linked service types that aren't in the above list, you can parameterize the linked service by editing the JSON on the UI:
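As a minimal sketch of such a parameterized linked service (the linked service name, parameter name, and connection string below are hypothetical; `@{linkedService().dbName}` is the expression syntax for referencing a linked service parameter):

```json
{
    "name": "ParameterizedSqlServer",
    "properties": {
        "type": "SqlServer",
        "parameters": {
            "dbName": {
                "type": "string"
            }
        },
        "typeProperties": {
            "connectionString": "Server=myServer;Database=@{linkedService().dbName};Integrated Security=True"
        }
    }
}
```

A dataset that uses this linked service then supplies a value for `dbName` at runtime.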
data-factory Pricing Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-concepts.md
Title: Understanding Azure Data Factory pricing through examples description: This article explains and demonstrates the Azure Data Factory pricing model with detailed examples--++ Previously updated : 09/07/2021 Last updated : 08/09/2022 # Understanding Data Factory pricing through examples
To accomplish the scenario, you need to create a pipeline with the following ite
- Pipeline Orchestration & Execution = **$0.17007**
 - Activity Runs = 0.001\*4 = $0.004 [1 run = $1/1000 = 0.001]
 - Data Movement Activities = $0.166 (Prorated for 10 minutes of execution time. $0.25/hour on Azure Integration Runtime)
- - Pipeline Activity = $0.00003 (Prorated for 1 minute of execution time. $0.002/hour on Azure Integration Runtime)
+ - Pipeline Activity = $0.00003 (Prorated for 1 minute of execution time. $0.005/hour on Azure Integration Runtime)
 - External Pipeline Activity = $0.000041 (Prorated for 10 minutes of execution time. $0.00025/hour on Azure Integration Runtime)

## Run SSIS packages on Azure-SSIS integration runtime
data-factory Transform Data Using Databricks Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-databricks-notebook.md
The pipeline in this sample triggers a Databricks Notebook activity and passes a
- Trigger a pipeline run.
- - Monitor the pipeline run.
+ - Monitor the pipeline run.
databox-online Azure Stack Edge Add Hardware Terms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-add-hardware-terms.md
+
+ Title: Azure Stack Edge hardware additional terms | Microsoft Docs
+description: Describes additional terms for Azure Stack Edge hardware.
+++++ Last updated : 08/10/2022++
+# Azure Stack Edge hardware additional terms
++
+This article documents additional terms for Azure Stack Edge hardware.
+
+## Availability of the Azure Stack Edge device
+
+Microsoft isn't obligated to continue to offer the Azure Stack Edge device or any other hardware product in connection with the Service. The Azure Stack Edge device may not be offered in all regions or jurisdictions, and even where it is offered, it may be subject to availability. Microsoft reserves the right to refuse to offer the Azure Stack Edge device to anyone in its sole discretion and judgment.
+
+## Use of the Azure Stack Edge device
+As part of the Service, Microsoft allows Customer to use the Azure Stack Edge device for as long as Customer has an active subscription to the Service. If Customer no longer has an active subscription and fails to return the Azure Stack Edge device, Microsoft may deem the Azure Stack Edge device as lost as described in the "Title and risk of loss; shipment and return responsibilities" section.
+
+## Title and risk of loss; shipment and return responsibilities
+
+### Title and risk of loss
+
+All right, title and interest in each Azure Stack Edge device is and shall remain the property of Microsoft, and except as described in these Additional Terms, no rights are granted to any Azure Stack Edge device (including under any patent, copyright, trade secret, trademark or other proprietary rights). Customer will compensate Microsoft for any loss, damage or destruction to or of any Azure Stack Edge device once it has been delivered by the carrier to Customer's designated address until the Microsoft-designated carrier accepts the Azure Stack Edge device for return delivery, including while it is at any of Customer's locations (other than expected wear and tear that doesn't compromise the structure or functionality) or such circumstances as described in the "Responsibilities if a government customer moves an Azure Stack Edge device between customer's locations" section. Customer is responsible for inspecting the Azure Stack Edge device upon receipt from the carrier and for promptly reporting any damage to Microsoft Support at [adbeops@microsoft.com](mailto:adbeops@microsoft.com).
+
+If Customer prefers to arrange Customer's own pick-up and/or return of the Azure Stack Edge device pursuant to the "Shipment and return of Azure Stack Edge device" section below, Customer is responsible for the entire risk of loss of, or any damage to the Azure Stack Edge device until it has been returned to and accepted by Microsoft.
+Microsoft may charge Customer a lost device fee for the Azure Stack Edge device (or equivalent) as described on the pricing pages for the specific Azure Stack Edge device models under the **FAQ** section at https://azure.microsoft.com/pricing/details/azure-stack/edge/ for the following reasons: (i) the Azure Stack Edge device is lost or materially damaged while it is Customer's responsibility as described above, (ii) Customer does not provide the Azure Stack Edge device to the Microsoft-designated carrier for return or return the Azure Stack Edge device pursuant to the "Shipment and return of Azure Stack Edge device" section below within 30 days from the end of Customer's use of the Service. Microsoft reserves the right to change the fee charged for lost or damaged devices, including charging different amounts for different device form factors.
+
+### Shipment and return of Azure Stack Edge device
+
+Customer will be responsible for a one-time metered shipping fee for the shipment of the Azure Stack Edge device from Microsoft to Customer and return shipping of the same, in addition to any metered amounts for carrier charges, any taxes, or applicable customs fees. When returning an Azure Stack Edge device to Microsoft, Customer will package and ship the same in accordance with Microsoft's instructions, including using a carrier designated by Microsoft and the packaging materials provided by Microsoft. If Customer prefers to arrange Customer's own pick-up and/or return of the same, then Customer is responsible for the costs of shipping the Azure Stack Edge device, including protections against any loss or damage of the Azure Stack Edge device (for example, insurance coverage) while in transit. Customer will package and ship the Azure Stack Edge device in accordance with Microsoft's packaging instructions. Customer is also responsible to ensure that it removes all Customer's data from the Azure Stack Edge device prior to returning it to Microsoft, including following any Microsoft-issued processes for wiping or clearing the Azure Stack Edge device.
+
+### Responsibilities if a government customer moves an Azure Stack Edge device between customer's locations
+
+Government Customer agrees to comply with and be responsible for all applicable import, export and general trade laws and regulations should Customer decide to transport the Azure Stack Edge device beyond the country border in which Customer receives the Azure Stack Edge device. For clarity, but not limited to, if a government Customer is in possession of an Azure Stack Edge device, only the government Customer may, at government Customer's sole risk and expense, transport the Azure Stack Edge device to its different locations in accordance with this section and the requirements of the Additional Terms. Customer is responsible for obtaining at Customer's own risk and expense any export license, import license and other official authorization for the exportation and importation of the Azure Stack Edge device and Customer's data to any different Customer location. Customer shall also be responsible for customs clearance to any different Customer location, and will bear all duties, taxes, and other official charges payable upon importation as well as all costs and risks of carrying out customs formalities in a timely manner.
+
+If Customer transports the Azure Stack Edge device to a different location, Customer agrees to return the Azure Stack Edge device to the country location where Customer received it initially, prior to shipping the Azure Stack Edge device back to Microsoft. Customer acknowledges that there are inherent risks in shipping data on and in connection with the Azure Stack Edge device, and that Microsoft will have no liability to Customer for any damage, theft, or loss occurring to an Azure Stack Edge device or any data stored on one, including during transit. It's Customer's responsibility to obtain the appropriate support agreement from Microsoft to meet Customer's operating objectives for the Azure Stack Edge device; however, depending on the location to which Customer intends to move the Azure Stack Edge device, Microsoft's ability to provide hardware servicing and support may be delayed, or may not be available.
+
+## Fees
+
+Microsoft will charge Customer specified fees in connection with Customer's use of the Azure Stack Edge device as part of the Service, with [the current schedule of fees for each Azure Stack Edge model](https://azure.microsoft.com/pricing/details/azure-stack/edge/). Customer may use other Azure services in connection with Customer's use of the Service, and Microsoft deems such services as separate services that may be subject to separate metered fees and costs. By way of example only, Azure Storage, Azure Compute, and Azure IoT Hub are separate Azure services, and if used (even in connection with its use of the Service), separate Azure metered fees will apply.
+
+## Next steps
+
+- [Azure Stack Edge](https://azure.microsoft.com/products/azure-stack/edge/#overview).
+- [Azure Stack Edge pricing](https://azure.microsoft.com/pricing/details/azure-stack/edge/).
databox-online Azure Stack Edge Gpu 2008 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2008-release-notes.md
The following table provides a summary of known issues for the Azure Stack Edge
| No. | Feature | Issue | Workaround/comments | | | | | |
-| **1.** |Azure Stack Edge Pro + Azure SQL | Creating SQL database requires Administrator access. |Do the following steps instead of Steps 1-2 in [https://docs.microsoft.com/azure/iot-edge/tutorial-store-data-sql-server#create-the-sql-database](../iot-edge/tutorial-store-data-sql-server.md#create-the-sql-database). <ul><li>In the local UI of your device, enable compute interface. Select **Compute > Port # > Enable for compute > Apply.**</li><li>Download [sqlcmd utility](/sql/tools/sqlcmd-utility) on your client machine. </li><li>Connect to your compute interface IP address (the port that was enabled), adding a ",1401" to the end of the address.</li><li>Final command will look like this: sqlcmd -S {Interface IP},1401 -U SA -P "Strong!Passw0rd".</li>After this, steps 3-4 from the current documentation should be identical. </li></ul> |
+| **1.** |Azure Stack Edge Pro + Azure SQL | Creating SQL database requires Administrator access. |Do the following steps instead of Steps 1-2 in [Tutorial: Store data at the edge with SQL Server databases](../iot-edge/tutorial-store-data-sql-server.md#create-the-sql-database). <ul><li>In the local UI of your device, enable compute interface. Select **Compute > Port # > Enable for compute > Apply.**</li><li>Download [sqlcmd utility](/sql/tools/sqlcmd-utility) on your client machine. </li><li>Connect to your compute interface IP address (the port that was enabled), adding a ",1401" to the end of the address.</li><li>Final command will look like this: sqlcmd -S {Interface IP},1401 -U SA -P "Strong!Passw0rd".</li>After this, steps 3-4 from the current documentation should be identical. </li></ul> |
| **2.** |Refresh| Incremental changes to blobs restored via **Refresh** are NOT supported |For Blob endpoints, partial updates of blobs after a Refresh, may result in the updates not getting uploaded to the cloud. For example, sequence of actions such as:<ul><li>Create blob in cloud. Or delete a previously uploaded blob from the device.</li><li>Refresh blob from the cloud into the appliance using the refresh functionality.</li><li>Update only a portion of the blob using Azure SDK REST APIs.</li></ul>These actions can result in the updated sections of the blob to not get updated in the cloud. <br>**Workaround**: Use tools such as robocopy, or regular file copy through Explorer or command line, to replace entire blobs.| |**3.**|Throttling|During throttling, if new writes are not allowed into the device, writes done by NFS client fail with "Permission Denied" error.| The error will show as below:<br>`hcsuser@ubuntu-vm:~/nfstest$ mkdir test`<br>mkdir: cannot create directory 'test': Permission denied| |**4.**|Blob Storage ingestion|When using AzCopy version 10 for Blob storage ingestion, run AzCopy with the following argument: `Azcopy <other arguments> --cap-mbps 2000`| If these limits are not provided for AzCopy, then it could potentially send a large number of requests to the device and result in issues with the service.|
The following table provides a summary of known issues for the Azure Stack Edge
## Next steps -- [Prepare to deploy Azure Stack Edge Pro device with GPU](azure-stack-edge-gpu-deploy-prep.md)
+- [Prepare to deploy Azure Stack Edge Pro device with GPU](azure-stack-edge-gpu-deploy-prep.md)
databox-online Azure Stack Edge Gpu 2010 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2010-release-notes.md
The following table provides a summary of known issues for the Azure Stack Edge
| No. | Feature | Issue | Workaround/comments | | | | | | |**1.**|Preview features |For this GA release, the following features: Local Azure Resource Manager, VMs, Kubernetes, Azure Arc-enabled Kubernetes, Multi-Process service (MPS) for GPU - are all available in preview for your Azure Stack Edge Pro device. |These features will be generally available in a later release. |
-| **2.** |Azure Stack Edge Pro + Azure SQL | Creating SQL database requires Administrator access. |Do the following steps instead of Steps 1-2 in [https://docs.microsoft.com/azure/iot-edge/tutorial-store-data-sql-server#create-the-sql-database](../iot-edge/tutorial-store-data-sql-server.md#create-the-sql-database). <ul><li>In the local UI of your device, enable compute interface. Select **Compute > Port # > Enable for compute > Apply.**</li><li>Download [sqlcmd utility](/sql/tools/sqlcmd-utility) on your client machine.</li><li>Connect to your compute interface IP address (the port that was enabled), adding a ",1401" to the end of the address.</li><li>Final command will look like this: sqlcmd -S {Interface IP},1401 -U SA -P "Strong!Passw0rd".</li>After this, steps 3-4 from the current documentation should be identical. </li></ul> |
+| **2.** |Azure Stack Edge Pro + Azure SQL | Creating SQL database requires Administrator access. |Do the following steps instead of Steps 1-2 in [Tutorial: Store data at the edge with SQL Server databases](../iot-edge/tutorial-store-data-sql-server.md#create-the-sql-database). <ul><li>In the local UI of your device, enable compute interface. Select **Compute > Port # > Enable for compute > Apply.**</li><li>Download [sqlcmd utility](/sql/tools/sqlcmd-utility) on your client machine.</li><li>Connect to your compute interface IP address (the port that was enabled), adding a ",1401" to the end of the address.</li><li>Final command will look like this: sqlcmd -S {Interface IP},1401 -U SA -P "Strong!Passw0rd".</li>After this, steps 3-4 from the current documentation should be identical. </li></ul> |
| **3.** |Refresh| Incremental changes to blobs restored via **Refresh** are NOT supported |For Blob endpoints, partial updates of blobs after a Refresh, may result in the updates not getting uploaded to the cloud. For example, sequence of actions such as:<ul><li>Create blob in cloud. Or delete a previously uploaded blob from the device.</li><li>Refresh blob from the cloud into the appliance using the refresh functionality.</li><li>Update only a portion of the blob using Azure SDK REST APIs.</li></ul>These actions can result in the updated sections of the blob to not get updated in the cloud. <br>**Workaround**: Use tools such as robocopy, or regular file copy through Explorer or command line, to replace entire blobs.| |**4.**|Throttling|During throttling, if new writes are not allowed into the device, writes done by NFS client fail with "Permission Denied" error.| The error will show as below:<br>`hcsuser@ubuntu-vm:~/nfstest$ mkdir test`<br>mkdir: cannot create directory 'test': Permission denied| |**5.**|Blob Storage ingestion|When using AzCopy version 10 for Blob storage ingestion, run AzCopy with the following argument: `Azcopy <other arguments> --cap-mbps 2000`| If these limits are not provided for AzCopy, then it could potentially send a large number of requests to the device and result in issues with the service.|
databox-online Azure Stack Edge Gpu Create Virtual Machine Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-create-virtual-machine-image.md
Previously updated : 05/19/2022 Last updated : 08/09/2022 #Customer intent: As an IT admin, I need to understand how to create Azure VM images that I can use to deploy virtual machines on my Azure Stack Edge Pro GPU device.
To deploy VMs on your Azure Stack Edge Pro GPU device, you need to be able to create custom VM images that you can use to create VMs in Azure. This article describes the steps to create custom VM images in Azure for Windows and Linux VMs and download or copy those images to an Azure Storage account.
-There's a required workflow for preparing a custom VM image. For the image source, you need to use a fixed VHD from a Gen1 VM of any size that Azure supports. For VM size options, see [Supported VM sizes](azure-stack-edge-gpu-virtual-machine-sizes.md#supported-vm-sizes).
+There's a required workflow for preparing a custom VM image. For the image source, you need to use a fixed VHD of any size that Azure supports. For VM size options, see [Supported VM sizes](azure-stack-edge-gpu-virtual-machine-sizes.md#supported-vm-sizes).
## Prerequisites
Do the following steps to create a Windows VM image:
The virtual machine can be a Generation 1 or Generation 2 VM. The OS disk that you use to create your VM image must be a fixed-size VHD of any size that Azure supports. For VM size options, see [Supported VM sizes](azure-stack-edge-gpu-virtual-machine-sizes.md#supported-vm-sizes).
- You can use any Windows Gen1 or Gen2 VM with a fixed-size VHD in Azure Marketplace. For a list Azure Marketplace images that could work, see [Commonly used Azure Marketplace images for Azure Stack Edge](azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md#commonly-used-marketplace-images).
+ You can use any Windows Gen1 or Gen2 VM with a fixed-size VHD in Azure Marketplace. For a list of Azure Marketplace images that could work, see [Commonly used Azure Marketplace images for Azure Stack Edge](azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md#commonly-used-marketplace-images).
2. Generalize the virtual machine. To generalize the VM, [connect to the virtual machine](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md#connect-to-a-windows-vm), open a command prompt, and run the following `sysprep` command:
Do the following steps to create a Linux VM image:
### Using RHEL BYOS images
-If using Red Hat Enterprise Linux (RHEL) images, only the Red Hat Enterprise Linux Bring Your Own Subscription (RHEL BYOS) images, also known as the Red Hat gold images, are supported and can be used to create your VM image. The standard pay-as-you-go RHEL images on Azure Marketplace are not supported on Azure Stack Edge.
+If using Red Hat Enterprise Linux (RHEL) images, only the Red Hat Enterprise Linux Bring Your Own Subscription (RHEL BYOS) images, also known as the Red Hat gold images, are supported and can be used to create your VM image. The standard pay-as-you-go RHEL images on Azure Marketplace aren't supported on Azure Stack Edge.
To create a VM image using the RHEL BYOS image, follow these steps:
-1. Log in to [Red Hat Subscription Management](https://access.redhat.com/management). Navigate to the [Cloud Access Dashboard](https://access.redhat.com/management/cloud) from the top menu bar.
+1. Sign in to [Red Hat Subscription Management](https://access.redhat.com/management). Navigate to the [Cloud Access Dashboard](https://access.redhat.com/management/cloud) from the top menu bar.
1. Enable your Azure subscription. See [detailed instructions](https://access.redhat.com/documentation/en-us/red_hat_subscription_management/1/html/red_hat_cloud_access_reference_guide/getting-started-with-ca_cloud-access). Enabling the subscription will allow you to access the Red Hat Gold Images. 1. Accept the Azure terms of use (only once per Azure Subscription, per image) and provision a VM. See [instructions](https://access.redhat.com/documentation/en-us/red_hat_subscription_management/1/html/red_hat_cloud_access_reference_guide/understanding-gold-images_cloud-access).
databox-online Azure Stack Edge Gpu Virtual Machine Sizes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-virtual-machine-sizes.md
Previously updated : 03/29/2022 Last updated : 08/09/2022 #Customer intent: As an IT admin, I need to understand how to create and manage virtual machines (VMs) on my Azure Stack Edge Pro device by using APIs, so that I can efficiently manage my VMs.
databox Data Box Hardware Additional Terms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-hardware-additional-terms.md
+
+ Title: Azure Data Box hardware additional terms
+description: Describes additional terms for Azure Data Box hardware.
+++++ Last updated : 08/09/2022++
+# Azure Data Box hardware additional terms
+
+This article documents additional terms for Azure Data Box hardware.
+
+## Availability of Data Box devices
+
+The Data Box Device may not be offered in all regions or jurisdictions, and even where it is offered, it may be subject to availability. Microsoft is not responsible for delays related to the Service outside of its direct control. Microsoft reserves the right to refuse to offer the Service and corresponding Data Box Device to anyone in its sole discretion and judgment.
+
+## Possession and return of the Data Box device
+
+As part of the Service, Microsoft allows Customer to retain the Data Box Device for limited periods of time which may vary based on the Data Box Device type. If Customer retains the Data Box Device beyond the specified time period, Microsoft may charge Customer additional daily fees as described at https://go.microsoft.com/fwlink/?linkid=2052173.
+
+## Shipment and title; fees
+
+### Title and risk of loss
+
+All right, title and interest in each Data Box Device is and shall remain the property of Microsoft, and except as described in the Additional Terms, no rights are granted to any Data Box Device (including under any patent, copyright, trade secret, trademark or other proprietary rights). Customer will compensate Microsoft for any loss, material damage or destruction to or of any Data Box Device while it is at any of Customer's locations as described in Shipment and Title; Fees, Table 1. Customer is responsible for inspecting the Data Box Device upon receipt from the carrier and for promptly reporting any damage to Microsoft Support at databoxsupport@microsoft.com. Customer is responsible for the entire risk of loss of, or any damage to, the Data Box Device once it has been delivered by the carrier to Customer's designated address until the Microsoft-designated carrier accepts the Data Box Device for delivery back to the Designated Azure Data Center.
+
+### Fees
+
+Microsoft may charge Customer specified fees in connection with its use of the Data Box Device as part of the Service, as described at https://go.microsoft.com/fwlink/?linkid=2052173. For clarity, Azure Storage and Azure IoT Hub are separate Azure Services, and if used (even in connection with its use of the Service), separate Azure metered fees will apply. Additional Azure services Customer uses after completing a transfer of data using the Azure Data Box Service are also subject to separate usage fees. For Data Box Devices, Microsoft may charge Customer a lost device fee, as provided in Table 1 below, if (i) the Data Box Device is lost or materially damaged while it is in Customer's care; and/or (ii) Customer does not provide the Data Box Device to the Microsoft-designated carrier for return within the time period after the date it was delivered to Customer as provided in Table 1 below. Microsoft reserves the right to change the fees charged for Data Box Device types, including charging different amounts for different device form factors.
+
+Table 1:
+
+|Data Box device type | Lost or materially damaged time period and amounts|
+|||
+|Data Box | Period: After 90 days<br> Amount: $40,000.00 USD |
+|Data Box Disk | Period: After 90 days<br> Amount: $2,500.00 USD |
+|Data Box Heavy | Period: After 90 days<br> Amount: $250,000.00 USD |
+|Data Box Gateway | N/A |
+
+### Shipment and return of Data Box device
+
+Microsoft will designate a carrier for shipping and delivery of Data Box Devices that are transported or delivered between Customer and a Designated Azure Data Center or a Microsoft entity. Customer will be responsible for costs of shipping a Data Box Device from Microsoft or a Designated Azure Data Center to Customer and return shipping of the Data Box Device, including any metered amounts for carrier charges, any taxes, or applicable customs fees. When returning a Data Box Device to Microsoft, Customer will package and ship the Data Box Device in accordance with Microsoft's instructions, including using a carrier designated by Microsoft and the packaging materials provided by Microsoft.
+
+### Transit risks
+
+Although data on a Data Box Device is encrypted, Customer acknowledges that there are inherent risks in shipping data on and in connection with the Data Box Device, and that Microsoft will have no liability to Customer for any damage, theft, or loss occurring to a Data Box Device or any data stored on one, including during transit.
+
+### Self-managed shipment
+
+Alternatively, Customer may elect to use Customer's designated carrier or Customer itself to ship and return the Data Box Device by selecting this option in the Service portal. Once selected, (i) Microsoft will inform Customer about Data Box Device availability; (ii) Microsoft will prepare the Data Box Device for pick-up by Customer's designated carrier or Customer itself; and (iii) Customer will coordinate with Microsoft and Designated Azure Data Center personnel for pick-up and return of the Data Box Device by Customer's designated carrier or Customer directly. Customer's election for self-managed shipment is subject to the following: (i) Customer abides by all other applicable terms and conditions related to the Service and Data Box Device, including the Product Terms and the Azure Data Box Hardware Terms; (ii) Customer is responsible for the entire risk of loss of, or any damage to, the Data Box Device (as described in the "Shipment and Title; Fees" section, under subsection (a) "Title and Risk of Loss") from the time that Microsoft makes the Data Box Device available for pick-up by Customer's designated carrier or Customer, to the time Microsoft has accepted the Data Box Device from Customer's designated carrier or Customer at the Designated Azure Data Center; (iii) Customer is fully responsible for the costs of shipping a Data Box Device from Microsoft or a Designated Azure Data Center to Customer and return shipping of the same, including carrier charges, any taxes, or applicable customs fees; (iv) When returning a Data Box Device to Microsoft or a Designated Azure Data Center, Customer will package and ship the Data Box Device in accordance with Microsoft's instructions and any packaging materials provided by Microsoft; (v) Customer will be charged applicable fees (as described in the "Shipment and Title; Fees" section, under subsection (b) "Fees") which commence from the time the Data Box Device is ready for pick-up at the agreed upon time and location, and will cease once the Data Box Device has been delivered to Microsoft or the Designated Azure Data Center; and (vi) Customer acknowledges that there are inherent risks in shipping data on and in connection with the Data Box Device, and that Microsoft will have no liability to Customer for any damage, theft, or loss occurring to a Data Box Device or any data stored on one, including during transit when shipped by Customer's designated carrier.
+
+## Responsibilities if Customer moves a Data Box device between locations
+
+While Customer is in possession of a Data Box Device, Customer may, at its sole risk and expense, transport the Data Box Device to its domestic locations, and international locations as permitted by Microsoft in writing, for use to upload its data in accordance with this section and the requirements of the Additional Terms.
+If Customer wishes to move a Data Box Device to another country, then Customer must be the exporter of record from the country of export and importer of record into the country where the Data Box Device is being imported. Customer is responsible for obtaining, at its own risk and expense, any export license, import license and other official authorization for the exportation and importation of the Data Box Device and Customer's data to any such different Customer location. Customer shall also be responsible for customs clearance at any such different Customer location, and will bear all duties, taxes, fines, penalties (if applicable) and all charges payable for exporting and importing the Data Box Device, as well as any and all costs and risks of carrying out customs formalities in a timely manner. Customer agrees to comply with and be responsible for all applicable import, export and general trade laws and regulations should Customer decide to transport the Data Box Device beyond the country border in which Customer receives the Data Box Device. Additionally, if Customer transports the Data Box Device to a different country, prior to shipping the Data Box Device back to the original point of origin, whether a specified Microsoft entity or a Designated Azure Data Center, Customer agrees to return the Data Box Device to the country location where Customer initially received the Data Box Device. If requested, Microsoft may provide Microsoft's estimated value of the Data Box Device as supplied by Microsoft to Customer and share available product certifications for the Data Box Device.
++
+## Next steps
+
+- [Azure Data Box](data-box-overview.md)
+- [Azure Data Box pricing](https://azure.microsoft.com/pricing/details/databox/)
databox Data Box System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-system-requirements.md
Previously updated : 06/16/2022 Last updated : 08/09/2022 # Azure Data Box system requirements
defender-for-cloud Supported Machines Endpoint Solutions Clouds Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-containers.md
Title: Microsoft Defender for Containers feature availability description: Learn about the availability of Microsoft Defender for Cloud containers features according to OS, machine type, and cloud deployment. Previously updated : 08/01/2022 Last updated : 08/10/2022
The **tabs** below show the features that are available, by environment, for Mic
<sup><a name="footnote1"></a>1</sup> Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters should be supported, but only the specified clusters have been tested.
-<sup><a name="footnote2"></a>2</sup>To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for you should onboard to [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
+<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your environments, you will need to onboard [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
> [!NOTE] > For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
Learn how to [use Azure Private Link to connect networks to Azure Monitor](../az
<sup><a name="footnote1"></a>1</sup> Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters should be supported, but only the specified clusters have been tested.
-<sup><a name="footnote2"></a>2</sup>To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for you should onboard to [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
+<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your environments, you will need to onboard [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
> [!NOTE] > For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
-### Supported host operating systems
-
-Defender for Containers relies on the **Defender extension** for several features. The Defender extension is supported on the following host operating systems:
-
-- Amazon Linux 2
-- CentOS 8
-- Debian 10
-- Debian 11
-- Google Container-Optimized OS
-- Red Hat Enterprise Linux 8
-- Ubuntu 16.04
-- Ubuntu 18.04
-- Ubuntu 20.04
-- Ubuntu 22.04
-
-Ensure your Kubernetes node is running on one of the verified supported operating systems. Clusters with different host operating systems, will only get partial coverage. Check out the [Supported features by environment](#supported-features-by-environment) for more information.
- ### Network restrictions #### Private link
Outbound proxy without authentication and outbound proxy with basic authenticati
<sup><a name="footnote1"></a>1</sup> Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters should be supported, but only the specified clusters have been tested.
-<sup><a name="footnote2"></a>2</sup>To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for you should onboard to [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
+<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your environments, you will need to onboard [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
> [!NOTE] > For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
-### Supported host operating systems
-
-Defender for Containers relies on the **Defender extension** for several features. The Defender extension is supported on the following host operating systems:
-
-- Amazon Linux 2
-- CentOS 8
-- Debian 10
-- Debian 11
-- Google Container-Optimized OS
-- Red Hat Enterprise Linux 8
-- Ubuntu 16.04
-- Ubuntu 18.04
-- Ubuntu 20.04
-- Ubuntu 22.04
-
-Ensure your Kubernetes node is running on one of the verified supported operating systems. Clusters with different host operating systems, will only get partial coverage. Check out the [Supported features by environment](#supported-features-by-environment) for more information.
- ### Network restrictions #### Private link
Outbound proxy without authentication and outbound proxy with basic authenticati
<sup><a name="footnote3"></a>3</sup> VA can detect vulnerabilities for these [language specific packages](#registries-and-images-1).
-<sup><a name="footnote4"></a>4</sup> Runtime protection can detect threats for these [Supported host operating systems](#supported-host-operating-systems-2).
+<sup><a name="footnote4"></a>4</sup> Runtime protection can detect threats for these [Supported host operating systems](#supported-host-operating-systems).
## Additional information
Outbound proxy without authentication and outbound proxy with basic authenticati
<sup><a name="footnote1"></a>1</sup> Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters should be supported, but only the specified clusters have been tested.
-<sup><a name="footnote2"></a>2</sup>To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for you should onboard to [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
+<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your environments, you will need to onboard [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
> [!NOTE] > For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
Defender for Containers relies on the **Defender extension** for several feature
- Debian 10
- Debian 11
- Google Container-Optimized OS
+- Mariner 1.0
+- Mariner 2.0
- Red Hat Enterprise Linux 8
- Ubuntu 16.04
- Ubuntu 18.04
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important changes coming to Microsoft Defender for Cloud description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 07/28/2022 Last updated : 08/10/2022 # Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you'll find them in the [What's
| [Deprecate API App policies for App Service](#deprecate-api-app-policies-for-app-service) | July 2022 | | [Change in pricing of Runtime protection for Arc-enabled Kubernetes clusters](#change-in-pricing-of-runtime-protection-for-arc-enabled-kubernetes-clusters) | August 2022 | | [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | September 2022 |
+| [Removing security alerts for machines reporting to cross tenant Log Analytics workspaces](#removing-security-alerts-for-machines-reporting-to-cross-tenant-log-analytics-workspaces) | September 2022 |
+| [Legacy Assessments APIs deprecation](#legacy-assessments-apis-deprecation) | September 2022 |
### Changes to recommendations for managing endpoint protection solutions
The new release will bring the following capabilities:
|Blocked accounts with owner permissions on Azure resources should be removed|050ac097-3dda-4d24-ab6d-82568e7a50cf| |Blocked accounts with read and write permissions on Azure resources should be removed| 1ff0b4c9-ed56-4de6-be9c-d7ab39645926 |
+### Removing security alerts for machines reporting to cross-tenant Log Analytics workspaces
+
+**Estimated date for change:** September 2022
+
+Defender for Cloud lets you choose the workspace that your Log Analytics agents report to. When a machine belongs to one tenant ("Tenant A") but its Log Analytics agent reports to a workspace in a different tenant ("Tenant B"), security alerts about the machine are reported to the first tenant ("Tenant A").
+
+With this change, alerts on machines connected to a Log Analytics workspace in a different tenant will no longer appear in Defender for Cloud.
+
+If you want to continue receiving the alerts in Defender for Cloud, connect the Log Analytics agent of the relevant machines to the workspace in the same tenant as the machine.
+
+### Legacy Assessments APIs deprecation
+
+The following APIs are set to be deprecated:
+
+- Security Tasks
+- Security Statuses
+- Security Summaries
+
+These three APIs exposed old formats of assessments and will be replaced by the [Assessments APIs](/rest/api/securitycenter/assessments) and [SubAssessments APIs](/rest/api/securitycenter/sub-assessments). All data that is exposed by these legacy APIs will also be available in the new APIs.
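As a sketch of the migration path, the assessment data can be retrieved from the newer Assessments API over REST. Everything below is illustrative: the subscription ID is a placeholder, and the `api-version` value is an assumption that should be verified against the linked API reference.

```shell
# Sketch: build (and optionally call) the newer Assessments API list endpoint.
# The subscription ID is a placeholder; api-version is an assumed value.
subscription="00000000-0000-0000-0000-000000000000"
api_version="2020-01-01"

url="https://management.azure.com/subscriptions/${subscription}/providers/Microsoft.Security/assessments?api-version=${api_version}"
echo "$url"

# With a valid Azure AD bearer token, the list call would look like:
#   curl -s -H "Authorization: Bearer $TOKEN" "$url"
```

The same pattern applies to the SubAssessments API by swapping in its resource path.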
+ ## Next steps For all recent changes to Defender for Cloud, see [What's new in Microsoft Defender for Cloud?](release-notes.md)
defender-for-iot Extra Deploy Enterprise Iot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/extra-deploy-enterprise-iot.md
+
+ Title: Extra deployment steps and samples for Enterprise IoT deployment - Microsoft Defender for IoT
+description: Describes additional deployment and validation procedures to use when deploying an Enterprise IoT network sensor.
+ Last updated : 08/08/2022++
+# Extra steps and samples for Enterprise IoT deployment
+
+This article provides extra steps for deploying an Enterprise IoT sensor, including a sample SPAN port configuration procedure, and CLI steps to validate your deployment or delete a sensor.
+
+For more information, see [Tutorial: Get started with Enterprise IoT monitoring](tutorial-getting-started-eiot-sensor.md).
+
+## Configure a SPAN monitoring interface for a virtual appliance
+
+While a virtual switch doesn't have mirroring capabilities, you can use *Promiscuous mode* in a virtual switch environment as a workaround for configuring a SPAN port.
+
+*Promiscuous mode* is a mode of operation and a security, monitoring, and administration technique that is defined at the virtual switch or portgroup level. When Promiscuous mode is used, any of the virtual machine's network interfaces that are in the same portgroup can view all network traffic that goes through that virtual switch. By default, Promiscuous mode is turned off.
+
+This procedure describes an example of how to configure a SPAN port on your vSwitch with VMware ESXi. Enterprise IoT sensors also support VMs using Microsoft Hyper-V.
+
+**To configure a SPAN monitoring interface**:
+
+1. On your vSwitch, open the vSwitch properties and select **Add** > **Virtual Machine** > **Next**.
+
+1. Enter **SPAN Network** as your network label, and then select **VLAN ID** > **All** > **Next** > **Finish**.
+
+1. Select **SPAN Network** > **Edit** > **Security**, and verify that the **Promiscuous Mode** policy is set to **Accept** mode.
+
+1. Select **OK**, and then select **Close** to close the vSwitch properties.
+
+1. Open the **IoT Sensor VM** properties.
+
+1. For **Network Adapter 2**, select the **SPAN** network.
+
+1. Select **OK**.
+
+1. Connect to the sensor, and verify that mirroring works.
+
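One way to double-check the last step above (verifying that mirroring works) is to look for the `PROMISC` flag on the monitoring interface from inside the sensor VM. This is a sketch only: the interface name `ens192` and the sample `ip link` output are assumptions, not values from the procedure.

```shell
# Sketch: check for the PROMISC flag on the assumed monitoring interface.
# On a real sensor, replace the sample with:  link_info="$(ip link show ens192)"
link_info='4: ens192: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq state UP'

case "$link_info" in
  *PROMISC*) result="promiscuous mode is on" ;;
  *)         result="promiscuous mode is OFF - check the vSwitch security policy" ;;
esac
echo "ens192: $result"

# To watch mirrored packets arriving directly (requires root):
#   sudo tcpdump -i ens192 -c 10
```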
+If you've jumped to this procedure from the tutorial procedure for [Prepare a physical appliance or VM](tutorial-getting-started-eiot-sensor.md#prepare-a-physical-appliance-or-vm), continue with [step 2](tutorial-getting-started-eiot-sensor.md#sign-in) to continue preparing your appliance.
+
+## Validate your Enterprise IoT sensor setup
+
+If, after completing the Enterprise IoT sensor installation and setup, you don't see your sensor showing on the **Sites and sensors** page in the Azure portal, this procedure can help validate your installation directly on the sensor.
+
+Wait 1 minute after your sensor installation has completed before starting this procedure.
+
+**To validate the sensor setup from the sensor**:
+
+1. To check your system sanity, run:
+
+ ```bash
+ sudo docker ps
+ ```
+
+1. In the results that display, ensure that the following containers are up and listed as healthy.
+
+ - `compose_attributes-collector_1`
+ - `compose_cloud-communication_1`
+ - `compose_logstash_1`
+ - `compose_horizon_1`
+ - `compose_statistics-collector_1`
+ - `compose_properties_1`
+
+ For example:
+
+ :::image type="content" source="media/tutorial-get-started-eiot/validate-setup.png" alt-text="Screenshot of the validated containers listed." lightbox="media/tutorial-get-started-eiot/validate-setup.png":::
+
+1. Check your port validation to see which interface is defined to handle port mirroring. Run:
+
+ ```bash
+ sudo docker logs compose_horizon_1
+    ```
+
+ For example, the following response might be displayed: `Found env variable for monitor interfaces: ens192`
+
+1. Wait 5 minutes, and then check your device-to-cloud (D2C) traffic sanity. Run:
+
+ ```bash
+ sudo docker logs -f compose_attributes-collector_1
+ ```
+
+ Check the results to ensure that packets are being sent as expected.
+
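The container check in the validation steps above can also be scripted. This is a minimal sketch under stated assumptions: the sample `docker ps` output below is fabricated for illustration, and on a real sensor you would substitute the actual command shown in the comment.

```shell
# Sketch: verify that the expected Defender for IoT containers report healthy.
# The sample output is an assumption; on a real sensor, replace it with:
#   status="$(sudo docker ps --format '{{.Names}} {{.Status}}')"
status='compose_attributes-collector_1 Up 2 hours (healthy)
compose_cloud-communication_1 Up 2 hours (healthy)
compose_logstash_1 Up 2 hours (healthy)
compose_horizon_1 Up 2 hours (healthy)
compose_statistics-collector_1 Up 2 hours (healthy)
compose_properties_1 Up 2 hours (healthy)'

failed=0
for name in compose_attributes-collector_1 compose_cloud-communication_1 \
            compose_logstash_1 compose_horizon_1 \
            compose_statistics-collector_1 compose_properties_1; do
  if printf '%s\n' "$status" | grep -q "^$name .*(healthy)"; then
    echo "$name: healthy"
  else
    echo "$name: NOT healthy"
    failed=1
  fi
done
[ "$failed" -eq 0 ] && echo "All expected containers are healthy"
```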
+## Remove an Enterprise IoT network sensor (optional)
+
+Remove a sensor if it's no longer in use with Defender for IoT.
+
+**To remove a sensor**, run the following command on the sensor server or VM:
+
+```bash
+sudo apt purge -y microsoft-eiot-sensor
+```
+
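As a follow-up sanity check after the purge command above, you can confirm that the package is gone. The logic below is a sketch: the empty query result is an assumption for illustration, and on a real sensor you would run the `dpkg-query` command shown in the comment.

```shell
# Sketch: confirm the sensor package is no longer installed.
# On a real sensor, replace the sample with:
#   state="$(dpkg-query -W -f='${Status}' microsoft-eiot-sensor 2>/dev/null || true)"
state=""   # empty output means dpkg no longer knows the package

if [ -z "$state" ] || [ "${state#*deinstall}" != "$state" ]; then
  echo "microsoft-eiot-sensor: removed"
else
  echo "microsoft-eiot-sensor: still installed ($state)"
fi
```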
+> [!IMPORTANT]
+> If you want to cancel your plan for Enterprise IoT networks only, do so from [Defender for Endpoint](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration).
+>
+> If you want to cancel your plan for both OT and Enterprise IoT networks together, you can use the [**Pricing**](how-to-manage-subscriptions.md) page in Defender for IoT in the Azure portal.
+>
+
+## Next steps
+
+For more information, see [Tutorial: Get started with Enterprise IoT monitoring](tutorial-getting-started-eiot-sensor.md) and [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md).
+
defender-for-iot How To Manage Cloud Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-cloud-alerts.md
Title: View and manage alerts in the Defender for IoT portal on Azure
-description: View and manage alerts detected by cloud-connected network sensors in the Defender for IoT portal on Azure.
Previously updated : 06/02/2022
+ Title: View and manage alerts in the Microsoft Defender for IoT portal on Azure
+description: View and manage alerts detected by cloud-connected network sensors in the Microsoft Defender for IoT portal on Azure.
Last updated : 06/30/2022 # View and manage alerts from the Azure portal
-This article describes how to manage your alerts from Defender for IoT on the Azure portal.
+This article describes how to manage your alerts from Microsoft Defender for IoT on the Azure portal.
If you're integrating with Microsoft Sentinel, the alert details and entity information are also sent to Microsoft Sentinel, where you can also view them from the **Alerts** page.
The following alert details are displayed by default in the grid:
| **Name** | The alert title. | | **Site** | The site associated with the sensor that detected the alert, as listed on the **Sites and sensors** page. For more information, see [Sensor management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal).| | **Engine** | The sensor engine that detected the Operational Technology (OT) traffic. For more information, see [Detection engines](how-to-control-what-traffic-is-monitored.md#detection-engines). For device builders, the term *micro-agent* is displayed instead. |
-| **Detection time** | The time the alert was detected, for as long as the alert status remains **New**. If an alert is closed and the same traffic is seen again, this alert time is updated to the new time. |
+| **Last detection** | The last time the alert was detected. <br>- If an alert's status is **New**, and the same traffic is seen again, the **Last detection** time is updated for the same alert. <br>- If the alert's status is **Closed** and traffic is seen again, the **Last detection** time is *not* updated, and a new alert is triggered.|
| **Status** | The alert status: *New*, *Active*, *Closed* | | **Source device** | The IP address, MAC, or device name. | | **Tactics** | The MITRE ATT&CK stage. |
-Select **Edit columns** to add other details to the grid, including:
-
-| Column | Description
-|--|--|
-| **Source device address** |The IP address of the source device. |
-| **Destination device address** | The IP address of the destination device. |
-| **Destination device** | The IP address, MAC, or destination device name.|
-| **ID** |The unique alert ID.|
-| **Protocol** | The protocol detected in the network traffic for this alert.|
-| **Sensor** | The sensor that detected the alert.|
-| **Zone** | The zone assigned to the sensor that detected the alert.|
-| **Category**| The category associated with the alert, such as *operational issues*,*custom alerts*, or *illegal commands*. |
-| **Type**| The internal name of the alert. |
+**To view additional information:**
+
+1. Select **Edit columns** from the Alerts page.
+1. In the Edit Columns dialog box, select **Add Column** and choose an item to add. The following items are available:
+
+ | Column | Description
+ |--|--|
+ | **Source device address** |The IP address of the source device. |
+ | **Destination device address** | The IP address of the destination device. |
+ | **Destination device** | The IP address, MAC, or destination device name.|
+ | **First detection** | Defines the first time the alert was detected in the network. |
+ | **ID** |The unique alert ID.|
+   | **Last activity** | Defines the last time the alert was changed, including manual updates for severity or status, or automated changes for device updates or device/alert de-duplication. |
+ | **Protocol** | The protocol detected in the network traffic for this alert.|
+ | **Sensor** | The sensor that detected the alert.|
+ | **Zone** | The zone assigned to the sensor that detected the alert.|
+   | **Category**| The category associated with the alert, such as *operational issues*, *custom alerts*, or *illegal commands*. |
+ | **Type**| The internal name of the alert. |
### Filter alerts displayed
defender-for-iot How To Manage Sensors On The Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-sensors-on-the-cloud.md
Make the downloaded activation file accessible to the sensor console admin so th
:::image type="content" source="media/tutorial-get-started-eiot/successful-registration.png" alt-text="Screenshot of the successful registration of an Enterprise I O T sensor.":::
-1. Copy the command to a safe location, and continue with installing the sensor. For more information, see [Install the sensor](tutorial-getting-started-eiot-sensor.md#install-the-sensor).
+1. Copy the command to a safe location, and continue with installing the sensor. For more information, see [Tutorial: Get started with Enterprise IoT monitoring](tutorial-getting-started-eiot-sensor.md#install-the-sensor-software).
> [!NOTE] > As opposed to OT sensors, where you define your sensor's site, all Enterprise IoT sensors are automatically added to the **Enterprise network** site.
Use the options on the **Sites and sensor** page and a sensor details page to do
|:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-export.png" border="false"::: **Export sensor data** | Available from the **Sites and sensors** toolbar only, to download a CSV file with details about all the sensors listed. | |:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-export.png" border="false"::: **Download an activation file** | Individual, OT sensors only. <br><br>Available from the **...** options menu or a sensor details page. | |:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-edit.png" border="false"::: **Edit a sensor zone** | For individual sensors only, from the **...** options menu or a sensor details page. <br><br>Select **Edit**, and then elect a new zone from the **Zone** menu or select **Create new zone**. Select **Submit** to save your changes. |
-|:::image type="icon" source="medi#install-the-sensor). |
+|:::image type="icon" source="medi#install-the-sensor-software). |
|:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-edit.png" border="false"::: **Edit automatic threat intelligence updates** | Individual, OT sensors only. <br><br>Available from the **...** options menu or a sensor details page. <br><br>Select **Edit** and then toggle the **Automatic Threat Intelligence Updates (Preview)** option on or off as needed. Select **Submit** to save your changes. | |:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-delete.png" border="false"::: **Delete a sensor** | For individual sensors only, from the **...** options menu or a sensor details page. | | **Download SNMP MIB file** | Available from the **Sites and sensors** toolbar **More actions** menu. <br><br>For more information, see [Set up SNMP MIB monitoring](how-to-set-up-snmp-mib-monitoring.md).|
defender-for-iot How To Manage Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-subscriptions.md
Business considerations may require that you apply your existing IoT sensors to
**To switch to a new subscription**:
-1. Onboard a new plan to the new subscription you want to use. For more information, see:
+1. Onboard a new plan to the new subscription you want to use. For more information, see:
[Onboard a plan for OT networks](#onboard-a-defender-for-iot-plan-for-ot-networks) in the Azure portal [Onboard a plan for Enterprise IoT networks](#onboard-a-defender-for-iot-plan-for-enterprise-iot-networks) in Defender for Endpoint
-1. Register your sensors under the new subscription. For more information, see [Set up an Enterprise IoT sensor](tutorial-getting-started-eiot-sensor.md#set-up-an-enterprise-iot-sensor).
+1. Onboard your sensors again under the new subscription. For OT sensors, [upload a new activation](how-to-manage-individual-sensors.md#upload-new-activation-files) file for your sensors.
-1. [Upload a new activation](how-to-manage-individual-sensors.md#upload-new-activation-files) file for your sensors.
-
-1. Delete the sensor identities from the legacy subscription. For more information, see [Sensor management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal)..
+1. Delete the sensor identities from the legacy subscription. For more information, see [Sensor management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal).
1. If relevant, [cancel the Defender for IoT plan](#cancel-a-defender-for-iot-plan-from-a-subscription) from the legacy subscription.
defender-for-iot How To View Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-view-alerts.md
Title: View alerts details on the sensor Alerts page
+ Title: View alerts details on the sensor Alerts page - Microsoft Defender for IoT
description: View alerts detected by your Defender for IoT sensor. Last updated 06/02/2022
Alerts are triggered when sensor engines detect changes or suspicious activity in network traffic that need your attention.
-This article describes how to view alerts triggered by your sensors.
+This article describes how to view alerts triggered by your Microsoft Defender for IoT OT network sensors.
Once an alert is selected, you can view comprehensive details about the alert activity, for example,
The following information is available from the Alerts page:
| **Severity** | The alert severity: Critical, Major, Minor, Warning| | **Name** | The alert title | | **Engine** | The Defender for IoT detection engine that detected the activity and triggered the alert. If the event was detected by the Device Builder platform, the value will be Micro-agent. |
-| **Detection time** | The first time the alert activity was detected. |
+| **Last detection** | The last time the alert activity was detected. |
| **Status** | Indicates if the alert is new or closed. | | **Source Device** | The source device IP address | | **Destination Device** | The destination device IP address |
defender-for-iot Ot Appliance Sizing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-appliance-sizing.md
This article is designed to help you choose the right OT appliances for your sen
You can use both physical or virtual appliances.
-## C5600: IT/OT mixed environments
+## IT/OT mixed environments
Use the following hardware profiles for high bandwidth corporate IT/OT mixed networks:
Use the following hardware profiles for high bandwidth corporate IT/OT mixed net
||||| |C5600 | 3 Gbps | 12 K |Physical / Virtual |
-## E1800, E1000, E500: monitoring at the site level
+## Monitoring at the site level
Use the following hardware profiles for enterprise monitoring at the site level:
Use the following hardware profiles for production line monitoring:
||||| |L500 | 200 Mbps | 1,000 |Physical / Virtual | |L100 | 60 Mbps | 800 | Physical / Virtual |
-|L64 | 10 Mbps | 100 |Physical / Virtual|
+|L60 | 10 Mbps | 100 |Physical / Virtual|
## On-premises management console systems
defender-for-iot Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/overview.md
For more information, see the [Microsoft 365 Defender](/microsoft-365/security/d
For more information, see:
+- [ICS/OT Security video series](https://www.youtube.com/playlist?list=PLmAptfqzxVEXz5txCCKYUdpQETAMpeOhu)
- [OT threat monitoring in enterprise SOCs](concept-sentinel-integration.md) - [Microsoft Defender for IoT architecture](architecture.md) - [Quickstart: Get started with Defender for IoT](getting-started.md)
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
For more information, see the [Microsoft Security Development Lifecycle practice
| Version | Date released | End support date | |--|--|--|
-| 22.2.4 | 07/2022 | 4/2023 |
-| 22.2.3 | 07/2022 | 4/2023 |
-| 22.1.7 | 07/2022 | 4/2023 |
+| 22.2.4 | 07/2022 | 04/2023 |
+| 22.2.3 | 07/2022 | 04/2023 |
+| 22.1.7 | 07/2022 | 04/2023 |
| 22.1.6 | 06/2022 | 10/2023 | | 22.1.5 | 06/2022 | 10/2023 | | 22.1.4 | 04/2022 | 10/2022 |
For more information, see the [Microsoft Security Development Lifecycle practice
## August 2022
+- [New alert columns with timestamp data](#new-alert-columns-with-timestamp-data)
- [Sensor health from the Azure portal (Public preview)](#sensor-health-from-the-azure-portal-public-preview)
+### New alert columns with timestamp data
+
+Starting with OT sensor version 22.2.4, Defender for IoT alerts in the Azure portal and the sensor console now show the following columns and data:
+
+- **Last detection**. Defines the last time the alert was detected in the network, and replaces the **Detection time** column.
+
+- **First detection**. Defines the first time the alert was detected in the network.
+
+- **Last activity**. Defines the last time the alert was changed, including manual updates for severity or status, or automated changes for device updates or device/alert de-duplication.
+
+The **First detection** and **Last activity** columns aren't displayed by default. Add them to your **Alerts** page as needed.
+
+> [!TIP]
+> If you're also a Microsoft Sentinel user, you'll be familiar with similar data from your Log Analytics queries. The new alert columns in Defender for IoT are mapped as follows:
+>
+> - The Defender for IoT **Last detection** time is similar to the Log Analytics **EndTime**
+> - The Defender for IoT **First detection** time is similar to the Log Analytics **StartTime**
+> - The Defender for IoT **Last activity** time is similar to the Log Analytics **TimeGenerated**
+
+For more information, see:
+
+- [View alerts on the Defender for IoT portal](how-to-manage-cloud-alerts.md)
+- [View alerts on your sensor](how-to-view-alerts.md)
+- [OT threat monitoring in enterprise SOCs](concept-sentinel-integration.md)
+ ### Sensor health from the Azure portal (Public preview) For OT sensor versions 22.1.3 and higher, you can use the new sensor health widgets and table column data to monitor sensor health directly from the **Sites and sensors** page on the Azure portal.
Disabling these alerts also disables monitoring of related traffic. Specifically
**Unauthorized Database Operation alert**

Previously, this alert covered DDL and DML alerting and Data Mining reporting. Now:

-- DDL traffic: alerting and monitoring are supported.
+- DDL traffic: alerting and monitoring are supported.
- DML traffic: Monitoring is supported. Alerting isn't supported.

**New Asset Detected alert**
defender-for-iot Tutorial Getting Started Eiot Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-getting-started-eiot-sensor.md
This tutorial describes how to get started with your Enterprise IoT monitoring d
Defender for IoT supports the entire breadth of IoT devices in your environment, including everything from corporate printers and cameras, to purpose-built, proprietary, and unique devices.
-Integrate with [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration) for analytics features that include alerts, vulnerabilities, and recommendations for your enterprise devices.
-
-In this tutorial, you learn how to:
+In this tutorial, you learn about:
> [!div class="checklist"]
-> * Set up a server or Virtual Machine (VM)
-> * Prepare your networking requirements
-> * Set up an Enterprise IoT sensor in the Azure portal
-> * Install the sensor software
-> * Validate your setup
-> * View detected Enterprise IoT devices in the Azure portal
-> * View devices, alerts, vulnerabilities, and recommendations in Defender for Endpoint
+> * Integrating with Microsoft Defender for Endpoint
+> * Prerequisites for Enterprise IoT network monitoring with Defender for IoT
+> * How to prepare a physical appliance or VM as a network sensor
+> * How to onboard an Enterprise IoT sensor and install software
+> * How to view detected Enterprise IoT devices in the Azure portal
+> * How to view devices, alerts, vulnerabilities, and recommendations in Defender for Endpoint
> [!IMPORTANT]
> The **Enterprise IoT network sensor** is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.

## Microsoft Defender for Endpoint integration
-Once you've onboarded a plan and set up your sensor, your device data integrates automatically with Microsoft Defender for Endpoint. Discovered devices appear in both the Defender for IoT and Defender for Endpoint portals. Use this integration to extend security analytics capabilities for your Enterprise IoT devices and providing complete coverage.
+Integrate with [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/) to extend your security analytics capabilities, providing complete coverage across your Enterprise IoT devices. Defender for Endpoint analytics features include alerts, vulnerabilities, and recommendations for your enterprise devices.
-In Defender for Endpoint, you can view discovered IoT devices and related alerts, vulnerabilities, and recommendations. For more information, see:
+After you've onboarded a plan for Enterprise IoT and set up your Enterprise IoT network sensor, your device data integrates automatically with Microsoft Defender for Endpoint.
-- [Microsoft Defender for IoT integration](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration)
-- [Defender for Endpoint device inventory](/microsoft-365/security/defender-endpoint/machines-view-overview)
-- [View and organize the Microsoft Defender for Endpoint Alerts queue](/microsoft-365/security/defender-endpoint/alerts-queue)
-- [Vulnerabilities in my organization](/microsoft-365/security/defender-vulnerability-management/)
-- [Security recommendations](/microsoft-365/security/defender-vulnerability-management/tvm-security-recommendation)
+- Discovered devices appear in both the Defender for IoT and Defender for Endpoint portals.
+- In Defender for Endpoint, view discovered IoT devices and related alerts, vulnerabilities, and recommendations.
+
+For more information, see [Onboard with Microsoft Defender for IoT in Defender for Endpoint](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration).
## Prerequisites
-Before you start, make sure that you have:
+Before starting this tutorial, make sure that you have the following prerequisites.
+
+### Azure subscription prerequisites
-- Added a Defender for IoT plan for Enterprise IoT networks to your Azure subscription from the Microsoft Defender for Endpoint portal.
-To onboard a plan, see [Onboard with Microsoft Defender for IoT](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration).
+- Make sure that you've added a Defender for IoT plan for Enterprise IoT networks to your Azure subscription from Microsoft Defender for Endpoint.
+For more information, see [Onboard with Microsoft Defender for IoT](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration).
-- Required Azure permissions, as listed in [Quickstart: Getting Started with Defender for IoT](getting-started.md#permissions).
+- Make sure that you can access the Azure portal as a **Security admin**, subscription **Contributor**, or subscription **Owner** user. For more information, see [Required permissions](getting-started.md#permissions).
-## Set up a server or Virtual Machine (VM)
+### Physical appliance or VM requirements
-Before you deploy your Enterprise IoT sensor, you'll need to configure your server or VM, and connect a Network Interface Card (NIC) to a switch monitoring (SPAN) port.
+You can use a physical appliance or a virtual machine as your network sensor. In either case, make sure that your machine has the following specifications:
-**To set up a server or VM**:
+| Tier | Requirements |
+|--|--|
+| **Minimum** | To support up to 1 Gbps: <br><br>- 4 CPUs, each with 2.4 GHz or more<br>- 16-GB RAM of DDR4 or better<br>- 250 GB HDD |
+| **Recommended** | To support up to 15 Gbps: <br><br>- 8 CPUs, each with 2.4 GHz or more<br>- 32-GB RAM of DDR4 or better<br>- 500 GB HDD |
-1. Ensure that your resources are set to one of the following specifications:
+Make sure that your physical appliance or VM also has:
- | Tier | Requirements |
- |--|--|
- | **Minimum** | To support up to 1 Gbps: <br><br>- 4 CPUs, each with 2.4 GHz or more<br>- 16-GB RAM of DDR4 or better<br>- 250 GB HDD |
- | **Recommended** | To support up to 15 Gbps: <br><br>- 8 CPUs, each with 2.4 GHz or more<br>- 32-GB RAM of DDR4 or better<br>- 500 GB HDD |
+- [Ubuntu 18.04 Server](https://releases.ubuntu.com/18.04/) operating system. If you don't yet have Ubuntu installed, download the installation files to an external storage, such as a DVD or disk-on-key, and install it on your appliance or VM. For more information, see the Ubuntu [Image Burning Guide](https://help.ubuntu.com/community/BurningIsoHowto).
- Make sure that your server or VM also has:
+- Network adapters, at least one for your switch monitoring (SPAN) port, and one for your management port to access the sensor's user interface
- - Two network adapters
- - Ubuntu 18.04 operating system
+## Prepare a physical appliance or VM
-1. Connect a NIC to a switch as follows:
+This procedure describes how to prepare your physical appliance or VM to install the Enterprise IoT network sensor software.
- - **Physical device** - Connect a monitoring network interface (NIC) to a switch monitoring (SPAN) port.
+**To prepare your appliance**:
- - **VM** - Connect a vNIC to a vSwitch in promiscuous mode.
+1. Connect a network interface (NIC) from your physical appliance or VM to a switch as follows:
- For a VM, run the following command to enable the network adapter in promiscuous mode.
+ - **Physical appliance** - Connect a monitoring NIC directly to a SPAN port with a copper or fiber cable.
- ```bash
- ifconfig <monitoring port> up promisc
- ```
+ - **VM** - Connect a vNIC to a vSwitch, and configure your vSwitch security settings to accept *Promiscuous mode*. For more information, see [Configure a SPAN monitoring interface for a virtual appliance](extra-deploy-enterprise-iot.md#configure-a-span-monitoring-interface-for-a-virtual-appliance).
-1. Validate incoming traffic to the monitoring port. Run:
+1. <a name="sign-in"></a>Sign in to your physical appliance or VM and run the following command to validate incoming traffic to the monitoring port:
```bash
- ifconfig <monitoring interface>
+ ifconfig
```
- If the number of RX packets increases each time, the interface is receiving incoming traffic. Repeat this step for each interface you have.
+ The system displays a list of all monitored interfaces.
-## Prepare networking requirements
+ Identify the interfaces that you want to monitor, which are usually the interfaces with no IP address listed. Interfaces with incoming traffic will show an increasing number of RX packets.
+
+1. For each interface you want to monitor, run the following command to enable Promiscuous mode in the network adapter:
+
+ ```bash
+ ifconfig <monitoring port> up promisc
+ ```
-This procedure describes how to prepare networking requirements on your server or VM to work with Defender for IoT with Enterprise IoT networks.
+ Where `<monitoring port>` is an interface that you want to monitor. Repeat this step for each interface.
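The monitoring steps above can be sketched as a single pass over a list of interfaces. This is a minimal sketch; `MONITOR_IFACES` is a hypothetical placeholder for the interface names you identified with `ifconfig`, and the actual `ifconfig` call is left commented out so you can review before applying:

```bash
# Sketch: enable promiscuous mode on every monitoring interface in one pass.
# MONITOR_IFACES is a hypothetical placeholder; replace with your own names.
MONITOR_IFACES="eth1 eth2"

for iface in $MONITOR_IFACES; do
  echo "enabling promiscuous mode on $iface"
  # sudo ifconfig "$iface" up promisc   # uncomment to apply for real
done
```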
-**To prepare your networking requirements**:
+1. Ensure network connectivity by opening the following ports in your firewall:
-1. Open the following ports in your firewall:
+ | Protocol | Transport | In/Out | Port | Purpose |
+ |--|--|--|--|--|
+ | HTTPS | TCP | In/Out | 443 | Cloud connection |
+ | DNS | TCP/UDP | In/Out | 53 | Address resolution |
- - **HTTPS** - 443 TCP
- - **DNS** - 53 TCP
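As one possible way to open these ports, here's a hedged sketch assuming `ufw` manages the host firewall; the `apply` helper and `DRY_RUN` flag are illustrative additions so the commands print before they run, and you should adapt the rules to whatever firewall your environment actually uses:

```bash
# Hypothetical sketch assuming ufw manages the host firewall.
# DRY_RUN=1 prints each command instead of applying it.
DRY_RUN=1
apply() {
  if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else sudo "$@"; fi
}
apply ufw allow out 443/tcp   # HTTPS - cloud connection
apply ufw allow out 53/tcp    # DNS - address resolution over TCP
apply ufw allow out 53/udp    # DNS - address resolution over UDP
```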
-1. Make sure that your server or VM can access the cloud using HTTP on port 443 to the following Microsoft domains:
+1. Make sure that your physical appliance or VM can access the cloud using HTTP on port 443 to the following Microsoft domains:
- **EventHub**: `*.servicebus.windows.net` - **Storage**: `*.blob.core.windows.net` - **Download Center**: `download.microsoft.com` - **IoT Hub**: `*.azure-devices.net`
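To spot-check that these endpoints are reachable over port 443, a quick sketch follows (it assumes `curl` is installed; `download.microsoft.com` stands in as a concrete host because the wildcard domains can't be probed directly, and you can add hosts from your own environment to the list):

```bash
# Sketch: spot-check outbound HTTPS (port 443) reachability.
unreachable=0
for host in download.microsoft.com; do
  if curl -s -o /dev/null --connect-timeout 5 "https://$host"; then
    echo "$host: reachable"
  else
    echo "$host: NOT reachable"
    unreachable=$((unreachable + 1))
  fi
done
```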
-### (Optional) Download Azure public IP ranges
+ > [!TIP]
+ > You can also download and add the [Azure public IP ranges](https://www.microsoft.com/download/details.aspx?id=56519) so your firewall will allow the Azure domains that are specified above, along with their region.
+ >
+ > The Azure public IP ranges are updated weekly. New ranges appearing in the file will not be used in Azure for at least one week. To use this option, download the new JSON file each week and make the necessary changes at your site to correctly identify services running in Azure.
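After downloading the Azure public IP ranges JSON from the link above, you can extract the address prefixes for a given service tag. This is a sketch only: it assumes `jq` is installed, and `ServiceTags_Public.json` is a hypothetical local file name standing in for the date-stamped file you actually download:

```bash
# Sketch: list address prefixes for a service tag from the downloaded
# Azure IP ranges file. RANGES_FILE is a hypothetical local file name.
RANGES_FILE="ServiceTags_Public.json"
if [ -f "$RANGES_FILE" ]; then
  jq -r '.values[] | select(.name == "EventHub") | .properties.addressPrefixes[]' "$RANGES_FILE"
else
  echo "download the ranges file from the link above first"
fi
```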
-You can also download and add the [Azure public IP ranges](https://www.microsoft.com/download/details.aspx?id=56519) so your firewall will allow the Azure domains that are specified above, along with their region.
+## Register an Enterprise IoT sensor
-> [!Note]
-> The Azure public IP ranges are updated weekly. New ranges appearing in the file will not be used in Azure for at least one week. To use this option, download the new json file every week and perform the necessary changes at your site to correctly identify services running in Azure.
+This procedure describes how to register your Enterprise IoT sensor with Defender for IoT and then install the sensor software on the physical appliance or VM that you're using as your network sensor.
-## Set up an Enterprise IoT sensor
+> [!NOTE]
+> This procedure describes how to install sensor software on a VM using ESXi. Enterprise IoT sensors are also supported using Hyper-V.
+>
-You'll need a Defender for IoT network sensor to discover and continuously monitor Enterprise IoT devices. Defender for IoT sensors use the Enterprise IoT network and endpoint sensors to gain comprehensive visibility.
+**Prerequisites**: Make sure that you've satisfied all of the [prerequisites](#prerequisites) and completed [Prepare a physical appliance or VM](#prepare-a-physical-appliance-or-vm).
-**Prerequisites**: Make sure that you've completed [Set up a server or Virtual Machine (VM)](#set-up-a-server-or-virtual-machine-vm) and [Prepare networking requirements](#prepare-networking-requirements), including verifying that you have the listed required resources.
+**To register your Enterprise IoT sensor**:
-**To set up an Enterprise IoT sensor**:
-
-1. In the Azure portal, go to **Defender for IoT** > **Getting started**.
-
-1. Select **Set up Enterprise IoT Security**.
+1. In the Azure portal, go to **Defender for IoT** > **Getting started** > **Set up Enterprise IoT Security**.
:::image type="content" source="media/tutorial-get-started-eiot/onboard-sensor.png" alt-text="Screenshot of the Getting started page for Enterprise IoT security.":::
-1. In the **Sensor name** field, enter a meaningful name for your sensor.
+1. On the **Set up Enterprise IoT Security** page, enter the following details, and then select **Register**:
-1. From the **Subscription** drop-down menu, select the subscription where you want to add your sensor.
+ - In the **Sensor name** field, enter a meaningful name for your sensor.
+ - From the **Subscription** drop-down menu, select the subscription where you want to add your sensor.
-1. Select **Register**. A **Sensor registration successful** screen shows your next steps and the command you'll need to start the sensor installation.
+ A **Sensor registration successful** screen shows your next steps and the command you'll need to start the sensor installation.
For example:

:::image type="content" source="media/tutorial-get-started-eiot/successful-registration.png" alt-text="Screenshot of the successful registration of an Enterprise IoT sensor.":::
-1. Copy the command to a safe location, and continue by [installing the sensor](#install-the-sensor) software.
--
-## Install the sensor
+1. Copy the command to a safe location, where you'll be able to copy it to your physical appliance or VM in order to [install the sensor](#install-the-sensor-software).
-Run the command that you received and saved when you registered the Enterprise IoT sensor. The installation process checks to see if the required Docker version is already installed. If it's not, the sensor installation also installs the latest Docker version.
-**To install the sensor**:
+## Install the sensor software
-1. Sign in to the sensor's CLI using a terminal, such as PuTTY, or MobaXterm.
+Run the command that you received and saved when you registered the Enterprise IoT sensor.
-1. Run the command that you saved from [setting up an Enterprise IoT sensor](#set-up-an-enterprise-iot-sensor).
+The installation process checks to see if the required Docker version is already installed. If it's not, the sensor installation also installs the latest Docker version.
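If you want to check ahead of time whether Docker is already present on the appliance, a minimal informational sketch follows; the installer performs this check itself, so this step is optional:

```bash
# Sketch: report whether Docker is already installed on this machine.
if command -v docker >/dev/null 2>&1; then
  docker_present="yes"
  docker --version
else
  docker_present="no"
  echo "Docker not found; the sensor installer will install it"
fi
```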
- The installation wizard appears when the command process completes:
+<a name="install"></a>**To install the sensor**:
- - In the **What is the name of the monitored interface?** screen, use the SPACEBAR to select the interfaces you want to monitor with your sensor, and then select OK.
+1. On your physical appliance or VM, sign in to the sensor's CLI using a terminal, such as PuTTY or MobaXterm.
- - In the **Set up proxy server** screen, select whether to set up a proxy server for your sensor (**Yes** / **No**).
+1. Run the command that you saved from the Azure portal. For example:
- (Optional) If you're setting up a proxy server, define the following values, selecting **Ok** after each option:
+ :::image type="content" source="media/tutorial-get-started-eiot/enter-command.png" alt-text="Screenshot of running the command to install the Enterprise IoT sensor monitoring software.":::
- - Proxy server host
- - Proxy server port
- - Proxy server username
- - Proxy server password
+ The command process checks to see if the required Docker version is already installed. If it's not, the sensor installation also installs the latest Docker version.
-The installation process completes.
+ When the command process completes, the Ubuntu **Configure microsoft-eiot-sensor** wizard appears. In this wizard, use the up or down arrows to navigate, and the SPACE bar to select an option. Press ENTER to advance to the next screen.
-## Validate your setup
+1. In the **Configure microsoft-eiot-sensor** wizard, in the **What is the name of the monitored interface?** screen, select one or more interfaces that you want to monitor with your sensor, and then select **OK**.
-Wait 1 minute after your sensor installation has completed before starting to validate your sensor setup.
+ For example:
-**To validate the sensor setup**:
+ :::image type="content" source="media/tutorial-get-started-eiot/install-monitored-interface.png" alt-text="Screenshot of the Configuring microsoft-eiot-sensor screen.":::
-1. To process your system sanity, run:
+1. In the **Set up proxy server?** screen, select whether to set up a proxy server for your sensor. For example:
- ```bash
- sudo docker ps
- ```
+ :::image type="content" source="media/tutorial-get-started-eiot/proxy.png" alt-text="Screenshot of the Set up a proxy server? screen.":::
-1. In the results that display, ensure that the following containers are up:
+ If you're setting up a proxy server, select **Yes**, and then define the proxy server host, port, username, and password, selecting **Ok** after each option.
- - `compose_statistics-collector_1`
- - `compose_cloud-communication_1`
- - `compose_horizon_1`
- - `compose_attributes-collector_1`
- - `compose_properties_1`
+ The installation takes a few minutes to complete.
-1. Check your port validation to see which interface is defined to handle port mirroring. Run:
-
- ```bash
- sudo docker logs compose_horizon_1
- ````
+1. In the Azure portal, check that the **Sites and sensors** page now lists your new sensor.
For example:
- :::image type="content" source="media/tutorial-get-started-eiot/defined-interface.png" alt-text="Screenshot of a result showing an interface defined to handle port monitoring.":::
-
-1. Wait 5 minutes and then check your traffic D2C sanity. Run:
+ :::image type="content" source="media/tutorial-get-started-eiot/view-sensor-listed.png" alt-text="Screenshot of your new Enterprise IoT sensor listed in the Sites and sensors page.":::
- ```bash
- sudo docker logs -f compose_attributes-collector_1
- ```
+In the **Sites and sensors** page, Enterprise IoT sensors are all automatically added to the same site, named **Enterprise network**. For more information, see [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md).
- Check your results to ensure that packets are being sent to the Event Hubs.
+> [!TIP]
+> If you don't see your Enterprise IoT data in Defender for IoT as expected, make sure that you're viewing the Azure portal with the correct subscriptions selected. For more information, see [Manage Azure portal settings](/azure/azure-portal/set-preferences).
+>
+> If you still don't view your data as expected, [validate your sensor setup](extra-deploy-enterprise-iot.md#validate-your-enterprise-iot-sensor-setup) from the CLI.
## View detected Enterprise IoT devices in Azure

Once you've validated your setup, the **Device inventory** page will start to populate with all of your devices after 15 minutes.

-- View your devices and network information in the Defender for IoT **Device inventory** page on the Azure portal.
+View your devices and network information in the Defender for IoT **Device inventory** page on the Azure portal.
- To view your device inventory, go to **Defender for IoT** > **Device inventory**.
+For more information, see [Manage your IoT devices with the device inventory for organizations](how-to-manage-device-inventory-for-organizations.md).
-- You can also view your sensors from the **Sites and sensors** page. Enterprise IoT sensors are all automatically added to the same site, named **Enterprise network**.
+## Delete an Enterprise IoT network sensor (optional)
-For more information, see:
+Remove a sensor if it's no longer in use with Defender for IoT.
-- [Manage your IoT devices with the device inventory for organizations](how-to-manage-device-inventory-for-organizations.md)
-- [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md)
+1. From the **Sites and sensors** page on the Azure portal, locate your sensor in the grid.
+1. In the row for your sensor, select the **...** options menu on the right > **Delete sensor**.
-> [!TIP]
-> If you don't see your Enterprise IoT data in Defender for IoT as expected, make sure that you're viewing the Azure portal with the correct subscriptions selected. For more information, see [Manage Azure portal settings](/azure/azure-portal/set-preferences).
-
-## Remove an Enterprise IoT network sensor (optional)
+Alternatively, remove your sensor manually from the CLI. For more information, see [Extra steps and samples for Enterprise IoT deployment](extra-deploy-enterprise-iot.md#remove-an-enterprise-iot-network-sensor-optional).
-Remove a sensor if it's no longer in use with Defender for IoT.
+> [!IMPORTANT]
+> If you want to cancel your plan for Enterprise IoT networks only, do so from [Defender for Endpoint](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration).
+>
+> If you want to cancel your plan for both OT and Enterprise IoT networks together, you can use the [**Pricing**](how-to-manage-subscriptions.md) page in Defender for IoT in the Azure portal.
+>
-**To remove a sensor**, run the following command on the sensor server or VM:
+For more information, see [Sensor management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal).
-```bash
-sudo apt purge -y microsoft-eiot-sensor
-```
## Next steps
-For more information, see:
+Continue viewing device data in both the Azure portal and Defender for Endpoint, depending on your organization's needs.
-- [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md)
-- [Threat intelligence research and packages](how-to-work-with-threat-intelligence-packages.md)
-- [Manage your IoT devices with the device inventory for organizations](how-to-manage-device-inventory-for-organizations.md)
-- [View and manage alerts on the Defender for IoT portal](how-to-manage-cloud-alerts.md)
-- [Use Azure Monitor workbooks in Microsoft Defender for IoT (Public preview)](workbooks.md)
-- [OT threat monitoring in enterprise SOCs](concept-sentinel-integration.md)
+In Defender for Endpoint, also view alert data, recommendations, and vulnerabilities related to your network traffic.
+
+For more information in Defender for Endpoint documentation, see:
+
+- [Onboard with Microsoft Defender for IoT in Defender for Endpoint](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration)
+- [Defender for Endpoint device inventory](/microsoft-365/security/defender-endpoint/machines-view-overview)
+- [Alerts in Defender for Endpoint](/microsoft-365/security/defender-endpoint/alerts-queue)
+- [Security recommendations in Defender for Endpoint](/microsoft-365/security/defender-vulnerability-management/tvm-security-recommendation)
+- [Defender for Endpoint: Vulnerabilities in my organization](/microsoft-365/security/defender-vulnerability-management/)
devops-project Azure Devops Project Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devops-project/azure-devops-project-sql-database.md
To learn more about the CI/CD pipeline, see:
## Videos
-> [!VIDEO https://docs.microsoft.com/Events/Build/2018/BRK3308/player]
+> [!VIDEO https://docs.microsoft.com/Events/Build/2018/BRK3308/player]
digital-twins Concepts Azure Digital Twins Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-azure-digital-twins-explorer.md
Azure Digital Twins Explorer is an open-source tool that welcomes contributions
To view the source code for the tool and read detailed instructions on how to contribute to the code, visit its GitHub repository: [digital-twins-explorer](https://github.com/Azure-Samples/digital-twins-explorer).
-To view instructions for contributing to this documentation, visit the [Microsoft Docs contributor guide](/contribute/).
+To view instructions for contributing to this documentation, visit the [Microsoft contributor guide](/contribute/).
## Other considerations
Azure Digital Twins Explorer is a free tool for interacting with the Azure Digit
## Next steps
-Learn how to use Azure Digital Twins Explorer's features in detail: [Use Azure Digital Twins Explorer](how-to-use-azure-digital-twins-explorer.md).
+Learn how to use Azure Digital Twins Explorer's features in detail: [Use Azure Digital Twins Explorer](how-to-use-azure-digital-twins-explorer.md).
digital-twins How To Manage Twin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-manage-twin.md
description: See how to retrieve, update, and delete individual twins and relationships. Previously updated : 02/23/2022 Last updated : 08/10/2022
The patch for this situation needs to update both the model and the twin's tempe
### Update a property's sourceTime
-You may optionally decide to use the `sourceTime` field on twin properties to record timestamps for when property updates are observed in the real world. Azure Digital Twins natively supports `sourceTime` in the metadata for each twin property. For more information about this field and other fields on digital twins, see [Digital twin JSON format](concepts-twins-graph.md#digital-twin-json-format).
+You may optionally decide to use the `sourceTime` field on twin properties to record timestamps for when property updates are observed in the real world. Azure Digital Twins natively supports `sourceTime` in the metadata for each twin property. The `sourceTime` value must comply with the ISO 8601 date and time format. For more information about this field and other fields on digital twins, see [Digital twin JSON format](concepts-twins-graph.md#digital-twin-json-format).
The minimum stable REST API version to support this field is the [2022-05-31](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/digitaltwins/data-plane/Microsoft.DigitalTwins/stable/2022-05-31) version. To work with this field using the [Azure Digital Twins SDKs](concepts-apis-sdks.md), we recommend using the latest version of the SDK to make sure this field is included.
-The `sourceTime` value must comply to ISO 8601 date and time format.
- Here's an example of a JSON Patch document that updates both the value and the `sourceTime` field of a `Temperature` property: :::code language="json" source="~/digital-twins-docs-samples/models/patch-sourcetime.json":::
->[!TIP]
->To update the `sourceTime` field on a property that's part of a component, include the component at the start of the path. In the previous example, this would mean changing the path from `/$metadata/Temperature/sourceTime` to `myComponent/$metadata/Temperature/sourceTime`.
+To update the `sourceTime` field on a property that's part of a component, include the component at the start of the path. In the example above, this would mean changing the path value from `/$metadata/Temperature/sourceTime` to `myComponent/$metadata/Temperature/sourceTime`.
+
+>[!NOTE]
+> If you update both the `sourceTime` and value on a property, and then later update only the property's value, the `sourceTime` timestamp from the first update will remain.
+ ### Handle conflicting update calls
expressroute Expressroute Howto Linkvnet Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-linkvnet-arm.md
With FastPath and UDR, you can configure a UDR on the GatewaySubnet to direct Ex
> The previews for virtual network peering and user defined routes (UDRs) are offered together. You cannot enable only one scenario. >
-To enroll in these previews, run the following Azure PowerShell command in the target Azure subscription:
-
-```azurepowershell-interactive
-Register-AzProviderFeature -FeatureName ExpressRouteVnetPeeringGatewayBypass -ProviderNamespace Microsoft.Network
-```
-
+To enroll in these previews, send an email to exrpm@microsoft.com and include the following information:
+* Subscription ID
+* Service key of the target ExpressRoute circuit
+* Name and Resource Group/ARM resource ID of the target virtual network(s)
### FastPath and Private Link for 10 Gbps ExpressRoute Direct With FastPath and Private Link, Private Link traffic sent over ExpressRoute bypasses the ExpressRoute virtual network gateway in the data path. This preview supports connections associated to 10 Gbps ExpressRoute Direct circuits. This preview doesn't support ExpressRoute circuits managed by an ExpressRoute partner.
fxt-edge-filer Additional Doc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/fxt-edge-filer/additional-doc.md
# Additional documentation for Azure FXT Edge Filer
-Some resources outside of this docs.microsoft.com website might help you understand and work with your Microsoft Azure FXT Edge Filer hybrid cache.
+Other non-Microsoft resources might help you understand and work with your Microsoft Azure FXT Edge Filer hybrid cache.
## Avere vFXT for Azure cache documentation
Specifically, these documents might have helpful details:
* [Dashboard Guide](https://azure.github.io/Avere/legacy/dashboard/4_7/html/ops_dashboard_index.html) - Explains the features of the control panel **Dashboard** tab
-* [FXT Cluster Creation Guide](https://azure.github.io/Avere/legacy/create_cluster/4_8/html/create_index.html) - Cluster creation guide from previous products
+* [FXT Cluster Creation Guide](https://azure.github.io/Avere/legacy/create_cluster/4_8/html/create_index.html) - Cluster creation guide from previous products
governance Supported Tables Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/reference/supported-tables-resources.md
Title: Supported Azure Resource Manager resource types description: Provide a list of the Azure Resource Manager resource types supported by Azure Resource Graph and Change History. Previously updated : 08/09/2022 Last updated : 08/11/2022
For sample queries for this table, see [Resource Graph sample queries for adviso
- microsoft.desktopvirtualization/hostpools/sessionhosts
+## edgeorderresources
+
+- microsoft.edgeorder/orders
+ ## extendedlocationresources For sample queries for this table, see [Resource Graph sample queries for extendedlocationresources](../samples/samples-by-table.md#extendedlocationresources).
For sample queries for this table, see [Resource Graph sample queries for kubern
- microsoft.maintenance/maintenanceconfigurations/applyupdates - microsoft.maintenance/updates
+## networkresources
+
+- microsoft.network/networkgroupmemberships
+- microsoft.network/networkmanagerconnections
+- microsoft.network/networkmanagers/connectivityconfigurations
+- microsoft.network/networkmanagers/networkgroups
+- microsoft.network/networkmanagers/networkgroups/staticmembers
+- microsoft.network/securityadminconfigurations
+- microsoft.network/securityadminconfigurations/rulecollections
+- microsoft.network/securityadminconfigurations/rulecollections/rules
+ ## patchassessmentresources For sample queries for this table, see [Resource Graph sample queries for patchassessmentresources](../samples/samples-by-table.md#patchassessmentresources).
For sample queries for this table, see [Resource Graph sample queries for policy
- microsoft.dataprotection/backupvaults/backupjobs
- microsoft.dataprotection/backupvaults/backuppolicies
- microsoft.recoveryservices/vaults/alerts
-- Microsoft.RecoveryServices/vaults/backupFabrics/protectionContainers/protectedItems (Backup Items)
+- microsoft.recoveryservices/vaults/backupFabrics/protectionContainers/protectedItems (Backup Items)
- microsoft.recoveryservices/vaults/backupjobs
- microsoft.recoveryservices/vaults/backuppolicies
For sample queries for this table, see [Resource Graph sample queries for policy
- microsoft.resources/changes
+## resourcecontainerchanges
+
+- microsoft.resources/changes
+
## resourcecontainers

For sample queries for this table, see [Resource Graph sample queries for resourcecontainers](../samples/samples-by-table.md#resourcecontainers).
For sample queries for this table, see [Resource Graph sample queries for resour
- Sample query: [List all subscriptions under a specified management group](../samples/samples-by-category.md#list-all-subscriptions-under-a-specified-management-group)
- Sample query: [Remove columns from results](../samples/samples-by-category.md#remove-columns-from-results)
- Sample query: [Secure score per management group](../samples/samples-by-category.md#secure-score-per-management-group)
-- Microsoft.Resources/subscriptions/resourceGroups (Resource groups)
+- microsoft.resources/subscriptions/resourceGroups (Resource groups)
- Sample query: [Combine results from two queries into a single result](../samples/samples-by-category.md)
- Sample query: [Find storage accounts with a specific case-insensitive tag on the resource group](../samples/samples-by-category.md#find-storage-accounts-with-a-specific-case-insensitive-tag-on-the-resource-group)
- Sample query: [Find storage accounts with a specific case-sensitive tag on the resource group](../samples/samples-by-category.md#find-storage-accounts-with-a-specific-case-sensitive-tag-on-the-resource-group)
For sample queries for this table, see [Resource Graph sample queries for resour
- livearena.broadcast/services
- mailjet.email/services
- micorosft.web/kubeenvironments
-- Microsoft.AAD/domainServices (Azure AD Domain Services)
+- microsoft.AAD/domainServices (Azure AD Domain Services)
- microsoft.aadiam/azureadmetrics
- microsoft.aadiam/privateLinkForAzureAD (Private Link for Azure AD)
- microsoft.aadiam/tenants
-- Microsoft.AgFoodPlatform/farmBeats (Azure FarmBeats)
+- microsoft.AgFoodPlatform/farmBeats (Azure FarmBeats)
- microsoft.aisupercomputer/accounts
- microsoft.aisupercomputer/accounts/jobgroups
- microsoft.aisupercomputer/accounts/jobgroups/jobs
For sample queries for this table, see [Resource Graph sample queries for resour
- microsoft.alertsmanagement/prometheusrulegroups
- microsoft.alertsmanagement/resourcehealthalertrules
- microsoft.alertsmanagement/smartdetectoralertrules
-- Microsoft.AnalysisServices/servers (Analysis Services)
-- Microsoft.AnyBuild/clusters (AnyBuild clusters)
-- Microsoft.ApiManagement/service (API Management services)
+- microsoft.AnalysisServices/servers (Analysis Services)
+- microsoft.AnyBuild/clusters (AnyBuild clusters)
+- microsoft.ApiManagement/service (API Management services)
- microsoft.app/containerapps
- microsoft.app/managedenvironments
- microsoft.app/managedenvironments/certificates
- microsoft.appassessment/migrateprojects
-- Microsoft.AppConfiguration/configurationStores (App Configuration)
-- Microsoft.AppPlatform/Spring (Azure Spring Cloud)
+- microsoft.AppConfiguration/configurationStores (App Configuration)
+- microsoft.AppPlatform/Spring (Azure Spring Cloud)
- microsoft.archive/collections
-- Microsoft.Attestation/attestationProviders (Attestation providers)
+- microsoft.Attestation/attestationProviders (Attestation providers)
- microsoft.authorization/elevateaccessroleassignment
-- Microsoft.Authorization/resourceManagementPrivateLinks (Resource management private links)
+- microsoft.Authorization/resourceManagementPrivateLinks (Resource management private links)
- microsoft.automanage/accounts
- microsoft.automanage/configurationprofilepreferences
- microsoft.automanage/configurationprofiles
-- Microsoft.Automation/AutomationAccounts (Automation Accounts)
+- microsoft.Automation/AutomationAccounts (Automation Accounts)
- microsoft.automation/automationaccounts/configurations
-- Microsoft.Automation/automationAccounts/runbooks (Runbook)
+- microsoft.Automation/automationAccounts/runbooks (Runbook)
- microsoft.autonomousdevelopmentplatform/accounts
-- Microsoft.AutonomousSystems/workspaces (Bonsai)
-- Microsoft.AVS/privateClouds (AVS Private clouds)
+- microsoft.AutonomousSystems/workspaces (Bonsai)
+- microsoft.AVS/privateClouds (AVS Private clouds)
- microsoft.azconfig/configurationstores
-- Microsoft.AzureActiveDirectory/b2cDirectories (B2C Tenants)
-- Microsoft.AzureActiveDirectory/guestUsages (Guest Usages)
-- Microsoft.AzureArcData/dataControllers (Azure Arc data controllers)
-- Microsoft.AzureArcData/postgresInstances (Azure Arc-enabled PostgreSQL Hyperscale server groups)
-- Microsoft.AzureArcData/sqlManagedInstances (SQL managed instances - Azure Arc)
-- Microsoft.AzureArcData/sqlServerInstances (SQL Server - Azure Arc)
+- microsoft.AzureActiveDirectory/b2cDirectories (B2C Tenants)
+- microsoft.AzureActiveDirectory/guestUsages (Guest Usages)
+- microsoft.AzureArcData/dataControllers (Azure Arc data controllers)
+- microsoft.AzureArcData/postgresInstances (Azure Arc-enabled PostgreSQL Hyperscale server groups)
+- microsoft.AzureArcData/sqlManagedInstances (SQL managed instances - Azure Arc)
+- microsoft.AzureArcData/sqlServerInstances (SQL Server - Azure Arc)
- microsoft.azurecis/autopilotenvironments
- microsoft.azurecis/dstsserviceaccounts
- microsoft.azurecis/dstsserviceclientidentities
For sample queries for this table, see [Resource Graph sample queries for resour
- microsoft.azuredata/sqlinstances
- microsoft.azuredata/sqlmanagedinstances
- microsoft.azuredata/sqlserverinstances
-- Microsoft.AzureData/sqlServerRegistrations (SQL Server registries)
-- Microsoft.AzurePercept/accounts (Azure Percept accounts)
+- microsoft.AzureData/sqlServerRegistrations (SQL Server registries)
+- microsoft.AzurePercept/accounts (Azure Percept accounts)
- microsoft.azuresphere/catalogs
- microsoft.azuresphere/catalogs/products
- microsoft.azuresphere/catalogs/products/devicegroups
- microsoft.azurestack/edgesubscriptions
- microsoft.azurestack/linkedsubscriptions
- microsoft.azurestack/registrations
-- Microsoft.AzureStackHCI/clusters (Azure Stack HCI)
+- microsoft.AzureStackHCI/clusters (Azure Stack HCI)
- microsoft.azurestackhci/galleryimages
- microsoft.azurestackhci/networkinterfaces
- microsoft.azurestackhci/virtualharddisks
-- Microsoft.AzureStackHci/virtualMachines (Azure Stack HCI virtual machine - Azure Arc)
+- microsoft.AzureStackHci/virtualMachines (Azure Stack HCI virtual machine - Azure Arc)
- microsoft.azurestackhci/virtualmachines/extensions
- microsoft.azurestackhci/virtualnetworks
- microsoft.backupsolutions/vmwareapplications
- microsoft.baremetal/consoleconnections
-- Microsoft.BareMetal/crayServers (Cray Servers)
-- Microsoft.BareMetal/monitoringServers (Monitoring Servers)
-- Microsoft.BareMetalInfrastructure/bareMetalInstances (BareMetal Instances)
-- Microsoft.Batch/batchAccounts (Batch accounts)
+- microsoft.BareMetal/crayServers (Cray Servers)
+- microsoft.BareMetal/monitoringServers (Monitoring Servers)
+- microsoft.BareMetalInfrastructure/bareMetalInstances (BareMetal Instances)
+- microsoft.Batch/batchAccounts (Batch accounts)
- microsoft.batchai/clusters
- microsoft.batchai/fileservers
- microsoft.batchai/jobs
- microsoft.batchai/workspaces
-- Microsoft.Bing/accounts (Bing Resources)
+- microsoft.Bing/accounts (Bing Resources)
- microsoft.bingmaps/mapapis
- microsoft.biztalkservices/biztalk
- microsoft.blockchain/blockchainmembers
- microsoft.blockchain/cordamembers
- microsoft.blockchain/watchers
-- Microsoft.BotService/botServices (Bot Services)
-- Microsoft.Cache/Redis (Azure Cache for Redis)
-- Microsoft.Cache/RedisEnterprise (Redis Enterprise)
+- microsoft.BotService/botServices (Bot Services)
+- microsoft.Cache/Redis (Azure Cache for Redis)
+- microsoft.Cache/RedisEnterprise (Redis Enterprise)
- microsoft.cascade/sites
-- Microsoft.Cdn/CdnWebApplicationFirewallPolicies (Content Delivery Network WAF policies)
+- microsoft.Cdn/CdnWebApplicationFirewallPolicies (Content Delivery Network WAF policies)
- microsoft.cdn/profiles (Front Doors Standard/Premium (Preview))
-- Microsoft.Cdn/Profiles/AfdEndpoints (Endpoints)
+- microsoft.Cdn/Profiles/AfdEndpoints (Endpoints)
- microsoft.cdn/profiles/endpoints (Endpoints)
-- Microsoft.CertificateRegistration/certificateOrders (App Service Certificates)
+- microsoft.CertificateRegistration/certificateOrders (App Service Certificates)
- microsoft.chaos/chaosexperiments (Chaos Experiments (Classic))
- microsoft.chaos/experiments (Chaos Experiments)
- microsoft.classicCompute/domainNames (Cloud services (classic))
-- Microsoft.ClassicCompute/VirtualMachines (Virtual machines (classic))
-- Microsoft.ClassicNetwork/networkSecurityGroups (Network security groups (classic))
-- Microsoft.ClassicNetwork/reservedIps (Reserved IP addresses (classic))
-- Microsoft.ClassicNetwork/virtualNetworks (Virtual networks (classic))
-- Microsoft.ClassicStorage/StorageAccounts (Storage accounts (classic))
+- microsoft.ClassicCompute/VirtualMachines (Virtual machines (classic))
+- microsoft.ClassicNetwork/networkSecurityGroups (Network security groups (classic))
+- microsoft.ClassicNetwork/reservedIps (Reserved IP addresses (classic))
+- microsoft.ClassicNetwork/virtualNetworks (Virtual networks (classic))
+- microsoft.ClassicStorage/StorageAccounts (Storage accounts (classic))
- microsoft.cloudes/accounts
- microsoft.cloudsearch/indexes
-- Microsoft.CloudTest/accounts (CloudTest Accounts)
-- Microsoft.CloudTest/hostedpools (1ES Hosted Pools)
-- Microsoft.CloudTest/images (CloudTest Images)
-- Microsoft.CloudTest/pools (CloudTest Pools)
-- Microsoft.ClusterStor/nodes (ClusterStors)
+- microsoft.CloudTest/accounts (CloudTest Accounts)
+- microsoft.CloudTest/hostedpools (1ES Hosted Pools)
+- microsoft.CloudTest/images (CloudTest Images)
+- microsoft.CloudTest/pools (CloudTest Pools)
+- microsoft.ClusterStor/nodes (ClusterStors)
- microsoft.codesigning/codesigningaccounts
- microsoft.codespaces/plans
- microsoft.cognition/syntheticsaccounts
-- Microsoft.CognitiveServices/accounts (Cognitive Services)
-- Microsoft.Compute/availabilitySets (Availability sets)
-- Microsoft.Compute/capacityReservationGroups (Capacity Reservation Groups)
+- microsoft.CognitiveServices/accounts (Cognitive Services)
+- microsoft.Compute/availabilitySets (Availability sets)
+- microsoft.Compute/capacityReservationGroups (Capacity Reservation Groups)
- microsoft.compute/capacityreservationgroups/capacityreservations
- microsoft.compute/capacityreservations
-- Microsoft.Compute/cloudServices (Cloud services (extended support))
-- Microsoft.Compute/diskAccesses (Disk Accesses)
-- Microsoft.Compute/diskEncryptionSets (Disk Encryption Sets)
-- Microsoft.Compute/disks (Disks)
-- Microsoft.Compute/galleries (Azure compute galleries)
-- Microsoft.Compute/galleries/applications (VM application definitions)
-- Microsoft.Compute/galleries/applications/versions (VM application versions)
-- Microsoft.Compute/galleries/images (VM image definitions)
-- Microsoft.Compute/galleries/images/versions (VM image versions)
-- Microsoft.Compute/hostgroups (Host groups)
-- Microsoft.Compute/hostgroups/hosts (Hosts)
-- Microsoft.Compute/images (Images)
-- Microsoft.Compute/ProximityPlacementGroups (Proximity placement groups)
-- Microsoft.Compute/restorePointCollections (Restore Point Collections)
+- microsoft.Compute/cloudServices (Cloud services (extended support))
+- microsoft.Compute/diskAccesses (Disk Accesses)
+- microsoft.Compute/diskEncryptionSets (Disk Encryption Sets)
+- microsoft.Compute/disks (Disks)
+- microsoft.Compute/galleries (Azure compute galleries)
+- microsoft.Compute/galleries/applications (VM application definitions)
+- microsoft.Compute/galleries/applications/versions (VM application versions)
+- microsoft.Compute/galleries/images (VM image definitions)
+- microsoft.Compute/galleries/images/versions (VM image versions)
+- microsoft.Compute/hostgroups (Host groups)
+- microsoft.Compute/hostgroups/hosts (Hosts)
+- microsoft.Compute/images (Images)
+- microsoft.Compute/ProximityPlacementGroups (Proximity placement groups)
+- microsoft.Compute/restorePointCollections (Restore Point Collections)
- microsoft.compute/sharedvmextensions
- microsoft.compute/sharedvmextensions/versions
- microsoft.compute/sharedvmimages
- microsoft.compute/sharedvmimages/versions
-- Microsoft.Compute/snapshots (Snapshots)
-- Microsoft.Compute/sshPublicKeys (SSH keys)
+- microsoft.Compute/snapshots (Snapshots)
+- microsoft.Compute/sshPublicKeys (SSH keys)
- microsoft.compute/swiftlets
-- Microsoft.Compute/VirtualMachines (Virtual machines)
+- microsoft.Compute/VirtualMachines (Virtual machines)
- Sample query: [Count of virtual machines by power state](../samples/samples-by-category.md#count-of-virtual-machines-by-power-state)
- Sample query: [Count virtual machines by OS type](../samples/samples-by-category.md#count-virtual-machines-by-os-type)
- Sample query: [Count virtual machines by OS type with extend](../samples/samples-by-category.md#count-virtual-machines-by-os-type-with-extend)
For sample queries for this table, see [Resource Graph sample queries for resour
- microsoft.compute/virtualmachines/extensions
- Sample query: [List all extensions installed on a virtual machine](../samples/samples-by-category.md#list-all-extensions-installed-on-a-virtual-machine)
- microsoft.compute/virtualmachines/runcommands
-- Microsoft.Compute/virtualMachineScaleSets (Virtual machine scale sets)
+- microsoft.Compute/virtualMachineScaleSets (Virtual machine scale sets)
- Sample query: [Get virtual machine scale set capacity and size](../samples/samples-by-category.md#get-virtual-machine-scale-set-capacity-and-size)
- microsoft.compute/virtualmachinescalesets/virtualmachines/networkinterfaces/ipconfigurations/publicipaddresses
-- Microsoft.ConfidentialLedger/ledgers (Confidential Ledgers)
-- Microsoft.Confluent/organizations (Confluent organizations)
-- Microsoft.ConnectedCache/cacheNodes (Connected Cache Resources)
-- Microsoft.ConnectedCache/enterpriseCustomers (Connected Cache Resources)
-- Microsoft.ConnectedVehicle/platformAccounts (Connected Vehicle Platforms)
+- microsoft.ConfidentialLedger/ledgers (Confidential Ledgers)
+- microsoft.Confluent/organizations (Confluent organizations)
+- microsoft.ConnectedCache/cacheNodes (Connected Cache Resources)
+- microsoft.ConnectedCache/enterpriseCustomers (Connected Cache Resources)
+- microsoft.ConnectedVehicle/platformAccounts (Connected Vehicle Platforms)
- microsoft.connectedvmwarevsphere/clusters
- microsoft.connectedvmwarevsphere/datastores
- microsoft.connectedvmwarevsphere/hosts
- microsoft.connectedvmwarevsphere/resourcepools
-- Microsoft.connectedVMwareVSphere/vCenters (VMware vCenters)
-- Microsoft.ConnectedVMwarevSphere/VirtualMachines (VMware + AVS virtual machines)
+- microsoft.connectedVMwareVSphere/vCenters (VMware vCenters)
+- microsoft.ConnectedVMwarevSphere/VirtualMachines (VMware + AVS virtual machines)
- microsoft.connectedvmwarevsphere/virtualmachines/extensions
- microsoft.connectedvmwarevsphere/virtualmachinetemplates
- microsoft.connectedvmwarevsphere/virtualnetworks
-- Microsoft.ContainerInstance/containerGroups (Container instances)
-- Microsoft.ContainerRegistry/registries (Container registries)
+- microsoft.ContainerInstance/containerGroups (Container instances)
+- microsoft.ContainerRegistry/registries (Container registries)
- microsoft.containerregistry/registries/agentpools
- microsoft.containerregistry/registries/buildtasks
-- Microsoft.ContainerRegistry/registries/replications (Container registry replications)
+- microsoft.ContainerRegistry/registries/replications (Container registry replications)
- microsoft.containerregistry/registries/taskruns
- microsoft.containerregistry/registries/tasks
-- Microsoft.ContainerRegistry/registries/webhooks (Container registry webhooks)
+- microsoft.ContainerRegistry/registries/webhooks (Container registry webhooks)
- microsoft.containerservice/containerservices
-- Microsoft.ContainerService/managedClusters (Kubernetes services)
+- microsoft.ContainerService/managedClusters (Kubernetes services)
- Sample query: [List all ConnectedClusters and ManagedClusters that contain a Flux Configuration](../samples/samples-by-category.md#list-all-connectedclusters-and-managedclusters-that-contain-a-flux-configuration)
- Sample query: [List impacted resources when transferring an Azure subscription](../samples/samples-by-category.md#list-impacted-resources-when-transferring-an-azure-subscription)
- microsoft.containerservice/openshiftmanagedclusters
For sample queries for this table, see [Resource Graph sample queries for resour
- microsoft.costmanagement/connectors
- microsoft.customproviders/resourceproviders
- microsoft.d365customerinsights/instances
-- Microsoft.Dashboard/grafana (Grafana Workspaces)
-- Microsoft.DataBox/jobs (Azure Data Box)
-- Microsoft.DataBoxEdge/dataBoxEdgeDevices (Azure Stack Edge / Data Box Gateway)
-- Microsoft.Databricks/workspaces (Azure Databricks Services)
-- Microsoft.DataCatalog/catalogs (Data Catalog)
+- microsoft.Dashboard/grafana (Grafana Workspaces)
+- microsoft.DataBox/jobs (Azure Data Box)
+- microsoft.DataBoxEdge/dataBoxEdgeDevices (Azure Stack Edge / Data Box Gateway)
+- microsoft.Databricks/workspaces (Azure Databricks Services)
+- microsoft.DataCatalog/catalogs (Data Catalog)
- microsoft.datacatalog/datacatalogs
-- Microsoft.DataCollaboration/workspaces (Project CI)
-- Microsoft.Datadog/monitors (Datadog)
-- Microsoft.DataFactory/dataFactories (Data factories)
-- Microsoft.DataFactory/factories (Data factories (V2))
-- Microsoft.DataLakeAnalytics/accounts (Data Lake Analytics)
-- Microsoft.DataLakeStore/accounts (Data Lake Storage Gen1)
+- microsoft.DataCollaboration/workspaces (Project CI)
+- microsoft.Datadog/monitors (Datadog)
+- microsoft.DataFactory/dataFactories (Data factories)
+- microsoft.DataFactory/factories (Data factories (V2))
+- microsoft.DataLakeAnalytics/accounts (Data Lake Analytics)
+- microsoft.DataLakeStore/accounts (Data Lake Storage Gen1)
- Sample query: [List impacted resources when transferring an Azure subscription](../samples/samples-by-category.md#list-impacted-resources-when-transferring-an-azure-subscription)
- microsoft.datamigration/controllers
-- Microsoft.DataMigration/services (Azure Database Migration Services)
-- Microsoft.DataMigration/services/projects (Azure Database Migration Projects)
+- microsoft.DataMigration/services (Azure Database Migration Services)
+- microsoft.DataMigration/services/projects (Azure Database Migration Projects)
- microsoft.datamigration/slots
- microsoft.datamigration/sqlmigrationservices (Azure Database Migration Services)
-- Microsoft.DataProtection/BackupVaults (Backup vaults)
-- Microsoft.DataProtection/resourceGuards (Resource Guards (Preview))
+- microsoft.DataProtection/BackupVaults (Backup vaults)
+- microsoft.DataProtection/resourceGuards (Resource Guards (Preview))
- microsoft.dataprotection/resourceoperationgatekeepers
- microsoft.datareplication/replicationfabrics
-- Microsoft.DataReplication/replicationVaults (Site Recovery Vaults)
-- Microsoft.DataShare/accounts (Data Shares)
-- Microsoft.DBforMariaDB/servers (Azure Database for MariaDB servers)
-- Microsoft.DBforMySQL/flexibleServers (Azure Database for MySQL flexible servers)
-- Microsoft.DBforMySQL/servers (Azure Database for MySQL servers)
-- Microsoft.DBforPostgreSQL/flexibleServers (Azure Database for PostgreSQL flexible servers)
-- Microsoft.DBforPostgreSQL/serverGroups (Azure Database for PostgreSQL server groups)
-- Microsoft.DBforPostgreSQL/serverGroupsv2 (Azure Database for PostgreSQL server groups)
-- Microsoft.DBforPostgreSQL/servers (Azure Database for PostgreSQL servers)
-- Microsoft.DBforPostgreSQL/serversv2 (Azure Database for PostgreSQL servers v2)
+- microsoft.DataReplication/replicationVaults (Site Recovery Vaults)
+- microsoft.DataShare/accounts (Data Shares)
+- microsoft.DBforMariaDB/servers (Azure Database for MariaDB servers)
+- microsoft.DBforMySQL/flexibleServers (Azure Database for MySQL flexible servers)
+- microsoft.DBforMySQL/servers (Azure Database for MySQL servers)
+- microsoft.DBforPostgreSQL/flexibleServers (Azure Database for PostgreSQL flexible servers)
+- microsoft.DBforPostgreSQL/serverGroups (Azure Database for PostgreSQL server groups)
+- microsoft.DBforPostgreSQL/serverGroupsv2 (Azure Database for PostgreSQL server groups)
+- microsoft.DBforPostgreSQL/servers (Azure Database for PostgreSQL servers)
+- microsoft.DBforPostgreSQL/serversv2 (Azure Database for PostgreSQL servers v2)
- microsoft.dbforpostgresql/singleservers
- microsoft.delegatednetwork/controller
- microsoft.delegatednetwork/delegatedsubnets
- microsoft.delegatednetwork/orchestratorinstances
- microsoft.delegatednetwork/orchestrators
- microsoft.deploymentmanager/artifactsources
-- Microsoft.DeploymentManager/Rollouts (Rollouts)
+- microsoft.DeploymentManager/Rollouts (Rollouts)
- microsoft.deploymentmanager/servicetopologies
- microsoft.deploymentmanager/servicetopologies/services
- microsoft.deploymentmanager/servicetopologies/services/serviceunits
- microsoft.deploymentmanager/steps
-- Microsoft.DesktopVirtualization/ApplicationGroups (Application groups)
-- Microsoft.DesktopVirtualization/HostPools (Host pools)
-- Microsoft.DesktopVirtualization/ScalingPlans (Scaling plans)
-- Microsoft.DesktopVirtualization/Workspaces (Workspaces)
+- microsoft.DesktopVirtualization/ApplicationGroups (Application groups)
+- microsoft.DesktopVirtualization/HostPools (Host pools)
+- microsoft.DesktopVirtualization/ScalingPlans (Scaling plans)
+- microsoft.DesktopVirtualization/Workspaces (Workspaces)
- microsoft.devai/instances
- microsoft.devai/instances/experiments
- microsoft.devai/instances/sandboxes
- microsoft.devai/instances/sandboxes/experiments
- microsoft.devices/elasticpools
- microsoft.devices/elasticpools/iothubtenants
-- Microsoft.Devices/IotHubs (IoT Hub)
-- Microsoft.Devices/ProvisioningServices (Device Provisioning Services)
-- Microsoft.DeviceUpdate/Accounts (Device Update for IoT Hubs)
+- microsoft.Devices/IotHubs (IoT Hub)
+- microsoft.Devices/ProvisioningServices (Device Provisioning Services)
+- microsoft.DeviceUpdate/Accounts (Device Update for IoT Hubs)
- microsoft.deviceupdate/accounts/instances
- microsoft.devops/pipelines (DevOps Starter)
- microsoft.devspaces/controllers
- microsoft.devtestlab/labcenters
-- Microsoft.DevTestLab/labs (DevTest Labs)
+- microsoft.DevTestLab/labs (DevTest Labs)
- microsoft.devtestlab/labs/servicerunners
-- Microsoft.DevTestLab/labs/virtualMachines (Virtual machines)
+- microsoft.DevTestLab/labs/virtualMachines (Virtual machines)
- microsoft.devtestlab/schedules
-- Microsoft.DigitalTwins/digitalTwinsInstances (Azure Digital Twins)
-- Microsoft.DocumentDB/cassandraClusters (Azure Managed Instance for Apache Cassandra)
-- Microsoft.DocumentDb/databaseAccounts (Azure Cosmos DB accounts)
+- microsoft.DigitalTwins/digitalTwinsInstances (Azure Digital Twins)
+- microsoft.DocumentDB/cassandraClusters (Azure Managed Instance for Apache Cassandra)
+- microsoft.DocumentDb/databaseAccounts (Azure Cosmos DB accounts)
- Sample query: [List Azure Cosmos DB with specific write locations](../samples/samples-by-category.md#list-azure-cosmos-db-with-specific-write-locations)
-- Microsoft.DomainRegistration/domains (App Service Domains)
+- microsoft.DomainRegistration/domains (App Service Domains)
- microsoft.dynamics365fraudprotection/instances
-- Microsoft.EdgeOrder/addresses (Azure Edge Hardware Center Address)
+- microsoft.EdgeOrder/addresses (Azure Edge Hardware Center Address)
- microsoft.edgeorder/ordercollections
-- Microsoft.EdgeOrder/orderItems (Azure Edge Hardware Center)
+- microsoft.EdgeOrder/orderItems (Azure Edge Hardware Center)
- microsoft.edgeorder/orders
-- Microsoft.Elastic/monitors (Elasticsearch (Elastic Cloud))
+- microsoft.Elastic/monitors (Elasticsearch (Elastic Cloud))
- microsoft.enterpriseknowledgegraph/services
-- Microsoft.EventGrid/domains (Event Grid Domains)
+- microsoft.EventGrid/domains (Event Grid Domains)
- microsoft.eventgrid/partnerdestinations
-- Microsoft.EventGrid/partnerNamespaces (Event Grid Partner Namespaces)
-- Microsoft.EventGrid/partnerRegistrations (Event Grid Partner Registrations)
-- Microsoft.EventGrid/partnerTopics (Event Grid Partner Topics)
-- Microsoft.EventGrid/systemTopics (Event Grid System Topics)
-- Microsoft.EventGrid/topics (Event Grid Topics)
-- Microsoft.EventHub/clusters (Event Hubs Clusters)
-- Microsoft.EventHub/namespaces (Event Hubs Namespaces)
-- Microsoft.Experimentation/experimentWorkspaces (Experiment Workspaces)
-- Microsoft.ExtendedLocation/CustomLocations (Custom locations)
+- microsoft.EventGrid/partnerNamespaces (Event Grid Partner Namespaces)
+- microsoft.EventGrid/partnerRegistrations (Event Grid Partner Registrations)
+- microsoft.EventGrid/partnerTopics (Event Grid Partner Topics)
+- microsoft.EventGrid/systemTopics (Event Grid System Topics)
+- microsoft.EventGrid/topics (Event Grid Topics)
+- microsoft.EventHub/clusters (Event Hubs Clusters)
+- microsoft.EventHub/namespaces (Event Hubs Namespaces)
+- microsoft.Experimentation/experimentWorkspaces (Experiment Workspaces)
+- microsoft.ExtendedLocation/CustomLocations (Custom locations)
- Sample query: [List Azure Arc-enabled custom locations with VMware or SCVMM enabled](../samples/samples-by-category.md#list-azure-arc-enabled-custom-locations-with-vmware-or-scvmm-enabled)
- microsoft.extendedlocation/customlocations/resourcesyncrules
- microsoft.falcon/namespaces
-- Microsoft.Fidalgo/devcenters (Fidalgo DevCenters)
+- microsoft.Fidalgo/devcenters (Fidalgo DevCenters)
- microsoft.fidalgo/machinedefinitions
- microsoft.fidalgo/networksettings (Network Configurations)
-- Microsoft.Fidalgo/projects (Fidalgo Projects)
-- Microsoft.Fidalgo/projects/environments (Fidalgo Environments)
+- microsoft.Fidalgo/projects (Fidalgo Projects)
+- microsoft.Fidalgo/projects/environments (Fidalgo Environments)
- microsoft.fidalgo/projects/pools
-- Microsoft.FluidRelay/fluidRelayServers (Fluid Relay)
+- microsoft.FluidRelay/fluidRelayServers (Fluid Relay)
- microsoft.footprintmonitoring/profiles
- microsoft.gaming/titles
-- Microsoft.Genomics/accounts (Genomics accounts)
+- microsoft.Genomics/accounts (Genomics accounts)
- microsoft.guestconfiguration/automanagedaccounts
-- Microsoft.HanaOnAzure/hanaInstances (SAP HANA on Azure)
-- Microsoft.HanaOnAzure/sapMonitors (Azure Monitors for SAP Solutions)
+- microsoft.HanaOnAzure/hanaInstances (SAP HANA on Azure)
+- microsoft.HanaOnAzure/sapMonitors (Azure Monitors for SAP Solutions)
- microsoft.hardwaresecuritymodules/dedicatedhsms
-- Microsoft.HDInsight/clusterpools (HDInsight cluster pools)
-- Microsoft.HDInsight/clusterpools/clusters (HDInsight gen2 clusters)
-- Microsoft.HDInsight/clusterpools/clusters/sessionclusters (HDInsight session clusters)
-- Microsoft.HDInsight/clusters (HDInsight clusters)
-- Microsoft.HealthBot/healthBots (Azure Health Bot)
-- Microsoft.HealthcareApis/services (Azure API for FHIR)
+- microsoft.HDInsight/clusterpools (HDInsight cluster pools)
+- microsoft.HDInsight/clusterpools/clusters (HDInsight gen2 clusters)
+- microsoft.HDInsight/clusterpools/clusters/sessionclusters (HDInsight session clusters)
+- microsoft.HDInsight/clusters (HDInsight clusters)
+- microsoft.HealthBot/healthBots (Azure Health Bot)
+- microsoft.HealthcareApis/services (Azure API for FHIR)
- microsoft.healthcareapis/services/privateendpointconnections
-- Microsoft.HealthcareApis/workspaces (Healthcare APIs Workspaces)
-- Microsoft.HealthcareApis/workspaces/dicomservices (DICOM services)
-- Microsoft.HealthcareApis/workspaces/fhirservices (FHIR services)
-- Microsoft.HealthcareApis/workspaces/iotconnectors (IoT connectors)
-- Microsoft.HpcWorkbench/instances (HPC Workbenches (preview))
-- Microsoft.HpcWorkbench/instances/chambers (Chambers (preview))
-- Microsoft.HpcWorkbench/instances/chambers/accessProfiles (Chamber Profiles (preview))
-- Microsoft.HpcWorkbench/instances/chambers/workloads (Chamber VMs (preview))
-- Microsoft.HpcWorkbench/instances/consortiums (Consortiums (preview))
-- Microsoft.HybridCompute/machines (Servers - Azure Arc)
+- microsoft.HealthcareApis/workspaces (Healthcare APIs Workspaces)
+- microsoft.HealthcareApis/workspaces/dicomservices (DICOM services)
+- microsoft.HealthcareApis/workspaces/fhirservices (FHIR services)
+- microsoft.HealthcareApis/workspaces/iotconnectors (IoT connectors)
+- microsoft.HpcWorkbench/instances (HPC Workbenches (preview))
+- microsoft.HpcWorkbench/instances/chambers (Chambers (preview))
+- microsoft.HpcWorkbench/instances/chambers/accessProfiles (Chamber Profiles (preview))
+- microsoft.HpcWorkbench/instances/chambers/workloads (Chamber VMs (preview))
+- microsoft.HpcWorkbench/instances/consortiums (Consortiums (preview))
+- microsoft.HybridCompute/machines (Servers - Azure Arc)
- Sample query: [Get count and percentage of Arc-enabled servers by domain](../samples/samples-by-category.md#get-count-and-percentage-of-arc-enabled-servers-by-domain)
- Sample query: [List all extensions installed on an Azure Arc-enabled server](../samples/samples-by-category.md#list-all-extensions-installed-on-an-azure-arc-enabled-server)
- Sample query: [List Arc-enabled servers not running latest released agent version](../samples/samples-by-category.md#list-arc-enabled-servers-not-running-latest-released-agent-version)
- microsoft.hybridcompute/machines/extensions
- Sample query: [List all extensions installed on an Azure Arc-enabled server](../samples/samples-by-category.md#list-all-extensions-installed-on-an-azure-arc-enabled-server)
-- Microsoft.HybridCompute/privateLinkScopes (Azure Arc Private Link Scopes)
+- microsoft.HybridCompute/privateLinkScopes (Azure Arc Private Link Scopes)
- microsoft.hybridcontainerservice/provisionedclusters
- microsoft.hybridcontainerservice/provisionedclusters/agentpools
-- Microsoft.HybridData/dataManagers (StorSimple Data Managers)
-- Microsoft.HybridNetwork/devices (Azure Network Function Manager – Devices)
-- Microsoft.HybridNetwork/networkFunctions (Azure Network Function Manager – Network Functions)
+- microsoft.HybridData/dataManagers (StorSimple Data Managers)
+- microsoft.HybridNetwork/devices (Azure Network Function Manager ΓÇô Devices)
+- microsoft.HybridNetwork/networkFunctions (Azure Network Function Manager ΓÇô Network Functions)
- microsoft.hybridnetwork/virtualnetworkfunctions
-- Microsoft.ImportExport/jobs (Import/export jobs)
+- microsoft.ImportExport/jobs (Import/export jobs)
- microsoft.industrydatalifecycle/basemodels
- microsoft.industrydatalifecycle/custodiancollaboratives
- microsoft.industrydatalifecycle/dataconsumercollaboratives
For sample queries for this table, see [Resource Graph sample queries for resour
- microsoft.insights/metricalerts
- microsoft.insights/notificationgroups
- microsoft.insights/notificationrules
-- Microsoft.Insights/privateLinkScopes (Azure Monitor Private Link Scopes)
+- microsoft.Insights/privateLinkScopes (Azure Monitor Private Link Scopes)
- microsoft.insights/querypacks
- microsoft.insights/scheduledqueryrules
- microsoft.insights/webtests (Availability tests)
For sample queries for this table, see [Resource Graph sample queries for resour
- microsoft.intelligentitdigitaltwin/digitaltwins/executionplans
- microsoft.intelligentitdigitaltwin/digitaltwins/testplans
- microsoft.intelligentitdigitaltwin/digitaltwins/tests
-- Microsoft.IoTCentral/IoTApps (IoT Central Applications)
+- microsoft.IoTCentral/IoTApps (IoT Central Applications)
- microsoft.iotspaces/graph
- microsoft.keyvault/hsmpools
- microsoft.keyvault/managedhsms
-- Microsoft.KeyVault/vaults (Key vaults)
+- microsoft.KeyVault/vaults (Key vaults)
  - Sample query: [Count key vault resources](../samples/samples-by-category.md#count-key-vault-resources)
  - Sample query: [Key vaults with subscription name](../samples/samples-by-category.md#key-vaults-with-subscription-name)
  - Sample query: [List impacted resources when transferring an Azure subscription](../samples/samples-by-category.md#list-impacted-resources-when-transferring-an-azure-subscription)
-- Microsoft.Kubernetes/connectedClusters (Kubernetes - Azure Arc)
+- microsoft.Kubernetes/connectedClusters (Kubernetes - Azure Arc)
  - Sample query: [List all Azure Arc-enabled Kubernetes clusters without Azure Monitor extension](../samples/samples-by-category.md#list-all-azure-arc-enabled-kubernetes-clusters-without-azure-monitor-extension)
  - Sample query: [List all Azure Arc-enabled Kubernetes resources](../samples/samples-by-category.md#list-all-azure-arc-enabled-kubernetes-resources)
  - Sample query: [List all ConnectedClusters and ManagedClusters that contain a Flux Configuration](../samples/samples-by-category.md#list-all-connectedclusters-and-managedclusters-that-contain-a-flux-configuration)
-- Microsoft.Kusto/clusters (Azure Data Explorer Clusters)
-- Microsoft.Kusto/clusters/databases (Azure Data Explorer Databases)
-- Microsoft.LabServices/labAccounts (Lab accounts)
-- Microsoft.LabServices/labPlans (Lab plans)
-- Microsoft.LabServices/labs (Labs)
-- Microsoft.LoadTestService/LoadTests (Azure Load Testing)
-- Microsoft.Logic/integrationAccounts (Integration accounts)
-- Microsoft.Logic/integrationServiceEnvironments (Integration Service Environments)
-- Microsoft.Logic/integrationServiceEnvironments/managedApis (Managed Connector)
-- Microsoft.Logic/workflows (Logic apps)
-- Microsoft.Logz/monitors (Logz main account)
-- Microsoft.Logz/monitors/accounts (Logz sub account)
-- Microsoft.Logz/monitors/metricsSource (Logz metrics data source)
-- Microsoft.MachineLearning/commitmentPlans (Machine Learning Studio (classic) web service plans)
-- Microsoft.MachineLearning/webServices (Machine Learning Studio (classic) web services)
-- Microsoft.MachineLearning/workspaces (Machine Learning Studio (classic) workspaces)
+- microsoft.Kusto/clusters (Azure Data Explorer Clusters)
+- microsoft.Kusto/clusters/databases (Azure Data Explorer Databases)
+- microsoft.LabServices/labAccounts (Lab accounts)
+- microsoft.LabServices/labPlans (Lab plans)
+- microsoft.LabServices/labs (Labs)
+- microsoft.LoadTestService/LoadTests (Azure Load Testing)
+- microsoft.Logic/integrationAccounts (Integration accounts)
+- microsoft.Logic/integrationServiceEnvironments (Integration Service Environments)
+- microsoft.Logic/integrationServiceEnvironments/managedApis (Managed Connector)
+- microsoft.Logic/workflows (Logic apps)
+- microsoft.Logz/monitors (Logz main account)
+- microsoft.Logz/monitors/accounts (Logz sub account)
+- microsoft.Logz/monitors/metricsSource (Logz metrics data source)
+- microsoft.MachineLearning/commitmentPlans (Machine Learning Studio (classic) web service plans)
+- microsoft.MachineLearning/webServices (Machine Learning Studio (classic) web services)
+- microsoft.MachineLearning/workspaces (Machine Learning Studio (classic) workspaces)
- microsoft.machinelearningcompute/operationalizationclusters
- microsoft.machinelearningexperimentation/accounts/workspaces
- microsoft.machinelearningservices/aisysteminventories
- microsoft.machinelearningservices/modelinventories
- microsoft.machinelearningservices/modelinventory
- microsoft.machinelearningservices/virtualclusters
-- Microsoft.MachineLearningServices/workspaces (Machine learning)
+- microsoft.MachineLearningServices/workspaces (Machine learning)
- microsoft.machinelearningservices/workspaces/batchendpoints
- microsoft.machinelearningservices/workspaces/batchendpoints/deployments
- microsoft.machinelearningservices/workspaces/inferenceendpoints
- microsoft.machinelearningservices/workspaces/inferenceendpoints/deployments
-- Microsoft.MachineLearningServices/workspaces/onlineEndpoints (Machine learning online endpoints)
-- Microsoft.MachineLearningServices/workspaces/onlineEndpoints/deployments (Machine learning online deployments)
-- Microsoft.Maintenance/maintenanceConfigurations (Maintenance Configurations)
+- microsoft.MachineLearningServices/workspaces/onlineEndpoints (Machine learning online endpoints)
+- microsoft.MachineLearningServices/workspaces/onlineEndpoints/deployments (Machine learning online deployments)
+- microsoft.Maintenance/maintenanceConfigurations (Maintenance Configurations)
- microsoft.maintenance/maintenancepolicies
- microsoft.managedidentity/groups
-- Microsoft.ManagedIdentity/userAssignedIdentities (Managed Identities)
+- microsoft.ManagedIdentity/userAssignedIdentities (Managed Identities)
  - Sample query: [List impacted resources when transferring an Azure subscription](../samples/samples-by-category.md#list-impacted-resources-when-transferring-an-azure-subscription)
- microsoft.managednetwork/managednetworkgroups
- microsoft.managednetwork/managednetworkpeeringpolicies
- microsoft.managednetwork/managednetworks
- microsoft.managednetwork/managednetworks/managednetworkgroups
- microsoft.managednetwork/managednetworks/managednetworkpeeringpolicies
-- Microsoft.Maps/accounts (Azure Maps Accounts)
-- Microsoft.Maps/accounts/creators (Azure Maps Creator Resources)
+- microsoft.Maps/accounts (Azure Maps Accounts)
+- microsoft.Maps/accounts/creators (Azure Maps Creator Resources)
- microsoft.maps/accounts/privateatlases
- microsoft.marketplaceapps/classicdevservices
- microsoft.media/mediaservices (Media Services)
For sample queries for this table, see [Resource Graph sample queries for resour
- microsoft.migrate/assessmentprojects
- microsoft.migrate/migrateprojects
- microsoft.migrate/movecollections
-- Microsoft.Migrate/projects (Migration projects)
-- Microsoft.MixedReality/objectAnchorsAccounts (Object Anchors Accounts)
-- Microsoft.MixedReality/objectUnderstandingAccounts (Object Understanding Accounts)
-- Microsoft.MixedReality/remoteRenderingAccounts (Remote Rendering Accounts)
-- Microsoft.MixedReality/spatialAnchorsAccounts (Spatial Anchors Accounts)
+- microsoft.Migrate/projects (Migration projects)
+- microsoft.MixedReality/objectAnchorsAccounts (Object Anchors Accounts)
+- microsoft.MixedReality/objectUnderstandingAccounts (Object Understanding Accounts)
+- microsoft.MixedReality/remoteRenderingAccounts (Remote Rendering Accounts)
+- microsoft.MixedReality/spatialAnchorsAccounts (Spatial Anchors Accounts)
- microsoft.mixedreality/surfacereconstructionaccounts
-- Microsoft.MobileNetwork/mobileNetworks (Mobile Networks)
-- Microsoft.MobileNetwork/mobileNetworks/dataNetworks (Data Networks)
-- Microsoft.MobileNetwork/mobileNetworks/services (Services)
-- Microsoft.MobileNetwork/mobileNetworks/simPolicies (Sim Policies)
-- Microsoft.MobileNetwork/mobileNetworks/sites (Mobile Network Sites)
-- Microsoft.MobileNetwork/mobileNetworks/slices (Slices)
+- microsoft.MobileNetwork/mobileNetworks (Mobile Networks)
+- microsoft.MobileNetwork/mobileNetworks/dataNetworks (Data Networks)
+- microsoft.MobileNetwork/mobileNetworks/services (Services)
+- microsoft.MobileNetwork/mobileNetworks/simPolicies (Sim Policies)
+- microsoft.MobileNetwork/mobileNetworks/sites (Mobile Network Sites)
+- microsoft.MobileNetwork/mobileNetworks/slices (Slices)
- microsoft.mobilenetwork/networks
- microsoft.mobilenetwork/networks/sites
-- Microsoft.MobileNetwork/packetCoreControlPlanes (Packet Core Control Planes)
-- Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes (Packet Core Data Planes)
-- Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes/attachedDataNetworks (Attached Data Networks)
-- Microsoft.MobileNetwork/sims (Sims)
+- microsoft.MobileNetwork/packetCoreControlPlanes (Packet Core Control Planes)
+- microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes (Packet Core Data Planes)
+- microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes/attachedDataNetworks (Attached Data Networks)
+- microsoft.MobileNetwork/sims (Sims)
- microsoft.mobilenetwork/sims/simprofiles
- microsoft.monitor/accounts
-- Microsoft.NetApp/netAppAccounts (NetApp accounts)
+- microsoft.NetApp/netAppAccounts (NetApp accounts)
- microsoft.netapp/netappaccounts/backuppolicies
-- Microsoft.NetApp/netAppAccounts/capacityPools (Capacity pools)
-- Microsoft.NetApp/netAppAccounts/capacityPools/Volumes (Volumes)
+- microsoft.NetApp/netAppAccounts/capacityPools (Capacity pools)
+- microsoft.NetApp/netAppAccounts/capacityPools/Volumes (Volumes)
- microsoft.netapp/netappaccounts/capacitypools/volumes/mounttargets
-- Microsoft.NetApp/netAppAccounts/capacityPools/volumes/snapshots (Snapshots)
+- microsoft.NetApp/netAppAccounts/capacityPools/volumes/snapshots (Snapshots)
- microsoft.netapp/netappaccounts/capacitypools/volumes/subvolumes
-- Microsoft.NetApp/netAppAccounts/snapshotPolicies (Snapshot policies)
-- Microsoft.Network/applicationGateways (Application gateways)
-- Microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies (Application Gateway WAF policies)
-- Microsoft.Network/applicationSecurityGroups (Application security groups)
-- Microsoft.Network/azureFirewalls (Firewalls)
-- Microsoft.Network/bastionHosts (Bastions)
-- Microsoft.Network/connections (Connections)
-- Microsoft.Network/customIpPrefixes (Custom IP Prefixes)
+- microsoft.NetApp/netAppAccounts/snapshotPolicies (Snapshot policies)
+- microsoft.Network/applicationGateways (Application gateways)
+- microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies (Application Gateway WAF policies)
+- microsoft.Network/applicationSecurityGroups (Application security groups)
+- microsoft.Network/azureFirewalls (Firewalls)
+- microsoft.Network/bastionHosts (Bastions)
+- microsoft.Network/connections (Connections)
+- microsoft.Network/customIpPrefixes (Custom IP Prefixes)
- microsoft.network/ddoscustompolicies
-- Microsoft.Network/ddosProtectionPlans (DDoS protection plans)
-- Microsoft.Network/dnsForwardingRulesets (Dns Forwarding Rulesets)
-- Microsoft.Network/dnsResolvers (DNS Private Resolvers)
+- microsoft.Network/ddosProtectionPlans (DDoS protection plans)
+- microsoft.Network/dnsForwardingRulesets (Dns Forwarding Rulesets)
+- microsoft.Network/dnsResolvers (DNS Private Resolvers)
- microsoft.network/dnsresolvers/inboundendpoints
- microsoft.network/dnsresolvers/outboundendpoints
-- Microsoft.Network/dnsZones (DNS zones)
+- microsoft.Network/dnsZones (DNS zones)
- microsoft.network/dscpconfigurations
-- Microsoft.Network/expressRouteCircuits (ExpressRoute circuits)
+- microsoft.Network/expressRouteCircuits (ExpressRoute circuits)
- microsoft.network/expressroutecrossconnections
- microsoft.network/expressroutegateways
-- Microsoft.Network/expressRoutePorts (ExpressRoute Direct)
-- Microsoft.Network/firewallPolicies (Firewall Policies)
+- microsoft.Network/expressRoutePorts (ExpressRoute Direct)
+- microsoft.Network/firewallPolicies (Firewall Policies)
- microsoft.network/firewallpolicies/rulegroups
-- Microsoft.Network/frontdoors (Front Doors)
-- Microsoft.Network/FrontDoorWebApplicationFirewallPolicies (Web Application Firewall policies (WAF))
+- microsoft.Network/frontdoors (Front Doors)
+- microsoft.Network/FrontDoorWebApplicationFirewallPolicies (Web Application Firewall policies (WAF))
- microsoft.network/ipallocations
-- Microsoft.Network/ipGroups (IP Groups)
-- Microsoft.Network/LoadBalancers (Load balancers)
-- Microsoft.Network/localnetworkgateways (Local network gateways)
+- microsoft.Network/ipGroups (IP Groups)
+- microsoft.Network/LoadBalancers (Load balancers)
+- microsoft.Network/localnetworkgateways (Local network gateways)
- microsoft.network/mastercustomipprefixes
-- Microsoft.Network/natGateways (NAT gateways)
-- Microsoft.Network/NetworkExperimentProfiles (Internet Analyzer profiles)
+- microsoft.Network/natGateways (NAT gateways)
+- microsoft.Network/NetworkExperimentProfiles (Internet Analyzer profiles)
- microsoft.network/networkintentpolicies
-- Microsoft.Network/networkinterfaces (Network interfaces)
+- microsoft.Network/networkinterfaces (Network interfaces)
  - Sample query: [Get virtual networks and subnets of network interfaces](../samples/samples-by-category.md#get-virtual-networks-and-subnets-of-network-interfaces)
  - Sample query: [List virtual machines with their network interface and public IP](../samples/samples-by-category.md#list-virtual-machines-with-their-network-interface-and-public-ip)
-- Microsoft.Network/networkManagers (Network Managers)
- microsoft.network/networkprofiles
-- Microsoft.Network/NetworkSecurityGroups (Network security groups)
+- microsoft.Network/NetworkSecurityGroups (Network security groups)
  - Sample query: [Show unassociated network security groups](../samples/samples-by-category.md#show-unassociated-network-security-groups)
- microsoft.network/networksecurityperimeters
- microsoft.network/networkvirtualappliances
For sample queries for this table, see [Resource Graph sample queries for resour
- microsoft.network/networkwatchers/lenses
- microsoft.network/networkwatchers/pingmeshes
- microsoft.network/p2svpngateways
-- Microsoft.Network/privateDnsZones (Private DNS zones)
+- microsoft.Network/privateDnsZones (Private DNS zones)
- microsoft.network/privatednszones/virtualnetworklinks
- microsoft.network/privateendpointredirectmaps
-- Microsoft.Network/privateEndpoints (Private endpoints)
-- Microsoft.Network/privateLinkServices (Private link services)
-- Microsoft.Network/PublicIpAddresses (Public IP addresses)
+- microsoft.Network/privateEndpoints (Private endpoints)
+- microsoft.Network/privateLinkServices (Private link services)
+- microsoft.Network/PublicIpAddresses (Public IP addresses)
  - Sample query: [List virtual machines with their network interface and public IP](../samples/samples-by-category.md#list-virtual-machines-with-their-network-interface-and-public-ip)
-- Microsoft.Network/publicIpPrefixes (Public IP Prefixes)
-- Microsoft.Network/routeFilters (Route filters)
-- Microsoft.Network/routeTables (Route tables)
+- microsoft.Network/publicIpPrefixes (Public IP Prefixes)
+- microsoft.Network/routeFilters (Route filters)
+- microsoft.Network/routeTables (Route tables)
- microsoft.network/sampleresources
- microsoft.network/securitypartnerproviders
-- Microsoft.Network/serviceEndpointPolicies (Service endpoint policies)
-- Microsoft.Network/trafficmanagerprofiles (Traffic Manager profiles)
+- microsoft.Network/serviceEndpointPolicies (Service endpoint policies)
+- microsoft.Network/trafficmanagerprofiles (Traffic Manager profiles)
- microsoft.network/virtualhubs
- microsoft.network/virtualhubs/bgpconnections
- microsoft.network/virtualhubs/ipconfigurations
-- Microsoft.Network/virtualNetworkGateways (Virtual network gateways)
-- Microsoft.Network/virtualNetworks (Virtual networks)
+- microsoft.Network/virtualNetworkGateways (Virtual network gateways)
+- microsoft.Network/virtualNetworks (Virtual networks)
- microsoft.network/virtualnetworktaps
- microsoft.network/virtualrouters
-- Microsoft.Network/virtualWans (Virtual WANs)
+- microsoft.Network/virtualWans (Virtual WANs)
- microsoft.network/vpngateways
- microsoft.network/vpnserverconfigurations
- microsoft.network/vpnsites
- microsoft.networkfunction/azuretrafficcollectors
-- Microsoft.NotificationHubs/namespaces (Notification Hub Namespaces)
-- Microsoft.NotificationHubs/namespaces/notificationHubs (Notification Hubs)
+- microsoft.NotificationHubs/namespaces (Notification Hub Namespaces)
+- microsoft.NotificationHubs/namespaces/notificationHubs (Notification Hubs)
- microsoft.nutanix/interfaces
- microsoft.nutanix/nodes
- microsoft.objectstore/osnamespaces
For sample queries for this table, see [Resource Graph sample queries for resour
- microsoft.offazure/mastersites
- microsoft.offazure/serversites
- microsoft.offazure/vmwaresites
-- Microsoft.OpenEnergyPlatform/energyServices (Project Oak Forest)
+- microsoft.OpenEnergyPlatform/energyServices (Project Oak Forest)
- microsoft.openlogisticsplatform/applicationmanagers
- microsoft.openlogisticsplatform/applicationworkspaces
-- Microsoft.OpenLogisticsPlatform/workspaces (Open Supply Chain Platform)
+- microsoft.OpenLogisticsPlatform/workspaces (Open Supply Chain Platform)
- microsoft.operationalinsights/clusters
-- Microsoft.OperationalInsights/querypacks (Log Analytics query packs)
-- Microsoft.OperationalInsights/workspaces (Log Analytics workspaces)
-- Microsoft.OperationsManagement/solutions (Solutions)
+- microsoft.OperationalInsights/querypacks (Log Analytics query packs)
+- microsoft.OperationalInsights/workspaces (Log Analytics workspaces)
+- microsoft.OperationsManagement/solutions (Solutions)
- microsoft.operationsmanagement/views
-- Microsoft.Orbital/contactProfiles (Contact Profiles)
-- Microsoft.Orbital/EdgeSites (Edge Sites)
-- Microsoft.Orbital/GroundStations (Ground Stations)
-- Microsoft.Orbital/l2Connections (L2 Connections)
+- microsoft.Orbital/contactProfiles (Contact Profiles)
+- microsoft.Orbital/EdgeSites (Edge Sites)
+- microsoft.Orbital/GroundStations (Ground Stations)
+- microsoft.Orbital/l2Connections (L2 Connections)
- microsoft.orbital/orbitalendpoints
- microsoft.orbital/orbitalgateways
- microsoft.orbital/orbitalgateways/orbitall2connections
- microsoft.orbital/orbitalgateways/orbitall3connections
-- Microsoft.Orbital/spacecrafts (Spacecrafts)
-- Microsoft.Peering/peerings (Peerings)
-- Microsoft.Peering/peeringServices (Peering Services)
-- Microsoft.PlayFab/playerAccountPools (PlayFab player account pools)
-- Microsoft.PlayFab/titles (PlayFab titles)
-- Microsoft.Portal/dashboards (Shared dashboards)
+- microsoft.Orbital/spacecrafts (Spacecrafts)
+- microsoft.Peering/peerings (Peerings)
+- microsoft.Peering/peeringServices (Peering Services)
+- microsoft.PlayFab/playerAccountPools (PlayFab player account pools)
+- microsoft.PlayFab/titles (PlayFab titles)
+- microsoft.Portal/dashboards (Shared dashboards)
- microsoft.portalsdk/rootresources
- microsoft.powerbi/privatelinkservicesforpowerbi
- microsoft.powerbi/tenants
- microsoft.powerbi/workspacecollections
- microsoft.powerbidedicated/autoscalevcores
-- Microsoft.PowerBIDedicated/capacities (Power BI Embedded)
+- microsoft.PowerBIDedicated/capacities (Power BI Embedded)
- microsoft.powerplatform/accounts
- microsoft.powerplatform/enterprisepolicies
- microsoft.projectbabylon/accounts
- microsoft.providerhubdevtest/regionalstresstests
-- Microsoft.Purview/Accounts (Microsoft Purview accounts)
-- Microsoft.Quantum/Workspaces (Quantum Workspaces)
-- Microsoft.RecommendationsService/accounts (Intelligent Recommendations Accounts)
-- Microsoft.RecommendationsService/accounts/modeling (Modeling)
-- Microsoft.RecommendationsService/accounts/serviceEndpoints (Service Endpoints)
-- Microsoft.RecoveryServices/vaults (Recovery Services vaults)
+- microsoft.Purview/Accounts (Microsoft Purview accounts)
+- microsoft.Quantum/Workspaces (Quantum Workspaces)
+- microsoft.RecommendationsService/accounts (Intelligent Recommendations Accounts)
+- microsoft.RecommendationsService/accounts/modeling (Modeling)
+- microsoft.RecommendationsService/accounts/serviceEndpoints (Service Endpoints)
+- microsoft.RecoveryServices/vaults (Recovery Services vaults)
- microsoft.recoveryservices/vaults/backupstorageconfig
- microsoft.recoveryservices/vaults/replicationfabrics
- microsoft.recoveryservices/vaults/replicationfabrics/replicationprotectioncontainers
- microsoft.recoveryservices/vaults/replicationfabrics/replicationprotectioncontainers/replicationprotecteditems
- microsoft.recoveryservices/vaults/replicationfabrics/replicationprotectioncontainers/replicationprotectioncontainermappings
- microsoft.recoveryservices/vaults/replicationfabrics/replicationrecoveryservicesproviders
-- Microsoft.RedHatOpenShift/OpenShiftClusters (Azure Red Hat OpenShift)
-- Microsoft.Relay/namespaces (Relays)
+- microsoft.RedHatOpenShift/OpenShiftClusters (Azure Red Hat OpenShift)
+- microsoft.Relay/namespaces (Relays)
- microsoft.remoteapp/collections
- microsoft.resiliency/chaosexperiments
-- Microsoft.ResourceConnector/Appliances (Resource bridges)
-- Microsoft.resourcegraph/queries (Resource Graph queries)
-- Microsoft.Resources/deploymentScripts (Deployment Scripts)
-- Microsoft.Resources/templateSpecs (Template specs)
+- microsoft.ResourceConnector/Appliances (Resource bridges)
+- microsoft.resourcegraph/queries (Resource Graph queries)
+- microsoft.Resources/deploymentScripts (Deployment Scripts)
+- microsoft.Resources/templateSpecs (Template specs)
- microsoft.resources/templatespecs/versions
-- Microsoft.SaaS/applications (Software as a Service (classic))
-- Microsoft.SaaS/resources (SaaS)
+- microsoft.SaaS/applications (Software as a Service (classic))
+- microsoft.SaaS/resources (SaaS)
- microsoft.scheduler/jobcollections
-- Microsoft.Scom/managedInstances (Aquila Instances)
+- microsoft.Scom/managedInstances (Aquila Instances)
- microsoft.scvmm/availabilitysets
- microsoft.scvmm/clouds
-- Microsoft.scvmm/virtualMachines (SCVMM virtual machine - Azure Arc)
+- microsoft.scvmm/virtualMachines (SCVMM virtual machine - Azure Arc)
- microsoft.scvmm/virtualmachinetemplates
- microsoft.scvmm/virtualnetworks
-- Microsoft.ScVmm/vmmServers (SCVMM management servers)
-- Microsoft.Search/searchServices (Search services)
+- microsoft.ScVmm/vmmServers (SCVMM management servers)
+- microsoft.Search/searchServices (Search services)
- microsoft.security/assignments
- microsoft.security/automations
- microsoft.security/customassessmentautomations
For sample queries for this table, see [Resource Graph sample queries for resour
- microsoft.security/iotsecuritysolutions
- microsoft.security/securityconnectors
- microsoft.security/standards
-- Microsoft.SecurityDetonation/chambers (Security Detonation Chambers)
+- microsoft.SecurityDetonation/chambers (Security Detonation Chambers)
- microsoft.securitydevops/githubconnectors
-- Microsoft.ServiceBus/namespaces (Service Bus Namespaces)
-- Microsoft.ServiceFabric/clusters (Service Fabric clusters)
+- microsoft.ServiceBus/namespaces (Service Bus Namespaces)
+- microsoft.ServiceFabric/clusters (Service Fabric clusters)
- microsoft.servicefabric/containergroupsets
-- Microsoft.ServiceFabric/managedclusters (Service Fabric managed clusters)
+- microsoft.ServiceFabric/managedclusters (Service Fabric managed clusters)
- microsoft.servicefabricmesh/applications
- microsoft.servicefabricmesh/gateways
- microsoft.servicefabricmesh/networks
- microsoft.servicefabricmesh/secrets
- microsoft.servicefabricmesh/volumes
-- Microsoft.ServicesHub/connectors (Services Hub Connectors)
-- Microsoft.SignalRService/SignalR (SignalR)
-- Microsoft.SignalRService/WebPubSub (Web PubSub Service)
+- microsoft.ServicesHub/connectors (Services Hub Connectors)
+- microsoft.SignalRService/SignalR (SignalR)
+- microsoft.SignalRService/WebPubSub (Web PubSub Service)
- microsoft.singularity/accounts
- microsoft.skytap/nodes
- microsoft.solutions/appliancedefinitions
- microsoft.solutions/appliances
-- Microsoft.Solutions/applicationDefinitions (Service catalog managed application definitions)
-- Microsoft.Solutions/applications (Managed applications)
+- microsoft.Solutions/applicationDefinitions (Service catalog managed application definitions)
+- microsoft.Solutions/applications (Managed applications)
- microsoft.solutions/jitrequests
- microsoft.spoolservice/spools
-- Microsoft.Sql/instancePools (Instance pools)
-- Microsoft.Sql/managedInstances (SQL managed instances)
-- Microsoft.Sql/managedInstances/databases (Managed databases)
-- Microsoft.Sql/servers (SQL servers)
-- Microsoft.Sql/servers/databases (SQL databases)
+- microsoft.Sql/instancePools (Instance pools)
+- microsoft.Sql/managedInstances (SQL managed instances)
+- microsoft.Sql/managedInstances/databases (Managed databases)
+- microsoft.Sql/servers (SQL servers)
+- microsoft.Sql/servers/databases (SQL databases)
  - Sample query: [List impacted resources when transferring an Azure subscription](../samples/samples-by-category.md#list-impacted-resources-when-transferring-an-azure-subscription)
  - Sample query: [List SQL Databases and their elastic pools](../samples/samples-by-category.md#list-sql-databases-and-their-elastic-pools)
-- Microsoft.Sql/servers/elasticpools (SQL elastic pools)
+- microsoft.Sql/servers/elasticpools (SQL elastic pools)
  - Sample query: [List SQL Databases and their elastic pools](../samples/samples-by-category.md#list-sql-databases-and-their-elastic-pools)
- microsoft.sql/servers/jobaccounts
-- Microsoft.Sql/servers/jobAgents (Elastic Job agents)
-- Microsoft.Sql/virtualClusters (Virtual clusters)
+- microsoft.Sql/servers/jobAgents (Elastic Job agents)
+- microsoft.Sql/virtualClusters (Virtual clusters)
- microsoft.sqlvirtualmachine/sqlvirtualmachinegroups
-- Microsoft.SqlVirtualMachine/SqlVirtualMachines (SQL virtual machines)
+- microsoft.SqlVirtualMachine/SqlVirtualMachines (SQL virtual machines)
- microsoft.sqlvm/dwvm
- microsoft.storage/datamovers
-- Microsoft.Storage/StorageAccounts (Storage accounts)
+- microsoft.Storage/StorageAccounts (Storage accounts)
  - Sample query: [Find storage accounts with a specific case-insensitive tag on the resource group](../samples/samples-by-category.md#find-storage-accounts-with-a-specific-case-insensitive-tag-on-the-resource-group)
  - Sample query: [Find storage accounts with a specific case-sensitive tag on the resource group](../samples/samples-by-category.md#find-storage-accounts-with-a-specific-case-sensitive-tag-on-the-resource-group)
  - Sample query: [List all storage accounts with specific tag value](../samples/samples-by-category.md#list-all-storage-accounts-with-specific-tag-value)
  - Sample query: [List impacted resources when transferring an Azure subscription](../samples/samples-by-category.md#list-impacted-resources-when-transferring-an-azure-subscription)
-- Microsoft.StorageCache/amlFilesystems (Lustre File Systems)
-- Microsoft.StorageCache/caches (HPC caches)
-- Microsoft.StoragePool/diskPools (Disk Pools)
-- Microsoft.StorageSync/storageSyncServices (Storage Sync Services)
-- Microsoft.StorageSyncDev/storageSyncServices (Storage Sync Services)
-- Microsoft.StorageSyncInt/storageSyncServices (Storage Sync Services)
-- Microsoft.StorSimple/Managers (StorSimple Device Managers)
-- Microsoft.StreamAnalytics/clusters (Stream Analytics clusters)
-- Microsoft.StreamAnalytics/StreamingJobs (Stream Analytics jobs)
+- microsoft.StorageCache/amlFilesystems (Lustre File Systems)
+- microsoft.StorageCache/caches (HPC caches)
+- microsoft.StoragePool/diskPools (Disk Pools)
+- microsoft.StorageSync/storageSyncServices (Storage Sync Services)
+- microsoft.StorageSyncDev/storageSyncServices (Storage Sync Services)
+- microsoft.StorageSyncInt/storageSyncServices (Storage Sync Services)
+- microsoft.StorSimple/Managers (StorSimple Device Managers)
+- microsoft.StreamAnalytics/clusters (Stream Analytics clusters)
+- microsoft.StreamAnalytics/StreamingJobs (Stream Analytics jobs)
- microsoft.swiftlet/virtualmachines
- microsoft.swiftlet/virtualmachinesnapshots
-- Microsoft.Synapse/privateLinkHubs (Azure Synapse Analytics (private link hubs))
-- Microsoft.Synapse/workspaces (Azure Synapse Analytics)
-- Microsoft.Synapse/workspaces/bigDataPools (Apache Spark pools)
+- microsoft.Synapse/privateLinkHubs (Azure Synapse Analytics (private link hubs))
+- microsoft.Synapse/workspaces (Azure Synapse Analytics)
+- microsoft.Synapse/workspaces/bigDataPools (Apache Spark pools)
- microsoft.synapse/workspaces/eventstreams
-- Microsoft.Synapse/workspaces/kustopools (Data Explorer pools (preview))
+- microsoft.Synapse/workspaces/kustopools (Data Explorer pools (preview))
- microsoft.synapse/workspaces/sqldatabases
-- Microsoft.Synapse/workspaces/sqlPools (Dedicated SQL pools)
+- microsoft.Synapse/workspaces/sqlPools (Dedicated SQL pools)
- microsoft.terraformoss/providerregistrations
-- Microsoft.TestBase/testbaseAccounts (Test Base Accounts)
-- Microsoft.TestBase/testBaseAccounts/packages (Test Base Packages)
+- microsoft.TestBase/testbaseAccounts (Test Base Accounts)
+- microsoft.TestBase/testBaseAccounts/packages (Test Base Packages)
- microsoft.testbase/testbases
-- Microsoft.TimeSeriesInsights/environments (Time Series Insights environments)
-- Microsoft.TimeSeriesInsights/environments/eventsources (Time Series Insights event sources)
-- Microsoft.TimeSeriesInsights/environments/referenceDataSets (Time Series Insights reference data sets)
+- microsoft.TimeSeriesInsights/environments (Time Series Insights environments)
+- microsoft.TimeSeriesInsights/environments/eventsources (Time Series Insights event sources)
+- microsoft.TimeSeriesInsights/environments/referenceDataSets (Time Series Insights reference data sets)
- microsoft.token/stores
- microsoft.tokenvault/vaults
-- Microsoft.VideoIndexer/accounts (Video Analyzer for Media)
-- Microsoft.VirtualMachineImages/imageTemplates (Image Templates)
+- microsoft.VideoIndexer/accounts (Video Analyzer for Media)
+- microsoft.VirtualMachineImages/imageTemplates (Image Templates)
- microsoft.visualstudio/account (Azure DevOps organizations)
- microsoft.visualstudio/account/extension
- microsoft.visualstudio/account/project (DevOps Starter)
For sample queries for this table, see [Resource Graph sample queries for resour
- microsoft.vmware/virtualmachines
- microsoft.vmware/virtualmachinetemplates
- microsoft.vmware/virtualnetworks
-- Microsoft.VMwareCloudSimple/dedicatedCloudNodes (CloudSimple Nodes)
-- Microsoft.VMwareCloudSimple/dedicatedCloudServices (CloudSimple Services)
-- Microsoft.VMwareCloudSimple/virtualMachines (CloudSimple Virtual Machines)
+- microsoft.VMwareCloudSimple/dedicatedCloudNodes (CloudSimple Nodes)
+- microsoft.VMwareCloudSimple/dedicatedCloudServices (CloudSimple Services)
+- microsoft.VMwareCloudSimple/virtualMachines (CloudSimple Virtual Machines)
- microsoft.vmwareonazure/privateclouds
- microsoft.vmwarevirtustream/privateclouds
- microsoft.vsonline/accounts
-- Microsoft.VSOnline/Plans (Visual Studio Online Plans)
+- microsoft.VSOnline/Plans (Visual Studio Online Plans)
- microsoft.web/apimanagementaccounts
- microsoft.web/apimanagementaccounts/apis
- microsoft.web/certificates
-- Microsoft.Web/connectionGateways (On-premises data gateways)
-- Microsoft.Web/connections (API Connections)
-- Microsoft.Web/containerApps (Container Apps)
-- Microsoft.Web/customApis (Logic Apps Custom Connector)
-- Microsoft.Web/HostingEnvironments (App Service Environments)
-- Microsoft.Web/KubeEnvironments (App Service Kubernetes Environments)
-- Microsoft.Web/serverFarms (App Service plans)
-- Microsoft.Web/sites (App Services)
+- microsoft.Web/connectionGateways (On-premises data gateways)
+- microsoft.Web/connections (API Connections)
+- microsoft.Web/containerApps (Container Apps)
+- microsoft.Web/customApis (Logic Apps Custom Connector)
+- microsoft.Web/HostingEnvironments (App Service Environments)
+- microsoft.Web/KubeEnvironments (App Service Kubernetes Environments)
+- microsoft.Web/serverFarms (App Service plans)
+- microsoft.Web/sites (App Services)
- microsoft.web/sites/premieraddons
-- Microsoft.Web/sites/slots (App Service (Slots))
-- Microsoft.Web/StaticSites (Static Web Apps)
+- microsoft.Web/sites/slots (App Service (Slots))
+- microsoft.Web/StaticSites (Static Web Apps)
- microsoft.web/workerapps
-- Microsoft.WindowsESU/multipleActivationKeys (Windows Multiple Activation Keys)
-- Microsoft.WindowsIoT/DeviceServices (Windows 10 IoT Core Services)
+- microsoft.WindowsESU/multipleActivationKeys (Windows Multiple Activation Keys)
+- microsoft.WindowsIoT/DeviceServices (Windows 10 IoT Core Services)
- microsoft.workloadbuilder/migrationagents
- microsoft.workloadbuilder/workloads
-- Microsoft.Workloads/monitors (Azure Monitors for SAP Solutions (v2))
-- Microsoft.Workloads/phpworkloads (Scalable WordPress on Linux)
-- Microsoft.Workloads/sapVirtualInstances (SAP Virtual Instances)
-- Microsoft.Workloads/sapVirtualInstances/applicationInstances (SAP app server instances)
-- Microsoft.Workloads/sapVirtualInstances/centralInstances (SAP central server instances)
-- Microsoft.Workloads/sapVirtualInstances/databaseInstances (SAP database server instances)
+- microsoft.Workloads/monitors (Azure Monitors for SAP Solutions (v2))
+- microsoft.Workloads/phpworkloads (Scalable WordPress on Linux)
+- microsoft.Workloads/sapVirtualInstances (SAP Virtual Instances)
+- microsoft.Workloads/sapVirtualInstances/applicationInstances (SAP app server instances)
+- microsoft.Workloads/sapVirtualInstances/centralInstances (SAP central server instances)
+- microsoft.Workloads/sapVirtualInstances/databaseInstances (SAP database server instances)
- myget.packagemanagement/services
- NGINX.NGINXPLUS/nginxDeployments (NGINX Deployment)
- paraleap.cloudmonix/services
For sample queries for this table, see [Resource Graph sample queries for securi
- microsoft.security/assessments
  - Sample query: [Count healthy, unhealthy, and not applicable resources per recommendation](../samples/samples-by-category.md#count-healthy-unhealthy-and-not-applicable-resources-per-recommendation)
  - Sample query: [List Container Registry vulnerability assessment results](../samples/samples-by-category.md#list-container-registry-vulnerability-assessment-results)
- - Sample query: [List Microsoft Defender recommendations](../samples/samples-by-category.md)
+ - Sample query: [List microsoft Defender recommendations](../samples/samples-by-category.md)
  - Sample query: [List Qualys vulnerability assessment results](../samples/samples-by-category.md#list-qualys-vulnerability-assessment-results)
- microsoft.security/assessments/governanceassignments
- microsoft.security/assessments/subassessments
hdinsight Hdinsight For Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-for-vscode.md
To submit queries with the PySpark interactive command, follow these steps:
```python
from operator import add
+ from pyspark.sql import SparkSession
+ spark = SparkSession.builder \
+ .appName('hdisample') \
+ .getOrCreate()
lines = spark.read.text("/HdiSamples/HdiSamples/FoodInspectionData/README").rdd.map(lambda r: r[0])
counters = lines.flatMap(lambda x: x.split(' ')) \
    .map(lambda x: (x, 1)) \
hdinsight Apache Spark Create Standalone Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-create-standalone-application.md
In this tutorial, you learn how to:
* An Apache Spark cluster on HDInsight. For instructions, see [Create Apache Spark clusters in Azure HDInsight](apache-spark-jupyter-spark-sql.md).
-* [Oracle Java Development kit](https://www.azul.com/downloads/azure-only/zulu/). This tutorial uses Java version 8.0.202.
+* [Oracle Java Development kit](https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html). This tutorial uses Java version 8.0.202.
* A Java IDE. This article uses [IntelliJ IDEA Community 2018.3.4](https://www.jetbrains.com/idea/download/).
healthcare-apis Azure Api Fhir Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/azure-api-fhir-resource-manager-template.md
On the **Deploy Azure API for FHIR** page:
4. Enter a new **Service Name** and choose the **Location** of the Azure API for FHIR. The location can be the same as or different from the region of the resource group.
- [ ![Deploy Azure API for FHIR using the ARM template in the Azure portal.](media/fhir-resource-manager-template/deploy-azure-api-fhir.png) ](media/fhir-resource-manager-template/deploy-azure-api-fhir.png#lightbox)
+ [![Deploy Azure API for FHIR using the ARM template in the Azure portal.](media/fhir-resource-manager-template/deploy-azure-api-fhir.png)](media/fhir-resource-manager-template/deploy-azure-api-fhir.png#lightbox)
5. Select **Review + create**.
In this quickstart guide, you've deployed the Azure API for FHIR into your subsc
>[!div class="nextstepaction"]
>[Configure Private Link](configure-private-link.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis How To Display Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-display-metrics.md
Title: Display MedTech service metrics - Azure Health Data Services
+ Title: Display the MedTech service metrics - Azure Health Data Services
description: This article explains how to display MedTech service metrics. Previously updated : 07/22/2022 Last updated : 08/09/2022
-# How to display the MedTech service metrics
+# How to display and configure the MedTech service metrics
-In this article, you'll learn how to display MedTech service metrics in the Azure portal and how to pin the MedTech service metrics tile to an Azure portal dashboard.
+In this article, you'll learn how to display and configure the [MedTech service](iot-connector-overview.md) metrics in the Azure portal. You'll also learn how to pin the MedTech service metrics tile to an Azure portal dashboard for later viewing.
+
+The MedTech service metrics can be used to help determine the health and performance of your MedTech service. They can also be useful for troubleshooting and for spotting patterns and trends in your MedTech service.
## Metric types for the MedTech service
-The MedTech service metrics that you can select and display are listed in the following table:
+This table shows the available MedTech service metrics and the information that the metrics capture and display within the Azure portal:
-|Metric type|Metric purpose|
-|--|--|
-|Number of Incoming Messages|Displays the number of received raw incoming messages (for example, the device events).|
-|Number of Normalized Messages|Displays the number of normalized messages.|
-|Number of Message Groups|Displays the number of groups that have messages aggregated in the designated time window.|
-|Average Normalized Stage Latency|Displays the average latency of the normalized stage. The normalized stage performs normalization on raw incoming messages.|
-|Average Group Stage Latency|Displays the average latency of the group stage. The group stage performs buffering, aggregating, and grouping on normalized messages.|
-|Total Error Count|Displays the total number of errors.|
+|Metric category|Metric name|Metric description|
+|--|--|--|
+|Availability|IotConnector Health Status|The overall health of the MedTech service.|
+|Errors|Total Error Count|The total number of errors.|
+|Latency|Average Group Stage Latency|The average latency of the group stage. The [group stage](iot-data-flow.md#group) performs buffering, aggregating, and grouping on normalized messages.|
+|Latency|Average Normalize Stage Latency|The average latency of the normalized stage. The [normalized stage](iot-data-flow.md#normalize) performs normalization on raw incoming messages.|
+|Traffic|Number of Fhir resources saved|The total number of Fast Healthcare Interoperability Resources (FHIR&#174;) resources [updated or persisted](iot-data-flow.md#persist) by the MedTech service.|
+|Traffic|Number of Incoming Messages|The number of received raw [incoming messages](iot-data-flow.md#ingest) (for example, the device events) from the configured source event hub.|
+|Traffic|Number of Measurements|The number of normalized value readings received by the FHIR [transformation stage](iot-data-flow.md#transform) of the MedTech service.|
+|Traffic|Number of Message Groups|The number of groups that have messages aggregated in the designated time window.|
+|Traffic|Number of Normalized Messages|The number of normalized messages.|
-## Display the MedTech service metrics
+## Display and configure the MedTech service metrics
-1. Within your Azure Health Data Services workspace, select **MedTech service** under **Services**.
+1. Within your Azure Health Data Services workspace, select **MedTech service** under **Services**.
- :::image type="content" source="media\iot-metrics-display\iot-workspace-displayed-with-connectors-button.png" alt-text="Screenshot of select the MedTech service button within the workspace." lightbox="media\iot-metrics-display\iot-connectors-button.png":::
+ :::image type="content" source="media\iot-metrics-display\iot-workspace-displayed-with-connectors-button.png" alt-text="Screenshot of select the MedTech service within the workspace." lightbox="media\iot-metrics-display\iot-workspace-displayed-with-connectors-button.png":::
-2. Select the MedTech service that you would like to display the metrics for.
+2. Select the MedTech service that you would like to display metrics for. For this example, we'll select a MedTech service named **mt-azuredocsdemo**. You'll select your own MedTech service.
:::image type="content" source="media\iot-metrics-display\iot-connector-select.png" alt-text="Screenshot of select the MedTech service you would like to display metrics for." lightbox="media\iot-metrics-display\iot-connector-select.png":::
-
-3. Select **Metrics** button within the MedTech service page.
- :::image type="content" source="media\iot-metrics-display\iot-select-metrics.png" alt-text="Screenshot of Select the Metrics button within your MedTech service." lightbox="media\iot-metrics-display\iot-metrics-button.png":::
+3. Select **Metrics** within the MedTech service page.
+
+ :::image type="content" source="media\iot-metrics-display\iot-select-metrics.png" alt-text="Screenshot of select the Metrics option within your MedTech service." lightbox="media\iot-metrics-display\iot-select-metrics.png":::
+
+4. The MedTech service metrics page opens, allowing you to use the drop-down menus to view and select the metrics that are available for the MedTech service.
+
+ :::image type="content" source="media\iot-metrics-display\iot-metrics-opening-page.png" alt-text="Screenshot of the MedTech service metrics page with drop-down menus." lightbox="media\iot-metrics-display\iot-metrics-opening-page.png":::
-4. From the metrics page, you can create the metrics combinations that you want to display for your MedTech service. For this example, we'll be choosing the following selections:
+5. Select the metrics combinations that you want to display for your MedTech service. For this example, we'll be choosing the following selections:
* **Scope** = Your MedTech service name (**Default**)
- * **Metric Namespace** = Standard metrics (**Default**)
+ * **Metric Namespace** = Standard metrics (**Default**)
* **Metric** = The MedTech service metrics you want to display. For this example, we'll choose **Number of Incoming Messages**.
- * **Aggregation** = How you would like to display the metrics. For this example, we'll choose **Count**.
+ * **Aggregation** = How you would like to display the metrics. For this example, we'll choose **Count**.
- :::image type="content" source="media\iot-metrics-display\iot-select-metrics-to-display.png" alt-text="Screenshot of select metrics to display." lightbox="media\iot-metrics-display\iot-metrics-selection-close-up.png":::
+6. You can now see your MedTech service metrics for **Number of Incoming Messages** displayed on the MedTech service metrics page.
-5. We can now see the MedTech service metrics for **Number of Incoming Messages** displayed on the Azure portal.
+ :::image type="content" source="media\iot-metrics-display\iot-metrics-select-options.png" alt-text="Screenshot of select metrics to display." lightbox="media\iot-metrics-display\iot-metrics-select-options.png":::
- > [!TIP]
- > You can add additional metrics by selecting the **Add metric** button and making your choices.
+7. You can add more metrics by selecting **Add metric**.
+
+ :::image type="content" source="media\iot-metrics-display\iot-select-add-metric.png" alt-text="Screenshot of select Add metric to add more MedTech service metrics." lightbox="media\iot-metrics-display\iot-select-add-metric.png":::
+
+8. Then select the metrics that you would like to add to your MedTech service.
- :::image type="content" source="media\iot-metrics-display\iot-metrics-add-button.png" alt-text="Screenshot of select Add metric button to add more metrics." lightbox="media\iot-metrics-display\iot-add-metric-button.png":::
+ :::image type="content" source="media\iot-metrics-display\iot-metrics-select-more-metrics.png" alt-text="Screenshot of select more metrics to add to your MedTech service." lightbox="media\iot-metrics-display\iot-metrics-select-more-metrics.png":::
+
+ > [!TIP]
+ >
> To learn more about advanced metrics display and sharing options, see [Getting started with Azure Metrics Explorer](/azure/azure-monitor/essentials/metrics-getting-started).
> [!IMPORTANT]
- > If you leave the metrics page, the metrics settings for your MedTech service are lost and will have to be recreated. If you would like to save your MedTech service metrics for future viewing, you can pin them to an Azure dashboard as a tile.
+ >
+ > If you leave the MedTech service metrics page, the metrics settings for your MedTech service are lost and will have to be recreated. If you would like to save your MedTech service metrics for future viewing, you can pin them to an Azure dashboard as a tile.
## How to pin the MedTech service metrics tile to an Azure portal dashboard
-1. To pin the MedTech service metrics tile to an Azure portal dashboard, select the **Pin to dashboard** button.
+1. To pin the MedTech service metrics tile to an Azure portal dashboard, select the **Pin to dashboard** option.
- :::image type="content" source="media\iot-metrics-display\iot-metrics-select-add-pin-to-dashboard.png" alt-text="Screenshot of select the Pin to dashboard button." lightbox="media\iot-metrics-display\iot-pin-to-dashboard-button.png":::
+ :::image type="content" source="media\iot-metrics-display\iot-metrics-select-add-pin-to-dashboard.png" alt-text="Screenshot of select the Pin to dashboard option." lightbox="media\iot-metrics-display\iot-metrics-select-add-pin-to-dashboard.png":::
-2. Select the dashboard you would like to display your MedTech service metrics on. For this example, we'll use a private dashboard named `MedTech service metrics`. Select **Pin** to add your MedTech service metrics tile to the dashboard.
+2. Select the dashboard you would like to display your MedTech service metrics on by using the drop-down menu. For this example, we'll use a private dashboard named **Azuredocsdemo_Dashboard**. Select **Pin** to add your MedTech service metrics tile to the dashboard.
- :::image type="content" source="media\iot-metrics-display\iot-select-pin-to-dashboard.png" alt-text="Screenshot of select dashboard and Pin button to complete the dashboard pinning process." lightbox="media\iot-metrics-display\iot-select-pin-to-dashboard.png":::
+ :::image type="content" source="media\iot-metrics-display\iot-select-pin-to-dashboard.png" alt-text="Screenshot of select dashboard and Pin options to complete the dashboard pinning process." lightbox="media\iot-metrics-display\iot-select-pin-to-dashboard.png":::
3. You'll receive a confirmation that your MedTech service metrics tile was successfully added to your selected Azure portal dashboard.

   :::image type="content" source="media\iot-metrics-display\iot-select-dashboard-pinned-successful.png" alt-text="Screenshot of metrics tile successfully pinned to dashboard." lightbox="media\iot-metrics-display\iot-select-dashboard-pinned-successful.png":::
-4. Once you've received a successful confirmation, select the **Dashboard** button.
+4. Once you've received a successful confirmation, select the **Dashboard** option.
- :::image type="content" source="media\iot-metrics-display\iot-select-dashboard-with-metrics-tile.png" alt-text="Screenshot of select the Dashboard button." lightbox="media\iot-metrics-display\iot-dashboard-button.png":::
+ :::image type="content" source="media\iot-metrics-display\iot-select-dashboard-with-metrics-tile.png" alt-text="Screenshot of select the Dashboard option." lightbox="media\iot-metrics-display\iot-select-dashboard-with-metrics-tile.png":::
-5. Select the dashboard that you pinned your MedTech service metrics tile to. For this example, the dashboard is named **MedTech service metrics**. The dashboard will display the MedTech service metrics tile that you created in the previous steps.
+5. Use the drop-down menu to select the dashboard that you pinned your MedTech service metrics tile to. For this example, the dashboard is named **Azuredocsdemo_Dashboard**.
- :::image type="content" source="media\iot-metrics-display\iot-dashboard-with-metrics-tile-displayed.png" alt-text="Screenshot of dashboard with pinned MedTech service metrics tile." lightbox="media\iot-metrics-display\iot-dashboard-with-metrics-tile-displayed.png":::
+ :::image type="content" source="media\iot-metrics-display\iot-select-dashboard-with-metrics-pin.png" alt-text="Screenshot of selecting dashboard with pinned MedTech service metrics tile." lightbox="media\iot-metrics-display\iot-select-dashboard-with-metrics-pin.png":::
- > [!TIP]
- > See the [MedTech service troubleshooting guide](./iot-troubleshoot-guide.md) for assistance fixing common errors, conditions and issues.
+6. The dashboard will display the MedTech service metrics tile that you created in the previous steps.
+
+ :::image type="content" source="media\iot-metrics-display\iot-metrics-display-dashboard-with-metrics-pin.png" alt-text="Screenshot of dashboard with pinned MedTech service metrics tile." lightbox="media\iot-metrics-display\iot-metrics-display-dashboard-with-metrics-pin.png":::
## Next steps
-To learn how to export the MedTech service metrics, see
+To learn how to configure the diagnostic settings and export the MedTech service metrics to another location (for example, an Azure storage account), see
+
+> [!div class="nextstepaction"]
+> [How to configure diagnostic settings for exporting the MedTech service metrics](iot-metrics-diagnostics-export.md)
+
+To learn about the MedTech service frequently asked questions (FAQs), see
->[!div class="nextstepaction"]
->[How to configure diagnostic settings for exporting the MedTech service metrics](./iot-metrics-diagnostics-export.md)
+> [!div class="nextstepaction"]
+> [Frequently asked questions about the MedTech service](iot-connector-faqs.md)
(FHIR&#174;) is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes.md
Title: Azure Health Data Services monthly releases description: This article provides details about the Azure Health Data Services monthly features and enhancements. -+ Previously updated : 06/29/2022- Last updated : 08/09/2022++ # Release notes: Azure Health Data Services
Azure Health Data Services is a set of managed API services based on open standards and frameworks for the healthcare industry. They enable you to build scalable and secure healthcare solutions by bringing protected health information (PHI) datasets together and connecting them end-to-end with tools for machine learning, analytics, and AI. This document provides details about the features and enhancements made to Azure Health Data Services including the different service types (FHIR service, DICOM service, and MedTech service) that seamlessly work with one another.
+## July 2022
+
+### FHIR service
+
+#### **Bug fixes**
+
+|Bug fixes |Related information |
+| :-- | : |
+| (Open Source) History bundles were sorted with the oldest version first. | We've recently identified an issue with the sorting order of history bundles on the FHIR® server: history bundles were sorted with the oldest version first. Per the [FHIR specification](https://hl7.org/fhir/http.html#history), the sorting of versions defaults to the oldest version last. This bug fix addresses the FHIR server behavior for sorting history bundles.<br /><br />If you would like to keep the existing sorting behavior (oldest version first), we recommend that you append `_sort=_lastUpdated` to the HTTP `GET` command used for retrieving history. <br /><br />For example: `<Server URL>/_history?_sort=_lastUpdated` <br /><br />For more information, see [#2689](https://github.com/microsoft/fhir-server/pull/2689).
+
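The workaround above can be scripted. The following is a minimal sketch that builds the history URL; the server URL and the `history_url` helper are illustrative placeholders, not part of any SDK:

```python
from urllib.parse import urlencode

def history_url(server_url: str, oldest_first: bool = False) -> str:
    """Build a system-level _history URL; optionally keep the legacy oldest-first order."""
    url = f"{server_url}/_history"
    if oldest_first:
        # _sort=_lastUpdated retains the previous (oldest version first) ordering.
        url += "?" + urlencode({"_sort": "_lastUpdated"})
    return url

print(history_url("https://example.fhirserver.com", oldest_first=True))
# https://example.fhirserver.com/_history?_sort=_lastUpdated
```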
+#### **Known issues**
+
+| Known Issue | Description |
+| : | :- |
+| Using [token type fields](https://www.hl7.org/fhir/search.html#token) of more than 128 characters in length can result in undesired behavior on `create`, `search`, `update`, and `delete` operations. | Currently, there's no workaround available. |
+| Queries don't provide a consistent result count after being appended with the `_sort` operator. For more information, see [#2680](https://github.com/microsoft/fhir-server/pull/2680). | Currently, there's no workaround available. |
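Until the token-length issue is resolved, requests can be screened client-side. This is a hypothetical helper (not part of any Azure SDK); the 128-character limit comes from the known issue above:

```python
# Hypothetical client-side guard; 128 is the limit noted in the known issue above.
MAX_TOKEN_LENGTH = 128

def check_token_value(value: str) -> str:
    """Reject over-long token values before a create/search/update/delete call."""
    if len(value) > MAX_TOKEN_LENGTH:
        raise ValueError(
            f"Token value is {len(value)} characters; values over "
            f"{MAX_TOKEN_LENGTH} may cause undesired server behavior."
        )
    return value
```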
+
+For more information about the currently known issues with the FHIR service, see [Known issues: FHIR service](known-issues.md).
+
+### MedTech service
+
+#### **Improvements**
+
+|Azure Health Data Services |Related information |
+| :-- | : |
+|Improvements to documentation for Events, MedTech, and availability zones. |Tested and enhanced usability and functionality. Added new documents to enable customers to better take advantage of the new improvements. See [Consume Events with Logic Apps](./../healthcare-apis/events/events-deploy-portal.md) and [Deploy Events Using the Azure portal](./../healthcare-apis/events/events-deploy-portal.md). |
+|One-touch launch of the Azure MedTech service deployment. |[Deploy the MedTech service in the Azure portal](./../healthcare-apis/iot/deploy-iot-connector-in-azure.md)|
+
+### DICOM service
+
+#### **Features**
+
+|Enhancements | Related information |
+| : | :- |
+|DICOM Service availability expands to new regions. | The DICOM Service is now available in the following [regions](https://azure.microsoft.com/global-infrastructure/services/): Southeast Asia, Central India, Korea Central, and Switzerland North. |
+|Fast retrieval of individual DICOM frames | For DICOM images containing multiple frames, performance improvements have been made to enable fast retrieval of individual frames (60-KB frames in as little as 60 ms). These improved performance characteristics enable workflows such as [viewing digital pathology images](https://microsofthealth.visualstudio.com/DefaultCollection/Health/_git/marketing-azure-docs?version=GBmain&path=%2Fimaging%2Fdigital-pathology%2FDigital%20Pathology%20using%20Azure%20DICOM%20service.md&_a=preview), which require rapid retrieval of individual frames. |
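Individual frames are addressed through the standard DICOMweb (WADO-RS) path layout. The sketch below only builds that path; the endpoint, UIDs, and the `frame_url` helper name are placeholders:

```python
def frame_url(service: str, study_uid: str, series_uid: str,
              instance_uid: str, frame: int) -> str:
    """Build a WADO-RS URL for one frame of a multi-frame instance."""
    return (f"{service}/studies/{study_uid}/series/{series_uid}"
            f"/instances/{instance_uid}/frames/{frame}")

# Endpoint and UIDs below are made-up examples.
print(frame_url("https://example-dicom.azurehealthcareapis.com/v1",
                "1.2.3", "1.2.3.4", "1.2.3.4.5", 7))
# https://example-dicom.azurehealthcareapis.com/v1/studies/1.2.3/series/1.2.3.4/instances/1.2.3.4.5/frames/7
```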
+
## June 2022

### FHIR service
Azure Health Data Services is a set of managed API services based on open standa
|Bug fixes |Related information |
| :-- | : |
|Export Job not being queued for execution. |Fixes issue with export job not being queued due to a duplicate job definition caused by a reference to the container URL. For more information, see [#2648](https://github.com/microsoft/fhir-server/pull/2648). |
-|Queries not providing consistent result count after appended with the _sort operator. |Fixes the issue with the help of distinct operator to resolve inconsistency and record duplication in response. For more information, see [#2680](https://github.com/microsoft/fhir-server/pull/2680). |
+|Queries not providing consistent result count after appended with the `_sort` operator. |Fixes the issue with the help of the distinct operator to resolve inconsistency and record duplication in the response. For more information, see [#2680](https://github.com/microsoft/fhir-server/pull/2680). |
## May 2022
Azure Health Data Services is a set of managed API services based on open standa
|Bug fixes |Related information |
| :-- | : |
|Removes SQL retry on upsert |Removes retry on SQL command for upsert. The error still occurs, but data is saved correctly in success cases. For more information, see [#2571](https://github.com/microsoft/fhir-server/pull/2571). |
-|Added handling for SqlTruncate errors |Added a check for SqlTruncate exceptions and tests. In particular, this will catch SqlTruncate exceptions for Decimal type based on the specified precision and scale. For more information, see [#2553](https://github.com/microsoft/fhir-server/pull/2553). |
+|Added handling for SqlTruncate errors |Added a check for SqlTruncate exceptions and tests. In particular, this check will catch SqlTruncate exceptions for Decimal type based on the specified precision and scale. For more information, see [#2553](https://github.com/microsoft/fhir-server/pull/2553). |
### DICOM service
Azure Health Data Services is a set of managed API services based on open standa
|DICOM service supports cross-origin resource sharing (CORS) |DICOM service now supports [CORS](./../healthcare-apis/dicom/configure-cross-origin-resource-sharing.md). CORS allows you to configure settings so that applications from one domain (origin) can access resources from a different domain, known as a cross-domain request. |
|DICOMcast supports Private Link |DICOMcast has been updated to support Azure Health Data Services workspaces that have been configured to use [Private Link](./../healthcare-apis/healthcare-apis-configure-private-link.md). |
|UPS-RS supports Change and Retrieve work item |Modality worklist (UPS-RS) endpoints have been added to support Change and Retrieve operations for work items. |
-|API version is now required as part of the URI |All REST API requests to the DICOM service must now include the API version in the URI. For more details, see [API versioning for DICOM service](./../healthcare-apis/dicom/api-versioning-dicom-service.md). |
+|API version is now required as part of the URI |All REST API requests to the DICOM service must now include the API version in the URI. For more information, see [API versioning for DICOM service](./../healthcare-apis/dicom/api-versioning-dicom-service.md). |
#### **Bug fixes**
For more information about the currently known issues with the FHIR service, see
|Enhancements | Related information |
| : | :- |
-|Events |The Events feature within Health Data Services is now generally available (GA). The Events feature allows customers to receive notifications and triggers when FHIR observations are created, updated, or deleted. For more information, see [Events message structure](events/events-message-structure.md) and [What are events?](events/events-overview.md). |
+|Events |The Events feature within Health Data Services is now generally available (GA). The Events feature allows customers to receive notifications and triggers when FHIR observations are created, updated, or deleted. For more information, see [Events message structure](./../healthcare-apis/events/events-message-structure.md) and [What are events?](./../healthcare-apis/events/events-overview.md). |
|Events documentation for Azure Health Data Services |Updated docs to allow for better understanding, knowledge, and help for Events as it went GA. Updated troubleshooting for ease of use for the customer. |
|One touch deploy button for MedTech service launch in the portal |Enables easier deployment and use of MedTech service for customers without the need to go back and forth between pages or interfaces. |
For more information about the currently known issues with the FHIR service, see
|Enhancements | Related information |
| : | :- |
-|Customers can define their own query tags using the Extended Query Tags feature |With Extended Query Tags feature, customers now efficiently query non-DICOM metadata for capabilities like multitenancy and cohorts. It's available for all customers in Azure Health Data Services. |
+|Customers can define their own query tags using the Extended Query Tags feature |With the Extended Query Tags feature, customers can now efficiently query non-DICOM metadata for capabilities like multi-tenancy and cohorts. It's available for all customers in Azure Health Data Services. |
## December 2021
For more information about the currently known issues with the FHIR service, see
| :- | : |
|Quota details for support requests |We've updated the quota details for customer support requests with the latest information. |
|Local RBAC |We've updated the local RBAC documentation to clarify the use of the secondary tenant and the steps to disable it. |
-|Deploy and configure Azure Health Data Services using scripts |We've started the process of providing PowerShell, CLI scripts, and ARM templates to configure app registration and role assignments. Note that scripts for deploying Azure Health Data Services will be available after GA. |
+|Deploy and configure Azure Health Data Services using scripts |We've started the process of providing PowerShell, CLI scripts, and ARM templates to configure app registration and role assignments. Scripts for deploying Azure Health Data Services will be available after GA. |
### FHIR service
For more information about the currently known issues with the FHIR service, see
|Enhancements | Related information |
| : | :- |
-|Added Publisher to `CapabiilityStatement.name` |You can now find the publisher in the capability statement at `CapabilityStatement.name`. [#2319](https://github.com/microsoft/fhir-server/pull/2319) |
-|Log `FhirOperation` linked to anonymous calls to Request metrics |We werenΓÇÖt logging operations that didnΓÇÖt require authentication. We extended the ability to get `FhirOperation` type in `RequestMetrics` for anonymous calls. [#2295](https://github.com/microsoft/fhir-server/pull/2295) |
+|Added Publisher to `CapabilityStatement.name` |You can now find the publisher in the capability statement at `CapabilityStatement.name`. [#2319](https://github.com/microsoft/fhir-server/pull/2319) |
+|Log `FhirOperation` linked to anonymous calls to Request metrics |We weren't logging operations that didn't require authentication. We extended the ability to get `FhirOperation` type in `RequestMetrics` for anonymous calls. [#2295](https://github.com/microsoft/fhir-server/pull/2295) |
#### **Bug fixes**
For more information about the currently known issues with the FHIR service, see
|Process Patient-everything links |We've expanded the Patient-everything capabilities to process patient links [#2305](https://github.com/microsoft/fhir-server/pull/2305). For more information, see [Patient-everything in FHIR](./../healthcare-apis/fhir/patient-everything.md#processing-patient-links) documentation. |
|Added software name and version to capability statement. |In the capability statement, the software name now distinguishes if you're using Azure API for FHIR or Azure Health Data Services. The software version will now specify which open-source [release package](https://github.com/microsoft/fhir-server/releases) is live in the managed service [#2294](https://github.com/microsoft/fhir-server/pull/2294). Addresses: [#1778](https://github.com/microsoft/fhir-server/issues/1778) and [#2241](https://github.com/microsoft/fhir-server/issues/2241) |
|Compress continuation tokens |In certain instances, the continuation token was too long to be able to follow the [next link](./../healthcare-apis/fhir/overview-of-search.md#pagination) in searches and would result in a 404. To resolve this, we compressed the continuation token to ensure it stays below the size limit [#2279](https://github.com/microsoft/fhir-server/pull/2279). Addresses issue [#2250](https://github.com/microsoft/fhir-server/issues/2250). |
-|FHIR service autoscale |The [FHIR service autoscale](./fhir/fhir-service-autoscale.md) is designed to provide optimized service scalability automatically to meet customer demands when they perform data transactions in consistent or various workloads at any time. It's available in all regions where the FHIR service is supported. |
+|FHIR service autoscale |The [FHIR service autoscale](./fhir/fhir-service-autoscale.md) is designed to provide optimized service scalability automatically to meet customer demands when they perform data transactions in consistent or various workloads at any time. It's available in all [regions](https://azure.microsoft.com/global-infrastructure/services/) where the FHIR service is supported. |
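The continuation-token compression noted above shrinks the token before it's embedded in the next link. A round trip of the general idea can be sketched with deflate plus URL-safe Base64; this is illustrative only, not the FHIR server's actual token format:

```python
import base64
import zlib

def compress_token(token: str) -> str:
    """Pack a long continuation token into a shorter URL-safe string (illustrative only)."""
    return base64.urlsafe_b64encode(zlib.compress(token.encode("utf-8"))).decode("ascii")

def decompress_token(blob: str) -> str:
    return zlib.decompress(base64.urlsafe_b64decode(blob)).decode("utf-8")

token = "ct-" + "resourceSurrogateId=42;" * 50  # long, repetitive tokens compress well
packed = compress_token(token)
assert decompress_token(packed) == token and len(packed) < len(token)
```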
#### **Bug fixes** |Bug fixes |Related information | | :-- | : |
-|Resolved 500 error when the date was passed with a time zone. |This fixes a 500 error when a date with a time zone was passed into a datetime field [#2270](https://github.com/microsoft/fhir-server/pull/2270). |
-|Resolved issue when posting a bundle with incorrect Media Type returned a 500 error. |Previously when posting a search with a key that contains certain characters, a 500 error is returned. This fixes this issue [#2264](https://github.com/microsoft/fhir-server/pull/2264), and it addresses [#2148](https://github.com/microsoft/fhir-server/issues/2148). |
+|Resolved 500 error when the date was passed with a time zone. |This fix addresses a 500 error when a date with a time zone was passed into a datetime field [#2270](https://github.com/microsoft/fhir-server/pull/2270). |
+|Resolved issue where posting a bundle with an incorrect Media Type returned a 500 error. |Previously, posting a search with a key that contains certain characters returned a 500 error. This fix resolves [#2264](https://github.com/microsoft/fhir-server/pull/2264) and addresses [#2148](https://github.com/microsoft/fhir-server/issues/2148). |
### DICOM service
For more information about the currently known issues with the FHIR service, see
|Enhancements | Related information | | : | :- |
-|Content-Type header now includes transfer-syntax. |This enables the user to know which transfer syntax is used in case multiple accept headers are being supplied. |
+|Content-Type header now includes transfer-syntax. |This enhancement enables the user to know which transfer syntax is used in case multiple accept headers are being supplied. |
## October 2021
For more information about the currently known issues with the FHIR service, see
| Enhancements | Related information | | :- | :-- |
-|Test Data Generator tool |We've updated Azure Health Data Services GitHub samples repo to include a [Test Data Generator tool](https://github.com/microsoft/healthcare-apis-samples/blob/main/docs/HowToRunPerformanceTest.md) using Synthea data. This tool is an improvement to the open source [public test projects](https://github.com/ShadowPic/PublicTestProjects), based on Apache JMeter, that can be deployed to Azure AKS for performance tests. |
+|Test Data Generator tool |We've updated the Azure Health Data Services GitHub samples repo to include a [Test Data Generator tool](https://github.com/microsoft/healthcare-apis-samples/blob/main/docs/HowToRunPerformanceTest.md) using Synthea data. This tool is an improvement to the open source [public test projects](https://github.com/ShadowPic/PublicTestProjects), based on Apache JMeter, that can be deployed to Azure AKS for performance tests. |
### FHIR service
For more information about the currently known issues with the FHIR service, see
|Added support | Related information | | : | :- |
-|Regions | South Brazil and Central Canada. For more information about Azure regions and availability zones, see [Azure services that support availability zones](./../availability-zones/az-region.md). |
+|Regions | South Brazil and Central Canada. For more information about Azure regions and availability zones, see [Azure services that support availability zones](https://azure.microsoft.com/global-infrastructure/services/). |
|Extended Query tags |DateTime (DT) and Time (TM) Value Representation (VR) types | |Bug fixes | Related information |
For more information about the currently known issues with the FHIR service, see
|Enabled JSON patch in bundles using Binary resources. |[#2143](https://github.com/microsoft/fhir-server/pull/2143) | |Added new audit event [OperationName subtypes](./././azure-api-for-fhir/enable-diagnostic-logging.md#audit-log-details)| [#2170](https://github.com/microsoft/fhir-server/pull/2170) |
-| Running a reindex job | [Reindex improvements](./././fhir/how-to-run-a-reindex.md)|
+| Running a reindex job | [Re-index improvements](./././fhir/how-to-run-a-reindex.md)|
| :- | :-| |Added [boundaries for reindex](./././azure-api-for-fhir/how-to-run-a-reindex.md#performance-considerations) parameters. |[#2103](https://github.com/microsoft/fhir-server/pull/2103)| |Updated error message for reindex parameter boundaries. |[#2109](https://github.com/microsoft/fhir-server/pull/2109)|
For more information about the currently known issues with the FHIR service, see
|Bug fixes | Related information | | :- | :- |
-| MedTech service normalized improvements with calculations to support and enhance health data standardization. | See: [Use Device mappings](./../healthcare-apis/iot/how-to-use-device-mappings.md) and [Calculated Functions](./../healthcare-apis/iot/how-to-use-calculated-functions-mappings.md) |
+| MedTech service normalized improvements with calculations to support and enhance health data standardization. | See [Use Device mappings](./../healthcare-apis/iot/how-to-use-device-mappings.md) and [Calculated Functions](./../healthcare-apis/iot/how-to-use-calculated-functions-mappings.md) |
## Next steps
-In this article, you learned about the features and enhancements made to Azure Health Data Services. For more information about the known issues with Azure Health Data Services, See
+In this article, you learned about the features and enhancements made to Azure Health Data Services. For more information about the known issues with Azure Health Data Services, see
>[!div class="nextstepaction"] >[Known issues: Azure Health Data Services](known-issues.md)
iot-develop About Iot Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/about-iot-sdks.md
The advantages of using an Azure IoT Device SDK over building a custom connectio
| | Custom connection layer | Azure IoT Device SDKs | | :-- | :-- | :-- |
-| **Support** | Need to support and document your solution | Access to Microsoft support (GitHub, Microsoft Q&A, Microsoft Docs, Customer Support teams) |
+| **Support** | Need to support and document your solution | Access to Microsoft support (GitHub, Microsoft Q&A, Microsoft technical documentation, Customer Support teams) |
| **New Features** | Need to manually add new Azure features | Can immediately take advantage of new features added | | **Investment** | Invest hundreds of hours of embedded development to design, build, test, and maintain a custom version | Can take advantage of free, open-source tools. The only cost associated with the SDKs is the learning curve. |
iot-dps Iot Dps Customer Data Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/iot-dps-customer-data-requests.md
It is also possible to perform export operations for enrollments and registratio
## Links to additional documentation
-Full documentation for Device Provisioning Service APIs is located at [https://docs.microsoft.com/rest/api/iot-dps](/rest/api/iot-dps).
+For full documentation of Device Provisioning Service APIs, see [Azure IoT Hub Device Provisioning Service REST API](/rest/api/iot-dps).
-Azure IoT Hub [customer data request features](../iot-hub/iot-hub-customer-data-requests.md).
+Azure IoT Hub [customer data request features](../iot-hub/iot-hub-customer-data-requests.md).
iot-fundamentals Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-fundamentals/security-recommendations.md
Title: Security recommendations for Azure IoT | Microsoft Docs
description: This article summarizes additional steps to ensure security in your Azure IoT Hub solution. ---+++ Last updated 11/13/2019
iot-hub-device-update Device Update Multi Step Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-multi-step-updates.md
Example Update Manifest with one Reference Step:
``` > [!NOTE]
-> In the [update manifest]( https://docs.microsoft.com/azure/iot-hub-device-update/update-manifest), each step should have different "installedCriteria" string if that string is being used to determine whether the step should be performed or not.
+> In the [update manifest](/azure/iot-hub-device-update/update-manifest), each step should have a different "installedCriteria" string if that string is being used to determine whether the step should be performed.
## Parent Update vs. Child Update
Inline step(s) specified in `Parent Update` will be applied to the Host Device.
> [!NOTE] > Steps Content Handler:
-> IsInstalled validation logic: The Device Update agent's [step handler]( https://github.com/Azure/iot-hub-device-update/blob/main/src/content_handlers/steps_handler/README.md) checks to see if particular update is already installed (i.e., IsInstalled() resulted in a result code "900" i.e., is installed is 'true'). To avoid installing an update that is already on the device the DU agent will skip future steps because we use it to determine whether to perform the step or not.
-> Reporting an update result: The result of a step handler execution must be written to ADUC_Result struct in a desired result file as specified in --result-file option [learn more]( https://github.com/Azure/iot-hub-device-update/blob/main/src/content_handlers/steps_handler/README.md#steps-content-handler). Then based on results of the execution, for success return 0, for any fatal errors return -1 or 0xFF.
+> IsInstalled validation logic for each step: The Device Update agent's [step handler](https://github.com/Azure/iot-hub-device-update/blob/main/src/content_handlers/steps_handler/README.md) checks whether a particular update is already installed, that is, whether IsInstalled() returned result code "900" (meaning 'true'). If a step's update is already installed, the DU agent skips that step to avoid re-installing an update that is already on the device.
+> Reporting an update result: The result of a step handler execution must be written to the ADUC_Result struct in the result file specified by the --result-file option ([learn more](https://github.com/Azure/iot-hub-device-update/blob/main/src/content_handlers/steps_handler/README.md#steps-content-handler)). Then, based on the results of the execution, return 0 for success, or -1 or 0xFF for any fatal errors.
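The installedCriteria skip logic described in the note above can be sketched like this. It's a minimal illustration of the idea, not the Device Update agent code; the step names and criteria strings are hypothetical.

```javascript
// Each step carries a distinct installedCriteria string. If the agent's
// IsInstalled() check reports "already installed" (result code 900) for that
// string, the step is skipped instead of re-installed.
function stepsToRun(steps, installedCriteria) {
  return steps.filter((step) => !installedCriteria.has(step.installedCriteria));
}

const steps = [
  { name: "install-libs", installedCriteria: "libs-1.2" },
  { name: "install-app", installedCriteria: "app-4.7" },
];

// Simulate a device that already has the libs step installed.
const pending = stepsToRun(steps, new Set(["libs-1.2"]));
console.log(pending.map((s) => s.name)); // [ 'install-app' ]
```

Giving each step a distinct installedCriteria string is what makes this per-step skip decision possible; if two steps shared a string, installing one would wrongly mark the other as installed.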
### Reference Step In Parent Update
iot-hub-device-update Device Update Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-security.md
Last updated 10/5/2021 -+ # Device Update Security Model
iot-hub Iot Hub Csharp Csharp File Upload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-csharp-csharp-file-upload.md
[!INCLUDE [iot-hub-file-upload-language-selector](../../includes/iot-hub-file-upload-language-selector.md)]
-This article shows you how to use the file upload feature of IoT Hub with the Azure IoT .NET device and service SDKs.
+This article demonstrates how to use the [file upload capabilities of IoT Hub](iot-hub-devguide-file-upload.md) to upload a file to [Azure blob storage](../storage/index.yml), using the Azure IoT .NET device and service SDKs.
The [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp) quickstart and [Send cloud-to-device messages with IoT Hub](iot-hub-csharp-csharp-c2d.md) article show the basic device-to-cloud and cloud-to-device messaging functionality of IoT Hub. The [Configure Message Routing with IoT Hub](tutorial-routing.md) article shows a way to reliably store device-to-cloud messages in Microsoft Azure blob storage. However, in some scenarios, you can't easily map the data your devices send into the relatively small device-to-cloud messages that IoT Hub accepts. For example:
The [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-
* Vibration data sampled at high frequency * Some form of preprocessed data
-These files are typically batch processed in the cloud using tools such as [Azure Data Factory](../data-factory/introduction.md) or the [Hadoop](../hdinsight/index.yml) stack. When you need to upload files from a device, you can still use the security and reliability of IoT Hub. This article shows you how.
+These files are typically batch processed in the cloud, using tools such as [Azure Data Factory](../data-factory/introduction.md) or the [Hadoop](../hdinsight/index.yml) stack. When you need to upload files from a device, you can still use the security and reliability of IoT Hub. This article shows you how.
-At the end of this article you run two .NET console apps:
+At the end of this article, you run two .NET console apps:
* **FileUploadSample**. This device app uploads a file to storage using a SAS URI provided by your IoT hub. You'll run this app from the Azure IoT C# samples repository that you download in the prerequisites. * **ReadFileUploadNotification**. This service app receives file upload notifications from your IoT hub. You'll create this app. > [!NOTE]
-> IoT Hub supports many device platforms and languages, including C, Java, Python, and JavaScript, through Azure IoT device SDKs. Refer to the [Azure IoT Developer Center](https://azure.microsoft.com/develop/iot) for step-by-step instructions on how to connect your device to Azure IoT Hub.
+> IoT Hub supports many device platforms and languages (including C, Java, Python, and JavaScript) through Azure IoT device SDKs. Refer to the [Azure IoT Developer Center](https://azure.microsoft.com/develop/iot) to learn how to connect your device to Azure IoT Hub.
[!INCLUDE [iot-hub-include-x509-ca-signed-file-upload-support-note](../../includes/iot-hub-include-x509-ca-signed-file-upload-support-note.md)] ## Prerequisites
-* An active Azure account. If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.
- * An IoT Hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md). * A registered device. Register one in the [Azure portal](iot-hub-create-through-portal.md#register-a-new-device-in-the-iot-hub). * The sample applications you run in this article are written using C# with .NET Core.
- You can download the .NET Core SDK for multiple platforms from [.NET](https://dotnet.microsoft.com/download).
+ Download the .NET Core SDK for multiple platforms from [.NET](https://dotnet.microsoft.com/download).
- You can verify the current version of the .NET Core SDK on your development machine using the following command:
+ Verify the current version of the .NET Core SDK on your development machine using the following command:
```cmd/sh dotnet --version ```
-* Download the Azure IoT C# samples from [https://github.com/Azure-Samples/azure-iot-samples-csharp/archive/main.zip](https://github.com/Azure-Samples/azure-iot-samples-csharp/archive/main.zip) and extract the ZIP archive.
+* Download the Azure IoT C# samples from [Download sample](https://github.com/Azure-Samples/azure-iot-samples-csharp/archive/main.zip) and extract the ZIP archive.
-* Make sure that port 8883 is open in your firewall. The sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
+* Port 8883 should be open in your firewall. The sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
[!INCLUDE [iot-hub-associate-storage](../../includes/iot-hub-include-associate-storage.md)] ## Upload file from a device app
-In this article, you'll use a sample from the Azure IoT C# samples repository you downloaded earlier as the device app. You can open the files below using Visual Studio, Visual Studio Code, or a text editor of your choice.
+In this article, you use a sample from the Azure IoT C# samples repository you downloaded earlier as the device app. You can open the files below using Visual Studio, Visual Studio Code, or a text editor of your choice.
-The sample is located in the **azure-iot-samples-csharp-main\iot-hub\Samples\device\FileUploadSample** under the folder where you extracted the Azure IoT C# samples.
+The sample is located at **azure-iot-samples-csharp/iot-hub/Samples/device/FileUploadSample** in the folder where you extracted the Azure IoT C# samples.
Examine the code in **FileUpLoadSample.cs**. This file contains the main sample logic. After creating an IoT Hub device client, it follows the standard three-part procedure for uploading files from a device:
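The three-part procedure can be sketched with a stubbed client, shown below. The client object and method names here are hypothetical stand-ins (the real sample uses the async DeviceClient APIs from the Azure IoT .NET SDK), and the calls are synchronous only to keep the sketch self-contained.

```javascript
// Sketch of the three-part device file-upload flow (hypothetical client;
// not the actual SDK surface).
function uploadFile(client, blobName, contents) {
  // 1. Ask the hub for a SAS URI and correlation ID for this upload.
  const info = client.getFileUploadSasUri(blobName);
  // 2. Upload the contents directly to the blob endpoint using the SAS URI.
  const succeeded = client.uploadToBlob(info.sasUri, contents);
  // 3. Notify the hub of the outcome so it can raise a file upload notification.
  client.notifyUploadComplete(info.correlationId, succeeded);
  return succeeded;
}

// Stub client that records the call order instead of talking to Azure.
const calls = [];
const stubClient = {
  getFileUploadSasUri: (name) => {
    calls.push("getSasUri");
    return {
      sasUri: `https://example.blob.core.windows.net/uploads/${name}?sig=placeholder`,
      correlationId: "abc123",
    };
  },
  uploadToBlob: () => {
    calls.push("upload");
    return true;
  },
  notifyUploadComplete: (id, ok) => {
    calls.push(`notify:${ok}`);
  },
};

uploadFile(stubClient, "telemetry.log", "sample contents");
console.log(calls); // [ 'getSasUri', 'upload', 'notify:true' ]
```

The key design point is step 3: the device, not the storage account, reports completion back to the hub, which is what lets a back-end app receive file upload notifications.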
iot-hub Iot Hub Csharp Csharp Schedule Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-csharp-csharp-schedule-jobs.md
Use Azure IoT Hub to schedule and track jobs that update millions of devices. Us
* Invoke direct methods
-A job wraps one of these actions and tracks the execution against a set of devices that is defined by a device twin query. For example, a back-end app can use a job to invoke a direct method on 10,000 devices that reboots the devices. You specify the set of devices with a device twin query and schedule the job to run at a future time. The job tracks progress as each of the devices receive and execute the reboot direct method.
+A job wraps one of these actions and tracks the execution against a set of devices that is defined by a device twin query. For example, a back-end app can use a job to invoke a direct method on 10,000 devices that reboots the devices. You specify the set of devices with a device twin query and schedule the job to run at a future time. The job tracks progress as each of the devices receives and executes the reboot direct method.
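The job pattern above, a direct-method call scheduled against a twin-query-selected device set, can be sketched as a payload builder. The field names below follow the general shape of the jobs API, but treat this as an illustrative sketch rather than the exact IoT Hub schema; the job ID, query, and times are made up.

```javascript
// Compose a scheduled direct-method job targeting devices selected by a
// device twin query (illustrative payload shape).
function buildRebootJob(jobId, queryCondition, startTime) {
  return {
    jobId,
    type: "scheduleDeviceMethod",
    queryCondition, // device twin query that selects the target devices
    cloudToDeviceMethod: {
      methodName: "reboot",
      responseTimeoutInSeconds: 30,
    },
    startTime, // future time at which the job starts executing
    maxExecutionTimeInSeconds: 3600,
  };
}

const job = buildRebootJob(
  "reboot-building43",
  "tags.building = '43'",
  "2022-09-01T00:00:00Z"
);
console.log(job.cloudToDeviceMethod.methodName); // reboot
```

Once submitted, the service fans the method call out to every device matching `queryCondition` and tracks per-device progress under the one `jobId`.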
To learn more about each of these capabilities, see:
-* Device twin and properties: [Get started with device twins](iot-hub-csharp-csharp-twin-getstarted.md) and [Tutorial: How to use device twin properties](tutorial-device-twins.md)
+* Device twin and properties: [Get started with device twins](iot-hub-csharp-csharp-twin-getstarted.md) and [Understand and use device twins in IoT Hub](iot-hub-devguide-device-twins.md)
* Direct methods: [IoT Hub developer guide - direct methods](iot-hub-devguide-direct-methods.md) and [Quickstart: Use direct methods](./quickstart-control-device.md?pivots=programming-language-csharp) [!INCLUDE [iot-hub-basic](../../includes/iot-hub-basic-whole.md)]
-This article shows you how to:
+This article shows you how to create two .NET (C#) console apps:
-* Create a device app that implements a direct method called **LockDoor**, which can be called by the back-end app.
+* A device app, **SimulateDeviceMethods**, that implements a direct method called **LockDoor**, which can be called by the back-end app.
-* Create a back-end app that creates a job to call the **LockDoor** direct method on multiple devices. Another job sends desired property updates to multiple devices.
+* A back-end app, **ScheduleJob**, that creates two jobs. One job calls the **LockDoor** direct method and another job sends desired property updates to multiple devices.
-At the end of this article, you have two .NET (C#) console apps:
-
-* **SimulateDeviceMethods**. This app connects to your IoT hub and implements the **LockDoor** direct method.
-
-* **ScheduleJob**. This app uses jobs to call the **LockDoor** direct method and update the device twin desired properties on multiple devices.
+> [!NOTE]
+> See [Azure IoT SDKs](iot-hub-devguide-sdks.md) for more information about the SDK tools available to build both device and back-end apps.
## Prerequisites
At the end of this article, you have two .NET (C#) console apps:
* A registered device. Register one in the [Azure portal](iot-hub-create-through-portal.md#register-a-new-device-in-the-iot-hub).
-* An active Azure account. If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.
- * Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub). ## Create a simulated device app
In this section, you create a .NET console app that responds to a direct method
1. Save your work and build your solution. > [!NOTE]
-> To keep things simple, this article does not implement any retry policies. In production code, you should implement retry policies (such as connection retry), as suggested in [Transient fault handling](/azure/architecture/best-practices/transient-faults).
->
+> To keep things simple, this article does not implement retry policies. In production code, you should implement retry policies (such as connection retry), as suggested in [Transient fault handling](/azure/architecture/best-practices/transient-faults).
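The retry guidance in the note above can be sketched as a simple exponential-backoff helper. This is an illustration only; production code should use the SDK's built-in retry support or the patterns in the linked transient-fault article, and the delay parameters here are arbitrary.

```javascript
// Exponential backoff: double the delay on each attempt, capped at a maximum.
function backoffDelayMs(attempt, baseMs = 100, maxMs = 8000) {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

// Retry an async operation with backoff, rethrowing after the final attempt.
async function withRetry(operation, maxAttempts = 5) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await operation();
    } catch (err) {
      if (attempt + 1 >= maxAttempts) throw err; // give up after last attempt
      await new Promise((resolve) => setTimeout(resolve, backoffDelayMs(attempt)));
    }
  }
}

console.log(backoffDelayMs(0), backoffDelayMs(3), backoffDelayMs(10)); // 100 800 8000
```

In practice you would also add jitter and retry only on errors known to be transient, as the transient-fault-handling guidance describes.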
## Get the IoT hub connection string
You are now ready to run the apps.
## Next steps
-In this article, you used a job to schedule a direct method to a device and the update of the device twin's properties.
-
-* To continue getting started with IoT Hub and device management patterns such as end-to-end image-based update in [Device Update for Azure IoT Hub tutorial using the Raspberry Pi 3 B+ Reference Image](../iot-hub-device-update/device-update-raspberry-pi.md).
+In this article, you scheduled jobs to run a direct method and update the device twin's properties.
-* To learn about deploying AI to edge devices with Azure IoT Edge, see [Getting started with IoT Edge](../iot-edge/quickstart-linux.md).
+To continue exploring IoT Hub and device management patterns, try an end-to-end image-based update in the [Device Update for Azure IoT Hub tutorial using the Raspberry Pi 3 B+ Reference Image](../iot-hub-device-update/device-update-raspberry-pi.md).
iot-hub Iot Hub Java Java File Upload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-java-java-file-upload.md
[!INCLUDE [iot-hub-file-upload-language-selector](../../includes/iot-hub-file-upload-language-selector.md)]
-This article shows you how to use the file upload capabilities of IoT Hub using Java. For an overview of the file upload process, see [Upload Files with IoT Hub](iot-hub-devguide-file-upload.md).
+This article demonstrates how to use the [file upload capabilities of IoT Hub](iot-hub-devguide-file-upload.md) to upload a file to [Azure blob storage](../storage/index.yml), using Java.
The [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-java) quickstart and [Send cloud-to-device messages with IoT Hub](iot-hub-java-java-c2d.md) articles show the basic device-to-cloud and cloud-to-device messaging functionality of IoT Hub. The [Configure message routing with IoT Hub](tutorial-routing.md) tutorial shows a way to reliably store device-to-cloud messages in Azure blob storage. However, in some scenarios, you can't easily map the data your devices send into the relatively small device-to-cloud messages that IoT Hub accepts. For example:
The [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-
* Vibration data sampled at high frequency * Some form of preprocessed data.
-These files are typically batch processed in the cloud using tools such as [Azure Data Factory](../data-factory/introduction.md) or the [Hadoop](../hdinsight/index.yml) stack. When you need to upload files from a device, you can still use the security and reliability of IoT Hub. This article shows you how. Also, there are two samples located at [https://github.com/Azure/azure-iot-sdk-java/tree/main/device/iot-device-samples/file-upload-sample/src/main/java/samples/com/microsoft/azure/sdk/iot](https://github.com/Azure/azure-iot-sdk-java/tree/main/device/iot-device-samples/file-upload-sample/src/main/java/samples/com/microsoft/azure/sdk/iot) in GitHub.
+These files are typically batch processed in the cloud, using tools such as [Azure Data Factory](../data-factory/introduction.md) or the [Hadoop](../hdinsight/index.yml) stack. When you need to upload files from a device, you can still use the security and reliability of IoT Hub. This article shows you how. You can also view two samples from [azure-iot-sdk-java](https://github.com/Azure/azure-iot-sdk-java/tree/main/device/iot-device-samples/file-upload-sample/src/main/java/samples/com/microsoft/azure/sdk/iot) in GitHub.
> [!NOTE]
-> IoT Hub supports many device platforms and languages (including C, .NET, and JavaScript) through Azure IoT device SDKs. Refer to the [Azure IoT Developer Center](https://azure.microsoft.com/develop/iot) for step-by-step instructions on how to connect your device to Azure IoT Hub.
+> IoT Hub supports many device platforms and languages (including C, .NET, and JavaScript) through Azure IoT device SDKs. Refer to the [Azure IoT Developer Center](https://azure.microsoft.com/develop/iot) to learn how to connect your device to Azure IoT Hub.
[!INCLUDE [iot-hub-include-x509-ca-signed-file-upload-support-note](../../includes/iot-hub-include-x509-ca-signed-file-upload-support-note.md)]
These files are typically batch processed in the cloud using tools such as [Azur
* [Maven 3](https://maven.apache.org/download.cgi)
-* An active Azure account. (If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.)
-
-* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
+* Port 8883 should be open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
[!INCLUDE [iot-hub-associate-storage](../../includes/iot-hub-include-associate-storage.md)]
The following screenshot shows the output from the **read-file-upload-notificati
## Next steps
-In this article, you learned how to use the file upload capabilities of IoT Hub to simplify file uploads from devices. You can continue to explore IoT hub features and scenarios with the following articles:
+In this article, you learned how to use the file upload feature of IoT Hub to simplify file uploads from devices. You can continue to explore this feature with the following articles:
* [Create an IoT hub programmatically](iot-hub-rm-template-powershell.md)
iot-hub Iot Hub Java Java Schedule Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-java-java-schedule-jobs.md
Use Azure IoT Hub to schedule and track jobs that update millions of devices. Us
* Update tags * Invoke direct methods
-A job wraps one of these actions and tracks the execution against a set of devices. A device twin query defines the set of devices the job executes against. For example, a back-end app can use a job to invoke a direct method on 10,000 devices that reboots the devices. You specify the set of devices with a device twin query and schedule the job to run at a future time. The job tracks progress as each of the devices receive and execute the reboot direct method.
+A job wraps one of these actions and tracks the execution against a set of devices. A device twin query defines the set of devices the job executes against. For example, a back-end app can use a job to invoke a direct method on 10,000 devices that reboots the devices. You specify the set of devices with a device twin query and schedule the job to run at a future time. The job tracks progress as each of the devices receives and executes the reboot direct method.
To learn more about each of these capabilities, see:
-* Device twin and properties: [Get started with device twins](iot-hub-java-java-twin-getstarted.md)
+* Device twin and properties: [Get started with device twins](iot-hub-java-java-twin-getstarted.md) and [Understand and use device twins in IoT Hub](iot-hub-devguide-device-twins.md)
* Direct methods: [IoT Hub developer guide - direct methods](iot-hub-devguide-direct-methods.md) and [Quickstart: Use direct methods](./quickstart-control-device.md?pivots=programming-language-java) [!INCLUDE [iot-hub-basic](../../includes/iot-hub-basic-whole.md)]
-This article shows you how to:
+This article shows you how to create two Java apps:
-* Create a device app that implements a direct method called **lockDoor**. The device app also receives desired property changes from the back-end app.
+* A device app, **simulated-device**, that implements a direct method called **lockDoor**, which can be called by the back-end app.
-* Create a back-end app that creates a job to call the **lockDoor** direct method on multiple devices. Another job sends desired property updates to multiple devices.
-
-At the end of this article, you have a Java console device app and a Java console back-end app:
-
-**simulated-device** that connects to your IoT hub, implements the **lockDoor** direct method, and handles desired property changes.
-
-**schedule-jobs** that use jobs to call the **lockDoor** direct method and update the device twin desired properties on multiple devices.
+* A back-end app, **schedule-jobs**, that creates two jobs. One job calls the **lockDoor** direct method and another job sends desired property updates to multiple devices.
> [!NOTE]
-> The article [Azure IoT SDKs](iot-hub-devguide-sdks.md) provides information about the Azure IoT SDKs that you can use to build both device and back-end apps.
+> See [Azure IoT SDKs](iot-hub-devguide-sdks.md) for more information about the SDK tools available to build both device and back-end apps.
## Prerequisites
At the end of this article, you have a Java console device app and a Java consol
* [Maven 3](https://maven.apache.org/download.cgi)
-* An active Azure account. (If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.)
- * Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
-## Get the IoT hub connection string
+> [!NOTE]
+> To keep things simple, this article does not implement a retry policy. In production code, you should implement retry policies (such as an exponential backoff), as suggested in the article, [Transient Fault Handling](/azure/architecture/best-practices/transient-faults).
+
+## Get the IoT Hub connection string
[!INCLUDE [iot-hub-howto-schedule-jobs-shared-access-policy-text](../../includes/iot-hub-howto-schedule-jobs-shared-access-policy-text.md)]
You are now ready to run the console apps.
## Next steps
-In this article, you used a job to schedule a direct method to a device and the update of the device twin's properties.
-
-Use the following resources to learn how to:
-
-* Send telemetry from devices with the [Get started with IoT Hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-java) article.
+In this article, you scheduled jobs to run a direct method and update the device twin's properties.
-* Control devices interactively (such as turning on a fan from a user-controlled app) with the [Use direct methods](./quickstart-control-device.md?pivots=programming-language-java) quickstart.
+To continue exploring IoT Hub and device management patterns, update an image in [Device Update for Azure IoT Hub tutorial using the Raspberry Pi 3 B+ Reference Image](../iot-hub-device-update/device-update-raspberry-pi.md).
iot-hub Iot Hub Node Node File Upload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-node-node-file-upload.md
# Upload files from your device to the cloud with IoT Hub (Node.js)
-
-The article shows you how to:
-
-* Securely provide a device with an Azure blob URI for uploading a file.
-
-* Use the IoT Hub file upload notifications to trigger processing the file in your app back end.
+This article demonstrates how to use the [file upload capabilities of IoT Hub](iot-hub-devguide-file-upload.md) to upload a file to [Azure blob storage](../storage/index.yml), using Node.js.
The [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs) quickstart and [Send cloud-to-device messages with IoT Hub](iot-hub-node-node-c2d.md) articles show the basic device-to-cloud and cloud-to-device messaging functionality of IoT Hub. The [Configure Message Routing with IoT Hub](tutorial-routing.md) tutorial shows a way to reliably store device-to-cloud messages in Microsoft Azure blob storage. However, in some scenarios, you can't easily map the data your devices send into the relatively small device-to-cloud messages that IoT Hub accepts. For example:
The [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-
* Vibration data sampled at high frequency
* Some form of pre-processed data.
-These files are typically batch processed in the cloud using tools such as [Azure Data Factory](../data-factory/introduction.md) or the [Hadoop](../hdinsight/index.yml) stack. When you need to upland files from a device, you can still use the security and reliability of IoT Hub. This article shows you how.
+These files are typically batch processed in the cloud, using tools such as [Azure Data Factory](../data-factory/introduction.md) or the [Hadoop](../hdinsight/index.yml) stack. When you need to upload files from a device, you can still use the security and reliability of IoT Hub. This article shows you how.
At the end of this article, you run two Node.js console apps:
At the end of this article, you run two Node.js console apps:
* **FileUploadNotification.js**, which receives file upload notifications from your IoT hub.

> [!NOTE]
-> IoT Hub supports many device platforms and languages, including C, Java, Python, and JavaScript, through Azure IoT device SDKs. Refer to the [Azure IoT Developer Center](https://azure.microsoft.com/develop/iot) for step-by-step instructions on how to connect your device to Azure IoT Hub.
+> IoT Hub supports many device platforms and languages (including C, Java, Python, and JavaScript) through Azure IoT device SDKs. Refer to the [Azure IoT Developer Center](https://azure.microsoft.com/develop/iot) to learn how to connect your device to Azure IoT Hub.
[!INCLUDE [iot-hub-include-x509-ca-signed-file-upload-support-note](../../includes/iot-hub-include-x509-ca-signed-file-upload-support-note.md)]

## Prerequisites
-* An active Azure account. (If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.)
-
* An IoT Hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
* A registered device. Register one in the [Azure portal](iot-hub-create-through-portal.md#register-a-new-device-in-the-iot-hub).
* Node.js version 10.0.x or later. The LTS version is recommended. You can download Node.js from [nodejs.org](https://nodejs.org).
-* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
+* Port 8883 should be open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
[!INCLUDE [iot-hub-associate-storage](../../includes/iot-hub-include-associate-storage.md)]
You can use the portal to view the uploaded file in the storage container you co
## Next steps
-In this article, you learned how to use the file upload capabilities of IoT Hub to simplify file uploads from devices. You can continue to explore IoT hub features and scenarios with the following articles:
+In this article, you learned how to use the file upload feature of IoT Hub to simplify file uploads from devices. You can continue to explore this feature with the following articles:
* [Create an IoT hub programmatically](iot-hub-rm-template-powershell.md)
iot-hub Iot Hub Node Node Schedule Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-node-node-schedule-jobs.md
[!INCLUDE [iot-hub-selector-schedule-jobs](../../includes/iot-hub-selector-schedule-jobs.md)]
-Azure IoT Hub is a fully managed service that enables a back-end app to create and track jobs that schedule and update millions of devices. Jobs can be used for the following actions:
+Use Azure IoT Hub to schedule and track jobs that update millions of devices. Use jobs to:
* Update desired properties
* Update tags
* Invoke direct methods
-Conceptually, a job wraps one of these actions and tracks the progress of execution against a set of devices, which is defined by a device twin query. For example, a back-end app can use a job to invoke a reboot method on 10,000 devices, specified by a device twin query and scheduled at a future time. That application can then track progress as each of those devices receive and execute the reboot method.
+Conceptually, a job wraps one of these actions and tracks the progress of execution against a set of devices, which is defined by a device twin query. For example, a back-end app can use a job to invoke a reboot method on 10,000 devices, specified by a device twin query and scheduled at a future time. That application can then track progress as each of those devices receives and executes the reboot method.
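The job concept described above can be sketched in a few lines of plain Python (no Azure SDK involved; the device IDs and the in-memory twin "query" below are hypothetical stand-ins for real device twins and the IoT Hub query language):

```python
from dataclasses import dataclass, field

@dataclass
class Job:
    """Wraps one action and tracks its execution against a set of devices."""
    action: callable
    status: dict = field(default_factory=dict)  # device id -> "scheduled" | "completed" | "failed"

    def schedule(self, devices):
        for device_id in devices:
            self.status[device_id] = "scheduled"

    def run(self):
        for device_id in list(self.status):
            try:
                self.action(device_id)
                self.status[device_id] = "completed"
            except Exception:
                self.status[device_id] = "failed"

# Select target devices with a mock "device twin query", then schedule and track the job.
twins = {"contoso-1": {"building": 43}, "contoso-2": {"building": 21}}
targets = [d for d, twin in twins.items() if twin["building"] == 43]

job = Job(action=lambda device_id: print(f"rebooting {device_id}"))
job.schedule(targets)
job.run()
print(job.status)  # {'contoso-1': 'completed'}
```

The key point the sketch illustrates is that the job itself carries the per-device execution status, which is what lets a back-end app poll progress instead of tracking each device individually.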
Learn more about each of these capabilities in these articles:
-* Device twin and properties: [Get started with device twins](iot-hub-node-node-twin-getstarted.md) and [Tutorial: How to use device twin properties](tutorial-device-twins.md)
+* Device twin and properties: [Get started with device twins](iot-hub-node-node-twin-getstarted.md) and [Understand and use device twins in IoT Hub](iot-hub-devguide-device-twins.md)
* Direct methods: [IoT Hub developer guide - direct methods](iot-hub-devguide-direct-methods.md) and [Quickstart: direct methods](./quickstart-control-device.md?pivots=programming-language-nodejs)

[!INCLUDE [iot-hub-basic](../../includes/iot-hub-basic-whole.md)]
-This article shows you how to:
+This article shows you how to create two Node.js apps:
-* Create a Node.js simulated device app that has a direct method, which enables **lockDoor**, which can be called by the solution back end.
+* A Node.js simulated device app, **simDevice.js**, that implements a direct method called **lockDoor**, which can be called by the back-end app.
-* Create a Node.js console app that calls the **lockDoor** direct method in the simulated device app using a job and updates the desired properties using a device job.
+* A Node.js console app, **scheduleJobService.js**, that creates two jobs. One job calls the **lockDoor** direct method and another job sends desired property updates to multiple devices.
-At the end of this article, you have two Node.js apps:
-
-* **simDevice.js**, which connects to your IoT hub with the device identity and receives a **lockDoor** direct method.
-
-* **scheduleJobService.js**, which calls a direct method in the simulated device app and updates the device twin's desired properties using a job.
+> [!NOTE]
+> See [Azure IoT SDKs](iot-hub-devguide-sdks.md) for more information about the SDK tools available to build both device and back-end apps.
## Prerequisites
At the end of this article, you have two Node.js apps:
* Node.js version 10.0.x or later. [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-node/tree/main/doc/node-devbox-setup.md) describes how to install Node.js for this article on either Windows or Linux.
-* An active Azure account. (If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.)
-
* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).

## Create a simulated device app
In this section, you create a Node.js console app that responds to a direct meth
8. Save and close the **simDevice.js** file.

> [!NOTE]
-> To keep things simple, this article does not implement any retry policy. In production code, you should implement retry policies (such as an exponential backoff), as suggested in the article, [Transient Fault Handling](/azure/architecture/best-practices/transient-faults).
->
+> To keep things simple, this article does not implement a retry policy. In production code, you should implement retry policies (such as an exponential backoff), as suggested in the article, [Transient Fault Handling](/azure/architecture/best-practices/transient-faults).
-## Get the IoT hub connection string
+## Get the IoT Hub connection string
[!INCLUDE [iot-hub-howto-schedule-jobs-shared-access-policy-text](../../includes/iot-hub-howto-schedule-jobs-shared-access-policy-text.md)]
You are now ready to run the applications.
## Next steps
-In this article, you used a job to schedule a direct method to a device and the update of the device twin's properties.
-
-To continue getting started with IoT Hub and device management patterns such as end-to-end image-based update in [Device Update for Azure IoT Hub tutorial using the Raspberry Pi 3 B+ Reference Image](../iot-hub-device-update/device-update-raspberry-pi.md).
+In this article, you scheduled jobs to run a direct method and update the device twin's properties.
-To continue getting started with IoT Hub, see [Getting started with Azure IoT Edge](../iot-edge/quickstart-linux.md).
+To continue exploring IoT Hub and device management patterns, update an image in [Device Update for Azure IoT Hub tutorial using the Raspberry Pi 3 B+ Reference Image](../iot-hub-device-update/device-update-raspberry-pi.md).
iot-hub Iot Hub Python Python File Upload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-python-python-file-upload.md
[!INCLUDE [iot-hub-file-upload-language-selector](../../includes/iot-hub-file-upload-language-selector.md)]
-This article shows how to use the [file upload capabilities of IoT Hub](iot-hub-devguide-file-upload.md) to upload a file to [Azure blob storage](../storage/index.yml). The article shows you how to:
-
-* Securely provide a storage container for uploading a file.
-
-* Use the Python client to upload a file through your IoT hub.
+This article demonstrates how to use the [file upload capabilities of IoT Hub](iot-hub-devguide-file-upload.md) to upload a file to [Azure blob storage](../storage/index.yml), using Python.
The [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-python) quickstart and [Send cloud-to-device messages with IoT Hub](iot-hub-python-python-c2d.md) articles show the basic device-to-cloud and cloud-to-device messaging functionality of IoT Hub. The [Configure Message Routing with IoT Hub](tutorial-routing.md) tutorial shows a way to reliably store device-to-cloud messages in Microsoft Azure blob storage. However, in some scenarios, you can't easily map the data your devices send into the relatively small device-to-cloud messages that IoT Hub accepts. For example:
The [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-
* Vibration data sampled at high frequency
* Some form of pre-processed data.
-These files are typically batch processed in the cloud using tools such as [Azure Data Factory](../data-factory/introduction.md) or the [Hadoop](../hdinsight/index.yml) stack. When you need to upland files from a device, you can still use the security and reliability of IoT Hub. This article shows you how.
-
-At the end of this article, you run the Python console app:
+These files are typically batch processed in the cloud, using tools such as [Azure Data Factory](../data-factory/introduction.md) or the [Hadoop](../hdinsight/index.yml) stack. When you need to upload files from a device, you can still use the security and reliability of IoT Hub. This article shows you how.
-* **FileUpload.py**, which uploads a file to storage using the Python Device SDK.
+At the end of this article, you run the Python console app **FileUpload.py**, which uploads a file to storage using the Python Device SDK.
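The flow that **FileUpload.py** implements can be outlined in plain Python. The class and function names below are illustrative stand-ins, not the real `azure-iot-device` API; the sketch only shows the three-step protocol the article describes (get a scoped blob URI from the hub, upload the bytes, then notify the hub so the back end can react):

```python
import uuid

class MockHub:
    """Stands in for IoT Hub: hands out per-upload blob URIs and collects completion notices."""
    def __init__(self):
        self.notifications = []

    def get_storage_info(self, blob_name):
        # The real service returns a SAS URI scoped to a single blob; this one is fake.
        correlation_id = str(uuid.uuid4())
        return correlation_id, f"https://example.blob.core.windows.net/uploads/{blob_name}?sig=..."

    def notify_upload(self, correlation_id, succeeded):
        self.notifications.append((correlation_id, succeeded))

def upload_file(hub, blob_name, data, storage):
    # 1. Ask the hub for a scoped upload URI, 2. upload the bytes, 3. report the result.
    correlation_id, uri = hub.get_storage_info(blob_name)
    storage[uri] = data  # stands in for the HTTP PUT to blob storage
    hub.notify_upload(correlation_id, succeeded=True)

hub, storage = MockHub(), {}
upload_file(hub, "sensor.log", b"vibration samples", storage)
print(hub.notifications[-1][1])  # True: the back end can now process the blob
```

The completion notification is what decouples the device from the back end: the device never talks to your processing service directly, only to the hub and to blob storage.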
[!INCLUDE [iot-hub-include-python-sdk-note](../../includes/iot-hub-include-python-sdk-note.md)]
At the end of this article, you run the Python console app:
* A registered device. Register one in the [Azure portal](iot-hub-create-through-portal.md#register-a-new-device-in-the-iot-hub).
-* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
+* Port 8883 should be open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
[!INCLUDE [iot-hub-associate-storage](../../includes/iot-hub-include-associate-storage.md)]
Now you're ready to run the application.
## Next steps
-In this article, you learned how to use the file upload capabilities of IoT Hub to simplify file uploads from devices. You can continue to explore IoT hub features and scenarios with the following articles:
+In this article, you learned how to use the file upload feature of IoT Hub to simplify file uploads from devices. You can continue to explore this feature with the following articles:
* [Create an IoT hub programmatically](iot-hub-rm-template-powershell.md)
iot-hub Iot Hub Python Python Schedule Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-python-python-schedule-jobs.md
[!INCLUDE [iot-hub-selector-schedule-jobs](../../includes/iot-hub-selector-schedule-jobs.md)]
-Azure IoT Hub is a fully managed service that enables a back-end app to create and track jobs that schedule and update millions of devices. Jobs can be used for the following actions:
+Use Azure IoT Hub to schedule and track jobs that update millions of devices. Use jobs to:
* Update desired properties
* Update tags
* Invoke direct methods
-Conceptually, a job wraps one of these actions and tracks the progress of execution against a set of devices, which is defined by a device twin query. For example, a back-end app can use a job to invoke a reboot method on 10,000 devices, specified by a device twin query and scheduled at a future time. That application can then track progress as each of those devices receive and execute the reboot method.
+Conceptually, a job wraps one of these actions and tracks the progress of execution against a set of devices, which is defined by a device twin query. For example, a back-end app can use a job to invoke a reboot method on 10,000 devices, specified by a device twin query and scheduled at a future time. That application can then track progress as each of those devices receives and executes the reboot method.
Learn more about each of these capabilities in these articles:
-* Device twin and properties: [Get started with device twins](iot-hub-python-twin-getstarted.md) and [Tutorial: How to use device twin properties](tutorial-device-twins.md)
+* Device twin and properties: [Get started with device twins](iot-hub-python-twin-getstarted.md) and [Understand and use device twins in IoT Hub](iot-hub-devguide-device-twins.md)
* Direct methods: [IoT Hub developer guide - direct methods](iot-hub-devguide-direct-methods.md) and [Quickstart: direct methods](./quickstart-control-device.md?pivots=programming-language-python)

[!INCLUDE [iot-hub-basic](../../includes/iot-hub-basic-whole.md)]
-This article shows you how to:
+This article shows you how to create two Python apps:
-* Create a Python simulated device app that has a direct method, which enables **lockDoor**, which can be called by the solution back end.
+* A Python simulated device app, **simDevice.py**, that implements a direct method called **lockDoor**, which can be called by the back-end app.
-* Create a Python console app that calls the **lockDoor** direct method in the simulated device app using a job and updates the desired properties using a device job.
+* A Python console app, **scheduleJobService.py**, that creates two jobs. One job calls the **lockDoor** direct method and another job sends desired property updates to multiple devices.
-At the end of this article, you have two Python apps:
-
-**simDevice.py**, which connects to your IoT hub with the device identity and receives a **lockDoor** direct method.
-
-**scheduleJobService.py**, which calls a direct method in the simulated device app and updates the device twin's desired properties using a job.
-
+> [!NOTE]
+> See [Azure IoT SDKs](iot-hub-devguide-sdks.md) for more information about the SDK tools available to build both device and back-end apps.
## Prerequisites
In this section, you create a Python console app that responds to a direct metho
6. Save and close the **simDevice.py** file.

> [!NOTE]
-> To keep things simple, this article does not implement any retry policy. In production code, you should implement retry policies (such as an exponential backoff), as suggested in the article, [Transient Fault Handling](/azure/architecture/best-practices/transient-faults).
+> To keep things simple, this article does not implement a retry policy. In production code, you should implement retry policies (such as an exponential backoff), as suggested in the article, [Transient Fault Handling](/azure/architecture/best-practices/transient-faults).
>
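The backoff pattern the note recommends can be sketched as follows. The attempt count, delays, and use of `ConnectionError` are illustrative assumptions; a real client would retry on the transient exception types its SDK raises:

```python
import random
import time

def with_retries(operation, max_attempts=5, base_delay=1.0, max_delay=30.0):
    """Retry a call prone to transient failures, with exponential backoff and jitter."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            delay = min(max_delay, base_delay * 2 ** attempt)
            time.sleep(delay * random.uniform(0.5, 1.0))  # jitter avoids synchronized retries

# A call that fails twice with a transient error, then succeeds.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient network error")
    return "ok"

result = with_retries(flaky, base_delay=0.01)
print(result)  # ok
```

Capping the delay (`max_delay`) matters in practice: without it, a long outage makes later retries wait minutes apart and recovery appears much slower than it is.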
-## Get the IoT hub connection string
+## Get the IoT Hub connection string
In this article, you create a backend service that invokes a direct method on a device and updates the device twin. The service needs the **service connect** permission to call a direct method on a device. The service also needs the **registry read** and **registry write** permissions to read and write the identity registry. There is no default shared access policy that contains only these permissions, so you need to create one.
You are now ready to run the applications.
## Next steps
-In this article, you used a job to schedule a direct method to a device and the update of the device twin's properties.
+In this article, you scheduled jobs to run a direct method and update the device twin's properties.
-To continue getting started with IoT Hub and device management patterns such as end-to-end image-based update in [Device Update for Azure IoT Hub tutorial using the Raspberry Pi 3 B+ Reference Image](../iot-hub-device-update/device-update-raspberry-pi.md).
+To continue exploring IoT Hub and device management patterns, update an image in [Device Update for Azure IoT Hub tutorial using the Raspberry Pi 3 B+ Reference Image](../iot-hub-device-update/device-update-raspberry-pi.md).
iot-hub Iot Hub Reliability Features In Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-reliability-features-in-sdks.md
Last updated 07/07/2018-+
iot-hub Iot Hub X509ca Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-x509ca-overview.md
Learn here how to [register your CA certificate](./tutorial-x509-scripts.md)
## How to create a device on IoT Hub
-To preclude device impersonation, IoT Hub requires you to let it know what devices to expect. You do this by creating a device entry in the IoT Hub's device registry. This process is automated when using IoT Hub [Device Provisioning Service](https://azure.microsoft.com/blog/azure-iot-hub-device-provisioning-service-preview-automates-device-connection-configuration/).
+To preclude device impersonation, IoT Hub requires you to let it know what devices to expect. You do this by creating a device entry in the IoT Hub's device registry. This process is automated when using IoT Hub [Device Provisioning Service](../iot-dps/about-iot-dps.md).
Learn here how to [manually create a device in IoT Hub](./tutorial-x509-scripts.md).
iot-hub Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure IoT Hub description: Lists Azure Policy Regulatory Compliance controls available for Azure IoT Hub. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Last updated 08/04/2022-+
logic-apps Secure Single Tenant Workflow Virtual Network Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/secure-single-tenant-workflow-virtual-network-private-endpoint.md
ms.suite: integration Previously updated : 03/11/2022 Last updated : 08/08/2022
-# As a developer, I want to connect to my single-tenant logic app workflows with virtual networks using private endpoints and VNet integration.
+# As a developer, I want to connect to my single-tenant logic app workflows with virtual networks using private endpoints and virtual network integration.
-# Secure traffic between single-tenant Standard logic apps and Azure virtual networks using private endpoints and VNet integration
+# Secure traffic between single-tenant Standard logic apps and Azure virtual networks using private endpoints
[!INCLUDE [logic-apps-sku-standard](../../includes/logic-apps-sku-standard.md)]
-To securely and privately communicate between your workflow in a Standard logic app and an Azure virtual network, you can set up *private endpoints* for inbound traffic and use VNet integration for outbound traffic.
+To securely and privately communicate between your workflow in a Standard logic app and an Azure virtual network, you can set up *private endpoints* for inbound traffic and use virtual network integration for outbound traffic.
A private endpoint is a network interface that privately and securely connects to a service powered by Azure Private Link. This service can be an Azure service such as Azure Logic Apps, Azure Storage, Azure Cosmos DB, SQL, or your own Private Link Service. The private endpoint uses a private IP address from your virtual network, which effectively brings the service into your virtual network.
-This article shows how to set up access through private endpoints for inbound traffic and VNet integration for outbound traffic.
+This article shows how to set up access through private endpoints for inbound traffic and virtual network integration for outbound traffic.
For more information, review the following documentation: - [What is Azure Private Endpoint?](../private-link/private-endpoint-overview.md) and [Private endpoints - Integrate your app with an Azure virtual network](../app-service/overview-vnet-integration.md#private-endpoints) - [What is Azure Private Link?](../private-link/private-link-overview.md)-- [What is Vnet integration?](../app-service/networking-features.md#regional-vnet-integration)
+- [What is regional virtual network integration?](../app-service/networking-features.md#regional-vnet-integration)
## Prerequisites
For more information, review [Create single-tenant logic app workflows in Azure
<a name="set-up-outbound"></a>
-## Set up outbound traffic using VNet integration
+## Set up outbound traffic using virtual network integration
-To secure outbound traffic from your logic app, you can integrate your logic app with a virtual network. First, create and test an example workflow. You can then set up VNet integration.
+To secure outbound traffic from your logic app, you can integrate your logic app with a virtual network. First, create and test an example workflow. You can then set up virtual network integration.
> [!IMPORTANT] > You can't change the subnet size after assignment, so use a subnet that's large enough to accommodate
To secure outbound traffic from your logic app, you can integrate your logic app
The HTTP action fails, which is by design and expected because the workflow runs in the cloud and can't access your internal service.
-### Set up VNet integration
+### Set up virtual network integration
1. In the Azure portal, on the logic app resource menu, under **Settings**, select **Networking**.
To secure outbound traffic from your logic app, you can integrate your logic app
1. If you use your own domain name server (DNS) with your virtual network, set your logic app resource's `WEBSITE_DNS_SERVER` app setting to the IP address for your DNS. If you have a secondary DNS, add another app setting named `WEBSITE_DNS_ALT_SERVER`, and set the value also to the IP for your DNS.
-1. After Azure successfully provisions the VNet integration, try to run the workflow again.
+1. After Azure successfully provisions the virtual network integration, try to run the workflow again.
The HTTP action now runs successfully.
To secure outbound traffic from your logic app, you can integrate your logic app
> > > For Azure-hosted managed connectors to work, you need to have an uninterrupted connection to the managed API service.
-> With VNet integration, you need to make sure no firewall or network security policy is blocking these connections.
+> With virtual network integration, make sure that no firewall or network security policy blocks these connections.
-### Considerations for outbound traffic through VNet integration
+### Considerations for outbound traffic through virtual network integration
If your virtual network uses a network security group (NSG), user-defined route table (UDR), or a firewall, make sure that the virtual network allows outbound connections to [all managed connector IP addresses](/connectors/common/outbound-ip-addresses#azure-logic-apps) in the corresponding region. Otherwise, Azure-managed connectors won't work.
machine-learning How To Track Experiments Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-track-experiments-mlflow.md
# Manage experiments and runs with MLflow
-Experiments and runs in Azure Machine Learning can be queried using MLflow client. This removes the need of any Azure ML specific SDKs to manage anything that happens inside of a training job, allowing dependencies removal and creating a more seamless transition between local runs and cloud.
+Experiments and runs in Azure Machine Learning can be queried using MLflow. This removes the need for any Azure Machine Learning-specific SDK to manage anything that happens inside a training job, which reduces dependencies and creates a more seamless transition between local runs and the cloud.
> [!NOTE]
> The Azure Machine Learning Python SDK v2 (preview) does not provide native logging or tracking capabilities. This applies not just to logging but also to querying the logged metrics. Instead, we recommend using MLflow to manage experiments and runs. This article explains how to use MLflow to manage experiments and runs in Azure ML.
-MLflow client allows you to:
+MLflow allows you to:
-* Create, delete and search for experiments in a workspace
+* Create, delete and search for experiments in a workspace.
* Start, stop, cancel and query runs for experiments.
* Track and retrieve metrics, parameters, artifacts and models from runs.
Use MLflow to query and manage all the experiments in Azure Machine Learning. Th
### Prerequisites * Install `azureml-mlflow` plug-in.
-* If you're running in a compute not hosted in Azure ML, configure MLflow to point to the Azure ML MLtracking URL. You can follow the instruction at [Track runs from your local machine](how-to-use-mlflow-cli-runs.md)
+* If you're running in a compute not hosted in Azure ML, configure MLflow to point to the Azure ML tracking URL. You can follow the instruction at [Track runs from your local machine](how-to-use-mlflow-cli-runs.md).
### Support matrix for querying runs and experiments
-The MLflow client exposes several methods to retrieve runs, including options to control what is returned and how. Use the following table to learn about which of those methods are currently supported in MLflow when connected to Azure Machine Learning:
+The MLflow SDK exposes several methods to retrieve runs, including options to control what is returned and how. Use the following table to learn about which of those methods are currently supported in MLflow when connected to Azure Machine Learning:
| Feature | Supported by MLflow | Supported by Azure ML |
| :- | :-: | :-: |
The MLflow client exposes several methods to retrieve runs, including options to
| Filtering runs with string comparators (params, tags, and attributes): `=` and `!=` | **&check;** | **&check;**<sup>2</sup> |
| Filtering runs with string comparators (params, tags, and attributes): `LIKE`/`ILIKE` | **&check;** | |
| Filtering runs with comparators `AND` | **&check;** | **&check;** |
-| Filtering runs with comparators `OR` | **&check;** | |
+| Filtering runs with comparators `OR` | | |
| Renaming experiments | **&check;** | |

> [!NOTE]
> - <sup>1</sup> Check the section [Getting runs inside an experiment](#getting-runs-inside-an-experiment) for instructions and examples on how to achieve the same functionality in Azure ML.
-> - <sup>2</sup> `!=` for tags not supported
+> - <sup>2</sup> `!=` for tags not supported.
## Getting all the experiments
You can also order by metrics to know which run generated the best results:
### Filtering runs
-You can also look for a run with a specific combination in the hyperparameters using the parameter `filter_string`. Use `params` to access run's parameters and `metrics` to access metrics logged in the run:
+You can also look for a run with a specific combination in the hyperparameters using the parameter `filter_string`. Use `params` to access the run's parameters and `metrics` to access metrics logged in the run. MLflow supports expressions joined by the `AND` keyword (the syntax does not support `OR`):
```python
mlflow.search_runs(experiment_ids=["1234-5678-90AB-CDEFG"], filter_string="params.num_boost_round='100'")
```
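To make the AND-only semantics concrete, here is a toy evaluator for the equality subset of the filter grammar. It is illustrative only; MLflow's real parser supports more comparators and quoting rules than this sketch handles:

```python
def matches(run, filter_string):
    """Check a run dict against AND-joined scope.key='value' equality clauses."""
    for clause in filter_string.split(" AND "):
        left, right = clause.split("=", 1)
        scope, key = left.strip().split(".", 1)   # e.g. "params.num_boost_round"
        expected = right.strip().strip("'")
        if run.get(scope, {}).get(key) != expected:
            return False                           # every clause must hold (no OR)
    return True

runs = [
    {"params": {"num_boost_round": "100", "lr": "0.1"}},
    {"params": {"num_boost_round": "100", "lr": "0.3"}},
]
hits = [r for r in runs if matches(r, "params.num_boost_round='100' AND params.lr='0.1'")]
print(len(hits))  # 1
```

Because every clause must hold, a query needing OR logic has to be expressed as two separate `search_runs` calls whose results you merge yourself.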
-
+ ### Filter runs by status

You can also filter experiments by status. It's useful for finding runs that are running, completed, canceled, or failed. In MLflow, `status` is an `attribute`, so we can access this value using the expression `attributes.status`. The following table shows the possible values:
machine-learning How To Track Monitor Analyze Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-track-monitor-analyze-runs.md
This article shows how to do the following tasks:
* Run search over your job history.
* Cancel or fail jobs.
* Monitor the job status by email notification.
+* Monitor your job resources (preview).
> [!TIP]
To cancel a job in the studio, use the following steps:
1. See [how to create and manage log alerts using Azure Monitor](../azure-monitor/alerts/alerts-log.md).
+## Monitor your job resources (preview)
+
+Navigate to your job in the studio and select the **Monitoring** tab. This view provides insights on your job's resources on a 30-day rolling basis.
++
+>[!NOTE]
+>This view supports only compute that is managed by AzureML.
+>Jobs with a runtime of less than 5 minutes will not have enough data to populate this view.
+
## Next steps
machine-learning How To Troubleshoot Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-online-endpoints.md
Make sure model and code artifacts are registered to the same workspace as the d
az ml code show --name <code-name> --version <version> ```
- You can also check if the blobs are present in the workspace storage account.
+You can also check if the blobs are present in the workspace storage account.
- For example, if the blob is `https://foobar.blob.core.windows.net/210212154504-1517266419/WebUpload/210212154504-1517266419/GaussianNB.pkl`, you can use this command to check if it exists:
+
+ ```azurecli
+ az storage blob exists --account-name foobar --container-name 210212154504-1517266419 --name WebUpload/210212154504-1517266419/GaussianNB.pkl --subscription <sub-name>
+ ```
+
+- If the blob is present, you can use this command to obtain the logs from the storage initializer:
- `az storage blob exists --account-name foobar --container-name 210212154504-1517266419 --name WebUpload/210212154504-1517266419/GaussianNB.pkl --subscription <sub-name>`
+ ```azurecli
+ az ml online-deployment get-logs --endpoint-name <endpoint-name> --name <deployment-name> --container storage-initializer
+ ```
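If you script the blob check above, the blob URL needs splitting into the account, container, and blob name that `az storage blob exists` expects. A small sketch (the helper name is ours; the URL is the example from this section):

```python
# Sketch: split a workspace blob URL into the parts `az storage blob exists` needs
from urllib.parse import urlparse

def blob_parts(blob_url):
    parsed = urlparse(blob_url)
    account = parsed.netloc.split(".")[0]            # 'foobar' from foobar.blob.core.windows.net
    container, _, name = parsed.path.lstrip("/").partition("/")
    return account, container, name

account, container, name = blob_parts(
    "https://foobar.blob.core.windows.net/210212154504-1517266419/"
    "WebUpload/210212154504-1517266419/GaussianNB.pkl"
)
```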
#### azureml-fe not ready

The front-end component (azureml-fe) that routes incoming inference requests to deployed services automatically scales as needed. It's installed during your k8s-extension installation.
For more information, see [Resolve resource not found errors](../azure-resource-
### ERROR: OperationCancelled
-Azure operations have a certain priority level and are executed from highest to lowest. This error happens when your operation happened to be overridden by another operation that has a higher priority. Retrying the operation might allow it to be performed without cancellation.
+Below is a list of reasons you might run into this error:
+
+* [Operation was cancelled by another operation which has a higher priority](#operation-cancelled-by-another-higher-priority-operation)
+* [Operation was cancelled due to a previous operation waiting for lock confirmation](#operation-cancelled-waiting-for-lock-confirmation)
+
+#### Operation cancelled by another higher priority operation
+
+Azure operations have a certain priority level and are executed from highest to lowest. This error happens when your operation is overridden by another operation that has a higher priority.
+
+Retrying the operation might allow it to be performed without cancellation.
+
+#### Operation cancelled waiting for lock confirmation
+
+Azure operations have a brief waiting period after being submitted, during which they retrieve a lock to avoid race conditions. This error happens when the operation you submitted matches another operation that is still waiting for confirmation that it has received the lock to proceed. It may indicate that you submitted a very similar request too soon after the initial request.
+
+Retrying the operation after waiting a few seconds up to a minute may allow it to be performed without cancellation.
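Because both causes are transient, a simple retry with a short, growing wait is usually enough. A sketch under stated assumptions (the `submit` callable and `RuntimeError` stand in for your operation and its cancellation error; they are not Azure SDK types):

```python
import time

def submit_with_retry(submit, attempts=3, base_wait=2.0):
    """Retry a transiently cancelled operation, waiting a bit longer each try."""
    for attempt in range(attempts):
        try:
            return submit()
        except RuntimeError:             # stand-in for an OperationCancelled failure
            if attempt == attempts - 1:
                raise                    # out of retries; surface the error
            time.sleep(base_wait * (attempt + 1))
```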
### ERROR: InternalServerError
When you access online endpoints with REST requests, the returned status codes a
- [Deploy and score a machine learning model with a managed online endpoint](how-to-deploy-managed-online-endpoints.md)
- [Safe rollout for online endpoints](how-to-safely-rollout-managed-endpoints.md)
-- [Online endpoint YAML reference](reference-yaml-endpoint-online.md)
+- [Online endpoint YAML reference](reference-yaml-endpoint-online.md)
machine-learning How To Tune Hyperparameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-tune-hyperparameters.md
command_job_for_sweep = command_job(
This code defines a search space with two parameters - `learning_rate` and `keep_probability`. `learning_rate` has a normal distribution with mean value 10 and a standard deviation of 3. `keep_probability` has a uniform distribution with a minimum value of 0.05 and a maximum value of 0.1.
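To get a feel for what these two distributions sample, here's a quick illustration with the standard library (the sweep service performs the actual sampling; this is only a sketch of the configured distributions):

```python
import random

# normal(mu=10, sigma=3), as configured for learning_rate
learning_rate = random.normalvariate(10, 3)

# uniform over [0.05, 0.1], as configured for keep_probability
keep_probability = random.uniform(0.05, 0.1)
```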
-For the CLI, you can use the [sweep job YAML schema](./reference-yaml-job-sweep.md)., to define the search space in your YAML:
+For the CLI, you can use the [sweep job YAML schema](./reference-yaml-job-sweep.md) to define the search space in your YAML:
```YAML
search_space:
  conv_size:
sweep_job.early_termination = None
### Picking an early termination policy
-* For a conservative policy that provides savings without terminating promising jobs, consider a Median Stopping Policy with `evaluation_interval` 1 and `delay_evaluation` 5. These are conservative settings, that can provide approximately 25%-35% savings with no loss on primary metric (based on our evaluation data).
+* For a conservative policy that provides savings without terminating promising jobs, consider a Median Stopping Policy with `evaluation_interval` 1 and `delay_evaluation` 5. These are conservative settings that can provide approximately 25%-35% savings with no loss on primary metric (based on our evaluation data).
* For more aggressive savings, use Bandit Policy with a smaller allowable slack or Truncation Selection Policy with a larger truncation percentage.

## Set limits for your sweep job
Control your resource budget by setting limits for your sweep job.
* `max_total_trials`: Maximum number of trial jobs. Must be an integer between 1 and 1000.
* `max_concurrent_trials`: (optional) Maximum number of trial jobs that can run concurrently. If not specified, all jobs launch in parallel. If specified, must be an integer between 1 and 100.
-* `timeout`: Maximum time in minutes the entire sweep job is allowed to run. Once this limit is reached the system will cancel the sweep job, including all its trials.
+* `timeout`: Maximum time in seconds the entire sweep job is allowed to run. Once this limit is reached the system will cancel the sweep job, including all its trials.
* `trial_timeout`: Maximum time in seconds each trial job is allowed to run. Once this limit is reached the system will cancel the trial.

>[!NOTE]
Control your resource budget by setting limits for your sweep job.
>The number of concurrent trial jobs is gated on the resources available in the specified compute target. Ensure that the compute target has the available resources for the desired concurrency.

```Python
-sweep_job.set_limits(max_total_trials=20, max_concurrent_trials=4, timeout=120)
+sweep_job.set_limits(max_total_trials=20, max_concurrent_trials=4, timeout=1200)
```
-This code configures the hyperparameter tuning experiment to use a maximum of 20 total trial jobs, running four trial jobs at a time with a timeout of 120 minutes for the entire sweep job.
+This code configures the hyperparameter tuning experiment to use a maximum of 20 total trial jobs, running four trial jobs at a time with a timeout of 1200 seconds for the entire sweep job.
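Since `timeout` is now expressed in seconds, a minutes-based budget needs an explicit conversion. A small sketch (values mirror the snippet above; the commented call assumes a `sweep_job` object already exists):

```python
# timeout is in seconds; convert a 20-minute sweep budget explicitly
sweep_budget_minutes = 20
timeout_seconds = sweep_budget_minutes * 60
# sweep_job.set_limits(max_total_trials=20, max_concurrent_trials=4, timeout=timeout_seconds)
```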
## Configure hyperparameter tuning experiment
You can visualize all of your hyperparameter tuning jobs in the [Azure Machine L
:::image type="content" source="media/how-to-tune-hyperparameters/hyperparameter-tuning-metrics.png" alt-text="Hyperparameter tuning metrics chart":::

-- **Parallel Coordinates Chart**: This visualization shows the correlation between primary metric performance and individual hyperparameter values. The chart is interactive via movement of axes (click and drag by the axis label), and by highlighting values across a single axis (click and drag vertically along a single axis to highlight a range of desired values). The parallel coordinates chart includes an axis on the right most portion of the chart that plots the best metric value corresponding to the hyperparameters set for that job instance. This axis is provided in order to project the chart gradient legend onto the data in a more readable fashion.
+- **Parallel Coordinates Chart**: This visualization shows the correlation between primary metric performance and individual hyperparameter values. The chart is interactive via movement of axes (click and drag by the axis label), and by highlighting values across a single axis (click and drag vertically along a single axis to highlight a range of desired values). The parallel coordinates chart includes an axis on the rightmost portion of the chart that plots the best metric value corresponding to the hyperparameters set for that job instance. This axis is provided in order to project the chart gradient legend onto the data in a more readable fashion.
:::image type="content" source="media/how-to-tune-hyperparameters/hyperparameter-tuning-parallel-coordinates.png" alt-text="Hyperparameter tuning parallel coordinates chart":::
machine-learning How To Create Register Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-create-register-datasets.md
With TabularDatasets, you can specify a time stamp from a column in the data or
Create a TabularDataset with [the Python SDK](#create-a-tabulardataset) or [Azure Machine Learning studio](../how-to-connect-data-ui.md#create-datasets).

>[!NOTE]
-> [Automated ML](../concept-automated-ml.md) workflows generated via the Azure Machine Learning studio currently only support TabularDatasets.
+> [Automated ML](../concept-automated-ml.md) workflows generated via the Azure Machine Learning studio currently only support TabularDatasets.
+>
+>[!NOTE]
+> For TabularDatasets generated from [SQL query results](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#from-sql-query-query--validate-true--set-column-types-none--query-timeout-30-), T-SQL (for example, a 'WITH' subquery) and duplicate column names are not supported. Complex queries like T-SQL can cause performance issues. Duplicate column names in a dataset can cause ambiguity issues.
## Access datasets in a virtual network
titanic_ds = titanic_ds.register(workspace = workspace,
* Learn [how to train with datasets](../how-to-train-with-datasets.md).
* Use automated machine learning to [train with TabularDatasets](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb).
-* For more dataset training examples, see the [sample notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/work-with-data/).
+* For more dataset training examples, see the [sample notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/work-with-data/).
marketplace Azure Consumption Commitment Enrollment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-consumption-commitment-enrollment.md
An offer must meet the following requirements to be enrolled in the MACC program
1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home).
1. On the Home page, select the **Marketplace offers** tile.
- [ ![Illustrates the Marketplace offers tile on the Partner Center Home page.](./media/workspaces/partner-center-home.png) ](./media/workspaces/partner-center-home.png#lightbox)
+ [![Illustrates the Marketplace offers tile on the Partner Center Home page.](./media/workspaces/partner-center-home.png)](./media/workspaces/partner-center-home.png#lightbox)
1. On the Marketplace offers page, select the offer you want to see.
1. On the **Offer overview** page, in the **Marketplace programs** section, the **Microsoft Azure consumption commitment** status will show either _Enrolled_ or _Not Enrolled_.
- [ ![Screenshot of the Offer overview page in Partner Center that shows the Microsoft Azure consumption commitment status.](media/azure-benefit/enrolled-workspaces.png) ](media/azure-benefit/enrolled-workspaces.png#lightbox)
+ [![Screenshot of the Offer overview page in Partner Center that shows the Microsoft Azure consumption commitment status.](media/azure-benefit/enrolled-workspaces.png)](./media/azure-benefit/enrolled-workspaces.png#lightbox)
***Figure 1: Offer that is enrolled in the MACC program***
marketplace Marketplace Power Bi Visual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-power-bi-visual.md
Title: Planning a Power BI visual offer in Partner Center for Microsoft AppSource description: Learn what information you'll need on hand to submit your Power BI visual offer in Partner Center.--++
marketplace Pc Saas Fulfillment Subscription Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/partner-center-portal/pc-saas-fulfillment-subscription-api.md
description: Learn how to use the Subscription APIs, which are part of the
Previously updated : 07/06/2022 Last updated : 08/10/2022
Response body example:
"autoRenew": true/false, "isTest": true/false, "isFreeTrial": false,
- "allowedCustomerOperations": ["Delete", "Update", "Read"],
+ "allowedCustomerOperations": <CSP purchases>["Read"] <All Others> ["Delete", "Update", "Read"],
"sandboxType": "None", "lastModified": "0001-01-01T00:00:00", "quantity": 5,
marketplace Power Bi Visual Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/power-bi-visual-availability.md
Title: Defining availability of a Power BI visual offer in Partner Center for Microsoft AppSource description: Learn how to define the availability of a Power VI visual offer in Partner Center.--++
marketplace Power Bi Visual Manage Names https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/power-bi-visual-manage-names.md
Title: Manage names in a Power BI visual offer in Partner Center for Microsoft AppSource description: Learn how to manage names in a Power BI visual offer in Partner Center for Microsoft AppSource.--++
marketplace Power Bi Visual Offer Listing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/power-bi-visual-offer-listing.md
Title: Configure Power BI visual offer listing details in Partner Center for Microsoft AppSource description: Learn how to configure Power BI visual offer listing details in Partner Center for Microsoft AppSource.--++
marketplace Power Bi Visual Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/power-bi-visual-offer-setup.md
Title: Create a Power BI visual offer in Partner Center for Microsoft AppSource description: Learn how to create a Power BI visual offer in Partner Center.--++ Previously updated : 07/20/2022 Last updated : 08/09/2022 # Create a Power BI visual offer
Review [Plan a Power BI visual offer](marketplace-power-bi-visual.md). It will e
> [!NOTE]
> This capability is currently in Public Preview.
- **My offer requires purchase of a service or offers additional in-app purchase** to manage licenses and transactions independently.
- > [!NOTE]
- > This capability is currently in Public Preview.
- **My offer does not require purchase of a service and does not offer in-app purchases** to provide a free offer.
1. Under **Power BI certification** (optional), read the description carefully and if you want to request [Power BI certification](/power-bi/developer/visuals/power-bi-custom-visuals-certified), select the check box. [Certified](/power-bi/developer/visuals/power-bi-custom-visuals-certified) Power BI visuals meet certain specified code requirements that the Microsoft Power BI team has tested and approved. We recommend that you submit and publish your Power BI visual *before* you request certification, because the certification process takes extra time that could delay publishing of your offer.
marketplace Power Bi Visual Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/power-bi-visual-plans.md
Title: Create Power BI visual plans in Partner Center for Microsoft AppSource description: Learn how to create plans for a Power BI visual offer in Partner Center for Microsoft AppSource.--++
marketplace Power Bi Visual Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/power-bi-visual-properties.md
Title: Configure Power BI visual offer properties in Partner Center for Microsoft AppSource description: Learn how to configure Power BI visual offer properties in Partner Center for Microsoft AppSource.--++
marketplace Power Bi Visual Technical Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/power-bi-visual-technical-configuration.md
Title: Set up Power BI visual offer technical configuration in Partner Center for Microsoft AppSource description: Learn how to set up Power BI visual offer technical configuration in Partner Center for Microsoft AppSource.--++
migrate Onboard To Azure Arc With Azure Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/onboard-to-azure-arc-with-azure-migrate.md
Once the vCenter Server discovery has been completed, software inventory (discov
If you receive an error when onboarding to Azure Arc using the Azure Migrate appliance, the following section can help identify the probable cause and suggested steps to resolve your problem.
-If you don't see the error code listed below or if the error code starts with **_AZCM_**, refer to [this guide for troubleshooting Azure Arc ](../azure-arc/servers/troubleshoot-agent-onboard.md)
+If you don't see the error code listed below or if the error code starts with **_AZCM_**, refer to [this guide for troubleshooting Azure Arc](../azure-arc/servers/troubleshoot-agent-onboard.md)
### Error 60001 - UnableToConnectToPhysicalServer
migrate Tutorial Migrate Webapps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-webapps.md
Title: Migrate ASP.NET web apps to Azure App Service using Azure Migrate
+ Title: Modernize ASP.NET web apps to Azure App Service code
description: At-scale migration of ASP.NET web apps to Azure App Service using Azure Migrate Previously updated : 06/21/2022 Last updated : 08/09/2022
-# Migrate ASP.NET web apps to Azure App Service with Azure Migrate
+# Modernize ASP.NET web apps to Azure App Service code
This article shows you how to migrate ASP.NET web apps at-scale to [Azure App Service](https://azure.microsoft.com/services/app-service/) using Azure Migrate.
orbital Overview Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/overview-analytics.md
+
+ Title: What is Azure Orbital Analytics?
+description: Azure Orbital Analytics are Azure capabilities that allow you to discover and distribute the most valuable insights from your spaceborne data.
++++ Last updated : 08/08/2022+++
+# What is Azure Orbital Analytics?
+
+Azure Orbital Analytics is a set of Azure capabilities that use spaceborne data and AI to help you discover and distribute the most valuable insights from your spaceborne data and take action in less time.
+
+## What it provides
+
+Azure Orbital Analytics provides the ability to downlink spaceborne data from Azure Orbital Ground Station (AOGS), first- or third-party archives, or customer-acquired data directly into Azure. This data is efficiently stored using Azure Data Platform components. From there, you can convert raw spaceborne sensor data into analysis-ready data using Azure Orbital Analytics processing pipelines.
+
+## Integrations
+
+Derive insights on data by applying AI models, integrating applications, and more. Partner AI models and Microsoft tools extract the highest precision results. Finally, deliver data to various endpoints such as Microsoft Teams, Power Platform, or other open-source locations. Azure Orbital Analytics enables scenarios including land classification, asset monitoring, object detection, and more.
+
+## Partnerships
+
+Azure Orbital Analytics is the pathway between satellite operators and Microsoft customers. Partnerships with Airbus, Blackshark, and Orbital Insight enable information extraction and publishing to Esri's ArcGIS workflows.
+
+Orbital Analytics for Azure Synapse applies artificial intelligence over satellite imagery at scale using Azure resources.
+
+## Next steps
+
+- [Geospatial reference architecture](./geospatial-reference-architecture.md)
+- [Spaceborne data analysis with Azure Synapse Analytics](/azure/architecture/industries/aerospace/geospatial-processing-analytics)
orbital Prepare Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/prepare-network.md
Steps:
> [!NOTE] > Address range needs to be at least /24 (example 10.0.0.0/23)
+Here is an example of a typical VNET setup with a subnet delegated to Azure Orbital Ground Station.
++

## Setting up the contact profile

Prerequisites:
Make sure the contact profile properties are set as follows:
The platform pre-reserves IPs in the subnet when the contact is scheduled. These IPs represent the platform side endpoints for each link. IPs will be unique between contacts, and if multiple concurrent contacts are using the same subnet, we guarantee those IPs to be distinct. The service will fail to schedule the contact and an error will be returned if the service runs out of IPs or can't allocate an IP.
-When you create a contact, you can find these IPs by viewing the contact properties. Select JSON view in the portal or use the GET contact API call to view the contact properties. The parameters of interest are below:
+When you create a contact, you can find these IPs by viewing the contact properties. Select JSON view in the portal or use the GET contact API call to view the contact properties. Make sure to use the current API version of 2022-03-01. The parameters of interest are below:
| Parameter | Usage |
||-|
When you create a contact, you can find these IPs by viewing the contact propert
You can use this information to set up network policies or to distinguish between simultaneous contacts to the same endpoint.

+
> [!NOTE]
-> - The source and destination IPs are always taken from the subnet address range
+> - The source and destination IPs are always taken from the subnet address range.
> - Only one destination IP is present. Any link in client mode should connect to this IP and the links are differentiated based on port. > - Many source IPs can be present. Links in server mode will connect to your specified IP address in the contact profile. The flows will originate from the source IPs present in this field and target the port as per the link details in the contact profile. There is no fixed assignment of link to source IP so please make sure to allow all IPs in any networking setup or firewalls.
Here's how to set up the link flows based on direction and TCP or UDP preference.
| Setting | TCP Client | TCP Server | UDP Client | UDP Server |
|--|-|--|-|--|
-| Contact Profile Link ipAddress | Blank | Routable IP from delegated subnet | Blank | Routable IP from delegated subnet |
-| Contact Profile Link port | Unique port in 49152-65535 | Unique port in 49152-65535 | Unique port in 49152-65535 | Unique port in 49152-65535 |
+| Contact Profile Link ipAddress | Blank | Routable IP from delegated subnet | Blank | Not applicable |
+| Contact Profile Link port | Unique port in 49152-65535 | Unique port in 49152-65535 | Unique port in 49152-65535 | Not applicable |
| **Output** | | | | |
| Contact Object destinationIP | Connect to this IP | Not applicable | Connect to this IP | Not applicable |
-| Contact Object sourceIP | Not applicable | Link will come from one of these IPs | Not applicable | Link will come from one of these IPs |
+| Contact Object sourceIP | Not applicable | Link will come from one of these IPs | Not applicable | Not applicable |
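The link ports used in these tables must fall in the dynamic range. A quick validation sketch (the helper name is ours, not part of any Azure Orbital API):

```python
def is_valid_link_port(port):
    """Contact profile links must use a unique port in the dynamic range 49152-65535."""
    return 49152 <= port <= 65535
```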
Here's how to set up the link flows based on direction and TCP or UDP preference.
| Setting | TCP Client | TCP Server | UDP Client | UDP Server |
|--|-|--|-|--|
-| Contact Profile Link ipAddress | Blank | Routable IP from delegated subnet | Blank | Routable IP from delegated subnet |
-| Contact Profile Link port | Unique port in 49152-65535 | Unique port in 49152-65535 | Unique port in 49152-65535 | Unique port in 49152-65535 |
+| Contact Profile Link ipAddress | Blank | Routable IP from delegated subnet | Not applicable | Routable IP from delegated subnet |
+| Contact Profile Link port | Unique port in 49152-65535 | Unique port in 49152-65535 | Not applicable | Unique port in 49152-65535 |
| **Output** | | | | |
-| Contact Object destinationIP | Connect to this IP | Not applicable | Connect to this IP | Not applicable |
+| Contact Object destinationIP | Connect to this IP | Not applicable | Not applicable | Not applicable |
| Contact Object sourceIP | Not applicable | Link will come from one of these IPs | Not applicable | Link will come from one of these IPs |

## Next steps

- [Register Spacecraft](register-spacecraft.md)
-- [Schedule a contact](schedule-contact.md)
+- [Schedule a contact](schedule-contact.md)
postgresql Concepts Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-extensions.md
The following extensions are available in Azure Database for PostgreSQL - Flexib
> |[pg_freespacemap](https://www.postgresql.org/docs/13/pgfreespacemap.html) | 1.2 | examine the free space map (FSM)|
> |[pg_partman](https://github.com/pgpartman/pg_partman) | 4.6.1 | Extension to manage partitioned tables by time or ID |
> |[pg_prewarm](https://www.postgresql.org/docs/13/pgprewarm.html) | 1.2 | prewarm relation data|
-> |[pg_repack](https://github.com/reorg/pg_repack) | 1.4.7 | reorganize tables in PostgreSQL databases with minimal locks|
> |[pg_stat_statements](https://www.postgresql.org/docs/13/pgstatstatements.html) | 1.8 | track execution statistics of all SQL statements executed|
> |[pg_trgm](https://www.postgresql.org/docs/13/pgtrgm.html) | 1.5 | text similarity measurement and index searching based on trigrams|
> |[pg_visibility](https://www.postgresql.org/docs/13/pgvisibility.html) | 1.2 | examine the visibility map (VM) and page-level visibility info|
The following extensions are available in Azure Database for PostgreSQL - Flexib
> |[pg_freespacemap](https://www.postgresql.org/docs/13/pgfreespacemap.html) | 1.2 | examine the free space map (FSM)|
> |[pg_partman](https://github.com/pgpartman/pg_partman) | 4.5.0 | Extension to manage partitioned tables by time or ID |
> |[pg_prewarm](https://www.postgresql.org/docs/13/pgprewarm.html) | 1.2 | prewarm relation data|
-> |[pg_repack](https://github.com/reorg/pg_repack) | 1.4.7 | reorganize tables in PostgreSQL databases with minimal locks|
> |[pg_stat_statements](https://www.postgresql.org/docs/13/pgstatstatements.html) | 1.8 | track execution statistics of all SQL statements executed|
> |[pg_trgm](https://www.postgresql.org/docs/13/pgtrgm.html) | 1.5 | text similarity measurement and index searching based on trigrams|
> |[pg_visibility](https://www.postgresql.org/docs/13/pgvisibility.html) | 1.2 | examine the visibility map (VM) and page-level visibility info|
The following extensions are available in Azure Database for PostgreSQL - Flexib
> |[pg_freespacemap](https://www.postgresql.org/docs/12/pgfreespacemap.html) | 1.2 | examine the free space map (FSM)|
> |[pg_partman](https://github.com/pgpartman/pg_partman) | 4.5.0 | Extension to manage partitioned tables by time or ID |
> |[pg_prewarm](https://www.postgresql.org/docs/12/pgprewarm.html) | 1.2 | prewarm relation data|
-> |[pg_repack](https://github.com/reorg/pg_repack) | 1.4.7 | reorganize tables in PostgreSQL databases with minimal locks|
> |[pg_stat_statements](https://www.postgresql.org/docs/12/pgstatstatements.html) | 1.7 | track execution statistics of all SQL statements executed|
> |[pg_trgm](https://www.postgresql.org/docs/12/pgtrgm.html) | 1.4 | text similarity measurement and index searching based on trigrams|
> |[pg_visibility](https://www.postgresql.org/docs/12/pgvisibility.html) | 1.2 | examine the visibility map (VM) and page-level visibility info|
The following extensions are available in Azure Database for PostgreSQL - Flexib
> |[pg_freespacemap](https://www.postgresql.org/docs/11/pgfreespacemap.html) | 1.2 | examine the free space map (FSM)|
> |[pg_partman](https://github.com/pgpartman/pg_partman) | 4.5.0 | Extension to manage partitioned tables by time or ID |
> |[pg_prewarm](https://www.postgresql.org/docs/11/pgprewarm.html) | 1.2 | prewarm relation data|
-> |[pg_repack](https://github.com/reorg/pg_repack) | 1.4.7 | reorganize tables in PostgreSQL databases with minimal locks|
> |[pg_stat_statements](https://www.postgresql.org/docs/11/pgstatstatements.html) | 1.6 | track execution statistics of all SQL statements executed|
> |[pg_trgm](https://www.postgresql.org/docs/11/pgtrgm.html) | 1.4 | text similarity measurement and index searching based on trigrams|
> |[pg_visibility](https://www.postgresql.org/docs/11/pgvisibility.html) | 1.2 | examine the visibility map (VM) and page-level visibility info|
postgresql How To Migrate Single To Flexible Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-single-to-flexible-cli.md
Title: "Migrate from Single Server to Flexible Server by using the Azure CLI"
-description: Learn about migrating your Single Server databases to Azure database for PostgreSQL Flexible Server by using the Azure CLI.
+description: Learn about migrating your Single Server databases to Azure Database for PostgreSQL Flexible Server by using the Azure CLI.
To find out if custom DNS is used for name resolution, go to the virtual network
## Next steps

-- For a successful end-to-end migration, follow the post-migration steps in [Migrate from Azure Database for PostgreSQL Single Server to Flexible Server](./concepts-single-to-flexible.md).
+- For a successful end-to-end migration, follow the post-migration steps in [Migrate from Azure Database for PostgreSQL Single Server to Flexible Server](./concepts-single-to-flexible.md).
private-link Disable Private Endpoint Network Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/disable-private-endpoint-network-policy.md
ms.devlang: azurecli
By default, network policies are disabled for a subnet in a virtual network. To utilize network policies like UDR and NSG support, network policy support must be enabled for the subnet. This setting applies only to private endpoints within the subnet and affects all private endpoints there. For other resources in the subnet, access is controlled based on security rules in the network security group.
-> [!IMPORTANT]
-> NSG and UDR support for private endpoints are in public preview on select regions. For more information, see [Public preview of Private Link UDR Support](https://azure.microsoft.com/updates/public-preview-of-private-link-udr-support/) and [Public preview of Private Link Network Security Group Support](https://azure.microsoft.com/updates/public-preview-of-private-link-network-security-group-support/).
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- You can use the following to enable or disable the setting: * Azure portal
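The setting can also be toggled from the Azure CLI. A minimal sketch (resource names are placeholders; verify the flag with `az network vnet subnet update --help`):

```azurecli
# Enable network policies (NSG and UDR enforcement) for private endpoints on a subnet.
az network vnet subnet update \
  --resource-group myResourceGroup \
  --vnet-name myVirtualNetwork \
  --name mySubnet \
  --disable-private-endpoint-network-policies false

# Disable them again.
az network vnet subnet update \
  --resource-group myResourceGroup \
  --vnet-name myVirtualNetwork \
  --name mySubnet \
  --disable-private-endpoint-network-policies true
```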
private-link Private Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-overview.md
Title: What is a private endpoint?+ description: In this article, you'll learn how to use the Private Endpoint feature of Azure Private Link. # Customer intent: As someone who has a basic network background but is new to Azure, I want to understand the capabilities of private endpoints so that I can securely connect to my Azure PaaS services within the virtual network. Previously updated : 05/31/2022 Last updated : 08/10/2022 + # What is a private endpoint?
The DNS settings that you use to connect to a private-link resource are importan
The network interface associated with the private endpoint contains the information that's required to configure your DNS. The information includes the FQDN and private IP address for a private-link resource. For complete, detailed information about recommendations to configure DNS for private endpoints, see [Private endpoint DNS configuration](private-endpoint-dns.md).
-
+ ## Limitations
-
-The following table list the known limitations to the use of private endpoints:
-
-| Limitation | Description |Mitigation |
-| | | |
-| Traffic that's destined for a private endpoint through a user-defined route (UDR) might be asymmetric. | Return traffic from a private endpoint bypasses a network virtual appliance (NVA) and attempts to return to the source virtual machine. | Source network address translation (SNAT) is used to ensure symmetric routing. For all traffic to a private endpoint that uses a UDR, we recommend that you use SNAT for traffic at the NVA. |
-
-> [!IMPORTANT]
-> Network security group (NSG) and UDR support for private endpoints is in preview in select regions. For more information, see [Preview of Private Link UDR support](https://azure.microsoft.com/updates/public-preview-of-private-link-udr-support/) and [Preview of Private Link network security group support](https://azure.microsoft.com/updates/public-preview-of-private-link-network-security-group-support/).
->
-> This preview version is provided without a service-level agreement, and we don't recommend using it for production workloads. Certain features might not be supported or might have constrained capabilities.
->
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-## Limitations of the preview version
-
-### Network security groups
-
-| Limitation | Description | Mitigation |
-| - | -- | - |
-| Obtaining effective routes and security rules isn't available on a private-endpoint network interface. | You can't navigate to the network interface to view relevant information about the effective routes and security rules. | Q4CY2021 |
-| NSG flow logs aren't supported. | NSG flow logs don't work for inbound traffic that's destined for a private endpoint. | No mitigation information is available at this time. |
-| Intermittent drops with zone-redundant storage (ZRS) storage accounts. | Customers that use ZRS storage accounts might see periodic intermittent drops, even with *allow NSG* applied on a storage private-endpoint subnet. | No mitigation information is available at this time. |
-| Intermittent drops with Azure Key Vault. | Customers that use Azure Key Vault might see periodic intermittent drops, even with *allow NSG* applied on a Key Vault private-endpoint subnet. | No mitigation information is available at this time. |
-| The number of address prefixes per NSG is limited. | Having more than 500 address prefixes in an NSG in a single rule isn't supported. | No mitigation information is available at this time. |
-| AllowVirtualNetworkAccess flag | Customers that set virtual network peering on their virtual network (virtual network A) with the *AllowVirtualNetworkAccess* flag set to *false* on the peering link to another virtual network (virtual network B) can't use the *VirtualNetwork* tag to deny traffic from virtual network B accessing private endpoint resources. The customers need to explicitly place a block for virtual network B's address prefix to deny traffic to the private endpoint. | No mitigation information is available at this time. |
-| Dual port NSG rules are unsupported. | If multiple port ranges are used with NSG rules, only the first port range is honored for allow rules and deny rules. Rules with multiple port ranges are defaulted to *deny all* instead of to denying specific ports. </br><br>For more information, see the UDR rule example in the next table. | No mitigation information is available at this time. |
-| | |
+
+The following information lists the known limitations of private endpoints:
+
+### Network security group
+
+| Limitation | Description |
+| | |
+| Effective routes and security rules unavailable for private endpoint network interface. | Effective routes and security rules won't be displayed for the private endpoint NIC in the Azure portal. |
+| NSG flow logs unsupported. | NSG flow logs unavailable for inbound traffic destined for a private endpoint. |
+| Intermittent drops with zone-redundant storage (ZRS) storage accounts. | Customers that use ZRS storage accounts might see periodic intermittent drops, even with *allow NSG* applied on a storage private-endpoint subnet. |
+| Intermittent drops with Azure Key Vault. | Customers that use Azure Key Vault might see periodic intermittent drops, even with *allow NSG* applied on a Key Vault private-endpoint subnet. |
+| The number of address prefixes per NSG is limited. | Having more than 500 address prefixes in an NSG in a single rule is unsupported. |
+| AllowVirtualNetworkAccess flag | Customers that set virtual network peering on their virtual network (virtual network A) with the *AllowVirtualNetworkAccess* flag set to *false* on the peering link to another virtual network (virtual network B) can't use the *VirtualNetwork* tag to deny traffic from virtual network B accessing private endpoint resources. The customers need to explicitly place a block for virtual network B's address prefix to deny traffic to the private endpoint. |
+| No more than 50 members in an Application Security Group. | A maximum of 50 IP configurations can be tied to each ASG that's coupled to the NSG on the private endpoint subnet. Connection failures may occur with more than 50 members. |
+| Destination port ranges supported up to a factor of 250K. | Destination port ranges are supported as a multiplication of SourceAddressPrefixes, DestinationAddressPrefixes, and DestinationPortRanges. </br></br> Example inbound rule: </br> 1 source * 1 destination * 4K portRanges = 4K Valid </br> 10 sources * 10 destinations * 10 portRanges = 1K Valid </br> 50 sources * 50 destinations * 50 portRanges = 125K Valid </br> 50 sources * 50 destinations * 100 portRanges = 250K Valid </br> 100 sources * 100 destinations * 100 portRanges = 1M Invalid, NSG has too many sources/destinations/ports. |
+| Source port filtering is interpreted as * | Source port filtering isn't actively used as a valid scenario of traffic filtering for traffic destined to a private endpoint. |
+| Feature unavailable in select regions. | Currently unavailable in the following regions: </br> West India </br> UK North </br> UK South 2 </br> Australia Central 2 </br> South Africa West </br> Brazil Southeast |
+| Dual port NSG rules are unsupported. | If multiple port ranges are used with NSG rules, only the first port range is honored for allow rules and deny rules. Rules with multiple port ranges are defaulted to *deny all* instead of to denying specific ports. </br><br>For more information, see the UDR rule example in the next table. |
The following table shows an example of a dual port NSG rule:
-| Priority | Source&nbsp;port&nbsp; | Destination&nbsp;port | Action | Effective&nbsp;action |
+| Priority | Source port | Destination port | Action | Effective action |
| -- | -- | -- | -- | -- |
| 10 | 10-12 | 10-12 | Allow/Deny | Single port range in source/destination ports will work as expected. |
| 10 | 10-12, 13-14 | 14-15, 16-17 | Allow | Only source ports 10-12 and destination ports 14-15 will be allowed. |
| 10 | 10-12, 13-14 | 120-130, 140-150 | Deny | Traffic from all source ports will be denied to all destination ports, because there are multiple source and destination port ranges. |
| 10 | 10-12, 13-14 | 120-130 | Deny | Traffic from all source ports will be denied to destination ports 120-130 only. There are multiple source port ranges and a single destination port range. |
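The destination-port-range factor limit described earlier lends itself to a quick pre-flight check before authoring a rule. A minimal sketch (the helper names are illustrative and not part of any Azure SDK):

```python
FACTOR_LIMIT = 250_000  # documented maximum rule "factor"

def nsg_rule_factor(sources: int, destinations: int, port_ranges: int) -> int:
    """The 'factor' of a rule: sources * destinations * destination port ranges."""
    return sources * destinations * port_ranges

def rule_within_limit(sources: int, destinations: int, port_ranges: int) -> bool:
    """True when the rule stays within the documented 250K factor limit."""
    return nsg_rule_factor(sources, destinations, port_ranges) <= FACTOR_LIMIT

# Examples from the limitation table:
print(rule_within_limit(1, 1, 4000))     # True: factor 4K
print(rule_within_limit(50, 50, 100))    # True: factor 250K, exactly at the limit
print(rule_within_limit(100, 100, 100))  # False: factor 1M exceeds the limit
```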
-| | |
-| Limitation | Description | Mitigation |
-| - | -- | - |
-| Source Network Address Translation (SNAT) is recommended always. | Because of the variable nature of the private-endpoint data plane, we recommend using SNAT traffic that's destined to a private endpoint, which ensures that return traffic is honored. | No mitigation information is available at this time. |
-| | |
-
+### NSG additional considerations
+
+- Outbound traffic denied from a private endpoint isn't a valid scenario, as the service provider can't originate traffic.
+
+- The following services may require all destination ports to be open when leveraging a private endpoint and adding NSG security filters:
+
+ - Cosmos DB - For more information, see [Service port ranges](/azure/cosmos-db/sql/sql-sdk-connection-modes#service-port-ranges).
+
+### UDR
+
+| Limitation | Description |
+| | |
+| SNAT is recommended at all times. | Due to the variable nature of the private endpoint data-plane, it's recommended to SNAT traffic destined to a private endpoint to ensure return traffic is honored. |
+| Feature unavailable in select regions. | Currently unavailable in the following regions: </br> West India </br> UK North </br> UK South 2 </br> Australia Central 2 </br> South Africa West </br> Brazil Southeast |
+
+### Application security group
+
+| Limitation | Description |
+| | |
+| Feature unavailable in select regions. | Currently unavailable in the following regions: </br> West India </br> UK North </br> UK South 2 </br> Australia Central 2 </br> South Africa West </br> Brazil Southeast |
+ ## Next steps - For more information about private endpoints and Private Link, see [What is Azure Private Link?](private-link-overview.md).
purview Register Scan Synapse Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-synapse-workspace.md
Previously updated : 03/14/2022 Last updated : 08/10/2022
GRANT REFERENCES ON DATABASE SCOPED CREDENTIAL::[scoped_credential] TO [PurviewA
1. In the Azure portal, go to the Azure Synapse workspace.
-1. On the left pane, select **Firewalls**.
+1. On the left pane, select **Networking**.
1. For **Allow Azure services and resources to access this workspace** control, select **ON**.
1. Select **Save**.

> [!IMPORTANT]
-> Currently, we do not support setting up scans for an Azure Synapse workspace from the Microsoft Purview governance portal, if you cannot enable **Allow Azure services and resources to access this workspace** on your Azure Synapse workspaces. In this case:
-> - You can use [Microsoft Purview REST API - Scans - Create Or Update](/rest/api/purview/scanningdataplane/scans/create-or-update/) to create a new scan for your Synapse workspaces including dedicated and serverless pools.
-> - You must use **SQL Auth** as authentication mechanism.
+> Currently, if you cannot enable **Allow Azure services and resources to access this workspace** on your Azure Synapse workspaces, setting up a scan in the Microsoft Purview governance portal will fail with a serverless database enumeration error. In this case, to scan serverless databases, use [Microsoft Purview REST API - Scans - Create Or Update](/rest/api/purview/scanningdataplane/scans/create-or-update/) to set up the scan. Refer to [this example](#set-up-scan-using-api).
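The portal steps above can also be scripted. As a sketch, the **Allow Azure services and resources to access this workspace** toggle corresponds to a firewall rule named `AllowAllWindowsAzureIps` with the `0.0.0.0` address on both ends; verify the command against the current Azure CLI reference before use:

```azurecli
# Sketch: allow Azure services and resources to access the workspace.
# Workspace and resource group names are placeholders.
az synapse workspace firewall-rule create \
  --name AllowAllWindowsAzureIps \
  --workspace-name myworkspace \
  --resource-group myresourcegroup \
  --start-ip-address 0.0.0.0 \
  --end-ip-address 0.0.0.0
```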
### Create and run scan
To create and run a new scan, do the following:
[!INCLUDE [create and manage scans](includes/view-and-manage-scans.md)]
+### Set up scan using API
+
+Here is an example of creating a scan for a serverless database by using the API. Replace the `{place_holder}` and `enum_option_1 | enum_option_2 (note)` values with your actual settings.
+
+```http
+PUT https://{purview_account_name}.purview.azure.com/scan/datasources/{data_source_name}/scans/{scan_name}?api-version=2022-02-01-preview
+```
+
+```json
+{
+ "properties":{
+ "resourceTypes":{
+ "AzureSynapseServerlessSql":{
+ "scanRulesetName":"AzureSynapseSQL",
+ "scanRulesetType":"System",
+ "resourceNameFilter":{
+ "resources":[ "{serverless_database_name_1}", "{serverless_database_name_2}", ...]
+ }
+ }
+ },
+ "credential":{
+ "referenceName":"{credential_name}",
+ "credentialType":"SqlAuth | ServicePrincipal | ManagedIdentity (if UAMI authentication)"
+ },
+ "collection":{
+ "referenceName":"{collection_name}",
+ "type":"CollectionReference"
+ },
+ "connectedVia":{
+ "referenceName":"{integration_runtime_name}",
+ "integrationRuntimeType":"SelfHosted (if self-hosted IR) | Managed (if VNet IR)"
+ }
+ },
+ "kind":"AzureSynapseWorkspaceCredential | AzureSynapseWorkspaceMsi (if system-assigned managed identity authentication)"
+}
+```
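The request body above can also be assembled programmatically before calling the API. A sketch assuming SQL authentication (the helper name is illustrative; acquiring the Azure AD bearer token and making the HTTP call are out of scope here):

```python
import json

def build_serverless_scan_payload(databases, credential_name, collection_name):
    """Build the Scans - Create Or Update body for serverless SQL databases,
    assuming SQL authentication (kind: AzureSynapseWorkspaceCredential)."""
    return {
        "properties": {
            "resourceTypes": {
                "AzureSynapseServerlessSql": {
                    "scanRulesetName": "AzureSynapseSQL",
                    "scanRulesetType": "System",
                    "resourceNameFilter": {"resources": list(databases)},
                }
            },
            "credential": {
                "referenceName": credential_name,
                "credentialType": "SqlAuth",
            },
            "collection": {
                "referenceName": collection_name,
                "type": "CollectionReference",
            },
        },
        "kind": "AzureSynapseWorkspaceCredential",
    }

payload = build_serverless_scan_payload(
    ["SQLDB1", "SQLDB2"], "my-sql-credential", "my-collection")
print(json.dumps(payload, indent=2))
```

Send the resulting body with `PUT` to the endpoint shown above, supplying an Azure AD bearer token in the `Authorization` header.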
+
+To schedule the scan, create a trigger for it after scan creation. For more information, refer to [Triggers - Create Trigger](/rest/api/purview/scanningdataplane/triggers/create-trigger).
+ ## Next steps Now that you've registered your source, follow the below guides to learn more about Microsoft Purview and your data.
purview Tutorial Azure Purview Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-azure-purview-tools.md
This article lists several open-source tools and utilities (command-line, python
1. [Purview-API-via-PowerShell](https://github.com/Azure/Azure-Purview-API-PowerShell/blob/main/README.md) - **Recommended customer journey stages**: *Learners, Innovators, Enthusiasts, Adopters, Long-Term Regular Users*
- - **Description**: This utility is based on and covers the entire set of [Microsoft Purview REST API Reference](/rest/api/purview/) Microsoft Docs. [Download & Install from PowerShell Gallery](https://aka.ms/purview-api-ps). It helps you execute all the documented Microsoft Purview REST APIs through a breezy fast and easy to use PowerShell interface. Use and automate Microsoft Purview APIs for regular and long-term usage via command-line and scripted methods. This is an alternative for customers looking to do bulk tasks in automated manner, batch-mode, or scheduled cron jobs; as against the GUI method of using the Azure portal and Microsoft Purview governance portal. Detailed documentation, sample usage guide, self-help, and examples are available on [GitHub:Azure-Purview-API-PowerShell](https://github.com/Azure/Azure-Purview-API-PowerShell).
+ - **Description**: This utility covers the entire set of functionality described in the [Microsoft Purview REST API reference](/rest/api/purview/). [Download & Install from PowerShell Gallery](https://aka.ms/purview-api-ps). It helps you execute all the documented Microsoft Purview REST APIs through a fast and easy-to-use PowerShell interface. Use and automate Microsoft Purview APIs for regular and long-term usage via command-line and scripted methods. This is an alternative for customers looking to do bulk tasks in an automated manner, in batch mode, or as scheduled cron jobs, as opposed to the GUI method of using the Azure portal and Microsoft Purview governance portal. Detailed documentation, sample usage guide, self-help, and examples are available on [GitHub:Azure-Purview-API-PowerShell](https://github.com/Azure/Azure-Purview-API-PowerShell).
1. [Microsoft Purview Lab](https://aka.ms/purviewlab)
remote-rendering Spatial Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/overview/features/spatial-queries.md
async void CastRay(RenderingSession session)
var hitObject = hits[0].HitObject; var hitPosition = hits[0].HitPosition; var hitNormal = hits[0].HitNormal;-
+ var hitType = hits[0].HitType;
// do something with the hit information } }
void CastRay(ApiHandle<RenderingSession> session)
auto hitObject = hits[0].HitObject; auto hitPosition = hits[0].HitPosition; auto hitNormal = hits[0].HitNormal;
+ auto hitType = hits[0].HitType;
// do something with the hit information }
The result of a ray cast query is an array of hits. The array is empty, if no ob
A Hit has the following properties: * **`HitEntity`:** Which [entity](../../concepts/entities.md) was hit.
-* **`SubPartId`:** Which *submesh* was hit in a [MeshComponent](../../concepts/meshes.md). Can be used to index into `MeshComponent.UsedMaterials` and look up the [material](../../concepts/materials.md) at that point.
+* **`SubPartId`:** Which *submesh* was hit in a [MeshComponent](../../concepts/meshes.md#meshcomponent). Can be used to index into `MeshComponent.UsedMaterials` and look up the [material](../../concepts/materials.md) at that point.
* **`HitPosition`:** The world space position where the ray intersected the object. * **`HitNormal`:** The world space surface normal of the mesh at the position of the intersection. * **`DistanceToHit`:** The distance from the ray starting position to the hit.
+* **`HitType`:** What was hit by the ray: `TriangleFrontFace`, `TriangleBackFace`, or `Point`. By default, [ARR renders double sided](single-sided-rendering.md#prerequisites), so the triangles the user sees are not necessarily front-facing. If you want to differentiate between `TriangleFrontFace` and `TriangleBackFace` in your code, make sure your models are authored with correct face directions first.
+
+## Spatial queries
+
+A *spatial query* allows the runtime to check which [MeshComponents](../../concepts/meshes.md#meshcomponent) are intersected by a world-space axis-aligned bounding box (AABB). This check is very performant because it's evaluated against each mesh part's bounds in the scene, not on an individual triangle basis. As an optimization, a maximum number of hit mesh components can be provided.\
+While such a query can be run manually on the client side, for large scenes it will be much faster for the server to compute this.
+
+```cs
+async void QueryAABB(RenderingSession session)
+{
+ // Query all mesh components in a 2x2x2m cube.
+ SpatialQuery query = new SpatialQuery();
+ query.Bounds = new Microsoft.Azure.RemoteRendering.Bounds(new Double3(-1, -1, -1), new Double3(1, 1, 1));
+ query.MaxResults = 100;
+
+ SpatialQueryResult result = await session.Connection.SpatialQueryAsync(query);
+ foreach (MeshComponent meshComponent in result.Overlaps)
+ {
+ Entity owner = meshComponent.Owner;
+ // do something with the hit MeshComponent / Entity
+ }
+}
+```
+
+```cpp
+void QueryAABB(ApiHandle<RenderingSession> session)
+{
+ // Query all mesh components in a 2x2x2m cube.
+ SpatialQuery query;
+ query.Bounds.Min = {-1, -1, -1};
+ query.Bounds.Max = {1, 1, 1};
+ query.MaxResults = 100;
+
+ session->Connection()->SpatialQueryAsync(query, [](Status status, ApiHandle<SpatialQueryResult> result)
+ {
+ if (status == Status::OK)
+ {
+ std::vector<ApiHandle<MeshComponent>> overlaps;
+ result->GetOverlaps(overlaps);
+
+ for (ApiHandle<MeshComponent> meshComponent : overlaps)
+ {
+ ApiHandle<Entity> owner = meshComponent->GetOwner();
+ // do something with the hit MeshComponent / Entity
+ }
+ }
+ });
+}
+```
## API documentation
A Hit has the following properties:
## Next steps * [Object bounds](../../concepts/object-bounds.md)
-* [Overriding hierarchical states](override-hierarchical-state.md)
+* [Overriding hierarchical states](override-hierarchical-state.md)
+* [Single sided rendering](single-sided-rendering.md)
role-based-access-control Resource Provider Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/resource-provider-operations.md
Azure service: [Azure Spring Apps](../spring-apps/index.yml)
### Microsoft.CertificateRegistration
-Azure service: [App Service Certificates](../app-service/configure-ssl-certificate.md#import-an-app-service-certificate)
+Azure service: [App Service Certificates](../app-service/configure-ssl-certificate.md#buy-and-import-app-service-certificate)
> [!div class="mx-tableFixed"] > | Action | Description |
search Knowledge Store Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/knowledge-store-create-portal.md
This quickstart uses the following
+ Azure Storage. [Create an account](../storage/common/storage-account-create.md) or [find an existing account](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Storage%2FstorageAccounts/). The account type must be **StorageV2 (general purpose V2)**.
-+ Sample data. This quickstart uses hotel review data saved in a CSV file (originates from Kaggle.com) and contains 19 pieces of customer feedback about a single hotel.
++ Sample data hosted in Azure Storage:
- [Download HotelReviews_Free.csv](https://knowledgestoredemo.blob.core.windows.net/hotel-reviews/HotelReviews_Free.csv?sp=r&st=2019-11-04T01:23:53Z&se=2025-11-04T16:00:00Z&spr=https&sv=2019-02-02&sr=b&sig=siQgWOnI%2FDamhwOgxmj11qwBqqtKMaztQKFNqWx00AY%3D) and then [upload it to a blob container](../storage/blobs/storage-quickstart-blobs-portal.md) in Azure Storage.
+ [Download HotelReviews_Free.csv](https://github.com/Azure-Samples/azure-search-sample-data/blob/master/hotelreviews/HotelReviews_data.csv). This CSV contains 19 pieces of customer feedback about a single hotel (originates from Kaggle.com). The file is in a repo with other sample data. If you don't want the whole repo, copy the raw content and paste it into a spreadsheet app on your device.
+
+ [Upload the file to a blob container](../storage/blobs/storage-quickstart-blobs-portal.md) in Azure Storage.
This quickstart also uses [Cognitive Services](https://azure.microsoft.com/services/cognitive-services/) for AI enrichment. Because the workload is so small, Cognitive Services is tapped behind the scenes for free processing for up to 20 transactions. This means that you can complete this exercise without having to create an additional Cognitive Services resource.
search Knowledge Store Create Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/knowledge-store-create-rest.md
To make the initial data set available, the hotel reviews are first imported int
This step uses Azure Cognitive Search, Azure Blob Storage, and [Azure Cognitive Services](https://azure.microsoft.com/services/cognitive-services/) for the AI. Because the workload is so small, Cognitive Services is tapped behind the scenes to provide free processing for up to 20 transactions daily. A small workload means that you can skip creating or attaching a Cognitive Services resource.
-1. [Download HotelReviews_Free.csv](https://knowledgestoredemo.blob.core.windows.net/hotel-reviews/HotelReviews_Free.csv?sp=r&st=2019-11-04T01:23:53Z&se=2025-11-04T16:00:00Z&spr=https&sv=2019-02-02&sr=b&sig=siQgWOnI%2FDamhwOgxmj11qwBqqtKMaztQKFNqWx00AY%3D). This data is hotel review data saved in a CSV file (originates from Kaggle.com) and contains 19 pieces of customer feedback about a single hotel.
+1. [Download HotelReviews_Free.csv](https://github.com/Azure-Samples/azure-search-sample-data/blob/master/hotelreviews/HotelReviews_data.csv). This CSV contains 19 pieces of customer feedback about a single hotel (originates from Kaggle.com). The file is in a repo with other sample data. If you don't want the whole repo, copy the raw content and paste it into a spreadsheet app on your device.
1. In Azure portal, on the Azure Storage resource page, use **Storage Browser** to create a blob container named **hotel-reviews**.
search Search Limits Quotas Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-limits-quotas-capacity.md
Previously updated : 06/10/2022 Last updated : 08/09/2022 # Service limits in Azure Cognitive Search
Maximum running times exist to provide balance and stability to the service as a
| Maximum skillsets <sup>4</sup> |3 |5 or 15 |50 |200 |200 |N/A |10 |10 |
| Maximum indexing load per invocation |10,000 documents |Limited only by maximum documents |Limited only by maximum documents |Limited only by maximum documents |Limited only by maximum documents |N/A |No limit |No limit |
| Minimum schedule | 5 minutes |5 minutes |5 minutes |5 minutes |5 minutes |5 minutes |5 minutes | 5 minutes |
-| Maximum running time <sup>6</sup>| 1-3 minutes |2-24 hours |2-24 hours |2-24 hours |2-24 hours |N/A |2-24 hours |2-24 hours |
+| Maximum running time <sup>6</sup>| 1-3 minutes |2 or 24 hours |2 or 24 hours |2 or 24 hours |2 or 24 hours |N/A |2 or 24 hours |2 or 24 hours |
| Maximum running time for indexers with a skillset <sup>5</sup> | 3-10 minutes |2 hours |2 hours |2 hours |2 hours |N/A |2 hours |2 hours |
| Blob indexer: maximum blob size, MB |16 |16 |128 |256 |256 |N/A |256 |256 |
| Blob indexer: maximum characters of content extracted from a blob |32,000 |64,000 |4&nbsp;million |8&nbsp;million |16&nbsp;million |N/A |4&nbsp;million |4&nbsp;million |
Maximum running times exist to provide balance and stability to the service as a
<sup>5</sup> AI enrichment and image analysis are computationally intensive and consume disproportionate amounts of available processing power. Running time for these workloads has been shortened to give other jobs in the queue more opportunity to run.
-<sup>6</sup> Indexer maximum run time for Basic tier or higher may vary between 2 and 24 hours, depending on system resources, product implementation and other factors.
+<sup>6</sup> Indexer maximum run time for Basic tier or higher can be 2 or 24 hours, depending on system resources, product implementation, and other factors.
> [!NOTE] > As stated in the [Index limits](#index-limits), indexers will also enforce the upper limit of 3000 elements across all complex collections per document starting with the latest GA API version that supports complex types (`2019-05-06`) onwards. This means that if you've created your indexer with a prior API version, you will not be subject to this limit. To preserve maximum compatibility, an indexer that was created with a prior API version and then updated with an API version `2019-05-06` or later, will still be **excluded** from the limits. Customers should be aware of the adverse impact of having very large complex collections (as stated previously) and we highly recommend creating any new indexers with the latest GA API version.
security Security Code Analysis Customize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/security-code-analysis-customize.md
Available options include:
> > If the new task runs on the same agent as the original task, the new task's output overwrites the original task's output in the *s* sources folder. Although the build output is the same, we advise that you run MSBuild, copy output to the artifacts staging directory, and then run Roslyn Analyzers.
-For additional resources for the Roslyn Analyzers task, check out [The Roslyn-based Analyzers](/dotnet/standard/analyzers/api-analyzer) on Microsoft Docs.
+For additional resources for the Roslyn Analyzers task, review the [Roslyn-based analyzers](/dotnet/standard/analyzers/api-analyzer).
You can find the analyzer package installed and used by this build task on the NuGet page [Microsoft.CodeAnalysis.FxCopAnalyzers](https://www.nuget.org/packages/Microsoft.CodeAnalysis.FxCopAnalyzers).
For information about YAML configuration for this task, check our [Post Analysis
For information about YAML based configuration, refer to our [YAML Configuration guide](yaml-configuration.md).
-If you have further questions about the Security Code Analysis extension and the tools offered, check out [our FAQ page](security-code-analysis-faq.yml).
+If you have further questions about the Security Code Analysis extension and the tools offered, check out [our FAQ page](security-code-analysis-faq.yml).
sentinel File Event Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/file-event-normalization-schema.md
The following list mentions fields that have specific guidelines for File activi
| **Field** | **Class** | **Type** | **Description** | | | | | |
-| **EventType** | Mandatory | Enumerated | Describes the operation reported by the record. <br><br>For File records, supported values include: <br><br>- `FileCreated`<br>- `FileModified`<br>- `FileDeleted`<br>- `FileRenamed`<br>- `FileCopied`<br>- `FileMoved`<br>- `FolderCreated`<br>- `FolderDeleted` |
+| **EventType** | Mandatory | Enumerated | Describes the operation reported by the record. <br><br>For File records, supported values include: <br><br>- `FileAccessed`<br>- `FileCreated`<br>- `FileModified`<br>- `FileDeleted`<br>- `FileRenamed`<br>- `FileCopied`<br>- `FileMoved`<br>- `FolderCreated`<br>- `FolderDeleted` |
| **EventSchema** | Optional | String | The name of the schema documented here is **FileEvent**. |
| **EventSchemaVersion** | Mandatory | String | The version of the schema. The version of the schema documented here is `0.1` |
| **Dvc** fields| - | - | For File activity events, device fields refer to the system on which the file activity occurred. |
sentinel Iot Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/iot-solution.md
View Defender for IoT alerts in the Microsoft Sentinel **Logs** area.
> > For more information, see [Log queries overview](../azure-monitor/logs/log-query-overview.md) in the Azure Monitor documentation and the [Write your first KQL query](/learn/modules/write-first-query-kusto-query-language/) Learn module. >+
+### Understand alert timestamps
+
+Defender for IoT alerts, in both the Azure portal and on the sensor console, track the time an alert was first detected, last detected, and last changed.
+
+The following table describes the Defender for IoT alert timestamp fields, with a mapping to the relevant fields from Log Analytics shown in Microsoft Sentinel.
+
+|Defender for IoT field |Description | Log Analytics field |
+||||
+|**First detection** |Defines the first time the alert was detected in the network. | `StartTime` |
+|**Last detection** | Defines the last time the alert was detected in the network, and replaces the **Detection time** column.| `EndTime` |
+|**Last activity** | Defines the last time the alert was changed, including manual updates for severity or status, or automated changes for device updates or device/alert de-duplication. | `TimeGenerated` |
+
+In Defender for IoT on the Azure portal and the sensor console, the **Last detection** column is shown by default. Edit the columns on the **Alerts** page to show the **First detection** and **Last activity** columns as needed.
+
+For more information, see [View alerts on the Defender for IoT portal](../defender-for-iot/organizations/how-to-manage-cloud-alerts.md) and [View alerts on your sensor](../defender-for-iot/organizations/how-to-view-alerts.md).
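The timestamp mapping above can be inspected directly in the **Logs** area. A sketch query (the `ProductName` filter is an assumption; adjust it to match how your Defender for IoT alerts are tagged in your workspace):

```kusto
SecurityAlert
| where ProductName == "Azure Security Center for IoT"
| project AlertName, StartTime, EndTime, TimeGenerated
| order by TimeGenerated desc
```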
+ ## Install the Defender for IoT solution The **IoT OT Threat Monitoring with Defender for IoT** solution is a set of bundled content, including analytics rules, workbooks, and playbooks, configured specifically for Defender for IoT data. This solution currently supports only Operational Networks (OT/ICS).
sentinel Kusto Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/kusto-resources.md
Microsoft Sentinel uses Azure Monitor's Log Analytics environment and the Kusto Query Language (KQL) to build the queries that undergird much of Sentinel's functionality, from analytics rules to workbooks to hunting. This article lists resources that can help you skill up in working with Kusto Query Language, which will give you more tools to work with Microsoft Sentinel, whether as a security engineer or analyst.
-## Microsoft Docs and Learn
+## Microsoft technical resources
### Microsoft Sentinel documentation - [Kusto Query Language in Microsoft Sentinel](kusto-overview.md)
sentinel Network Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/network-normalization-schema.md
The following list mentions fields that have specific guidelines for Network Ses
| <a name="eventsubtype"></a>**EventSubType** | Optional | String | Additional description of the event type, if applicable. <br> For Network Session records, supported values include:<br>- `Start`<br>- `End` |
| **EventResult** | Mandatory | Enumerated | If the source device does not provide an event result, **EventResult** should be based on the value of [DvcAction](#dvcaction). If [DvcAction](#dvcaction) is `Deny`, `Drop`, `Drop ICMP`, `Reset`, `Reset Source`, or `Reset Destination`, **EventResult** should be `Failure`. Otherwise, **EventResult** should be `Success`. |
| **EventSchema** | Mandatory | String | The name of the schema documented here is `NetworkSession`. |
-| **EventSchemaVersion** | Mandatory | String | The version of the schema. The version of the schema documented here is `0.2.3`. |
+| **EventSchemaVersion** | Mandatory | String | The version of the schema. The version of the schema documented here is `0.2.4`. |
| <a name="dvcaction"></a>**DvcAction** | Recommended | Enumerated | The action taken on the network session. Supported values are:<br>- `Allow`<br>- `Deny`<br>- `Drop`<br>- `Drop ICMP`<br>- `Reset`<br>- `Reset Source`<br>- `Reset Destination`<br>- `Encrypt`<br>- `Decrypt`<br>- `VPNroute`<br><br>**Note**: The value might be provided in the source record by using different terms, which should be normalized to these values. The original value should be stored in the [DvcOriginalAction](normalization-common-fields.md#dvcoriginalaction) field.<br><br>Example: `drop` |
| **EventSeverity** | Optional | Enumerated | If the source device does not provide an event severity, **EventSeverity** should be based on the value of [DvcAction](#dvcaction). If [DvcAction](#dvcaction) is `Deny`, `Drop`, `Drop ICMP`, `Reset`, `Reset Source`, or `Reset Destination`, **EventSeverity** should be `Low`. Otherwise, **EventSeverity** should be `Informational`. |
| **DvcInterface** | | | The DvcInterface field should alias either the [DvcInboundInterface](#dvcinboundinterface) or the [DvcOutboundInterface](#dvcoutboundinterface) fields. |
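As a sketch of the guideline above, a parser can derive **EventResult** (and similarly **EventSeverity**) from the device action when the source omits them. The `datatable` input here is illustrative only:

```kusto
// Illustrative input rows standing in for source records without an event result.
datatable(DvcAction: string) ["Allow", "Drop ICMP", "Reset Source"]
| extend EventResult = iff(
    DvcAction in ("Deny", "Drop", "Drop ICMP", "Reset", "Reset Source", "Reset Destination"),
    "Failure", "Success")
| extend EventSeverity = iff(EventResult == "Failure", "Low", "Informational")
```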
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| <a name="networkdirection"></a>**NetworkDirection** | Optional | Enumerated | The direction of the connection or session:<br><br> - For the [EventType](#eventtype) `NetworkSession`, **NetworkDirection** represents the direction relative to the organization or cloud environment boundary. Supported values are `Inbound`, `Outbound`, `Local` (to the organization), `External` (to the organization) or `NA` (Not Applicable).<br><br> - For the [EventType](#eventtype) `EndpointNetworkSession`, **NetworkDirection** represents the direction relative to the endpoint. Supported values are `Inbound`, `Outbound`, `Local` (to the system), `Listen` or `NA` (Not Applicable). The `Listen` value indicates that a device has started accepting network connections but isn't actually, necessarily, connected. |
| <a name="networkduration"></a>**NetworkDuration** | Optional | Integer | The amount of time, in milliseconds, for the completion of the network session or connection.<br><br>Example: `1500` |
| **Duration** | Alias | | Alias to [NetworkDuration](#networkduration). |
-| **NetworkIcmpCode** | Optional | Integer | For an ICMP message, the ICMP message type numeric value as described in [RFC 2780](https://datatracker.ietf.org/doc/html/rfc2780) for IPv4 network connections, or in [RFC 4443](https://datatracker.ietf.org/doc/html/rfc4443) for IPv6 network connections. If a [NetworkIcmpType](#networkicmptype) value is provided, this field is mandatory. If the value isn't available from the source, derive the value from the [NetworkIcmpType](#networkicmptype) field instead.<br><br>Example: `34` |
-|<a name="networkicmptype"></a> **NetworkIcmpType** | Optional | String | For an ICMP message, the ICMP message type text representation, as described in [RFC 2780](https://datatracker.ietf.org/doc/html/rfc2780) for IPv4 network connections, or in [RFC 4443](https://datatracker.ietf.org/doc/html/rfc4443) for IPv6 network connections.<br><br>Example: `Destination Unreachable` |
+|<a name="networkicmptype"></a> **NetworkIcmpType** | Optional | String | For an ICMP message, the ICMP message type number, as described in [RFC 2780](https://datatracker.ietf.org/doc/html/rfc2780) for IPv4 network connections, or in [RFC 4443](https://datatracker.ietf.org/doc/html/rfc4443) for IPv6 network connections. |
+| **NetworkIcmpCode** | Optional | Integer | For an ICMP message, the ICMP code number as described in [RFC 2780](https://datatracker.ietf.org/doc/html/rfc2780) for IPv4 network connections, or in [RFC 4443](https://datatracker.ietf.org/doc/html/rfc4443) for IPv6 network connections. |
| **NetworkConnectionHistory** | Optional | String | TCP flags and other potential IP header information. |
| **DstBytes** | Recommended | Long | The number of bytes sent from the destination to the source for the connection or session. If the event is aggregated, **DstBytes** should be the sum over all aggregated sessions.<br><br>Example: `32455` |
| **SrcBytes** | Recommended | Long | The number of bytes sent from the source to the destination for the connection or session. If the event is aggregated, **SrcBytes** should be the sum over all aggregated sessions.<br><br>Example: `46536` |
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| **NetworkPackets** | Optional | Long | The number of packets sent in both directions. If both **PacketsReceived** and **PacketsSent** exist, **NetworkPackets** should equal their sum. The meaning of a packet is defined by the reporting device. If the event is aggregated, **NetworkPackets** should be the sum over all aggregated sessions.<br><br>Example: `6924` |
|<a name="networksessionid"></a>**NetworkSessionId** | Optional | string | The session identifier as reported by the reporting device. <br><br>Example: `172\_12\_53\_32\_4322\_\_123\_64\_207\_1\_80` |
| **SessionId** | Alias | String | Alias to [NetworkSessionId](#networksessionid). |
+| **TcpFlagsAck** | Optional | Boolean | The TCP ACK Flag reported. The acknowledgment flag is used to acknowledge the successful receipt of a packet. For example, in the second step of the three-way handshake, the receiver sends both an ACK and a SYN to tell the sender that its initial packet was received. |
+| **TcpFlagsFin** | Optional | Boolean | The TCP FIN Flag reported. The finished flag means there is no more data from the sender. Therefore, it is used in the last packet sent from the sender. |
+| **TcpFlagsSyn** | Optional | Boolean | The TCP SYN Flag reported. The synchronization flag is used as the first step in establishing a three-way handshake between two hosts. Only the first packet from both the sender and receiver should have this flag set. |
+| **TcpFlagsUrg** | Optional | Boolean | The TCP URG Flag reported. The urgent flag is used to notify the receiver to process the urgent packets before processing all other packets. The receiver will be notified when all known urgent data has been received. See [RFC 6093](https://tools.ietf.org/html/rfc6093) for more details. |
+| **TcpFlagsPsh** | Optional | Boolean | The TCP PSH Flag reported. The push flag is somewhat similar to the URG flag and tells the receiver to process these packets as they are received instead of buffering them. |
+| **TcpFlagsRst** | Optional | Boolean | The TCP RST Flag reported. The reset flag gets sent from the receiver to the sender when a packet is sent to a particular host that was not expecting it. |
+| **TcpFlagsEce** | Optional | Boolean | The TCP ECE Flag reported. This flag is responsible for indicating if the TCP peer is [ECN capable](https://en.wikipedia.org/wiki/Explicit_Congestion_Notification). See [RFC 3168](https://tools.ietf.org/html/rfc3168) for more details. |
+| **TcpFlagsCwr** | Optional | Boolean | The TCP CWR Flag reported. The congestion window reduced flag is used by the sending host to indicate it received a packet with the ECE flag set. See [RFC 3168](https://tools.ietf.org/html/rfc3168) for more details. |
+| **TcpFlagsNs** | Optional | Boolean | The TCP NS Flag reported. The nonce sum flag is still an experimental flag used to help protect against accidental or malicious concealment of marked packets from the sender. See [RFC 3540](https://tools.ietf.org/html/rfc3540) for more details. |
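If a source reports raw TCP flags as a numeric bitmask rather than as individual booleans, a parser might populate these fields along the following lines. This is a sketch: the `RawTcpFlags` input column is hypothetical, and the bit positions follow the standard TCP header layout.

```kusto
// Illustrative rows: 18 = SYN+ACK, 17 = FIN+ACK, 24 = PSH+ACK.
datatable(RawTcpFlags: int) [18, 17, 24]
| extend
    TcpFlagsFin = binary_and(RawTcpFlags, 0x01) != 0,
    TcpFlagsSyn = binary_and(RawTcpFlags, 0x02) != 0,
    TcpFlagsRst = binary_and(RawTcpFlags, 0x04) != 0,
    TcpFlagsPsh = binary_and(RawTcpFlags, 0x08) != 0,
    TcpFlagsAck = binary_and(RawTcpFlags, 0x10) != 0,
    TcpFlagsUrg = binary_and(RawTcpFlags, 0x20) != 0
```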
### Destination system fields
The following fields are used to represent that inspection which a security devi
| **ThreatCategory** | Optional | String | The category of the threat or malware identified in the network session.<br><br>Example: `Trojan` |
| **ThreatRiskLevel** | Optional | Integer | The risk level associated with the session. The level should be a number between **0** and **100**.<br><br>**Note**: The value might be provided in the source record by using a different scale, which should be normalized to this scale. The original value should be stored in [ThreatRiskLevelOriginal](#threatriskleveloriginal). |
| <a name="threatriskleveloriginal"></a>**ThreatRiskLevelOriginal** | Optional | String | The risk level as reported by the reporting device. |
+| **ThreatIpAddr** | Optional | IP Address | An IP address for which a threat was identified. The field [ThreatField](#threatfield) contains the name of the field **ThreatIpAddr** represents. |
+| <a name="threatfield"></a>**ThreatField** | Optional | Enumerated | The field for which a threat was identified. The value is either `SrcIpAddr` or `DstIpAddr`. |
+| **ThreatConfidence** | Optional | Integer | The confidence level of the threat identified, normalized to a value between 0 and 100. |
+| **ThreatOriginalConfidence** | Optional | String | The original confidence level of the threat identified, as reported by the reporting device.|
+| **ThreatIsActive** | Optional | Boolean | True if the threat identified is considered an active threat. |
+| **ThreatFirstReportedTime** | Optional | datetime | The first time the IP address or domain was identified as a threat. |
+| **ThreatLastReportedTime** | Optional | datetime | The last time the IP address or domain was identified as a threat. |
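For example, the threat inspection fields let a query surface only sessions with an active, high-confidence threat. A sketch using the unifying `_Im_NetworkSession` parser:

```kusto
// Sketch: sessions where either endpoint matched an active threat
// with a normalized confidence of at least 80.
_Im_NetworkSession(starttime=ago(1d), endtime=now())
| where ThreatIsActive == true and ThreatConfidence >= 80
| project TimeGenerated, SrcIpAddr, DstIpAddr, ThreatField, ThreatIpAddr, ThreatConfidence
```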
### Other fields
These are the changes in version 0.2.3 of the schema:
- The `hostname_has_any` filtering parameter now matches either source or destination hostnames.
- Added the fields `ASimMatchingHostname` and `ASimMatchingIpAddr`.
+These are the changes in version 0.2.4 of the schema:
+- Added the `TcpFlags` fields.
+- Updated `NetworkIcmpType` and `NetworkIcmpCode` to reflect the numeric value for both fields.
+- Added more threat inspection fields.
+ ## Next steps
For more information, see:
- [Advanced Security Information Model (ASIM) overview](normalization.md) - [Advanced Security Information Model (ASIM) schemas](normalization-about-schemas.md) - [Advanced Security Information Model (ASIM) parsers](normalization-parsers-overview.md)-- [Advanced Security Information Model (ASIM) content](normalization-content.md)
+- [Advanced Security Information Model (ASIM) content](normalization-content.md)
sentinel Normalization Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-known-issues.md
+
+ Title: Advanced Security Information Model (ASIM) known issues | Microsoft Docs
+description: This article outlines the Microsoft Sentinel Advanced Security Information Model (ASIM) known issues.
++ Last updated : 08/02/2021+++
+# Advanced Security Information Model (ASIM) known issues (Public preview)
++
+The following are the Advanced Security Information Model (ASIM) known issues and limitations:
+
+## Time picker set to a custom range
+
+When using ASIM parsers in the log screen, the time picker automatically changes to "set in query", which results in querying over all data in the relevant tables. The query results may not be as expected, and performance may be slow.
++
+To ensure correct and timely results, set the time range to your preferred range after it changes to "set in query".
+
+## Performance challenges
+
+ASIM-based queries that cover a long time range and do not use filtering parameters may be slow. Parsing is a resource-intensive operation, and when applied to a large, unfiltered dataset, it is expected to be slow.
+
+If you encounter performance issues:
+- When using an interactive query, make sure to set the time picker to the time range needed.
+- Use parser filters. Most importantly use the `starttime` and the `endtime` filter parameters.
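For example, passing `starttime` and `endtime` to a filtering parser prunes the data before parsing. A sketch using the `_Im_Dns` unifying parser:

```kusto
// Filtered: only the last hour is parsed, and only NXDOMAIN responses.
_Im_Dns(starttime=ago(1h), endtime=now(), responsecodename="NXDOMAIN")
| summarize Count = count() by SrcIpAddr
| top 10 by Count
```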
+
+## The ingestion_time() function is not supported
+
+The `ingestion_time()` function reports the time at which a record was ingested into Microsoft Sentinel, which may differ from `TimeGenerated`. This information is commonly used in queries that account for ingestion delays. The `ingestion_time()` function has to be used in the context of a specific table and does not work with ASIM functions, which unify many different tables.
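For example, the built-in KQL `ingestion_time()` function works against a concrete table but not through an ASIM unifying function. A sketch:

```kusto
// Supported: ingestion_time() in the context of a specific table.
CommonSecurityLog
| extend IngestionDelay = ingestion_time() - TimeGenerated

// Not supported: ASIM functions unify many tables, so there is no
// single table context for ingestion_time(). The following fails:
// _Im_Dns | extend IngestionDelay = ingestion_time() - TimeGenerated
```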
+
+## Misleading informational message
+
+In some cases when using ASIM parser functions, usually when a query returns no results, the following informational message is displayed.
++
+While the message is alarming, it is informational only, and the system behaved as expected. ASIM functions combine data from many sources, regardless of whether they are available in your environment or not. The message suggests that some of the sources are not available in your environment.
+
+## <a name="next-steps"></a>Next steps
+
+This article discussed known issues and limitations of the Advanced Security Information Model (ASIM).
+
+For more information, see:
+
+- Watch the [Deep Dive Webinar on Microsoft Sentinel Normalizing Parsers and Normalized Content](https://www.youtube.com/watch?v=zaqblyjQW6k) or review the [slides](https://1drv.ms/b/s!AnEPjr8tHcNmjGtoRPQ2XYe3wQDz?e=R3dWeM)
+- [Advanced Security Information Model (ASIM) overview](normalization.md)
+- [Advanced Security Information Model (ASIM) schemas](normalization-about-schemas.md)
+- [Advanced Security Information Model (ASIM) parsers](normalization-about-parsers.md)
+- [Using the Advanced Security Information Model (ASIM)](normalization-about-parsers.md)
+- [Modifying Microsoft Sentinel content to use the Advanced Security Information Model (ASIM) parsers](normalization-modify-content.md)
sentinel Normalization Manage Parsers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-manage-parsers.md
Make sure to add both a filtering custom parser and a parameter-less custom pars
The syntax of the line to add is different for each schema:
-| Schema | Filtering parser | Parameter&#8209;less&nbsp;parser |
-| | - | |
-| DNS | **Name**: `Im_DnsCustom`<br><br> **Line to add**:<br> `_parser_name_ (starttime, endtime, srcipaddr, domain_has_any, responsecodename, response_has_ipv4, response_has_any_prefix, eventtype)` | **Name**: `ASim_DnsCustom`<br><br> **Line to add**:<br> `_parser_name_` |
-| NetworkSession | **Name**: `Im_NetworkSessionCustom`<br><br> **Line to add**:<br> `_parser_name_ (starttime, endtime, srcipaddr_has_any_prefix, dstipaddr_has_any_prefix, dstportnumber, hostname_has_any, dvcaction, eventresult)` | **Name**: `ASim_NetworkSessionCustom`<br><br> **Line to add**:<br> `_parser_name_` |
-| WebSession | **Name**: `Im_WebSessionCustom`<br><br> **Line to add**:<br> `_parser_name_ (starttime, endtime, srcipaddr_has_any_prefix, url_has_any, httpuseragent_has_any, eventresultdetails_in, eventresult)` | **Name**: `ASim_WebSessionCustom`<br><br> **Line to add**:<br> `_parser_name_` |
+| Schema | Parser | Line to add
+| | - | - |
+| DNS | `Im_DnsCustom` | `_parser_name_ (starttime, endtime, srcipaddr, domain_has_any, responsecodename, response_has_ipv4, response_has_any_prefix, eventtype)` |
+| NetworkSession | `Im_NetworkSessionCustom` | `_parser_name_ (starttime, endtime, srcipaddr_has_any_prefix, dstipaddr_has_any_prefix, dstportnumber, hostname_has_any, dvcaction, eventresult)` |
+| WebSession | `Im_WebSessionCustom`| `_parser_name_ (starttime, endtime, srcipaddr_has_any_prefix, url_has_any, httpuseragent_has_any, eventresultdetails_in, eventresult)` |
When adding an additional parser to a unifying custom parser that already references parsers, make sure you add a comma at the end of the previous line.
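For example, a workspace `Im_NetworkSessionCustom` function that unifies two custom parsers might look like the following sketch; the `vimNetworkSessionMyDevice*` parser names are hypothetical placeholders for your own parsers:

```kusto
// Body of a hypothetical Im_NetworkSessionCustom workspace function,
// passing the filtering parameters through to each custom parser.
union isfuzzy=true
  vimNetworkSessionMyDevice1(starttime, endtime, srcipaddr_has_any_prefix, dstipaddr_has_any_prefix, dstportnumber, hostname_has_any, dvcaction, eventresult),
  vimNetworkSessionMyDevice2(starttime, endtime, srcipaddr_has_any_prefix, dstipaddr_has_any_prefix, dstportnumber, hostname_has_any, dvcaction, eventresult)
```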
To modify an existing, built-in source-specific parser:
1. Define the `SourceSpecificParser` value `Exclude<parser name>`, where `<parser name>` is the name of the parser you want to exclude, without a version specifier.
-For example, to exclude the Azure Firewall DNS parser, add the following records to the watchlist:
+For example, to exclude the Azure Firewall DNS parser, add the following record to the watchlist:
| CallerContext | SourceSpecificParser |
| - | - |
| `Exclude_Im_Dns` | `Exclude_Im_Dns_AzureFirewall` |
-| `Exclude_ASim_Dns` | `Exclude_ASim_Dns_AzureFirewall` |
### Prevent an automated update of a built-in parser
To add a custom parser, insert a line to the `union` statement in the workspace-
Make sure to add both a filtering custom parser and a parameter-less custom parser. The syntax of the line to add is different for each schema:
-| Schema | Filtering parser | Parameter&#8209;less&nbsp;parser |
-| | -- | |
-| **Authentication** | **Name:** `ImAuthentication`<br><br>**Line to add:**<br> `_parser_name_ (starttime, endtime, targetusername_has)` | **Name:** `ASimAuthentication`<br><br> **Line to add:** `_parser_name_` |
-| **DNS** | **Name:** `ImDns`<br><br>**Line to add:**<br> `_parser_name_ (starttime, endtime, srcipaddr, domain_has_any,`<br>` responsecodename, response_has_ipv4, response_has_any_prefix,`<br>` eventtype)` | **Name:** `ASimDns`<br><br>**Line to add:** `_parser_name_` |
-| **File Event** | | **Name:** `imFileEvent`<br><br>**Line to add:** `_parser_name_` |
-| **Network Session** | **Name:** `imNetworkSession`<br><br>**Line to add:**<br> `_parser_name_ (starttime, endtime, srcipaddr_has_any_prefix, dstipaddr_has_any_prefix, dstportnumber, url_has_any,`<br>` httpuseragent_has_any, hostname_has_any, dvcaction, eventresult)` | **Name:** `ASimNetworkSession`<br><br>**Line to add:** `_parser_name_` |
-| **Process Event** | | **Names:**<br> - `imProcess`<br> - `imProcessCreate`<br> - `imProcessTerminate`<br><br>**Line to add:** `_parser_name_` |
-| **Registry Event** | | **Name:** `imRegistry`<br><br>**Line to add:** `_parser_name_` |
-| **Web Session** | **Name:** `imWebSession`<br><br>**Line to add:**<br> `_parser_name_ parser (starttime, endtime, srcipaddr_has_any, url_has_any, httpuseragent_has_any, eventresultdetails_in, eventresult)` | **Name:** `ASimWebSession`<br><br>**Line to add:** `_parser_name_` |
+| Schema | Parser | Line to add |
+| | -- | - |
+| **Authentication** | `ImAuthentication` | `_parser_name_ (starttime, endtime, targetusername_has)` |
+| **DNS** | `ImDns` | `_parser_name_ (starttime, endtime, srcipaddr, domain_has_any,`<br>` responsecodename, response_has_ipv4, response_has_any_prefix,`<br>` eventtype)` |
+| **File Event** | `imFileEvent` | `_parser_name_` |
+| **Network Session** | `imNetworkSession` | `_parser_name_ (starttime, endtime, srcipaddr_has_any_prefix, dstipaddr_has_any_prefix, dstportnumber, url_has_any,`<br>` httpuseragent_has_any, hostname_has_any, dvcaction, eventresult)` |
+| **Process Event** | - `imProcess`<br> - `imProcessCreate`<br> - `imProcessTerminate` | `_parser_name_` |
+| **Registry Event** | `imRegistry` | `_parser_name_` |
+| **Web Session** | `imWebSession` | `_parser_name_ (starttime, endtime, srcipaddr_has_any, url_has_any, httpuseragent_has_any, eventresultdetails_in, eventresult)` |
When adding an additional parser to a unifying parser, make sure you add a comma at the end of the previous line.
sentinel Normalization Parsers List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-parsers-list.md
Deploy the parsers from the [Microsoft Sentinel GitHub repository](https://aka.m
Microsoft Sentinel provides the following out-of-the-box, product-specific DNS parsers:
-| **Source** | **Built-in parsers** | **Workspace deployed parsers** |
-| | | |
-|**Microsoft DNS Server**<br>Collected by the DNS connector<br> and the Log Analytics Agent | `_ASim_Dns_MicrosoftOMS` (regular) <br> `_Im_Dns_MicrosoftOMS` (filtering) <br><br> | `ASimDnsMicrosoftOMS` (regular) <br>`vimDnsMicrosoftOMS` (filtering) <br><br> |
-| **Microsoft DNS Server**<br>Collected by NXlog| `_ASim_Dns_MicrosoftNXlog` (regular)<br>`_Im_Dns_MicrosoftNXlog` (filtering)| `ASimDnsMicrosoftNXlog` (regular)<br> `vimDnsMicrosoftNXlog` (filtering)|
-| **Azure Firewall** | `_ASim_Dns_AzureFirewall` (regular)<br> `_Im_Dns_AzureFirewall` (filtering) | `ASimDnsAzureFirewall` (regular)<br>`vimDnsAzureFirewall` (filtering) |
-| **Sysmon for Windows** (event 22)<br> Collected by the Log Analytics Agent<br> or the Azure Monitor Agent,<br>supporting both the<br> `Event` and `WindowsEvent` tables | `_ASim_Dns_MicrosoftSysmon` (regular)<br> `_Im_Dns_MicrosoftSysmon` (filtering) | `ASimDnsMicrosoftSysmon` (regular)<br> `vimDnsMicrosoftSysmon` (filtering) |
-| **Cisco Umbrella** | `_ASim_Dns_CiscoUmbrella` (regular)<br> `_Im_Dns_CiscoUmbrella` (filtering) | `ASimDnsCiscoUmbrella` (regular)<br> `vimDnsCiscoUmbrella` (filtering) |
-| **Infoblox NIOS**<br><br>The InfoBlox parsers<br>require [configuring the relevant sources](normalization-manage-parsers.md#configure-the-sources-relevant-to-a-source-specific-parser).<br> Use `InfobloxNIOS` as the source type. | `_ASim_Dns_InfobloxNIOS` (regular)<br> `_Im_Dns_InfobloxNIOS` (filtering) | `ASimDnsInfobloxNIOS` (regular)<br> `vimDnsInfobloxNIOS` (filtering) |
-| **GCP DNS** | `_ASim_Dns_Gcp` (regular)<br> `_Im_Dns_Gcp` (filtering) | `ASimDnsGcp` (regular)<br> `vimDnsGcp` (filtering) |
-| **Corelight Zeek DNS events** | `_ASim_Dns_CorelightZeek` (regular)<br> `_Im_Dns_CorelightZeek` (filtering) | `ASimDnsCorelightZeek` (regular)<br> `vimDnsCorelightZeek` (filtering) |
-| **Vectra AI** |`_ASim_Dns_VectraIA` (regular)<br> `_Im_Dns_VectraIA` (filtering) | `AsimDnsVectraAI` (regular)<br> `vimDnsVectraAI` (filtering) |
-| **Zscaler ZIA** |`_ASim_Dns_ZscalerZIA` (regular)<br> `_Im_Dns_ZscalerZIA` (filtering) | `AsimDnsZscalerZIA` (regular)<br> `vimDnsSzcalerZIA` (filtering) |
+| **Source** | **Notes** | **Parser**
+| | | - |
+| **Normalized DNS Logs** | Any event normalized at ingestion to the `ASimDnsActivityLogs` table. | `_Im_Dns_Native` |
+| **Azure Firewall** | | `_Im_Dns_AzureFirewallVxx` |
+| **Cisco Umbrella** | | `_Im_Dns_CiscoUmbrellaVxx` |
+| **Corelight Zeek** | | `_Im_Dns_CorelightZeekVxx` |
+| **GCP DNS** | | `_Im_Dns_GcpVxx` |
+| - **Infoblox NIOS**<br> - **BIND**<br> - **BlueCat** | The same parsers support multiple sources. | `_Im_Dns_InfobloxNIOSVxx` |
+| **Microsoft DNS Server** | Collected by the DNS connector and the Log Analytics Agent. | `_Im_Dns_MicrosoftOMSVxx` |
+| **Microsoft DNS Server** | Collected by NXlog. | `_Im_Dns_MicrosoftNXlogVxx` |
+| **Sysmon for Windows** (event 22) | Collected by the Log Analytics Agent<br> or the Azure Monitor Agent,<br>supporting both the<br> `Event` and `WindowsEvent` tables. | `_Im_Dns_MicrosoftSysmonVxx` |
+| **Vectra AI** | |`_Im_Dns_VectraIAVxx` |
+| **Zscaler ZIA** | | `_Im_Dns_ZscalerZIAVxx` |
||||

Deploy the workspace deployed parsers from the [Microsoft Sentinel GitHub repository](https://aka.ms/AsimDNS).
Deploy the parsers from the [Microsoft Sentinel GitHub repository](https://aka.m
Microsoft Sentinel provides the following out-of-the-box, product-specific Network Session parsers:
-| **Source** | **Built-in parsers** | **Workspace deployed parsers** |
+| **Source** | **Notes** | **Parser** |
| | | |
-| **AppGate SDP** ip connection logs collected using Syslog |`_ASim_NetworkSession_AppGateSDP` (regular)<br> `_Im_NetworkSession_AppGateSDP` (filtering)<br> (Pending deployment) | `ASimNetworkSessionAppGateSDP` (regular)<br> `vimNetworkSessionAppGateSDP` (filtering) |
-| **AWS VPC logs** collected using the AWS S3 connector |`_ASim_NetworkSession_AWSVPC` (regular)<br> `_Im_NetworkSession_AWSVPC` (filtering) | `ASimNetworkSessionAWSVPC` (regular)<br> `vimNetworkSessionAWSVPC` (filtering) |
-| **Azure Firewall logs** |`_ASim_NetworkSession_AzureFirewall` (regular)<br> `_Im_NetworkSession_AzureFirewall` (filtering) | `ASimNetworkSessionAzureFirewall` (regular)<br> `vimNetworkSessionAzureFirewall` (filtering) |
-| **Azure Monitor VMConnection** collected as part of the Azure Monitor [VM Insights solution](../azure-monitor/vm/vminsights-overview.md) |`_ASim_NetworkSession_VMConnection` (regular)<br> `_Im_NetworkSession_VMConnection` (filtering) | `ASimNetworkSessionVMConnection` (regular)<br> `vimNetworkSessionVMConnection` (filtering) |
-| **Azure Network Security Groups (NSG) logs** collected as part of the Azure Monitor [VM Insights solution](../azure-monitor/vm/vminsights-overview.md) |`_ASim_NetworkSession_AzureNSG` (regular)<br> `_Im_NetworkSession_AzureNSG` (filtering) | `ASimNetworkSessionAzureNSG` (regular)<br> `vimNetworkSessionAzureNSG` (filtering) |
-| **Fortigate FortiOS** ip connection logs collected using Syslog |`_ASim_NetworkSession_FortinetFortiGate` (regular)<br> `_Im_NetworkSession_FortinetFortiGate` (filtering)<br> (Pending deployment) | `ASimNetworkSessionFortinetFortiGate` (regular)<br> `vimNetworkSessionFortinetFortiGate` (filtering) |
-| **Microsoft 365 Defender for Endpoint** | `_ASim_NetworkSession_Microsoft365Defender` (regular)<br><br>`_Im_NetworkSession_Microsoft365Defender` (filtering) | `ASimNetworkSessionMicrosoft365Defender` (regular)<br><br> `vimNetworkSessionMicrosoft365Defender` (filtering) |
-| **Microsoft Defender for IoT - Endpoint** |`_ASim_NetworkSession_MD4IoT` (regular)<br><br>`_Im_NetworkSession_MD4IoT` (filtering) | `ASimNetworkSessionMD4IoT` (regular)<br><br> `vimNetworkSessionMD4IoT` (filtering) |
-| **Palo Alto PanOS traffic logs** collected using CEF |`_ASim_NetworkSession_PaloAltoCEF` (regular)<br> `_Im_NetworkSession_PaloAltoCEF` (filtering) | `ASimNetworkSessionPaloAltoCEF` (regular)<br> `vimNetworkSessionPaloAltoCEF` (filtering) |
-| **Sysmon for Linux** (event 3)<br> Collected using the Log Analytics Agent<br> or the Azure Monitor Agent |`_ASim_NetworkSession_LinuxSysmon` (regular)<br><br>`_Im_NetworkSession_LinuxSysmon` (filtering) | `ASimNetworkSessionLinuxSysmon` (regular)<br><br> `vimNetworkSessionLinuxSysmon` (filtering) |
-| **Vectra AI** |`_ASim_NetworkSession_VectraIA` (regular)<br> `_Im_NetworkSession_VectraIA` (filtering) | `AsimNetworkSessionVectraAI` (regular)<br> `vimNetworkSessionVectraAI` (filtering) |
-| **Windows Firewall logs**<br>Collected as Windows events using the Log Analytics Agent (Event table) or Azure Monitor Agent (WindowsEvent table). Supports Windows events 5150 to 5159. |`_ASim_NetworkSession_`<br>`MicrosoftWindowsEventFirewall` (regular)<br><br>`_Im_NetworkSession_`<br>`MicrosoftWindowsEventFirewall` (filtering) | `ASimNetworkSession`<br>`MicrosoftWindowsEventFirewall` (regular)<br><br> `vimNetworkSession`<br>`MicrosoftWindowsEventFirewall` (filtering) |
-| **Zscaler ZIA firewall logs** |`_ASim_NetworkSessionZscalerZIA` (regular)<br> `_Im_NetworkSessionZscalerZIA` (filtering) | `AsimNetworkSessionZscalerZIA` (regular)<br> `vimNetowrkSessionSzcalerZIA` (filtering) |
+| **AppGate SDP** | IP connection logs collected using Syslog. | `_Im_NetworkSession_AppGateSDPVxx` |
+| **AWS VPC logs** | Collected using the AWS S3 connector. | `_Im_NetworkSession_AWSVPCVxx` |
+| **Azure Firewall logs** | |`_Im_NetworkSession_AzureFirewallVxx`|
+| **Azure Monitor VMConnection** | Collected as part of the Azure Monitor [VM Insights solution](../azure-monitor/vm/vminsights-overview.md). | `_Im_NetworkSession_VMConnectionVxx` |
+| **Azure Network Security Groups (NSG) logs** | Collected as part of the Azure Monitor [VM Insights solution](../azure-monitor/vm/vminsights-overview.md). | `_Im_NetworkSession_AzureNSGVxx` |
+| **Fortigate FortiOS** | IP connection logs collected using Syslog. | `_Im_NetworkSession_FortinetFortiGateVxx` |
+| **Microsoft 365 Defender for Endpoint** | | `_Im_NetworkSession_Microsoft365DefenderVxx`|
+| **Microsoft Defender for IoT - Endpoint** | | `_Im_NetworkSession_MD4IoTVxx` |
+| **Palo Alto PanOS traffic logs** | Collected using CEF. | `_Im_NetworkSession_PaloAltoCEFVxx` |
+| **Sysmon for Linux** (event 3) | Collected using the Log Analytics Agent<br> or the Azure Monitor Agent. |`_Im_NetworkSession_LinuxSysmonVxx` |
+| **Vectra AI** | | `_Im_NetworkSession_VectraIAVxx` |
+| **Windows Firewall logs** | Collected as Windows events using the Log Analytics Agent (Event table) or Azure Monitor Agent (WindowsEvent table). Supports Windows events 5150 to 5159. | `_Im_NetworkSession_MicrosoftWindowsEventFirewallVxx`|
+| **Zscaler ZIA firewall logs** | Collected using CEF. | `_Im_NetworkSessionZscalerZIAVxx` |
Deploy the workspace deployed parsers from the [Microsoft Sentinel GitHub repository](https://aka.ms/AsimNetworkSession).
Deploy Registry Event parsers from the [Microsoft Sentinel GitHub repository](ht
Microsoft Sentinel provides the following out-of-the-box, product-specific Web Session parsers:
-| **Source** | **Built-in parsers** | **Workspace deployed parsers** |
+| **Source** | **Notes** | **Parser** |
| | | |
-|**Squid Proxy** | `_ASim_WebSession_SquidProxy` (regular) <br> `_Im_WebSession_SquidProxy` (filtering) <br><br> | `ASimWebSessionSquidProxy` (regular) <br>`vimWebSessionSquidProxy` (filtering) <br><br> |
-| **Vectra AI Streams** |`_ASim_WebSession_VectraAI` (regular)<br> `_Im_WebSession_VectraAI` (filtering) <br> (Pending deployment) | `ASimWebSessionVectraAI` (regular)<br> `vimWebSessionVectraAI` (filtering) |
-| **Zscaler ZIA** |`_ASim_WebSessionZscalerZIA` (regular)<br> `_Im_WebSessionZscalerZIA` (filtering) | `AsimWebSessionZscalerZIA` (regular)<br> `vimWebSessionSzcalerZIA` (filtering) |
+|**Squid Proxy** | | `_Im_WebSession_SquidProxyVxx` |
+| **Vectra AI Streams** | | `_Im_WebSession_VectraAIVxx` |
+| **Zscaler ZIA** | Collected using CEF. | `_Im_WebSessionZscalerZIAVxx` |
These parsers can be deployed from the [Microsoft Sentinel GitHub repository](https://aka.ms/DeployASIM).
service-fabric How To Managed Cluster Dedicated Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-dedicated-hosts.md
Before you begin:
* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free) * Retrieve a managed cluster ARM template. Sample Resource Manager templates are available in the [Azure samples on GitHub](https://github.com/Azure-Samples/service-fabric-cluster-templates). These templates can be used as a starting point for your cluster template. This guide shows how to deploy a Standard SKU cluster with two node types and 12 nodes.
-* The user needs to have Microsoft.Authorization/roleAssignments/write permissions to the host group such as [User Access Administrator](https://docs.microsoft.com/azure/role-based-access-control/built-in-roles#user-access-administrator) or [Owner](https://docs.microsoft.com/azure/role-based-access-control/built-in-roles#owner) to do role assignments in a host group. For more information, see [Assign Azure roles using the Azure portal - Azure RBAC](https://docs.microsoft.com/azure/role-based-access-control/role-assignments-portal?tabs=current#prerequisites).
+* The user needs to have Microsoft.Authorization/roleAssignments/write permissions to the host group such as [User Access Administrator](/azure/role-based-access-control/built-in-roles#user-access-administrator) or [Owner](/azure/role-based-access-control/built-in-roles#owner) to do role assignments in a host group. For more information, see [Assign Azure roles using the Azure portal - Azure RBAC](/azure/role-based-access-control/role-assignments-portal?tabs=current#prerequisites).
## Review the template
The template used in this guide is from [Azure Samples - Service Fabric cluster
## Create a client certificate
Service Fabric managed clusters use a client certificate as a key for access control. If you already have a client certificate that you would like to use for access control to your cluster, you can skip this step.
-If you need to create a new client certificate, follow the steps in [set and retrieve a certificate from Azure Key Vault](https://docs.microsoft.com/azure/key-vault/certificates/quick-create-portal). Note the certificate thumbprint as it will be required to deploy the template in the next step.
+If you need to create a new client certificate, follow the steps in [set and retrieve a certificate from Azure Key Vault](/azure/key-vault/certificates/quick-create-portal). Note the certificate thumbprint as it will be required to deploy the template in the next step.
## Deploy Dedicated Host resources and configure access to Service Fabric Resource Provider
Create a dedicated host group and add a role assignment to the host group with t
> * Each fault domain needs a dedicated host to be placed in it, and Service Fabric managed clusters require five fault domains. Therefore, at least five dedicated hosts should be present in each dedicated host group.
-3. The [sample ARM deployment template for Dedicated Host Group](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/SF-Managed-Standard-SKU-2-NT-ADH) used in the previous step also adds a role assignment to the host group with contributor access. For more information on Azure roles, see [Azure built-in roles - Azure RBAC](https://docs.microsoft.com/azure/role-based-access-control/built-in-roles#all). This role assignment is defined in the resources section of template with Principal ID determined from the first step and a role definition ID.
+3. The [sample ARM deployment template for Dedicated Host Group](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/SF-Managed-Standard-SKU-2-NT-ADH) used in the previous step also adds a role assignment to the host group with contributor access. For more information on Azure roles, see [Azure built-in roles - Azure RBAC](/azure/role-based-access-control/built-in-roles#all). This role assignment is defined in the resources section of the template, with the Principal ID determined in the first step and a role definition ID.
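A role assignment resource of this kind might look like the following sketch. The variable and parameter names here are illustrative, not taken from the sample template; the GUID passed to `subscriptionResourceId` is the well-known built-in Contributor role definition ID:

```json
{
  "type": "Microsoft.Authorization/roleAssignments",
  "apiVersion": "2022-04-01",
  "name": "[guid(resourceGroup().id, variables('sfrpPrincipalId'), 'Contributor')]",
  "scope": "[format('Microsoft.Compute/hostGroups/{0}', parameters('hostGroupName'))]",
  "properties": {
    "roleDefinitionId": "[subscriptionResourceId('Microsoft.Authorization/roleDefinitions', 'b24988ac-6180-42a0-ab88-20f7382dd24c')]",
    "principalId": "[variables('sfrpPrincipalId')]"
  }
}
```

The `scope` property makes this an extension resource attached directly to the host group, so the assignment is deleted if the host group is deleted.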
```JSON
"variables": {
Create an Azure Service Fabric managed cluster with node type(s) configured to r
* Cluster Name: Enter a unique name for your cluster, such as mysfcluster.
* Admin Username: Enter a name for the admin to be used for RDP on the underlying VMs in the cluster.
* Admin Password: Enter a password for the admin to be used for RDP on the underlying VMs in the cluster.
- * Client Certificate Thumbprint: Provide the thumbprint of the client certificate that you would like to use to access your cluster. If you don't have a certificate, follow [set and retrieve a certificate](https://docs.microsoft.com/azure/key-vault/certificates/quick-create-portal) to create a self-signed certificate.
+ * Client Certificate Thumbprint: Provide the thumbprint of the client certificate that you would like to use to access your cluster. If you don't have a certificate, follow [set and retrieve a certificate](/azure/key-vault/certificates/quick-create-portal) to create a self-signed certificate.
* Node Type Name: Enter a unique name for your node type, such as nt1.
3. Deploy an ARM template through one of the methods below:
Create an Azure Service Fabric managed cluster with node type(s) configured to r
[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fservice-fabric-cluster-templates%2Fmaster%2FSF-Managed-Standard-SKU-2-NT-ADH%2Fazuredeploy.json)
- * ARM PowerShell cmdlets: [New-AzResourceGroupDeployment (Az.Resources)](https://docs.microsoft.com/powershell/module/az.resources/new-azresourcegroupdeployment). Store the paths of your ARM template and parameter files in variables, then deploy the template.
+ * ARM PowerShell cmdlets: [New-AzResourceGroupDeployment (Az.Resources)](/powershell/module/az.resources/new-azresourcegroupdeployment). Store the paths of your ARM template and parameter files in variables, then deploy the template.
```powershell
$templateFilePath = "<full path to azuredeploy.json>"
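# A completion sketch of the deployment described above. The resource group
# name is a placeholder, and a parameter file alongside the template is assumed.
$parameterFilePath = "<full path to azuredeploy.parameters.json>"
New-AzResourceGroupDeployment `
    -ResourceGroupName "<resource group name>" `
    -TemplateFile $templateFilePath `
    -TemplateParameterFile $parameterFilePath
```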
service-fabric How To Managed Cluster Ephemeral Os Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-ephemeral-os-disks.md
Ephemeral OS disks work well where applications are tolerant of individual VM fa
This article describes how to create Service Fabric managed cluster node types with Ephemeral OS disks by using an Azure Resource Manager template (ARM template).
## Prerequisites
-This guide builds upon the managed cluster quick start guide: [Deploy a Service Fabric managed cluster using Azure Resource Manager](https://docs.microsoft.com/azure/service-fabric/quickstart-managed-cluster-template)
+This guide builds upon the managed cluster quick start guide: [Deploy a Service Fabric managed cluster using Azure Resource Manager](/azure/service-fabric/quickstart-managed-cluster-template)
Before you begin:
* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free).
* Retrieve a managed cluster ARM template. Sample Resource Manager templates are available in the [Azure samples on GitHub](https://github.com/Azure-Samples/service-fabric-cluster-templates). These templates can be used as a starting point for your cluster template.
* Ephemeral OS disks are supported for both primary and secondary node types. This guide shows how to deploy a Standard SKU cluster with two node types - a primary and a secondary node type that uses an Ephemeral OS disk.
-* Ephemeral OS disks aren't supported for every SKU. VM sizes such as DSv1, DSv2, DSv3, Esv3, Fs, FsV2, GS, M, Mdsv2, Bs, Dav4, Eav4 supports Ephemeral OS disks. Ensure the SKU with which you want to deploy supports Ephemeral OS disk. For more information on individual SKU, see [supported VM SKU](https://docs.microsoft.com/azure/virtual-machines/dv3-dsv3-series) and navigate to the desired SKU on left side pane.
+* Ephemeral OS disks aren't supported for every SKU. VM sizes such as DSv1, DSv2, DSv3, Esv3, Fs, FsV2, GS, M, Mdsv2, Bs, Dav4, and Eav4 support Ephemeral OS disks. Ensure the SKU you want to deploy with supports Ephemeral OS disks. For more information on individual SKUs, see [supported VM SKU](/azure/virtual-machines/dv3-dsv3-series) and navigate to the desired SKU in the left pane.
* Ephemeral OS disks in Service Fabric are placed in the temporary disk space of the VM SKU. Ensure the VM SKU you're using has more than 127 GiB of temporary disk space to hold the Ephemeral OS disk.
## Review the template
The template used in this guide is from [Azure Samples - Service Fabric cluster
## Create a client certificate
Service Fabric managed clusters use a client certificate as a key for access control. If you already have a client certificate that you would like to use for access control to your cluster, you can skip this step.
-If you need to create a new client certificate, follow the steps in [set and retrieve a certificate from Azure Key Vault](https://docs.microsoft.com/azure/key-vault/certificates/quick-create-portal). Note the certificate thumbprint as it will be required to deploy the template in the next step.
+If you need to create a new client certificate, follow the steps in [set and retrieve a certificate from Azure Key Vault](/azure/key-vault/certificates/quick-create-portal). Note the certificate thumbprint as it will be required to deploy the template in the next step.
## Deploy the template
If you need to create a new client certificate, follow the steps in [set and ret
* Cluster Name: Enter a unique name for your cluster, such as mysfcluster.
* Admin Username: Enter a name for the admin to be used for RDP on the underlying VMs in the cluster.
* Admin Password: Enter a password for the admin to be used for RDP on the underlying VMs in the cluster.
- * Client Certificate Thumbprint: Provide the thumbprint of the client certificate that you would like to use to access your cluster. If you don't have a certificate, follow [set and retrieve a certificate](https://docs.microsoft.com/azure/key-vault/certificates/quick-create-portal) to create a self-signed certificate.
+ * Client Certificate Thumbprint: Provide the thumbprint of the client certificate that you would like to use to access your cluster. If you don't have a certificate, follow [set and retrieve a certificate](/azure/key-vault/certificates/quick-create-portal) to create a self-signed certificate.
* Node Type Name: Enter a unique name for your node type, such as nt1.
If you need to create a new client certificate, follow the steps in [set and ret
[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fservice-fabric-cluster-templates%2Fmaster%2FSF-Managed-Standard-SKU-2-NT-Ephemeral%2Fazuredeploy.json)
- * ARM PowerShell cmdlets: [New-AzResourceGroupDeployment (Az.Resources)](https://docs.microsoft.com/powershell/module/az.resources/new-azresourcegroupdeployment). Store the paths of your ARM template and parameter files in variables, then deploy the template.
+ * ARM PowerShell cmdlets: [New-AzResourceGroupDeployment (Az.Resources)](/powershell/module/az.resources/new-azresourcegroupdeployment). Store the paths of your ARM template and parameter files in variables, then deploy the template.
```powershell
$templateFilePath = "<full path to azuredeploy.json>"
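# A completion sketch of the deployment described above. The resource group
# name is a placeholder, and a parameter file alongside the template is assumed.
$parameterFilePath = "<full path to azuredeploy.parameters.json>"
New-AzResourceGroupDeployment `
    -ResourceGroupName "<resource group name>" `
    -TemplateFile $templateFilePath `
    -TemplateParameterFile $parameterFilePath
```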
service-fabric How To Managed Cluster Modify Node Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-modify-node-type.md
Add another resource type `Microsoft.ServiceFabric/managedclusters/nodetypes` wi
* Make sure to set `isPrimary` to `true` if you intend to replace an existing primary node type.
```json
- {
- "apiVersion": "[variables('sfApiVersion')]",
- "type": "Microsoft.ServiceFabric/managedclusters/nodetypes",
- "name": "[concat(parameters('clusterName'), '/', parameters('nodeType2Name'))]",
- "location": "[resourcegroup().location]",
- "dependsOn": [
- "[concat('Microsoft.ServiceFabric/managedclusters/', parameters('clusterName'))]"
- ],
- "properties": {
- "isPrimary": false,
- "vmImagePublisher": "[parameters('vmImagePublisher')]",
- "vmImageOffer": "[parameters('vmImageOffer')]",
- "vmImageSku": "[parameters('vmImageSku')]",
- "vmImageVersion": "[parameters('vmImageVersion')]",
- "vmSize": "[parameters('nodeType2VmSize')]",
- "vmInstanceCount": "[parameters('nodeType2VmInstanceCount')]",
- "dataDiskSizeGB": "[parameters('nodeType2DataDiskSizeGB')]",
- "dataDiskType":ΓÇ»"[parameters('nodeType2managedDataDiskType')]"
- }
+{
+ "apiVersion": "[variables('sfApiVersion')]",
+ "type": "Microsoft.ServiceFabric/managedclusters/nodetypes",
+ "name": "[concat(parameters('clusterName'), '/', parameters('nodeType2Name'))]",
+ "location": "[resourcegroup().location]",
+ "dependsOn": [
+ "[concat('Microsoft.ServiceFabric/managedclusters/', parameters('clusterName'))]"
+ ],
+ "properties": {
+ "isPrimary": false,
+ "vmImagePublisher": "[parameters('vmImagePublisher')]",
+ "vmImageOffer": "[parameters('vmImageOffer')]",
+ "vmImageSku": "[parameters('vmImageSku')]",
+ "vmImageVersion": "[parameters('vmImageVersion')]",
+ "vmSize": "[parameters('nodeType2VmSize')]",
+ "vmInstanceCount": "[parameters('nodeType2VmInstanceCount')]",
+ "dataDiskSizeGB": "[parameters('nodeType2DataDiskSizeGB')]",
+ "dataDiskType": "[parameters('nodeType2managedDataDiskType')]"
+ }
+}
```
For an example two node type configuration, see our [sample two node type ARM Template](https://github.com/Azure-Samples/service-fabric-cluster-templates/blob/master/SF-Managed-Standard-SKU-2-NT)
### Add with PowerShell
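A node type can also be added with the Az.ServiceFabric PowerShell module. The following is a minimal sketch; the resource group, cluster, node type name, and instance count are placeholder values:

```powershell
# Hypothetical values; requires the Az.ServiceFabric module and an existing
# Service Fabric managed cluster.
New-AzServiceFabricManagedNodeType `
    -ResourceGroupName "sfmc-rg" `
    -ClusterName "mysfcluster" `
    -Name "nt2" `
    -InstanceCount 6
```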
In this walkthrough, you will learn how to modify the node count for a node type
![Sample showing a node count increase][adjust-node-count]
-6) Select `Manage node type scaling` to configure the scaling settings and choose between custom autoscale and manual scale options. Autoscale is a built-in feature that helps applications perform their best when demand changes. You can choose to scale your resource manually to a specific instance count, or via a custom Autoscale policy that scales based on metric(s) thresholds, or schedule instance count which scales during designated time windows. [Learn more about Azure Autoscale](https://docs.microsoft.com/azure/azure-monitor/platform/autoscale-get-started?WT.mc_id=Portal-Microsoft_Azure_Monitoring) or [view the how-to video](https://www.microsoft.com/videoplayer/embed/RE4u7ts).
+6) Select `Manage node type scaling` to configure the scaling settings, choosing between custom autoscale and manual scale. Autoscale is a built-in feature that helps applications perform their best when demand changes. You can scale your resource manually to a specific instance count, scale via a custom autoscale policy based on metric thresholds, or schedule instance counts that scale during designated time windows. [Learn more about Azure Autoscale](/azure/azure-monitor/platform/autoscale-get-started?WT.mc_id=Portal-Microsoft_Azure_Monitoring) or [view the how-to video](https://www.microsoft.com/videoplayer/embed/RE4u7ts).
* **Custom autoscale**: Select the appropriate `scale mode` to define the custom Autoscale policy - `Scale to a specific instance count` or `Scale based on a metric`. The latter is based on metric trigger rules, for example, increase instance count by 1 when CPU Percentage is above 70%. Once you define the policy, select `Save` at the top.
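As an illustration, a metric rule like the one described above (scale out by 1 when CPU Percentage is above 70%) can also be expressed with the Azure CLI. The names below are placeholders, and this sketch assumes the autoscale setting targets the virtual machine scale set that backs the node type:

```azurecli
# Hypothetical names; create an autoscale setting on the backing scale set.
az monitor autoscale create \
    --resource-group <node-type-resource-group> \
    --resource <scale-set-name> \
    --resource-type Microsoft.Compute/virtualMachineScaleSets \
    --name sfmc-autoscale --min-count 6 --max-count 12 --count 6

# Scale out by 1 instance when average CPU exceeds 70% over 5 minutes.
az monitor autoscale rule create \
    --resource-group <node-type-resource-group> \
    --autoscale-name sfmc-autoscale \
    --condition "Percentage CPU > 70 avg 5m" \
    --scale out 1
```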
Service Fabric managed clusters by default configure a Service Fabric data disk
[change-nodetype-os-image]: ./media/how-to-managed-cluster-modify-node-type/sfmc-change-os-image.png
[nodetype-placement-property]: ./media/how-to-managed-cluster-modify-node-type/sfmc-nodetype-placement-property.png
[addremove]: ./media/how-to-managed-cluster-modify-node-type/sfmc-addremove-node-type.png
service-fabric Service Fabric Sfctl Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-sfctl-node.md
This api allows removing all existing configuration overrides on specified node.
## sfctl node remove-state
Notifies Service Fabric that the persisted state on a node has been permanently removed or lost.
-This implies that it is not possible to recover the persisted state of that node. This generally happens if a hard disk has been wiped clean, or if a hard disk crashes. The node has to be down for this operation to be successful. This operation lets Service Fabric know that the replicas on that node no longer exist, and that Service Fabric should stop waiting for those replicas to come back up. Do not run this cmdlet if the state on the node has not been removed and the node can come back up with its state intact. Starting from Service Fabric 6.5, in order to use this API for seed nodes, please change the seed nodes to regular (non-seed) nodes and then invoke this API to remove the node state. If the cluster is running on Azure, after the seed node goes down, Service Fabric will try to change it to a non-seed node automatically. To make this happen, make sure the number of non-seed nodes in the primary node type is no less than the number of Down seed nodes. If necessary, add more nodes to the primary node type to achieve this. For standalone cluster, if the Down seed node is not expected to come back up with its state intact, please remove the node from the cluster, see https\://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-windows-server-add-remove-nodes.
+This implies that it is not possible to recover the persisted state of that node. This generally happens if a hard disk has been wiped clean, or if a hard disk crashes. The node has to be down for this operation to be successful. This operation lets Service Fabric know that the replicas on that node no longer exist, and that Service Fabric should stop waiting for those replicas to come back up. Do not run this cmdlet if the state on the node has not been removed and the node can come back up with its state intact. Starting from Service Fabric 6.5, in order to use this API for seed nodes, please change the seed nodes to regular (non-seed) nodes and then invoke this API to remove the node state. If the cluster is running on Azure, after the seed node goes down, Service Fabric will try to change it to a non-seed node automatically. To make this happen, make sure the number of non-seed nodes in the primary node type is no less than the number of Down seed nodes. If necessary, add more nodes to the primary node type to achieve this. For standalone cluster, if the Down seed node is not expected to come back up with its state intact, please remove the node from the cluster. For more information, see [Add or remove nodes to a standalone Service Fabric cluster running on Windows Server](/azure/service-fabric/service-fabric-cluster-windows-server-add-remove-nodes).
### Arguments
site-recovery Avs Tutorial Failback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/avs-tutorial-failback.md
This article describes how to failback Azure VMs to an Azure VMware Solution pri
## Before you start
-1. Learn about [VMware failback](failover-failback-overview.md#vmwarephysical-reprotectionfailback).
+1. Learn about [VMware vSphere failback](failover-failback-overview.md#vmwarephysical-reprotectionfailback).
2. Make sure you've reviewed and completed the steps to [prepare for failback](vmware-azure-prepare-failback.md), and that all the required components are deployed. Components include a process server in Azure, a master target server, and a VPN site-to-site connection (or ExpressRoute private peering) for failback.
-3. Make sure you've completed the [requirements](avs-tutorial-reprotect.md#before-you-begin) for reprotection and failback, and that you've [enabled reprotection](avs-tutorial-reprotect.md#enable-reprotection) of Azure VMs, so that they're replicating from Azure to the Azure VMware Solution private cloud. VMs must be in a replicated state is order to fail back.
+3. Make sure you've completed the [requirements](avs-tutorial-reprotect.md#before-you-begin) for reprotection and failback, and that you've [enabled reprotection](avs-tutorial-reprotect.md#enable-reprotection) of Azure VMs, so that they're replicating from Azure to the Azure VMware Solution private cloud. VMs must be in a replicated state in order to fail back.
This article describes how to failback Azure VMs to an Azure VMware Solution pri
- **Latest** is a crash-consistent recovery point. - With **Latest**, a VM fails over to its latest available point in time. If you have a replication group for multi-VM consistency within a recovery plan, each VM in the group fails over to its independent latest point in time. - If you use an app-consistent recovery point, each VM fails back to its latest available point. If a recovery plan has a replication group, each group recovers to its common available recovery point.
-5. Failover begins. Site Recovery shuts down the Azure VMs.
+5. Failover begins. Azure Site Recovery shuts down the Azure VMs.
6. After failover completes, check everything's working as expected. Check that the Azure VMs are shut down. 7. With everything verified, right-click the VM > **Commit**, to finish the failover process. Commit removes the failed-over Azure VM. > [!NOTE]
-> For Windows VMs, Site Recovery disables the VMware tools during failover. During failback of the Windows VM, the VMware tools are enable again.
+> For Windows VMs, Azure Site Recovery disables the VMware tools during failover. During failback of the Windows VM, the VMware tools are enabled again.
site-recovery Site Recovery Runbook Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-runbook-automation.md
Previously updated : 07/15/2021 Last updated : 08/10/2022
# Add Azure Automation runbooks to recovery plans
Aman Sharma's blog over at [Harvesting Clouds](http://harvestingclouds.com) has
## Before you start
-- If you're new to Azure Automation, you can [sign up](https://azure.microsoft.com/services/automation/) and [download sample scripts](https://azure.microsoft.com/documentation/scripts/).
+- If you're new to Azure Automation, you can [sign up](https://azure.microsoft.com/services/automation/) and [download sample scripts](https://azure.microsoft.com/documentation/scripts/). For more information, see [Automation runbooks - known issues and limitations](/azure/automation/automation-runbook-types#powershell-runbooks).
- Ensure that the Automation account has the following modules:
  - AzureRM.profile
  - AzureRM.Resources
spring-apps Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/faq.md
Azure Spring Apps intelligently schedules your applications on the underlying Ku
### In which regions is Azure Spring Apps Basic/Standard tier available?
-East US, East US 2, Central US, South Central US, North Central US, West US, West US 2, West US 3, West Europe, North Europe, UK South, Southeast Asia, Australia East, Canada Central, UAE North, Central India, Korea Central, East Asia, Japan East, South Africa North, Brazil South, France Central, China East 2(Mooncake), and China North 2(Mooncake). [Learn More](https://azure.microsoft.com/global-infrastructure/services/?products=spring-cloud)
+East US, East US 2, Central US, South Central US, North Central US, West US, West US 2, West US 3, West Europe, North Europe, UK South, Southeast Asia, Australia East, Canada Central, UAE North, Central India, Korea Central, East Asia, Japan East, South Africa North, Brazil South, France Central, Germany West Central, Switzerland North, China East 2 (Mooncake), and China North 2 (Mooncake). [Learn More](https://azure.microsoft.com/global-infrastructure/services/?products=spring-cloud)
### In which regions is Azure Spring Apps Enterprise tier available?
-East US, East US 2, Central US, South Central US, North Central US, West US, West US 2, West US 3, West Europe, North Europe, UK South, Southeast Asia, Australia East, Canada Central, UAE North, Central India, Korea Central, East Asia, Japan East, South Africa North, Brazil South, and France Central.
+East US, East US 2, Central US, South Central US, North Central US, West US, West US 2, West US 3, West Europe, North Europe, UK South, Southeast Asia, Australia East, Canada Central, UAE North, Central India, Korea Central, East Asia, Japan East, South Africa North, Brazil South, France Central, Germany West Central, and Switzerland North.
### Is any customer data stored outside of the specified region?
spring-apps Quickstart Logs Metrics Tracing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-logs-metrics-tracing.md
zone_pivot_groups: programming-languages-spring-apps
**This article applies to:** ✔️ Basic/Standard tier ❌ Enterprise tier

::: zone pivot="programming-language-csharp"

With the built-in monitoring capability in Azure Spring Apps, you can debug and monitor complex issues. Azure Spring Apps integrates Steeltoe [distributed tracing](https://docs.steeltoe.io/api/v3/tracing/) with Azure's [Application Insights](../azure-monitor/app/app-insights-overview.md). This integration provides powerful logs, metrics, and distributed tracing capability from the Azure portal. The following procedures explain how to use Log Streaming, Log Analytics, Metrics, and Distributed Tracing with the sample app that you deployed in the preceding quickstarts.

## Prerequisites
-* Complete the previous quickstarts in this series:
+- Complete the previous quickstarts in this series:
- * [Provision Azure Spring Apps service](./quickstart-provision-service-instance.md).
- * [Set up Azure Spring Apps configuration server](./quickstart-setup-config-server.md).
- * [Build and deploy apps](./quickstart-deploy-apps.md).
- * [Set up Log Analytics workspace](./quickstart-setup-log-analytics.md).
+ - [Provision an Azure Spring Apps service instance](./quickstart-provision-service-instance.md).
+ - [Quickstart: Set up Spring Cloud Config Server for Azure Spring Apps](./quickstart-setup-config-server.md).
+ - [Build and deploy apps to Azure Spring Apps](./quickstart-deploy-apps.md).
+ - [Set up a Log Analytics workspace](./quickstart-setup-log-analytics.md).
## Logs
You can use log streaming in the Azure CLI with the following command.
```azurecli
az spring app logs -n solar-system-weather -f
```
-You will see output similar to the following example:
+You'll see output similar to the following example:
```output
=> ConnectionId:0HM2HOMHT82UK => RequestPath:/weatherforecast RequestId:0HM2HOMHT82UK:00000003, SpanId:|e8c1682e-46518cc0202c5fd9., TraceId:e8c1682e-46518cc0202c5fd9, ParentId: => Microsoft.Azure.SpringCloud.Sample.SolarSystemWeather.Controllers.WeatherForecastController.Get (Microsoft.Azure.SpringCloud.Sample.SolarSystemWeather)
```
Executing ObjectResult, writing value of type 'System.Collections.Generic.KeyVal
1. Edit the query to remove the Where clauses that limit the display to warning and error logs.
-1. Then select `Run`, and you will see logs. See [Azure Log Analytics docs](../azure-monitor/logs/get-started-queries.md) for more guidance on writing queries.
+1. Then select **Run**, and you'll see logs. For more information, see [Get started with log queries in Azure Monitor](../azure-monitor/logs/get-started-queries.md).
:::image type="content" source="media/quickstart-logs-metrics-tracing/logs-query-steeltoe.png" alt-text="Screenshot of a Logs Analytics query." lightbox="media/quickstart-logs-metrics-tracing/logs-query-steeltoe.png":::
Executing ObjectResult, writing value of type 'System.Collections.Generic.KeyVal
:::image type="content" source="media/quickstart-logs-metrics-tracing/tracing-overview-steeltoe.png" alt-text="Screenshot of the Application map page." lightbox="media/quickstart-logs-metrics-tracing/tracing-overview-steeltoe.png":::
-1. Select the link between **solar-system-weather** and **planet-weather-provider** to see more details like slowest calls by HTTP methods.
+1. Select the link between **solar-system-weather** and **planet-weather-provider** to see more details such as the slowest calls by HTTP methods.
:::image type="content" source="media/quickstart-logs-metrics-tracing/tracing-call-steeltoe.png" alt-text="Screenshot of Application map details." lightbox="media/quickstart-logs-metrics-tracing/tracing-call-steeltoe.png":::

1. Finally, select **Investigate Performance** to explore more powerful built-in performance analysis.

:::image type="content" source="media/quickstart-logs-metrics-tracing/tracing-performance-steeltoe.png" alt-text="Screenshot of Performance page." lightbox="media/quickstart-logs-metrics-tracing/tracing-performance-steeltoe.png":::

::: zone-end

::: zone pivot="programming-language-java"

With the built-in monitoring capability in Azure Spring Apps, you can debug and monitor complex issues. Azure Spring Apps integrates [Spring Cloud Sleuth](https://spring.io/projects/spring-cloud-sleuth) with Azure's [Application Insights](../azure-monitor/app/app-insights-overview.md). This integration provides powerful logs, metrics, and distributed tracing capability from the Azure portal. The following procedures explain how to use Log Streaming, Log Analytics, Metrics, and Distributed tracing with deployed PetClinic apps.

## Prerequisites
-Complete previous steps:
+- Complete the previous quickstarts in this series:
-* [Provision an instance of Azure Spring Apps](./quickstart-provision-service-instance.md)
-* [Set up the config server](./quickstart-setup-config-server.md). For enterprise tier, please follow [set up Application Configuration Service](./how-to-enterprise-application-configuration-service.md).
-* [Build and deploy apps](./quickstart-deploy-apps.md).
-* [Set up Log Analytics workspace](./quickstart-setup-log-analytics.md).
+ - [Provision an Azure Spring Apps service instance](./quickstart-provision-service-instance.md).
+ - [Quickstart: Set up Spring Cloud Config Server for Azure Spring Apps](./quickstart-setup-config-server.md).
+ - [Build and deploy apps to Azure Spring Apps](./quickstart-deploy-apps.md).
+ - [Set up a Log Analytics workspace](./quickstart-setup-log-analytics.md).
## Logs
You can use log streaming in the Azure CLI with the following command.
```azurecli
az spring app logs -s <service instance name> -g <resource group name> -n gateway -f
```
-You will see logs like this:
+You'll see logs like this:
:::image type="content" source="media/quickstart-logs-metrics-tracing/logs-streaming-cli.png" alt-text="Screenshot of CLI log output." lightbox="media/quickstart-logs-metrics-tracing/logs-streaming-cli.png":::
To get the logs using Azure Toolkit for IntelliJ:
:::image type="content" source="media/quickstart-logs-metrics-tracing/logs-entry.png" alt-text="Screenshot of the Logs opening page." lightbox="media/quickstart-logs-metrics-tracing/logs-entry.png":::
-1. Then you will see filtered logs. See [Azure Log Analytics docs](../azure-monitor/logs/get-started-queries.md) for more guidance on writing queries.
+1. Then you'll see filtered logs. For more information, see [Get started with log queries in Azure Monitor](../azure-monitor/logs/get-started-queries.md).
:::image type="content" source="media/quickstart-logs-metrics-tracing/logs-query.png" alt-text="Screenshot of filtered logs." lightbox="media/quickstart-logs-metrics-tracing/logs-query.png"::: ## Metrics
-Navigate to the `Application insights` blade. Then, navigate to the `Metrics` blade - you can see metrics contributed by Spring Boot apps, Spring modules, and dependencies.
+Navigate to the `Application insights` blade, and then navigate to the `Metrics` blade. You can see metrics contributed by Spring Boot apps, Spring modules, and dependencies.
-The following chart shows `gateway-requests` (Spring Cloud Gateway), `hikaricp_connections`
- (JDBC Connections) and `http_client_requests`.
+The following chart shows `gateway-requests` (Spring Cloud Gateway), `hikaricp_connections` (JDBC Connections), and `http_client_requests`.
:::image type="content" source="media/quickstart-logs-metrics-tracing/petclinic-microservices-metrics.jpg" alt-text="Screenshot of gateway requests." lightbox="media/quickstart-logs-metrics-tracing/petclinic-microservices-metrics.jpg":::
-Spring Boot registers a lot number of core metrics: JVM, CPU, Tomcat, Logback...
-The Spring Boot auto-configuration enables the instrumentation of requests handled by Spring MVC.
-All those three REST controllers `OwnerResource`, `PetResource` and `VisitResource` have been instrumented by the `@Timed` Micrometer annotation at class level.
+Spring Boot registers several core metrics, including JVM, CPU, Tomcat, and Logback. The Spring Boot auto-configuration enables the instrumentation of requests handled by Spring MVC. All three REST controllers (`OwnerResource`, `PetResource`, and `VisitResource`) have been instrumented by the `@Timed` Micrometer annotation at the class level.
+
+The `customers-service` application has the following custom metrics enabled:
+
+ - @Timed: `petclinic.owner`
+ - @Timed: `petclinic.pet`
+
+The `visits-service` application has the following custom metrics enabled:
-* `customers-service` application has the following custom metrics enabled:
- * @Timed: `petclinic.owner`
- * @Timed: `petclinic.pet`
-* `visits-service` application has the following custom metrics enabled:
- * @Timed: `petclinic.visit`
+ - @Timed: `petclinic.visit`
You can see these custom metrics in the `Metrics` blade: :::image type="content" source="media/quickstart-logs-metrics-tracing/petclinic-microservices-custom-metrics.jpg" alt-text="Screenshot of the Metrics blade with custom metrics." lightbox="media/quickstart-logs-metrics-tracing/petclinic-microservices-custom-metrics.jpg":::
-You can use the Availability Test feature in Application Insights and monitor
-the availability of applications:
+You can use the Availability Test feature in Application Insights to monitor the availability of applications:
:::image type="content" source="media/quickstart-logs-metrics-tracing/petclinic-microservices-availability.jpg" alt-text="Screenshot of the Availability Test feature." lightbox="media/quickstart-logs-metrics-tracing/petclinic-microservices-availability.jpg":::
az config set defaults.group=
To explore more monitoring capabilities of Azure Spring Apps, see: > [!div class="nextstepaction"]
-> [Analyze logs and metrics with diagnostics settings](diagnostic-services.md)>
+>
+> [Analyze logs and metrics with diagnostics settings](diagnostic-services.md)
+>
> [Stream Azure Spring Apps app logs in real-time](./how-to-log-streaming.md)
spring-apps Quickstart Setup Config Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-setup-config-server.md
Title: "Quickstart - Set up Azure Spring Apps Config Server"
+ Title: "Quickstart: Set up Spring Cloud Config Server for Azure Spring Apps"
description: Describes the setup of Azure Spring Apps Config Server for app deployment.
zone_pivot_groups: programming-languages-spring-apps
-# Quickstart: Set up Azure Spring Apps Config Server
+# Quickstart: Set up Spring Cloud Config Server for Azure Spring Apps
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ✔️ Basic/Standard tier ❌ Enterprise tier
-Azure Spring Apps Config Server is a centralized configuration service for distributed systems. It uses a pluggable repository layer that currently supports local storage, Git, and Subversion. In this quickstart, you set up the Config Server to get data from a Git repository.
+Config Server is a centralized configuration service for distributed systems. It uses a pluggable repository layer that currently supports local storage, Git, and Subversion. In this quickstart, you set up the Config Server to get data from a Git repository.
::: zone pivot="programming-language-csharp"
Azure Spring Apps Config Server is a centralized configuration service for distr
- Completion of the previous quickstart in this series: [Provision Azure Spring Apps service](./quickstart-provision-service-instance.md). - Azure Spring Apps Config Server is only applicable to basic or standard tier.
-## Azure Spring Apps Config Server procedures
+## Config Server procedures
Set up your Config Server with the location of the git repository for the project by running the following command. Replace *\<service instance name>* with the name of the service you created earlier. The default value for service instance name that you set in the preceding quickstart doesn't work with this command.
This command tells Config Server to find the configuration data in the [steeltoe
- Optionally, [Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli). Install the Azure Spring Apps extension with the following command: `az extension add --name spring` - Optionally, [the Azure Toolkit for IntelliJ](https://plugins.jetbrains.com/plugin/8053-azure-toolkit-for-intellij/).
-## Azure Spring Apps Config Server procedures
+## Config Server procedures
#### [Portal](#tab/Azure-portal)
spring-apps Tutorial Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/tutorial-custom-domain.md
You need to grant Azure Spring Apps access to your key vault before you import c
1. On the upper menu, select **Add Access Policy**. 1. Fill in the info, select the **Add** button, and then select **Save** to save the access policy.
-| Secret permission | Certificate permission | Select principal |
-|--|--|--|
-| Get, List | Get, List | Azure Spring Apps Domain-Management |
+| Secret permission | Certificate permission | Select principal |
+|-||--|
+| Get, List | Get, List | Azure Spring Cloud Domain-Management |
![Import certificate 2](./media/custom-dns-tutorial/import-certificate-b.png)
static-web-apps Local Development https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/local-development.md
Previously updated : 01/14/2022 Last updated : 08/09/2022
The following chart shows how requests are handled locally.
- **Authentication and authorization** requests are handled by an emulator, which provides a fake identity profile to your app. -- **Functions Core Tools runtime** handles requests to the site's API.
+- **Functions Core Tools runtime**<sup>1</sup> handles requests to the site's API.
- **Responses** from all services are returned to the browser as if they were all a single application.
The following article details the steps for running a node-based application, bu
swa start http://localhost:<DEV-SERVER-PORT-NUMBER> --api-location http://localhost:7071 ```
+Optionally, if you use the `swa init` command, the Static Web Apps CLI looks at your application code and builds a _swa-cli.config.json_ configuration file for the CLI. When you use the _swa-cli.config.json_ file, you can run `swa start` to launch your application locally.
+
+<sup>1</sup> The Azure Functions Core Tools are automatically installed by the CLI if they aren't already on your system.
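For reference, a generated _swa-cli.config.json_ typically resembles the following minimal sketch. The configuration name and folder values here are assumptions for illustration, so check the file that `swa init` actually writes for your project:

```json
{
  "$schema": "https://aka.ms/azure/static-web-apps-cli/schema",
  "configurations": {
    "app": {
      "appLocation": ".",
      "outputLocation": "build",
      "apiLocation": "api"
    }
  }
}
```

With a file like this in the project root, `swa start` picks up these values without any command-line arguments.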
+ ## Prerequisites - **Existing Azure Static Web Apps site**: If you don't have one, begin with the [vanilla-api](https://github.com/staticwebdev/vanilla-api/generate?return_to=/staticwebdev/vanilla-api/generate) starter app. - **[Node.js](https://nodejs.org) with npm**: Run the [Node.js LTS](https://nodejs.org) version, which includes access to [npm](https://www.npmjs.com/). - **[Visual Studio Code](https://code.visualstudio.com/)**: Used for debugging the API application, but not required for the CLI.-- **[Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing)**: Required to run the API locally. ## Get started
Open a terminal to the root folder of your existing Azure Static Web Apps site.
1. Install the CLI. ```console
- npm install -g @azure/static-web-apps-cli azure-functions-core-tools
+ npm install @azure/static-web-apps-cli
``` 1. Build your app if required by your application. Run `npm run build`, or the equivalent command for your project.
-1. Change into the output directory for your app. Output folders are often named _build_ or something similar.
+1. Initialize the repository for the CLI.
+
+ ```console
+ swa init
+ ```
+
+ Answer the questions posed by the CLI to verify your configuration settings are correct.
1. Start the CLI.
Open a terminal to the root folder of your existing Azure Static Web Apps site.
### Other ways to start the CLI
-| Description | Command |
-| | |
-| Serve a specific folder | `swa start ./output-folder` |
-| Use a running framework development server | `swa start http://localhost:3000` |
-| Start a Functions app in a folder | `swa start ./output-folder --api-location ./api` |
-| Use a running Functions app | `swa start ./output-folder --api-location http://localhost:7071` |
+| Description | Command | Comments |
+|--|--|--|
+| Serve a specific folder | `swa start ./<OUTPUT_FOLDER_NAME>` | Replace `<OUTPUT_FOLDER_NAME>` with the name of your output folder. |
+| Use a running framework development server | `swa start http://localhost:3000` | This command works when you have an instance of your application running under port `3000`. Update the port number if your configuration is different. |
+| Start a Functions app in a folder | `swa start ./<OUTPUT_FOLDER_NAME> --api-location ./api` | Replace `<OUTPUT_FOLDER_NAME>` with the name of your output folder. This command expects your application's API to have files in the _api_ folder. Update this value if your configuration is different. |
+| Use a running Functions app | `swa start ./<OUTPUT_FOLDER_NAME> --api-location http://localhost:7071` | Replace `<OUTPUT_FOLDER_NAME>` with the name of your output folder. This command expects your Azure Functions application to be available through port `7071`. Update the port number if your configuration is different. |
## Authorization and authentication emulation
The emulator provides a page allowing you to provide the following [client princ
| **Username** | The account name associated with the security provider. This value appears as the `userDetails` property in the client principal and is autogenerated if you don't provide a value. | | **User ID** | Value autogenerated by the CLI. | | **Roles** | A list of role names, where each name is on a new line. |
+| **Claims** | A list of [user claims](user-information.md#client-principal-data), where each name is on a new line. |
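The values entered on the emulator page are surfaced to your app as a client principal object. A sketch of what that payload might look like follows; the specific values shown are illustrative only:

```json
{
  "identityProvider": "github",
  "userId": "d75b260a64504067bfc5b2905e3b8182",
  "userDetails": "username",
  "userRoles": ["anonymous", "authenticated"],
  "claims": [
    { "typ": "name", "val": "Azure Static Web Apps" }
  ]
}
```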
Once logged in:
The following steps show you a common scenario that uses development servers for
1. Start the Static Web Apps CLI using the following command. - ```console
- swa start http://localhost:<DEV-SERVER-PORT-NUMBER> --api-location http://localhost:7071
+ swa start http://localhost:<DEV-SERVER-PORT-NUMBER> --appDevserverUrl http://localhost:7071
```
- Replace `<DEV-SERVER-PORT-NUMBER>` with the development server's port number.
+   Replace `<DEV-SERVER-PORT-NUMBER>` with the development server's port number.
The following screenshots show the terminals for a typical debugging scenario:
storage Object Replication Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/object-replication-overview.md
Previously updated : 05/24/2022 Last updated : 08/09/2022
You can check the replication status for a blob in the source account. For more
If the replication status for a blob in the source account indicates failure, then investigate the following possible causes: - Make sure that the object replication policy is configured on the destination account.
+- Verify that the destination account still exists.
- Verify that the destination container still exists.
+- Verify that the destination container is not in the process of being deleted, or has not just been deleted. Deleting a container may take up to 30 seconds.
+- Verify that the destination container is still participating in the object replication policy.
- If the source blob has been encrypted with a customer-provided key as part of a write operation, then object replication will fail. For more information about customer-provided keys, see [Provide an encryption key on a request to Blob storage](encryption-customer-provided-keys.md).
+- Check whether the source or destination blob has been moved to the Archive tier. Archived blobs cannot be replicated via object replication. For more information about the Archive tier, see [Hot, Cool, and Archive access tiers for blob data](access-tiers-overview.md).
+- Verify that the destination container or blob is not protected by an immutability policy. Keep in mind that a container or blob can inherit an immutability policy from its parent. For more information about immutability policies, see [Overview of immutable storage for blob data](immutable-storage-overview.md).
## Feature support
storage Point In Time Restore Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/point-in-time-restore-overview.md
Point-in-time restore for block blobs has the following limitations and known is
- A blob with an active lease cannot be restored. If a blob with an active lease is included in the range of blobs to restore, the restore operation will fail atomically. Break any active leases prior to initiating the restore operation. - Performing a customer-managed failover on a storage account resets the earliest possible restore point for that storage account. For example, suppose you have set the retention period to 30 days. If more than 30 days have elapsed since the failover, then you can restore to any point within that 30 days. However, if fewer than 30 days have elapsed since the failover, then you cannot restore to a point prior to the failover, regardless of the retention period. For example, if it's been 10 days since the failover, then the earliest possible restore point is 10 days in the past, not 30 days in the past. - Snapshots are not created or deleted as part of a restore operation. Only the base blob is restored to its previous state.-- Restoring Azure Data Lake Storage Gen2 flat and hierarchical namespaces is not supported.
+- Point-in-time restore is not supported for hierarchical namespaces or operations via Azure Data Lake Storage Gen2.
> [!IMPORTANT] > If you restore block blobs to a point that is earlier than September 22, 2020, preview limitations for point-in-time restore will be in effect. Microsoft recommends that you choose a restore point that is equal to or later than September 22, 2020 to take advantage of the generally available point-in-time restore feature.
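The interaction between the retention period and a customer-managed failover described above can be sketched as follows. This is illustrative Python only, not an Azure SDK API; the helper name and parameters are assumptions made for the example:

```python
from datetime import datetime, timedelta, timezone

def earliest_restore_point(now, retention_days, last_failover=None):
    # Hypothetical helper: the restore window normally reaches back
    # `retention_days`, but a customer-managed failover resets the earliest
    # possible restore point to the failover time.
    window_start = now - timedelta(days=retention_days)
    if last_failover is not None and last_failover > window_start:
        return last_failover
    return window_start

now = datetime(2022, 8, 10, tzinfo=timezone.utc)
# Failover 10 days ago with a 30-day retention period: the earliest
# possible restore point is the failover, not 30 days in the past.
print(earliest_restore_point(now, 30, now - timedelta(days=10)))
```

Once more than 30 days have elapsed since the failover, the failover time falls outside the retention window and no longer constrains the restore point.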
storage Manage Storage Analytics Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/manage-storage-analytics-logs.md
description: Learn how to monitor a storage account in Azure by using Azure Stor
Previously updated : 01/29/2021 Last updated : 08/09/2022
queueClient.SetServiceProperties(serviceProperties);
Log data can accumulate in your account over time, which can increase the cost of storage. If you need log data for only a short period of time, you can reduce your costs by modifying the log data retention period. For example, if you need logs for only three days, set your log data retention period to `3`. That way, logs are automatically deleted from your account after three days. This section shows you how to view your current log data retention period, and then update that period if needed.
-> [!NOTE]
-> These steps apply only for accounts that do not have the **Hierarchical namespace** setting enabled on them. If you've enabled that setting on your account, then the setting for retention days is not yet supported. Instead, you'll have to delete logs manually by using any supported tool such as Azure Storage Explorer, REST or an SDK. To find those logs in your storage account, see [How logs are stored](storage-analytics-logging.md#how-logs-are-stored).
- ### [Portal](#tab/azure-portal) 1. In the [Azure portal](https://portal.azure.com), select **Storage accounts**, then the name of the storage account to open the storage account blade.
storage Storage Explorer Support Policy Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-explorer-support-policy-lifecycle.md
This table describes the release date and the end of support date for each relea
| Storage Explorer version | Release date | End of support date | |:-:|::|:-:|
+| v1.25.0 | August 3, 2022 | August 3, 2023 |
+| v1.24.3 | June 21, 2022 | June 21, 2023 |
+| v1.24.2 | May 27, 2022 | May 27, 2023 |
| v1.24.1 | May 12, 2022 | May 12, 2023 | | v1.24.0 | May 3, 2022 | May 3, 2023 | | v1.23.1 | April 12, 2022 | April 12, 2023 |
storage Storage Explorer Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-explorer-troubleshooting.md
Storage Explorer as provided in the *.tar.gz* download is supported for the foll
- Ubuntu 18.04 x64 - Ubuntu 16.04 x64
-Storage Explorer requires .NET Core 3.1 to be installed on your system.
+Storage Explorer requires the .NET 6 runtime to be installed on your system. The ASP.NET runtime is **not** required.
> [!NOTE]
-> Storage Explorer versions 1.8.0 through 1.20.1 require .NET Core 2.1. Storage Explorer version 1.7.0 and earlier require .NET Core 2.0.
+> Older versions of Storage Explorer may require a different version of .NET or .NET Core. Refer to the release notes or in-app error messages to help determine the required version.
-### [Ubuntu 20.04](#tab/2004)
+### [Ubuntu 22.04](#tab/2204)
1. Download the Storage Explorer *.tar.gz* file.
-1. Install the [.NET Core Runtime](/dotnet/core/install/linux):
+1. Install the [.NET 6 runtime](/dotnet/core/install/linux-ubuntu).
- ```bash
- wget https://packages.microsoft.com/config/ubuntu/20.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb; \
- sudo dpkg -i packages-microsoft-prod.deb; \
- sudo apt-get update; \
- sudo apt-get install -y apt-transport-https && \
- sudo apt-get update && \
- sudo apt-get install -y dotnet-runtime-3.1
- ```
-### [Ubuntu 18.04](#tab/1804)
+### [Ubuntu 20.04](#tab/2004)
1. Download the Storage Explorer *.tar.gz* file.
-1. Install the [.NET Core Runtime](/dotnet/core/install/linux):
+1. Install the [.NET 6 runtime](/dotnet/core/install/linux-ubuntu).
- ```bash
- wget https://packages.microsoft.com/config/ubuntu/18.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb; \
- sudo dpkg -i packages-microsoft-prod.deb; \
- sudo apt-get update; \
- sudo apt-get install -y apt-transport-https && \
- sudo apt-get update && \
- sudo apt-get install -y dotnet-runtime-3.1
- ```
-
-### [Ubuntu 16.04](#tab/1604)
+### [Ubuntu 18.04](#tab/1804)
1. Download the Storage Explorer *.tar.gz* file.
-1. Install the [.NET Core Runtime](/dotnet/core/install/linux):
-
- ```bash
- wget https://packages.microsoft.com/config/ubuntu/16.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb; \
- sudo dpkg -i packages-microsoft-prod.deb; \
- sudo apt-get update; \
- sudo apt-get install -y apt-transport-https && \
- sudo apt-get update && \
- sudo apt-get install -y dotnet-runtime-3.1
- ```
+1. Install the [.NET 6 runtime](/dotnet/core/install/linux-ubuntu).
storage Table Storage Design Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/table-storage-design-patterns.md
As discussed in the section Design for querying, the most efficient query is a p
The easiest way to execute a point query is to use the **GetEntityAsync** method as shown in the following C# code snippet that retrieves an entity with a **PartitionKey** of value "Sales" and a **RowKey** of value "212": ```csharp
-var retrieveResult = employeeTable.GetEntityAsync<EmployeeEntity>("Sales", "212");
-if (retrieveResult.Result != null)
-{
- EmployeeEntity employee = (EmployeeEntity)queryResult.Result;
- ...
-}
+EmployeeEntity employee = await employeeTable.GetEntityAsync<EmployeeEntity>("Sales", "212");
``` Notice how this example expects the entity it retrieves to be of type **EmployeeEntity**.
stream-analytics Cosmos Db Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/cosmos-db-managed-identity.md
Previously updated : 08/04/2022 Last updated : 08/09/2022
First, you create a managed identity for your Azure Stream Analytics job.ΓÇ»
## Grant the Stream Analytics job permissions to access the Azure Cosmos DB account
-For the Stream Analytics job to access your Cosmos DB using managed identity, the service principal you created must have special permissions to your Azure Cosmos DB account. In this step, you can assign a role to your stream analytics job's system-assigned managed identity. Azure Cosmos DB has multiple built-in roles that you can assign to the managed identity. For this solution, you can use the following role:
+For the Stream Analytics job to access your Cosmos DB using managed identity, the service principal you created must have special permissions to your Azure Cosmos DB account. In this step, you can assign a role to your stream analytics job's system-assigned managed identity. Azure Cosmos DB has multiple built-in roles that you can assign to the managed identity. For this solution, you will use the following role:
-|Built-in role |Description |
-|||
-|[DocumentDB Account Contributor](../role-based-access-control/built-in-roles.md#documentdb-account-contributor)|Can manage Azure Cosmos DB accounts. Allows retrieval of read/write keys. |
+|Built-in role |
+||
+|Cosmos DB Built-in Data Contributor|
1. Select **Access control (IAM)**.
For the Stream Analytics job to access your Cosmos DB using managed identity, th
| Setting | Value | | | |
- | Role | DocumentDB Account Contributor |
+ | Role | Cosmos DB Built-in Data Contributor |
| Assign access to | User, group, or service principal | | Members | \<Name of your Stream Analytics job> |
stream-analytics Service Bus Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/service-bus-managed-identity.md
Previously updated : 07/19/2022 Last updated : 08/10/2022
First, you create a managed identity for your Azure Stream Analytics job.ΓÇ»
## Grant the Stream Analytics job permissions to access Azure Service Bus
-For the Stream Analytics job to access your Service Bus using managed identity, the service principal you created must have special permissions to your Azure Service Bus resource. In this step, you can assign a role to your stream analytics job's system-assigned managed identity. Azure provides the below Azure built-in roles for authorizing access to a Service Bus namespace. For Azure Stream Analytics you would need these:
+For the Stream Analytics job to access your Service Bus using managed identity, the service principal you created must have special permissions to your Azure Service Bus resource. In this step, you can assign a role to your stream analytics job's system-assigned managed identity. Azure provides built-in roles for authorizing access to a Service Bus namespace. For Azure Stream Analytics, you need the following role:
-- [Azure Service Bus Data Owner](../role-based-access-control/built-in-roles.md#azure-service-bus-data-owner): Enables data access to Service Bus namespace and its entities (queues, topics, subscriptions, and filters) - [Azure Service Bus Data Sender](../role-based-access-control/built-in-roles.md#azure-service-bus-data-sender): Use this role to give send access to Service Bus namespace and its entities.-- [Azure Service Bus Data Receiver](../role-based-access-control/built-in-roles.md#azure-service-bus-data-receiver): Use this role to give receiving access to Service Bus namespace and its entities. -
-Please note that Stream Analytics Jobs do not need nor do they use [Azure Service Bus Data Receiver](../role-based-access-control/built-in-roles.md#azure-service-bus-data-receiver).
-
-> [!TIP]
-> When you assign roles, assign only the needed access. For more information about the importance of least privilege access, see the [Lower exposure of privileged accounts](../security/fundamentals/identity-management-best-practices.md#lower-exposure-of-privileged-accounts) article.
1. Select **Access control (IAM)**.
Please note that Stream Analytics Jobs do not need nor do they use [Azure Servic
| Setting | Value | | | |
- | Role | Azure Service Bus Data Owner or Azure Service Bus Data Sender |
+ | Role | Azure Service Bus Data Sender |
| Assign access to | User, group, or service principal | | Members | \<Name of your Stream Analytics job> |
stream-analytics Stream Analytics Managed Identities Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-managed-identities-overview.md
Previously updated : 06/28/2022 Last updated : 08/10/2022 # Managed identities for Azure Stream Analytics
Below is a table that shows Azure Stream Analytics inputs and outputs that suppo
| | Service Bus Topic | Yes | Yes | | | Service Bus Queue | Yes | Yes | | | Cosmos DB | Yes | Yes |
-| | Power BI | Yes | No |
+| | Power BI | No | Yes |
| | Data Lake Storage Gen1 | Yes | Yes | | | Azure Functions | No | No | | | Azure Database for PostgreSQL | No | No |
synapse-analytics Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/metadata/table.md
Azure Synapse Analytics allows the different workspace computational engines to share databases and tables between its Apache Spark pools and serverless SQL pool.
-Once a database has been created by a Spark job, you can create tables in it with Spark that use Parquet or CSV as the storage format. Table names will be converted to lower case and need to be queried using the lower case name. These tables will immediately become available for querying by any of the Azure Synapse workspace Spark pools. They can also be used from any of the Spark jobs subject to permissions.
+Once a database has been created by a Spark job, you can create tables in it with Spark that use Parquet, Delta, or CSV as the storage format. Table names will be converted to lower case and need to be queried using the lower case name. These tables will immediately become available for querying by any of the Azure Synapse workspace Spark pools. They can also be used from any of the Spark jobs subject to permissions.
The Spark created, managed, and external tables are also made available as external tables with the same name in the corresponding synchronized database in serverless SQL pool. [Exposing a Spark table in SQL](#expose-a-spark-table-in-sql) provides more detail on the table synchronization.
Spark provides two types of tables that Azure Synapse exposes in SQL automatical
Spark also provides ways to create external tables over existing data, either by providing the `LOCATION` option or using the Hive format. Such external tables can be over a variety of data formats, including Parquet.
-Azure Synapse currently only shares managed and external Spark tables that store their data in Parquet or CSV format with the SQL engines. Tables backed by other formats are not automatically synced. You may be able to sync such tables explicitly yourself as an external table in your own SQL database if the SQL engine supports the table's underlying format.
+Azure Synapse currently only shares managed and external Spark tables that store their data in Parquet, Delta, or CSV format with the SQL engines. Tables backed by other formats are not automatically synced. You may be able to sync such tables explicitly yourself as an external table in your own SQL database if the SQL engine supports the table's underlying format.
> [!NOTE]
-> Currently, only Parquet and CSV formats are synced to serverless SQL pool. A Spark delta table metadata will not sync to the SQL engine, even though Delta table uses Parquet as the snapshot's storage format. External tables from Spark are currently not synchronizing into dedicated SQL pool databases.
+> Currently, only Parquet and CSV formats are fully supported in serverless SQL pool. Spark Delta tables are also available in the serverless SQL pool, but this feature is in **public preview**. External tables created in Spark are not available in dedicated SQL pool databases.
### Share Spark tables The shareable managed and external Spark tables are exposed in the SQL engine as external tables with the following properties: - The SQL external table's data source is the data source representing the Spark table's location folder.-- The SQL external table's file format is Parquet or CSV.
+- The SQL external table's file format is Parquet, Delta, or CSV.
- The SQL external table's access credential is pass-through. Since all Spark table names are valid SQL table names and all Spark column names are valid SQL column names, the Spark table and column names will be used for the SQL external table.
Spark tables provide different data types than the Synapse SQL engines. The foll
| `array`, `map`, `struct` | `varchar(max)` | **SQL**: Serializes into JSON with collation `Latin1_General_100_BIN2_UTF8`. See [JSON Data](/sql/relational-databases/json/json-data-sql-server).| >[!NOTE]
->Database level collation is `Latin1_General_100_CI_AS_SC_UTF8`.
+> Database level collation is `Latin1_General_100_CI_AS_SC_UTF8`.
## Security model
This command creates the table `myparquettable` in the database `mytestdb`. Tabl
Verify that `myparquettable` is included in the results. >[!NOTE]
->A table that is not using Parquet or CSV as its storage format will not be synchronized.
+> A table that is not using Delta, Parquet, or CSV as its storage format will not be synchronized.
Next, insert some values into the table from Spark, for example with the following C# Spark statements in a C# notebook:
synapse-analytics Runtime For Apache Spark Lifecycle And Supportability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/runtime-for-apache-spark-lifecycle-and-supportability.md
The following chart captures a typical lifecycle path for a Synapse runtime for
> > * The above timelines are provided as examples based on current Apache Spark releases. If the Apache Spark project changes the lifecycle of a specific version affecting a Synapse runtime, changes to the stage dates will be noted on the [release notes](./apache-spark-version-support.md). > * Both GA and LTS runtimes may be moved into EOL stage faster based on outstanding security risks and usage rates criteria at Microsoft discretion.
-> * Please refer to [Lifecycle FAQ - Microsoft Azure](https://docs.microsoft.com/lifecycle/faq/azure) for information about Azure lifecycle policies.
+> * Please refer to [Lifecycle FAQ - Microsoft Azure](/lifecycle/faq/azure) for information about Azure lifecycle policies.
> ## Release stages and support
synapse-analytics Design Guidance For Replicated Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/design-guidance-for-replicated-tables.md
To ensure consistent query execution times, consider forcing the build of the re
This query uses the [sys.pdw_replicated_table_cache_state](/sql/relational-databases/system-catalog-views/sys-pdw-replicated-table-cache-state-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) DMV to list the replicated tables that have been modified, but not rebuilt. ```sql
-SELECT [ReplicatedTable] = t.[name]
- FROM sys.tables t 
- JOIN sys.pdw_replicated_table_cache_state c 
- ON c.object_id = t.object_id
- JOIN sys.pdw_table_distribution_properties p
- ON p.object_id = t.object_id
- WHERE c.[state] = 'NotReady'
- AND p.[distribution_policy_desc] = 'REPLICATE'
+SELECT SchemaName = SCHEMA_NAME(t.schema_id)
+ , [ReplicatedTable] = t.[name]
+ , [RebuildStatement] = 'SELECT TOP 1 * FROM ' + '[' + SCHEMA_NAME(t.schema_id) + '].[' + t.[name] +']'
+FROM sys.tables t
+JOIN sys.pdw_replicated_table_cache_state c
+ ON c.object_id = t.object_id
+JOIN sys.pdw_table_distribution_properties p
+ ON p.object_id = t.object_id
+WHERE c.[state] = 'NotReady'
+AND p.[distribution_policy_desc] = 'REPLICATE'
``` To trigger a rebuild, run the following statement on each table in the preceding output.
synapse-analytics Sql Data Warehouse Manage Compute Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-manage-compute-rest-api.md
REST APIs for managing compute for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics. > [!NOTE]
-> The REST APIs that are described in this article are not applicable to a dedicated SQL pool that's created in an Azure Synapse Analytics workspace. For information about REST APIs to use specifically for an Azure Synapse Analytics workspace, see [Azure Synapse Analytics workspace REST API](/rest/api/synapse/).
+> The REST APIs that are described in this article are for standalone dedicated SQL pools (formerly SQL DW) and are not applicable to a dedicated SQL pool that's created in an Azure Synapse Analytics workspace. For information about REST APIs to use specifically for an Azure Synapse Analytics workspace, see [Azure Synapse Analytics workspace REST API](/rest/api/synapse/).
## Scale compute
time-series-insights Concepts Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/concepts-power-bi.md
Title: 'Power BI integration - Azure Time Series Insights Gen 2 | Microsoft Docs'
+ Title: 'Power BI integration - Azure Time Series Insights Gen 2'
description: Learn about Power BI integration in Azure Time Series Insight.
For advanced querying and editing functionality within Power BI, use Power BI’
* Download [Power BI desktop](https://powerbi.microsoft.com/desktop/) and begin to connect your data.
-* Learn more about [Power BI](/power-bi/).
+* Learn more about [Power BI](/power-bi/).
time-series-insights Overview What Is Tsi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/overview-what-is-tsi.md
Title: 'Overview: What is Azure Time Series Insights Gen2? - Azure Time Series Insights Gen2 | Microsoft Docs'
+ Title: 'Overview: What is Azure Time Series Insights Gen2? - Azure Time Series Insights Gen2'
description: Learn about changes, improvements, and features in Azure Time Series Insights Gen2.
time-series-insights Time Series Insights Manage Reference Data Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-manage-reference-data-csharp.md
Title: 'Manage reference data in GA environments using C# - Azure Time Series Insights | Microsoft Docs'
+ Title: 'Manage reference data in GA environments using C# - Azure Time Series Insights'
description: Learn how to manage reference data for your GA environment by creating a custom application written in C#.
update-center Enable Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/enable-machines.md
After you register for the above feature, go to update management center (previ
For Azure VMs, to register the preview feature, use: ```azurepowershell
-Register-AzResourceProvider -FeatureName InGuestAutoAssessmentVMPreview -ProviderNamespace Microsoft.Compute
+Register-AzProviderPreviewFeature -Name InGuestAutoAssessmentVMPreview -ProviderNamespace Microsoft.Compute
``` ### [CLI](#tab/cli-periodic)
virtual-desktop Autoscale Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/autoscale-diagnostics.md
# Set up diagnostics for autoscale in Azure Virtual Desktop
-> [!IMPORTANT]
-> Autoscale is currently in preview.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- Diagnostics lets you monitor potential issues and fix them before they interfere with your autoscale scaling plan. Currently, you can either send diagnostic logs for autoscale to an Azure Storage account or consume logs with Azure Event Hubs. If you're using an Azure Storage account, make sure it's in the same region as your scaling plan. Learn more about diagnostic settings at [Create diagnostic settings](../azure-monitor/essentials/diagnostic-settings.md). For more information about resource log data ingestion time, see [Log data ingestion time in Azure Monitor](../azure-monitor/logs/data-ingestion-time.md).
The following JSON file is an example of what you'll see when you open a report:
- [Assign your scaling plan to new or existing host pools](autoscale-new-existing-host-pool.md). - Learn more about terms used in this article at our [autoscale glossary](autoscale-glossary.md). - For examples of how autoscale works, see [Autoscale example scenarios](autoscale-scenarios.md).-- View our [autoscale FAQ](autoscale-faq.yml) to answer commonly asked questions.
+- View our [autoscale FAQ](autoscale-faq.yml) to answer commonly asked questions.
virtual-desktop Scheduled Agent Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/scheduled-agent-updates.md
The agent component update won't succeed if the session host VM is shut down or
- The local time zone for VMs you create using the Azure portal is set to Coordinated Universal Time (UTC) by default. If you want to change the VM time zone, run the [Set-TimeZone PowerShell cmdlet](/powershell/module/microsoft.powershell.management/set-timezone) on the VM. -- To get a list of available time zones for a VM, run the [Get-TimeZone PowerShell cmdlet]/powershell/module/microsoft.powershell.management/get-timezone) on the VM.
+- To get a list of available time zones for a VM, run the [Get-TimeZone PowerShell cmdlet](/powershell/module/microsoft.powershell.management/get-timezone) on the VM.
## Next steps
For more information related to Scheduled Agent Updates and agent components, ch
- Learn how to set up diagnostics for this feature at the [Scheduled Agent Updates Diagnostics guide](agent-updates-diagnostics.md). - Learn more about the Azure Virtual Desktop agent, side-by-side stack, and Geneva Monitoring agent at [Getting Started with the Azure Virtual Desktop Agent](agent-overview.md). - For more information about the current and earlier versions of the Azure Virtual Desktop agent, see [Azure Virtual Desktop agent updates](whats-new-agent.md).-- If you're experiencing agent or connectivity-related issues, see the [Azure Virtual Desktop Agent issues troubleshooting guide](troubleshoot-agent.md).
+- If you're experiencing agent or connectivity-related issues, see the [Azure Virtual Desktop Agent issues troubleshooting guide](troubleshoot-agent.md).
virtual-desktop Start Virtual Machine Connect Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/start-virtual-machine-connect-faq.md
To configure the deallocation policy:
>[!NOTE] >Make sure to set the time limit for the "End a disconnected session" policy to a value greater than five minutes. A low time limit can cause users' sessions to end if their network loses connection for too long, resulting in lost work.
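The "End a disconnected session" time limit in the note above corresponds to the `MaxDisconnectionTime` value under the Terminal Services policy key; a minimal sketch, assuming that standard key path (3,600,000 ms = 1 hour, well above the five-minute floor; verify against your GPO tooling before applying):

```
Windows Registry Editor Version 5.00

; Assumed mapping: "End a disconnected session" -> MaxDisconnectionTime (ms).
; 0x0036ee80 = 3,600,000 ms = 1 hour.
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services]
"MaxDisconnectionTime"=dword:0036ee80
```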
-Signing users out won't deallocate their VMs. To learn how to deallocate VMs, see [Start or stop VMs during off hours](../automation/automation-solution-vm-management.md) for personal host pools and [Scale session hosts using Azure Automation](set-up-scaling-script.md) for pooled host pools.
+Signing users out won't deallocate their VMs. To learn how to deallocate VMs, see [Start or stop VMs during off hours](../automation/automation-solution-vm-management.md) for personal host pools and [Autoscale](autoscale-scaling-plan.md) for pooled host pools.
## Can users turn off the VM from their clients?
-Yes. Users can shut down the VM by using the Start menu within their session, just like they would with a physical machine. However, shutting down the VM won't deallocate the VM. To learn how to deallocate VMs, see [Start or stop VMs during off hours](../automation/automation-solution-vm-management.md) for personal host pools and [Scale session hosts using Azure Automation](set-up-scaling-script.md) for pooled host pools.
+Yes. Users can shut down the VM by using the Start menu within their session, just like they would with a physical machine. However, shutting down the VM won't deallocate the VM. To learn how to deallocate VMs, see [Start or stop VMs during off hours](../automation/automation-solution-vm-management.md) for personal host pools and [Autoscale](autoscale-scaling-plan.md) for pooled host pools.
## Next steps
virtual-desktop Tag Virtual Desktop Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/tag-virtual-desktop-resources.md
Like with the [general suggestions](#suggested-tags-for-azure-virtual-desktop),
### Use the cm-resource-parent tag to automatically group costs by host pool
-You can group costs by host pool by using the cm-resource-parent tag. This tag won't impact billing but will let you review tagged costs in Microsoft Cost Management without having to use filters. The key for this tag is **cm-resource-parent** and its value is the resource ID of the Azure resource you want to group costs by. For example, you can group costs by host pool by entering the host pool resource ID as the value. To learn more about how to use this tag, see [Group related resources in the cost analysis (preview)](../cost-management-billing/costs/enable-preview-features-cost-management-labs#group-related-resources-in-the-cost-analysis-preview).
+You can group costs by host pool by using the cm-resource-parent tag. This tag won't impact billing but will let you review tagged costs in Microsoft Cost Management without having to use filters. The key for this tag is **cm-resource-parent** and its value is the resource ID of the Azure resource you want to group costs by. For example, you can group costs by host pool by entering the host pool resource ID as the value. To learn more about how to use this tag, see [Group related resources in the cost analysis (preview)](../cost-management-billing/costs/enable-preview-features-cost-management-labs.md#group-related-resources-in-the-cost-analysis-preview).
## Suggested tags for other Azure Virtual Desktop resources
virtual-desktop Teams On Avd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/teams-on-avd.md
You can deploy the Teams desktop app using a per-machine or per-user installatio
1. Download the [Teams MSI package](/microsoftteams/teams-for-vdi#deploy-the-teams-desktop-app-to-the-vm) that matches your environment. We recommend using the 64-bit installer on a 64-bit operating system.
- > [!IMPORTANT]
- > Teams Desktop client version 1.3.00.21759 fixed an issue where Teams showed UTC time zone in chat, channels, and calendar. Later versions of the client will show the remote session time zone.
- 2. Run one of the following commands to install the MSI to the host VM: - Per-user installation
Using Teams in a virtualized environment is different from using Teams in a non-
- With per-machine installation, Teams on VDI isn't automatically updated the same way non-VDI Teams clients are. To update the client, you'll need to update the VM image by installing a new MSI. - Media optimization for Teams is only supported for the Remote Desktop client on machines running Windows 10 or later or macOS 10.14 or later. - Use of explicit HTTP proxies defined on the client endpoint device isn't supported.
+- Zoom in/zoom out of chat windows isn't supported.
### Calls and meetings
virtual-desktop Troubleshoot Vm Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-vm-configuration.md
The VM used to run remediation must be on the same subnet and domain as the VM w
Follow these instructions to run remediation from the same subnet and domain: 1. Connect with standard Remote Desktop Protocol (RDP) to the VM from where the fix will be applied.
-2. Download PsExec from [https://docs.microsoft.com/sysinternals/downloads/psexec](/sysinternals/downloads/psexec).
+2. Download PsExec from [PsExec v2.40](/sysinternals/downloads/psexec).
3. Unzip the downloaded file. 4. Start a command prompt as local administrator. 5. Navigate to the folder where PsExec was unzipped.
virtual-desktop Troubleshoot Vm Configuration 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/troubleshoot-vm-configuration-2019.md
The VM used to run remediation must be on the same subnet and domain as the VM w
Follow these instructions to run remediation from the same subnet and domain: 1. Connect with standard Remote Desktop Protocol (RDP) to the VM from where the fix will be applied.
-2. Download PsExec from [https://docs.microsoft.com/sysinternals/downloads/psexec](/sysinternals/downloads/psexec).
+2. Download PsExec from [PsExec v2.40](/sysinternals/downloads/psexec).
3. Unzip the downloaded file. 4. Start a command prompt as local administrator. 5. Navigate to the folder where PsExec was unzipped.
virtual-machine-scale-sets Virtual Machine Scale Sets Autoscale Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-autoscale-overview.md
Title: Overview of autoscale with Azure virtual machine scale sets description: Learn about the different ways that you can automatically scale an Azure virtual machine scale set based on performance or on a fixed schedule--++ Last updated 06/30/2020 - # Overview of autoscale with Azure virtual machine scale sets
You can create autoscale rules that use host-based metrics with one of the follo
- [Azure CLI](tutorial-autoscale-cli.md) - [Azure template](tutorial-autoscale-template.md)
-This overview detailed how to use autoscale rules to scale horizontally and increase or decrease the *number* of VM instances in your scale set. You can also scale vertically to increase or decrease the VM instance *size*. For more information, see [Vertical autoscale with Virtual Machine Scale sets](virtual-machine-scale-sets-vertical-scale-reprovision.md).
- For information on how to manage your VM instances, see [Manage virtual machine scale sets with Azure PowerShell](./virtual-machine-scale-sets-manage-powershell.md). To learn how to generate alerts when your autoscale rules trigger, see [Use autoscale actions to send email and webhook alert notifications in Azure Monitor](../azure-monitor/autoscale/autoscale-webhook-email.md). You can also [Use audit logs to send email and webhook alert notifications in Azure Monitor](../azure-monitor/alerts/alerts-log-webhook.md).
virtual-machines Automatic Vm Guest Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/automatic-vm-guest-patching.md
As a new rollout is triggered every month, a VM will receive at least one patch
| Canonical | UbuntuServer | 18.04-LTS | | Canonical | 0001-com-ubuntu-pro-bionic | pro-18_04-lts | | Canonical | 0001-com-ubuntu-server-focal | 20_04-lts |
+| Canonical | 0001-com-ubuntu-server-focal | 20_04-lts-gen2 |
| Canonical | 0001-com-ubuntu-pro-focal | pro-20_04-lts |
+| MicrosoftCBLMariner | CBL-Mariner | 1-gen2 |
+| MicrosoftCBLMariner | CBL-Mariner | CBL-Mariner-2-gen2 |
| Redhat | RHEL | 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 7.8, 7_9, 7-RAW, 7-LVM | | Redhat | RHEL | 8, 8.1, 8.2, 8_3, 8_4, 8_5, 8-LVM | | Redhat | RHEL-RAW | 8-raw |
As a new rollout is triggered every month, a VM will receive at least one patch
| MicrosoftWindowsServer | WindowsServer | 2016-Datacenter | | MicrosoftWindowsServer | WindowsServer | 2016-Datacenter-Server-Core | | MicrosoftWindowsServer | WindowsServer | 2019-Datacenter |
+| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-gensecond |
+| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-smalldisk-g2 |
| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-Core | | MicrosoftWindowsServer | WindowsServer | 2022-datacenter |
+| MicrosoftWindowsServer | WindowsServer | 2022-datacenter-g2 |
| MicrosoftWindowsServer | WindowsServer | 2022-datacenter-core | | MicrosoftWindowsServer | WindowsServer | 2022-datacenter-azure-edition |
+| MicrosoftWindowsServer | WindowsServer | 2022-datacenter-azure-edition-core |
+| MicrosoftWindowsServer | WindowsServer | 2022-datacenter-azure-edition-core-smalldisk |
| MicrosoftWindowsServer | WindowsServer | 2022-datacenter-azure-edition-smalldisk | ## Patch orchestration modes
virtual-machines Azure Compute Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/azure-compute-gallery.md
There are limits, per subscription, for deploying resources using Azure Compute
- 100 galleries, per subscription, per region - 1,000 image definitions, per subscription, per region - 10,000 image versions, per subscription, per region-- 10 image version replicas, per subscription, per region
+- 100 image version replicas, per subscription, per region. However, 50 replicas should be sufficient for most use cases.
- Any disk attached to the image must be less than or equal to 1TB in size For more information, see [Check resource usage against limits](../networking/check-usage-against-limits.md) for examples on how to check your current usage.
virtual-machines Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/delete.md
PUT https://management.azure.com/subscriptions/subid/resourceGroups/rg1/provider
## Update the delete behavior on an existing VM
-You can use the Azure REST API to patch a VM to change the behavior when you delete a VM. The following example updates the VM to delete the NIC, OS disk, and data disk when the VM is deleted.
+You can change the behavior when you delete a VM. The following example updates the VM to delete the NIC, OS disk, and data disk when the VM is deleted.
+### [CLI](#tab/cli3)
+
+The following example sets the delete option to `detach` so you can reuse the disk.
+
+```azurecli-interactive
+az resource update --resource-group myResourceGroup --name myVM --resource-type virtualMachines --namespace Microsoft.Compute --set properties.storageProfile.osDisk.deleteOption=detach
+```
+
+### [REST](#tab/rest3)
```rest PATCH https://management.azure.com/subscriptions/subID/resourceGroups/resourcegroup/providers/Microsoft.Compute/virtualMachines/testvm?api-version=2021-07-01
PATCH https://management.azure.com/subscriptions/subID/resourceGroups/resourcegr
} } ```+ ## Force Delete for VMs Force delete allows you to forcefully delete your virtual machine, reducing delete latency and immediately freeing up attached resources. For VMs that do not require graceful shutdown, Force Delete will delete the VM as fast as possible while relieving the logical resources from the VM, bypassing the graceful shutdown and some of the cleanup operations. Force Delete will not immediately free the MAC address associated with a VM, as this is a physical resource that may take up to 10 min to free. If you need to immediately re-use the MAC address on a new VM, Force Delete is not recommended. Force delete should only be used when you are not intending to re-use virtual hard disks. You can use force delete through Portal, CLI, PowerShell, and REST API.
-### [Portal](#tab/portal3)
+### [Portal](#tab/portal4)
When you go to delete an existing VM, you will find an option to apply force delete in the delete pane.
When you go to delete an existing VM, you will find an option to apply force del
1. In the **Delete virtual machine** pane, select the checkbox for **Apply force delete**. 1. Select **Ok**.
-### [CLI](#tab/cli3)
+### [CLI](#tab/cli4)
Use the `--force-deletion` parameter for [az vm delete](/cli/azure/vm?view=azure-cli-latest#az-vm-delete&preserve-view=true).
az vm delete \
--force-deletion ```
-### [PowerShell](#tab/powershell3)
+### [PowerShell](#tab/powershell4)
Use the `-ForceDeletion` parameter for [Remove-AzVm](/powershell/module/az.compute/remove-azvm).
Remove-AzVm `
-ForceDeletion $true ```
-### [REST](#tab/rest3)
+### [REST](#tab/rest4)
You can use the Azure REST API to apply force delete to your virtual machines. Use the `forceDeletion` parameter for [Virtual Machines - Delete](/rest/api/compute/virtual-machines/delete).
You can use the Azure REST API to apply force delete to your virtual machines. U
Force delete allows you to forcefully delete your **Uniform** virtual machine scale sets, reducing delete latency and immediately freeing up attached resources. Force Delete will not immediately free the MAC address associated with a VM, as this is a physical resource that may take up to 10 minutes to free. If you need to immediately re-use the MAC address on a new VM, Force Delete is not recommended. Force delete should only be used when you are not intending to re-use virtual hard disks. You can use force delete through Portal, CLI, PowerShell, and REST API.
-### [Portal](#tab/portal4)
+### [Portal](#tab/portal5)
When you go to delete an existing virtual machine scale set, you will find an option to apply force delete in the delete pane.
When you go to delete an existing virtual machine scale set, you will find an op
1. In the **Delete virtual machine scale set** pane, select the checkbox for **Apply force delete**. 1. Select **Ok**.
-### [CLI](#tab/cli4)
+### [CLI](#tab/cli5)
Use the `--force-deletion` parameter for [az vmss delete](/cli/azure/vmss?view=azure-cli-latest#az-vmss-delete&preserve-view=true).
az vmss delete \
--force-deletion ```
-### [PowerShell](#tab/powershell4)
+### [PowerShell](#tab/powershell5)
Use the `-ForceDeletion` parameter for [Remove-AzVmss](/powershell/module/az.compute/remove-azvmss).
Remove-AzVmss `
-ForceDeletion $true ```
-### [REST](#tab/rest4)
+### [REST](#tab/rest5)
You can use the Azure REST API to apply force delete to your virtual machine scale set. Use the `forceDeletion` parameter for [Virtual Machines Scale Sets - Delete](/rest/api/compute/virtual-machine-scale-sets/delete).
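A hypothetical request shape, assuming the `forceDeletion` query parameter on the scale set Delete call (subscription ID, resource group, scale set name, and api-version are placeholders):

```rest
DELETE https://management.azure.com/subscriptions/subID/resourceGroups/resourcegroup/providers/Microsoft.Compute/virtualMachineScaleSets/myScaleSet?forceDeletion=true&api-version=2021-07-01
```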
virtual-machines Disks Deploy Zrs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-deploy-zrs.md
Last updated 09/01/2021 -+ ms.devlang: azurecli
virtual-machines Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/troubleshoot.md
## Viewing extension status Azure Resource Manager templates can be executed from Azure PowerShell. Once the template is executed, the extension status can be viewed from Azure Resource Explorer or the command-line tools.
-Here is an example:
+Here's an example:
Azure PowerShell:
Azure PowerShell:
Get-AzVM -ResourceGroupName $RGName -Name $vmName -Status ```
-Here is the sample output:
+Here's the sample output:
```output Extensions: {
Extensions: {
## Troubleshooting extension failures ### Verify that the VM Agent is running and Ready
-The VM Agent is required to manage, install and execute extensions. If the VM Agent is not running or is failing to report a Ready status to the Azure platform, then the extensions will not work correctly.
+The VM Agent is required to manage, install and execute extensions. If the VM Agent isn't running or is failing to report a Ready status to the Azure platform, then the extensions won't work correctly.
Please refer to the following pages to troubleshoot the VM Agent: - [Troubleshooting Windows Azure Guest Agent](/troubleshoot/azure/virtual-machines/windows-azure-guest-agent) for a Windows VM
or in the Azure portal, by browsing to the VM Blade / Settings / Extensions. You
### Rerun the extension on the VM
-If you are running scripts on the VM using Custom Script Extension, you could sometimes run into an error where VM was created successfully but the script has failed. Under these conditions, the recommended way to recover from this error is to remove the extension and rerun the template again.
+If you're running scripts on the VM using the Custom Script Extension, you could sometimes run into an error where the VM was created successfully but the script failed. Under these conditions, the recommended way to recover is to remove the extension and rerun the template.
Note: In the future, this functionality may be enhanced to remove the need to uninstall the extension. #### Remove the extension from Azure PowerShell
That certificate will be automatically regenerated by restarting the Windows Gue
- Right-click, and select "End Task". The process will be automatically restarted
-You can also trigger a new GoalState to the VM, by executing a "VM Reapply". VM [Reapply](/rest/api/compute/virtualmachines/reapply) is an API introduced in 2020 to reapply a VM's state. We recommend doing this at a time when you can tolerate a short VM downtime. While Reapply itself does not cause a VM reboot, and the vast majority of times calling Reapply will not reboot the VM, there is a very small risk that some other pending update to the VM model gets applied when Reapply triggers a new goal state, and that other change could require a restart.
+You can also trigger a new GoalState to the VM, by executing a "VM Reapply". VM [Reapply](/rest/api/compute/virtualmachines/reapply) is an API introduced in 2020 to reapply a VM's state. We recommend doing this at a time when you can tolerate a short VM downtime. While Reapply itself doesn't cause a VM reboot, and the vast majority of times calling Reapply won't reboot the VM, there's a very small risk that some other pending update to the VM model gets applied when Reapply triggers a new goal state, and that other change could require a restart.
Azure portal:
They also write detailed logs of their execution (e.g. _"/var/log/azure/custom-sc
### If the VM is recreated from an existing VM
-It could happen that you're creating an Azure VM based on a specialized Disk coming from another Azure VM. In that case, it's possible that the old VM contained extensions, and so will have binaries, logs and status files left over. The new VM model will not be aware of the previous VM's extensions states, and it might report an incorrect status for these extensions. We strongly recommend to remove the extensions from the old VM before creating the new one, and then reinstall these extensions once the new VM is created.
-The same can happen when you create a generalized image from an existing Azure VM. We invite you to remove extensions to avoid inconsistent state from the extensions.
+It could happen that you're creating an Azure VM based on a specialized disk coming from another Azure VM. In that case, it's possible that the old VM contained extensions, and so will have binaries, logs and status files left over. The new VM model won't be aware of the previous VM's extension states, and it might report an incorrect status for these extensions. We strongly recommend you remove the extensions from the old VM before creating the new one, and then reinstall these extensions once the new VM is created.
+The same can happen when you create a generalized image from an existing Azure VM. We invite you to remove extensions to avoid inconsistent state from the extensions.
++
+## Known issues
++
+### PowerShell isn't recognized as an internal or external command
+
+You notice the following error entries in the RunCommand extension's output:
+
+```Log sample
+RunCommandExtension failed with "'powershell' is not recognized as an internal or external command,"
+```
+
+**Analysis**
+
+Extensions run under the Local System account, so it's possible that powershell.exe works fine when you RDP into the VM, but fails when run with RunCommand.
+
+**Solution**
+
+- Check that PowerShell is properly listed in the PATH environment variable:
+ - Open Control Panel
+ - System and Security
+ - System
+ - Advanced tab -> Environment Variables
+- Under 'System variables', click **Edit** and ensure that PowerShell is in the PATH environment variable (usually: "C:\Windows\System32\WindowsPowerShell\v1.0")
+- Reboot the VM or restart the WindowsAzureGuestAgent service then try the Run Command again.
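The check above boils down to whether the executable name resolves through the machine-wide PATH; an illustrative cross-platform sketch (Python, not part of the extension; `powershell` is the target on the VM):

```python
import shutil

def resolves_on_path(exe: str) -> bool:
    # Resolve an executable name through PATH, the same lookup the
    # extension's Local System context performs for "powershell".
    return shutil.which(exe) is not None

# On the affected VM you would check: resolves_on_path("powershell")
```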
++
+### Command isn't recognized as an internal or external command
+
+You see the following in the C:\WindowsAzure\Logs\Plugins\<ExtensionName>\<Version>\CommandExecution.log file:
+
+```Log sample
+Execution Error: '<command>' is not recognized as an internal or external command, operable program or batch file.
+```
+
+**Analysis**
+
+Extensions run under the Local System account, so it's possible that the command works fine when you RDP into the VM, but fails when run with RunCommand.
+
+**Solution**
+
+- Open a Command Prompt in the VM and execute a command to reproduce the error. The VM Agent uses the Administrator cmd.exe and you may have some preconfigured command to execute every time cmd is started.
+- It's also possible that your PATH variable is misconfigured; this depends on which command is failing.
+++++
+### VMAccessAgent is failing with Cannot update Remote Desktop Connection settings for Administrator account. Error: System.Runtime.InteropServices.COMException (0x800706D9): There are no more endpoints available from the endpoint mapper.
+
+You see the following in the extension's status:
+
+```Log sample
+Type Microsoft.Compute.VMAccessAgent
+Version 2.4.8
+Status Provisioning failed
+Status level Error
+Status message Cannot update Remote Desktop Connection settings for Administrator account. Error: System.Runtime.InteropServices.COMException (0x800706D9): There are no more endpoints available from the endpoint mapper. (Exception from HRESULT: 0x800706D9) at NetFwTypeLib.INetFwRules.GetEnumerator() at
+Microsoft.WindowsAzure.GuestAgent.Plugins.JsonExtensions.VMAccess.RemoteDesktopManager.EnableRemoteDesktopFirewallRules()
+at Microsoft.WindowsAzure.GuestAgent.Plugins.JsonExtensions.VMAccess.RemoteDesktopManager.EnableRemoteDesktop() at
+```
+
+**Analysis**
+
+This error can happen when the Windows Firewall service isn't running.
+
+**Solution**
+
+Check whether the Windows Firewall service is enabled and running. If it isn't, enable and start it, then try running the VMAccessAgent again.
+++++
+### The remote certificate is invalid according to the validation procedure.
+
+You see the following in the WaAppAgent.log:
+
+```Log sample
+System.Net.WebException: The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel. ---> System.Security.
+Authentication.AuthenticationException: The remote certificate is invalid according to the validation procedure.
+```
+
+**Analysis**
+
+Your VM is probably missing the Baltimore CyberTrust Root certificate in "Trusted Root Certification Authorities".
+
+**Solution**
+
+Open the certificates console with certmgr.msc, and check if the certificate is there.
+If it's not there, install it from https://cacert.omniroot.com/bc2025.crt
+
+Another possible issue is that the certificate chain is broken by a third party SSL Inspection tool, like ZScaler. That kind of tool should be configured to bypass SSL inspection.
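The certmgr.msc check can also be done programmatically; an illustrative cross-platform sketch (Python, not from the article) that confirms the platform's trusted root store loads at all:

```python
import ssl

def trusted_roots_present() -> bool:
    # create_default_context() loads the platform's trusted root store,
    # the same store certmgr.msc displays on Windows. An empty result
    # suggests a missing or broken root certificate store.
    ctx = ssl.create_default_context()
    return len(ctx.get_ca_certs()) > 0
```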
++++
virtual-machines Field Programmable Gate Arrays Attestation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/field-programmable-gate-arrays-attestation.md
Prior to performing any operations with Azure, you must log into Azure and set t
Your netlist file must be uploaded to an Azure storage blob container for access by the attestation service.
-Refer to this page for more information on creating the account, a container, and uploading your netlist as a blob to that container: [https://docs.microsoft.com/azure/storage/blobs/storage-quickstart-blobs-cli](../storage/blobs/storage-quickstart-blobs-cli.md).
+For more information on creating the account, a container, and uploading your netlist as a blob to that container, see [Quickstart: Create, download, and list blobs with Azure CLI](../storage/blobs/storage-quickstart-blobs-cli.md).
You can also use the Azure portal for this.
virtual-machines Add Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/add-disk.md
Title: Add a data disk to Linux VM using the Azure CLI description: Learn to add a persistent data disk to your Linux VM with the Azure CLI -+
virtual-machines Convert Disk Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/convert-disk-storage.md
Title: Convert managed disks storage between different disk types using Azure CLI description: How to convert Azure managed disks between the different disks types by using the Azure CLI. -+ Last updated 02/13/2021
virtual-machines Detach Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/detach-disk.md
Title: Detach a data disk from a Linux VM - Azure description: Learn to detach a data disk from a virtual machine in Azure using Azure CLI or the Azure portal. -+ Last updated 06/08/2022
virtual-machines Expand Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/expand-disks.md
Title: Expand virtual hard disks on a Linux VM description: Learn how to expand virtual hard disks on a Linux VM with the Azure CLI. -+ Last updated 08/02/2022
virtual-machines Tutorial Azure Devops Blue Green Strategy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/tutorial-azure-devops-blue-green-strategy.md
Title: Configure canary deployments for Azure Linux virtual machines
-description: Learn how to set up a continuous deployment (CD) pipeline. This pipeline updates a group of Azure Linux virtual machines using the blue-green deployment strategy.
+description: Learn how to set up a classic release pipeline and deploy to Linux virtual machines using the blue-green deployment strategy.
tags: azure-devops-pipelines
azure-pipelines Previously updated : 4/10/2020 Last updated : 08/08/2022 -
-#Customer intent: As a developer, I want to learn about CI/CD features in Azure so that I can use Azure DevOps services like Azure Pipelines to build and deploy my applications automatically.
# Configure the blue-green deployment strategy for Azure Linux virtual machines
-**Applies to:** :heavy_check_mark: Linux VMs
-
-## Infrastructure as a service (IaaS) - Configure CI/CD
+**Applies to:** :heavy_check_mark: Linux VMs
-Azure Pipelines provides a fully featured set of CI/CD automation tools for deployments to virtual machines. You can configure a continuous-delivery pipeline for an Azure VM from the Azure portal.
+Azure Pipelines provides a fully featured set of CI/CD automation tools for deployments to virtual machines. This article shows you how to set up a classic release pipeline that uses the blue-green strategy to deploy to Linux virtual machines. Azure also supports other strategies like [rolling](./tutorial-devops-azure-pipelines-classic.md) and [canary](./tutorial-azure-devops-canary-strategy.md) deployments.
-This article shows how to set up a CI/CD pipeline that uses the blue-green strategy for multimachine deployments. The Azure portal also supports other strategies like [rolling](./tutorial-devops-azure-pipelines-classic.md) and [canary](./tutorial-azure-devops-canary-strategy.md).
+## Blue-green deployments
-### Configure CI/CD on virtual machines
+A blue-green deployment is a deployment strategy in which you create two separate but identical environments, only one of which is live at any time. This strategy increases availability and reduces downtime by switching between the blue and green environments. The blue environment usually runs the current version of the application, while the green environment hosts the updated version. When all updates are completed, traffic is directed to the green environment and the blue environment is set to idle.
-You can add virtual machines as targets to a [deployment group](/azure/devops/pipelines/release/deployment-groups). You can then target them for multimachine updates. After you deploy to machines, view **Deployment History** within a deployment group. This view lets you trace from VM to the pipeline and then to the commit.
+With the **Continuous delivery** feature, you can use the blue-green deployment strategy to deploy to your virtual machines from the Azure portal.
-### Blue-green deployments
+1. Sign in to the [Azure portal](https://portal.azure.com/) and navigate to a virtual machine.
-A blue-green deployment reduces downtime by having an identical standby environment. Only one environment is live at any time.
+1. Select **Continuous delivery**, and then select **Configure**.
-As you prepare for a new release, complete the final stage of testing in the green environment. After the software works in the green environment, switch traffic so that all incoming requests go to the green environment. The blue environment is idle.
+ ![A screenshot showing how to navigate to the continuous delivery feature.](media/tutorial-devops-azure-pipelines-classic/azure-devops-configure.png)
-Using the continuous-delivery option, you can configure blue-green deployments to your virtual machines from the Azure portal. Here is the step-by-step walk-through:
+1. In the configuration panel, select **Use existing** and choose your organization and project, or select **Create** to create new ones.
-1. Sign in to the Azure portal and navigate to a virtual machine.
-1. In the leftmost pane of the VM settings, select **Continuous delivery**. Then select **Configure**.
+1. Select your **Deployment group name** from the dropdown menu or create a new one.
- ![The Continuous delivery pane with the Configure button](media/tutorial-devops-azure-pipelines-classic/azure-devops-configure.png)
+1. Select your **Build pipeline** from the dropdown menu.
-1. In the configuration panel, select **Azure DevOps Organization** to choose an existing account or create a new one. Then select the project under which you want to configure the pipeline.
+1. Select the **Deployment strategy** dropdown menu, and then select **Blue-Green**.
- ![The Continuous delivery panel](media/tutorial-devops-azure-pipelines-classic/azure-devops-rolling.png)
+ ![A screenshot showing how to configure a blue green continuous delivery strategy.](media/tutorial-devops-azure-pipelines-classic/azure-devops-rolling.png)
-1. A deployment group is a logical set of deployment target machines that represent the physical environments. Dev, Test, UAT, and Production are examples. You can create a new deployment group or select an existing one.
-1. Select the build pipeline that publishes the package to be deployed to the virtual machine. The published package should have a deployment script named deploy.ps1 or deploy.sh in the deployscripts folder in the package's root folder. The pipeline runs this deployment script.
-1. In **Deployment strategy**, select **Blue-Green**.
-1. Add a "blue" or "green" tag to VMs that are to be part of blue-green deployments. If a VM is for a standby role, tag it as "green". Otherwise, tag it as "blue".
+1. Add a "blue" or "green" tag to VMs that are used for blue-green deployments. If a VM is for a standby role, tag it as "green". Otherwise, tag it as "blue".
- ![The Continuous delivery panel, with the Deployment strategy value Blue-Green chosen](media/tutorial-devops-azure-pipelines-classic/azure-devops-blue-green-configure.png)
+ ![A screenshot showing a blue-green deployment strategy tagged green.](media/tutorial-devops-azure-pipelines-classic/azure-devops-blue-green-configure.png)
-1. Select **OK** to configure the continuous-delivery pipeline to deploy to the virtual machine.
+1. Select **OK** to configure the classic release pipeline to deploy to your virtual machine.
- ![The blue-green pipeline](media/tutorial-devops-azure-pipelines-classic/azure-devops-blue-green-pipeline.png)
+ ![A screenshot showing the classic release pipeline.](media/tutorial-devops-azure-pipelines-classic/azure-devops-blue-green-pipeline.png)
-1. The deployment details for the virtual machine are displayed. You can select the link to go to the release pipeline in Azure DevOps. In the release pipeline, select **Edit** to view the pipeline configuration. The pipeline has these three phases:
+1. Navigate to your release pipeline and then select **Edit** to view the pipeline configuration. In this example, the *dev* stage is composed of three jobs:
- 1. This phase is a deployment-group phase. Applications are deployed to standby VMs, which are tagged as "green".
- 1. In this phase, the pipeline pauses and waits for manual intervention to resume the run. Users can resume the pipeline run once they have manually ensured stability of deployment to VMs tagged as "green".
- 1. This phase swaps the "blue" and "green" tags in the VMs. This ensures that VMs with older application versions are now tagged as "green". During the next pipeline run, applications will be deployed to these VMs.
+ 1. Deploy Green: the app is deployed to a standby VM tagged "green".
+ 1. Wait for manual resumption: the pipeline pauses and waits for manual intervention.
+ 1. Swap Blue-Green: this job swaps the "blue" and "green" tags in the VMs. This ensures that VMs with older application versions are now tagged as "green". During the next pipeline run, applications will be deployed to these VMs.
- ![The Deployment group pane for the Deploy Blue-Green task](media/tutorial-devops-azure-pipelines-classic/azure-devops-blue-green-tasks.png)
+ ![A screenshot showing the three pipeline jobs](media/tutorial-devops-azure-pipelines-classic/azure-devops-blue-green-tasks.png)
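The tag swap performed by the third job can be sketched outside the pipeline. This is an illustrative Python sketch, not the actual Azure DevOps task: it swaps the "blue" and "green" role tags across a set of VMs so that the machines running the older application version become the next deployment targets.

```python
# Illustrative sketch of the Swap Blue-Green job: every VM tagged "blue"
# becomes "green" (standby) and vice versa; other tags are left untouched.

def swap_blue_green(vm_tags):
    """Swap the deployment-role tag on every VM: blue <-> green."""
    swap = {"blue": "green", "green": "blue"}
    return {vm: swap.get(tag, tag) for vm, tag in vm_tags.items()}

# Before the swap, vm1 serves live traffic and vm2 holds the new version.
tags = {"vm1": "blue", "vm2": "green"}
print(swap_blue_green(tags))  # vm2 is now "blue" (live), vm1 is "green" (standby)
```

Because the swap is symmetric, running the pipeline twice returns every VM to its original role.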
-1. The Execute Deploy Script task by default runs the deployment script deploy.ps1 or deploy.sh. The script is in the deployscripts folder in the root folder of the published package. Ensure that the selected build pipeline publishes the deployment in the root folder of the package.
+## Resources
- ![The Artifacts pane showing deploy.sh in the deployscripts folder](media/tutorial-deployment-strategy/package.png)
+- [Deploy to Azure virtual machines with Azure DevOps](../../devops-project/azure-devops-project-vms.md)
+- [Deploy to an Azure virtual machine scale set](/azure/devops/pipelines/apps/cd/azure/deploy-azure-scaleset)
-## Other deployment strategies
+## Related articles
- [Configure the rolling deployment strategy](./tutorial-devops-azure-pipelines-classic.md) - [Configure the canary deployment strategy](./tutorial-azure-devops-canary-strategy.md)-
-## Azure DevOps Projects
-
-You can get started with Azure easily. With Azure DevOps Projects, start running your application on any Azure service in just three steps by selecting:
--- An application language-- A runtime-- An Azure service-
-[Learn more](https://azure.microsoft.com/features/devops-projects/).
-
-## Additional resources
--- [Deploy to Azure virtual machines by using Azure DevOps Projects](../../devops-project/azure-devops-project-vms.md)-- [Implement continuous deployment of your app to an Azure virtual machine scale set](/azure/devops/pipelines/apps/cd/azure/deploy-azure-scaleset)
virtual-machines Vm Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-applications.md
When the application file gets downloaded to the VM, the file name is the same a
For example, if I name my VM application `myApp` when I create it in the Gallery, but it's stored as `myApplication.exe` in the storage account, when it gets downloaded to the VM the file name will be `myApp`. My install string should start by renaming the file to be whatever it needs to be to run on the VM (like myApp.exe).
-The install, update, and remove commands must be written with file naming in mind. The `configFileName` is assigned to the config file for the VM and `packageFileName` is the name assigned downloaded package on the VM. For more information regarding these additional VM settings, refer to [UserArtifactSettings](https://docs.microsoft.com/rest/api/compute/gallery-application-versions/create-or-update?tabs=HTTP#userartifactsettings) in our API docs.
+The install, update, and remove commands must be written with file naming in mind. The `configFileName` is assigned to the config file for the VM and `packageFileName` is the name assigned to the downloaded package on the VM. For more information about these VM settings, see [UserArtifactSettings](/rest/api/compute/gallery-application-versions/create-or-update?tabs=HTTP#userartifactsettings) in our API docs.
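The rename step described above can be sketched as a hypothetical install command. `myApp` and `myApp.exe` are the example names from the paragraph, and the `touch` line only simulates the artifact the service would have downloaded; a real install string would continue by launching the renamed file.

```shell
# Simulate the package downloaded to the VM under its VM-application name
# (stand-in for the real download; illustrative only).
touch myApp

# First step of the install string: rename the file to the name the
# application actually needs in order to run.
mv myApp myApp.exe
ls myApp.exe
```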
## Command interpreter
virtual-machines Attach Disk Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/attach-disk-ps.md
Title: Attach a data disk to a Windows VM in Azure by using PowerShell description: How to attach a new or existing data disk to a Windows VM using PowerShell with the Resource Manager deployment model. -+ Last updated 06/08/2022
virtual-machines Attach Managed Disk Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/attach-managed-disk-portal.md
Title: Attach a managed data disk to a Windows VM - Azure description: How to attach a managed data disk to a Windows VM by using the Azure portal. -+ Last updated 02/06/2020
virtual-machines Convert Disk Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/convert-disk-storage.md
Title: Convert managed disks storage between different disk types by using Azure PowerShell description: How to convert Azure managed disks between the different disks types by using Azure PowerShell. -+ Last updated 02/13/2021
virtual-machines Convert Unmanaged To Managed Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/convert-unmanaged-to-managed-disks.md
Title: Convert a Windows virtual machine from unmanaged disks to managed disks description: How to convert a Windows VM from unmanaged disks to managed disks by using PowerShell in the Resource Manager deployment model -+ Last updated 07/12/2018
virtual-machines Detach Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/detach-disk.md
Title: Detach a data disk from a Windows VM - Azure description: Detach a data disk from a virtual machine in Azure using the Resource Manager deployment model. -+
virtual-machines Jboss Eap Marketplace Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/jboss-eap-marketplace-image.md
For any support-related questions, issues or customization requirements, contact
* [JBoss EAP in Azure App Service](/azure/developer/java/ee/jboss-on-azure) * [Azure Hybrid Benefits](../../windows/hybrid-use-benefit-licensing.md) * [Red Hat Subscription Management](https://access.redhat.com/products/red-hat-subscription-management)
-* [Microsoft docs for Red Hat on Azure](./overview.md)
+* [Red Hat on Azure overview](./overview.md)
## Next steps
virtual-machines Jboss Eap On Azure Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/jboss-eap-on-azure-best-practices.md
For any support-related questions, issues or customization requirements, contact
* Learn more about [JBoss EAP](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.3/html/getting_started_with_jboss_eap_for_openshift_online/index) * Red Hat Subscription Manager (RHSM) [Cloud Access](https://access.redhat.com/documentation/en/red_hat_subscription_management/1/html-single/red_hat_cloud_access_reference_guide/index) * [Azure Red Hat OpenShift (ARO)](https://azure.microsoft.com/services/openshift/)
-* Microsoft Docs for [Red Hat on Azure](./overview.md)
+* [Red Hat on Azure overview](./overview.md)
* [RHEL BYOS Gold Images in Azure](./byos.md) * JBoss EAP on Azure [Quickstart video tutorial](https://www.youtube.com/watch?v=3DgpVwnQ3V4)
virtual-machines Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/get-started.md
ms.assetid: ad8e5c75-0cf6-4564-ae62-ea1246b4e5f2
vm-linux Previously updated : 07/29/2022 Last updated : 08/09/2022
In the SAP workload documentation space, you can find the following areas:
## Change Log
+- August 09, 2022: Release of scenario [HA for SAP ASCS/ERS with NFS simple mount](./high-availability-guide-suse-nfs-simple-mount.md) on SLES 15 for SAP Applications
- July 18, 2022: Clarify statement around Pacemaker support on Oracle Linux in [Azure Virtual Machines Oracle DBMS deployment for SAP workload](./dbms_guide_oracle.md) - June 29, 2022: Add recommendation and links to Pacemaker usage for Db2 versions 11.5.6 and higher in the documents [IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload](./dbms_guide_ibm.md), [High availability of IBM Db2 LUW on Azure VMs on SUSE Linux Enterprise Server with Pacemaker](./dbms-guide-ha-ibm.md), and [High availability of IBM Db2 LUW on Azure VMs on Red Hat Enterprise Linux Server](./high-availability-guide-rhel-ibm-db2-luw.md) - June 08, 2022: Change in [HA for SAP NW on Azure VMs on SLES with ANF](./high-availability-guide-suse-netapp-files.md) and [HA for SAP NW on Azure VMs on RHEL with ANF](./high-availability-guide-rhel-netapp-files.md) to adjust timeouts when using NFSv4.1 (related to NFSv4.1 lease renewal) for more resilient Pacemaker configuration
virtual-machines Sap Planning Supported Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-planning-supported-configurations.md
For simplification, we did not distinguish between SAP Central Services and SAP
## High Availability protection for the SAP DBMS layer As you look to deploy SAP production systems, you need to consider hot standby type of high availability configurations. Especially with SAP HANA, where data needs to be loaded into memory before being able to get the full performance and scalability back, Azure service healing is not an ideal measure for high availability.
-In general Microsoft supports only high availability configurations and software packages that are described under the SAP workload section in docs.microsoft.com. You can read the same statement in SAP note [#1928533](https://launchpad.support.sap.com/#/notes/1928533). Microsoft will not provide support for other high availability third-party software frameworks that are not documented by Microsoft with SAP workload. In such cases, the third-party supplier of the high availability framework is the supporting party for the high availability configuration who needs to be engaged by you as a customer into the support process. Exceptions are going to be mentioned in this article.
+In general, Microsoft supports only high availability configurations and software packages that are described in the [SAP workload scenarios](/azure/virtual-machines/workloads/sap/get-started). You can read the same statement in SAP note [#1928533](https://launchpad.support.sap.com/#/notes/1928533). Microsoft will not provide support for other high availability third-party software frameworks that are not documented by Microsoft with SAP workload. In such cases, the third-party supplier of the high availability framework is the supporting party for the high availability configuration who needs to be engaged by you as a customer into the support process. Exceptions are going to be mentioned in this article.
In general Microsoft supports a limited set of high availability configurations on Azure VMs or HANA Large Instances units. For the supported scenarios of HANA Large Instances, read the document [Supported scenarios for HANA Large Instances](./hana-supported-scenario.md).
Scenario(s) that we did not test and therefore have no experience with list like
## Next Steps Read next steps in the [Azure Virtual Machines planning and implementation for SAP NetWeaver](./planning-guide.md)-----
virtual-network-manager Common Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/common-issues.md
Title: 'Common issues seen with Azure Virtual Network Manager (Preview)' description: Learn about common issues seen when using Azure Virtual Network Manager.--++ Last updated 11/02/2021
virtual-network-manager Concept Azure Policy Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-azure-policy-integration.md
+
+ Title: "Configuring Azure Policy with network groups in Azure Virtual Network Manager (Preview)"
+description: Learn about how to utilize Azure Policy to configure a high scale and dynamic network group used with Azure Virtual Network Manager.
++++ Last updated : 08/22/2022+++
+# Configuring Azure Policy with network groups in Azure Virtual Network Manager (Preview)
+
+In this article, you'll learn how [Azure Policy](../governance/policy/overview.md) is used in Azure Virtual Network Manager to define dynamic network group membership. Dynamic network groups allow you to create scalable and dynamically adapting virtual network environments in your organization.
+
+> [!IMPORTANT]
+> Azure Virtual Network Manager is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Azure Policy overview
+
+Azure Policy evaluates resources in Azure by comparing the properties of those resources to business rules. These business rules, described in JSON format, are known as [policy definitions](#policy-definition). Once your business rules have been formed, the policy definition is assigned to any scope of resources that Azure supports, such as management groups, subscriptions, resource groups, or individual resources. The assignment applies to all resources within the Resource Manager scope of that assignment. Learn more about scope usage with [Scope in Azure Policy](../governance/policy/concepts/scope.md).
+
+> [!NOTE]
+> Azure Policy is only used for the definition of dynamic network group membership.
++
+## Policy definition
+
+Creating and implementing a policy in Azure Policy begins with creating a policy definition resource. Every policy definition has conditions under which it's enforced, and a defined effect that takes place if the conditions are met.
+
+With network groups, your policy definition includes your conditional expression for matching virtual networks meeting your criteria, and specifies the destination network group where any matching resources are placed. The `addToNetworkGroup` effect is used to accomplish this. The following is a sample of a policy rule definition with the `addToNetworkGroup` effect.
+
+```json
+
+"policyRule": {
+ "if": {
+ "allOf": [
+ {
+ "field": "Name",
+ "contains": "-gen"
+ }
+ ]
+ },
+ "then": {
+ "effect": "addToNetworkGroup",
+ "details": {
+ "networkGroupId": "/subscriptions/12345678-abcd-123a-1234-1234abcd7890/resourceGroups/myResourceGroup2/providers/Microsoft.Network/networkManagers/myAVNM/networkGroups/myNG"
+ }
+ }
+}
+
+```
+Learn more about [policy definition structure](../governance/policy/concepts/definition-structure.md).
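To make the matching behavior concrete, here is a toy Python evaluator for the sample rule above. It is not Azure Policy's engine; it only mimics how an `allOf` block with a `contains` condition selects the virtual networks whose names match, which then land in the target network group.

```python
# Toy evaluator: a resource matches when every condition in "allOf" holds.
# "contains" is modeled as a substring check on the named field.

def matches(resource, all_of):
    return all(cond["contains"] in resource.get(cond["field"], "")
               for cond in all_of)

rule = [{"field": "Name", "contains": "-gen"}]
vnets = [{"Name": "vnet-gen-prod"}, {"Name": "vnet-static"}]

in_group = [v["Name"] for v in vnets if matches(v, rule)]
print(in_group)  # only names containing "-gen" would be added to the network group
```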
+
+## Policy assignments
+
+Similar to Virtual Network Manager configurations, policy definitions don't immediately take effect when you create them. To begin applying one, you must create a policy assignment, which assigns a definition to evaluate at a given scope. Currently, all resources within the scope are evaluated against the definition. This allows you to have a single reusable definition that you can assign in multiple places for more granular control of group membership. Learn more about the [Assignment Structure](../governance/policy/concepts/assignment-structure.md) for Azure Policy.
+
+Policy definitions and assignments can be created with the API, PowerShell, the CLI, or the [Azure Policy Portal]().
+
+## Required permissions
+
+To use Azure Policy with network groups, users need the following permissions:
+- `Microsoft.ApiManagement/service/apis/operations/policy/write` is needed at the scope you're assigning.
+- `Microsoft.Network/networkManagers/networkGroups/join/action` action is needed on the target network group referenced in the **Add to network group** section. This permission allows for the adding and removing of objects from the target network group.
+- When using set definitions to assign multiple policies at the same time, concurrent `networkGroup/join/action` permissions are needed on all definitions being assigned at the time of assignment.
+
+To set the needed permissions, users can be assigned built-in roles with [role-based access control](../role-based-access-control/quickstart-assign-role-user-portal.md):
+- **Network Contributor** role to the target network group.
+- **API Management Service Contributor** role at the target scope level.
+
+For more granular role assignment, you can create [custom roles](../role-based-access-control/custom-roles-portal.md) using the `networkGroups/join/action` permission and `policy/write` permission.
+## Helpful tips
+
+### Type filtering
+
+When configuring your policy definitions, it's recommended to always include a **type** condition to scope them to virtual networks. This allows Azure Policy to filter out non-virtual-network operations and improves the efficiency of your policy resources.
+
+### Regional slicing
+
+Policy resources are global, which means that any change will take effect on all resources under the assignment scope, regardless of region. If regional slicing and gradual rollout is a concern for you, it's recommended to also include a `where location in []` condition. Then, you can incrementally expand the locations list to gradually roll out the effect.
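Combining the two tips above, a policy rule's condition might take the following shape. This is an illustrative sketch: the region list and the `-gen` name filter are example values, not requirements.

```json
"if": {
  "allOf": [
    { "field": "type", "equals": "Microsoft.Network/virtualNetworks" },
    { "field": "location", "in": [ "westus", "eastus" ] },
    { "field": "Name", "contains": "-gen" }
  ]
}
```

To roll the effect out gradually, append regions to the `in` list over successive assignments.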
+
+### Assignment scoping
+If you're following management group best practices using [Azure management groups](../governance/management-groups/overview.md), it's likely you already have your resources organized in a hierarchy. With assignments, you can assign the same definition to multiple distinct scopes within your hierarchy, giving you more granular control over which resources are eligible for your network group.
+
+## Next steps
+
+- Create an [Azure Virtual Network Manager](create-virtual-network-manager-portal.md) instance.
+- Learn about [configuration deployments](concept-deployments.md) in Azure Virtual Network Manager.
+- Learn how to block network traffic with a [SecurityAdmin configuration](how-to-block-network-traffic-portal.md).
virtual-network-manager Concept Connectivity Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-connectivity-configuration.md
Title: 'Connectivity configuration in Azure Virtual Network Manager (Preview)' description: Learn about different types network topology you can create with a connectivity configuration in Azure Virtual Network Manager.--++ Last updated 11/02/2021
virtual-network-manager Concept Network Manager Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-network-manager-scope.md
Title: 'Understand and work with Azure Virtual Network Manager scopes' description: Learn about Azure Virtual Network Manager scopes and the effects it has on managing virtual networks.--++ Last updated 11/02/2021
virtual-network-manager Concept Remove Components Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-remove-components-checklist.md
Title: 'Removing Azure Virtual Network Manager Preview components checklist' description: This article is a checklist for deleting components within Azure Virtual Network Manager.--++ Last updated 11/02/2021
virtual-network-manager Create Virtual Network Manager Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/create-virtual-network-manager-cli.md
Title: 'Quickstart: Create a mesh network topology with Azure Virtual Network Manager via the Azure CLI' description: Use this quickstart to learn how to create a mesh network topology with Virtual Network Manager using the Azure CLI.--++ Last updated 11/16/2021
virtual-network-manager Create Virtual Network Manager Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/create-virtual-network-manager-powershell.md
Previously updated : 06/27/2022 Last updated : 08/9/2022
Install the latest *Az.Network* Azure PowerShell module using this command:
Before you can create an Azure Virtual Network Manager, you have to create a resource group to host the Network Manager. Create a resource group with [New-AzResourceGroup](/powershell/module/az.Resources/New-azResourceGroup). This example creates a resource group named **myAVNMResourceGroup** in the **WestUS** location. ```azurepowershell-interactive+
+$location = "West US"
$rg = @{ Name = 'myAVNMResourceGroup'
- Location = 'WestUS'
+ Location = $location
} New-AzResourceGroup @rg+ ``` ## Create Virtual Network Manager
New-AzResourceGroup @rg
1. Define the scope and access type this Azure Virtual Network Manager instance will have. You can choose to create the scope with subscriptions group or management group or a combination of both. Create the scope by using New-AzNetworkManagerScope. ```azurepowershell-interactive
+
Import-Module -Name Az.Network -RequiredVersion "4.15.1" [System.Collections.Generic.List[string]]$subGroup = @()
New-AzResourceGroup @rg
[System.Collections.Generic.List[String]]$access = @() $access.Add("Connectivity"); $access.Add("SecurityAdmin");
-
+
$scope = New-AzNetworkManagerScope -Subscription $subGroup -ManagementGroup $mgGroup
+
``` 1. Create the Virtual Network Manager with New-AzNetworkManager. This example creates an Azure Virtual Network Manager named **myAVNM** in the West US location.-
+
```azurepowershell-interactive $avnm = @{ Name = 'myAVNM'
- ResourceGroupName = 'myAVNMResourceGroup'
+ ResourceGroupName = $rg.Name
NetworkManagerScope = $scope NetworkManagerScopeAccess = $access
- Location = 'West US'
+ Location = $location
} $networkmanager = New-AzNetworkManager @avnm ```
Create three virtual networks with [New-AzVirtualNetwork](/powershell/module/az.
$vnetA = @{ Name = 'VNetA' ResourceGroupName = 'myAVNMResourceGroup'
- Location = 'West US'
+ Location = $location
AddressPrefix = '10.0.0.0/16' }+ $virtualNetworkA = New-AzVirtualNetwork @vnetA $vnetB = @{ Name = 'VNetB' ResourceGroupName = 'myAVNMResourceGroup'
- Location = 'West US'
+ Location = $location
AddressPrefix = '10.1.0.0/16' } $virtualNetworkB = New-AzVirtualNetwork @vnetB
$virtualNetworkB = New-AzVirtualNetwork @vnetB
$vnetC = @{ Name = 'VNetC' ResourceGroupName = 'myAVNMResourceGroup'
- Location = 'West US'
+ Location = $location
AddressPrefix = '10.2.0.0/16' } $virtualNetworkC = New-AzVirtualNetwork @vnetC
$virtualNetworkC = New-AzVirtualNetwork @vnetC
### Add a subnet to each virtual network
-To complete the configuration of the virtual networks add a /24 subnet to each one. Create a subnet configuration named **default** with [Add-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/add-azvirtualnetworksubnetconfig).
+To complete the configuration of the virtual networks, add a /24 subnet to each one. Create a subnet configuration named **default** with [Add-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/add-azvirtualnetworksubnetconfig).
```azurepowershell-interactive $subnetA = @{
$virtualnetworkA | Set-AzVirtualNetwork
$subnetB = @{ Name = 'default' VirtualNetwork = $virtualNetworkB
- AddressPrefix = '10.0.0.0/24'
+ AddressPrefix = '10.1.0.0/24'
} $subnetConfigC = Add-AzVirtualNetworkSubnetConfig @subnetB $virtualnetworkB | Set-AzVirtualNetwork
$virtualnetworkB | Set-AzVirtualNetwork
$subnetC = @{ Name = 'default' VirtualNetwork = $virtualNetworkC
- AddressPrefix = '10.0.0.0/24'
+ AddressPrefix = '10.2.0.0/24'
} $subnetConfigC = Add-AzVirtualNetworkSubnetConfig @subnetC $virtualnetworkC | Set-AzVirtualNetwork
$virtualnetworkC | Set-AzVirtualNetwork
## Create a network group
-### Static membership
-
-1. Create a static virtual network member with New-AzNetworkManagerGroupMembersItem.
+1. Create a network group to add virtual networks to.
```azurepowershell-interactive $ng = @{
- Name = 'myNetworkGroup'
- ResourceGroupName = 'myAVNMResourceGroup'
- NetworkManagerName = 'myAVNM'
- MemberType = 'Microsoft.Network/VirtualNetwork'
- }
- $networkgroup = New-AzNetworkManagerGroup @ng
+ Name = 'myNetworkGroup'
+ ResourceGroupName = $rg.Name
+ NetworkManagerName = $networkManager.Name
+ }
+ $networkgroup = New-AzNetworkManagerGroup @ng
```
+
+### Option 1: Static membership
-1. Add the static member to the static membership group with the following commands:
-
+1. Add the static member to the network group with the following commands:
+ 1. Static members must have a unique name scoped to the network group. It's recommended to use a consistent hash of the virtual network ID. Below is an approach using the ARM template uniqueString() implementation.
+
+ ```azurepowershell-interactive
+ function Get-UniqueString ([string]$id, $length=13)
+ {
+ $hashArray = (new-object System.Security.Cryptography.SHA512Managed).ComputeHash($id.ToCharArray())
+ -join ($hashArray[1..$length] | ForEach-Object { [char]($_ % 26 + [byte][char]'a') })
+ }
+ ```
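As a rough cross-check of the naming scheme, here's a Python sketch of the same idea: derive a short, deterministic, lowercase name from a resource ID. This mirrors the PowerShell helper above (SHA-512 of the ID's bytes mapped to `a`–`z`), which approximates, but isn't byte-for-byte identical to, ARM's built-in uniqueString() algorithm.

```python
import hashlib

def get_unique_string(resource_id: str, length: int = 13) -> str:
    """Deterministic lowercase name derived from a resource ID.

    Follows the PowerShell helper above (SHA-512, bytes mapped to 'a'-'z'),
    not ARM's exact uniqueString() algorithm.
    """
    digest = hashlib.sha512(resource_id.encode("ascii")).digest()
    # Skip byte 0 to match the PowerShell slice $hashArray[1..$length].
    return "".join(chr(b % 26 + ord("a")) for b in digest[1 : length + 1])
```

Because the output depends only on the resource ID, re-running the script always produces the same member name for the same virtual network.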
+
```azurepowershell-interactive
- $sm = @{
- Name = 'myStaticMember'
- ResourceGroupName = 'myAVNMResourceGroup'
- NetworkGroupName = 'myNetworkGroup'
- NetworkManagerName = 'myAVNM'
- ResourceId = '/subscriptions/abcdef12-3456-7890-abcd-ef1234567890/resourceGroups/myAVNMResourceGroup/providers/Microsoft.Network/virtualNetworks/VNetA'
- }
- $statimember = New-AzNetworkManagerStaticMember @sm
+ $smA = @{
+ Name = Get-UniqueString $virtualNetworkA.Id
+ ResourceGroupName = $rg.Name
+ NetworkGroupName = $networkGroup.Name
+ NetworkManagerName = $networkManager.Name
+ ResourceId = $virtualNetworkA.Id
+ }
+    $staticMemberA = New-AzNetworkManagerStaticMember @smA
```
+
+ ```azurepowershell-interactive
+ $smB = @{
+ Name = Get-UniqueString $virtualNetworkB.Id
+ ResourceGroupName = $rg.Name
+ NetworkGroupName = $networkGroup.Name
+ NetworkManagerName = $networkManager.Name
+ ResourceId = $virtualNetworkB.Id
+ }
+    $staticMemberB = New-AzNetworkManagerStaticMember @smB
+ ```
+
+ ```azurepowershell-interactive
+ $smC = @{
+ Name = Get-UniqueString $virtualNetworkC.Id
+ ResourceGroupName = $rg.Name
+ NetworkGroupName = $networkGroup.Name
+ NetworkManagerName = $networkManager.Name
+ ResourceId = $virtualNetworkC.Id
+ }
+    $staticMemberC = New-AzNetworkManagerStaticMember @smC
+ ```
+
+### Option 2: Dynamic membership
-### Dynamic membership
+1. Define the conditional statement and store it in a variable.
+> [!NOTE]
+> For efficiency, it's recommended to scope all of your conditionals to scan only for type `Microsoft.Network/virtualNetworks`.
-1. Define the conditional statement and store it in a variable:
+```azurepowershell-interactive
+$conditionalMembership = '{
+    "allOf": [
+      {
+        "field": "type",
+        "equals": "Microsoft.Network/virtualNetworks"
+      },
+      {
+        "field": "name",
+        "contains": "VNet"
+      }
+    ]
+}'
+```
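To make the matching semantics concrete, here's a hypothetical local check in Python: parse a conditional of the same shape and test it against a sample resource. This is only an illustration of how `allOf`/`equals`/`contains` combine; it isn't the Azure Policy engine.

```python
import json

# Same shape as the dynamic-membership conditional above (illustrative copy).
conditional = json.loads('''{
  "allOf": [
    { "field": "type", "equals": "Microsoft.Network/virtualNetworks" },
    { "field": "name", "contains": "VNet" }
  ]
}''')

def matches(resource: dict, condition: dict) -> bool:
    """Evaluate a simplified allOf/equals/contains conditional locally."""
    if "allOf" in condition:
        # allOf: every sub-condition must match.
        return all(matches(resource, c) for c in condition["allOf"])
    value = resource.get(condition["field"], "")
    if "equals" in condition:
        return value == condition["equals"]
    if "contains" in condition:
        return condition["contains"] in value
    return False
```

A virtual network named `VNetA` would match this conditional, while one named `hub-net` would not.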
+
+1. Create the Azure Policy definition with New-AzPolicyDefinition, using the conditional statement defined in the previous step.
- ```azurepowershell-interactive
- $conditionalMembership = '{
- "allof":[
- {
- "field": "name",
- "contains": "VNet"
- }
- ]
- }'
- ```
+> [!IMPORTANT]
+> Policy resources must have a name that's unique within their scope. It's recommended to use a consistent hash of the network group ID. The following function is one approach, modeled on the ARM template uniqueString() implementation.
+
+```azurepowershell-interactive
+ function Get-UniqueString ([string]$id, $length=13)
+ {
+ $hashArray = (new-object System.Security.Cryptography.SHA512Managed).ComputeHash($id.ToCharArray())
+ -join ($hashArray[1..$length] | ForEach-Object { [char]($_ % 26 + [byte][char]'a') })
+ }
+```
-1. Create the network group using the conditional statement defined in the last step using New-AzNetworkManagerGroup.
+```azurepowershell-interactive
+$defn = @{
+ Name = Get-UniqueString $networkgroup.Id
+ Mode = 'Microsoft.Network.Data'
+ Policy = $conditionalMembership
+}
+
+$policyDefinition = New-AzPolicyDefinition @defn
+```
+
+1. Assign the policy definition at a scope within your network manager's scope so that it begins to take effect.
```azurepowershell-interactive
- $ng = @{
- Name = 'myNetworkGroup'
- ResourceGroupName = 'myAVNMResourceGroup'
- GroupMember = $groupMembers
- ConditionalMembership = $conditionalMembership
- NetworkManagerName = 'myAVNM'
- MemberType = 'Microsoft.Network/VirtualNetwork'
+ $assgn = @{
+ Name = Get-UniqueString $networkgroup.Id
+ PolicyDefinition = $policyDefinition
}
- $networkgroup = New-AzNetworkManagerGroup @ng
+
+    $policyAssignment = New-AzPolicyAssignment @assgn
```-
+
## Create a configuration

1. Create a connectivity group item, and add the network group to it, with New-AzNetworkManagerConnectivityGroupItem.
$virtualnetworkC | Set-AzVirtualNetwork
} $groupItem = New-AzNetworkManagerConnectivityGroupItem @gi ```-
+
1. Create a configuration group and add the group item from the previous step. ```azurepowershell-interactive [System.Collections.Generic.List[Microsoft.Azure.Commands.Network.Models.PSNetworkManagerConnectivityGroupItem]]$configGroup = @() $configGroup.Add($groupItem) ```-
+
1. Create the connectivity configuration with New-AzNetworkManagerConnectivityConfiguration. ```azurepowershell-interactive $config = @{ Name = 'connectivityconfig'
- ResourceGroupName = 'myAVNMResourceGroup'
- NetworkManagerName = 'myAVNM'
+ ResourceGroupName = $rg.Name
+ NetworkManagerName = $networkManager.Name
ConnectivityTopology = 'Mesh' AppliesToGroup = $configGroup } $connectivityconfig = New-AzNetworkManagerConnectivityConfiguration @config
- ```
+ ```
## Commit deployment
-Commit the configuration to the target regions with Deploy-AzNetworkManagerCommit.
+Commit the configuration to the target regions with Deploy-AzNetworkManagerCommit. Committing triggers your configuration to take effect in those regions.
```azurepowershell-interactive [System.Collections.Generic.List[string]]$configIds = @()
$configIds.add($connectivityconfig.id)
$target.Add("westus") $deployment = @{
- Name = 'myAVNM'
- ResourceGroupName = 'myAVNMResourceGroup'
+ Name = $networkManager.Name
+ ResourceGroupName = $rg.Name
ConfigurationId = $configIds TargetLocation = $target CommitType = 'Connectivity'
If you no longer need the Azure Virtual Network Manager, you'll need to make sur
1. Remove the connectivity configuration with Remove-AzNetworkManagerConnectivityConfiguration ```azurepowershell-interactive
- $removeconfig = @{
- Name = 'connectivityconfig'
- ResourceGroupName = 'myAVNMResourceGroup'
- NetworkManagerName = 'myAVNM'
- }
- Remove-AzNetworkManagerConnectivityConfiguration @removeconfig
+
+    Remove-AzNetworkManagerConnectivityConfiguration -Name $connectivityconfig.Name -ResourceGroupName $rg.Name -NetworkManagerName $networkManager.Name
+
+ ```
+2. Remove the policy assignment and definition with Remove-AzPolicyAssignment and Remove-AzPolicyDefinition.
+
+ ```azurepowershell-interactive
+
+    Remove-AzPolicyAssignment -Id $policyAssignment.Id
+    Remove-AzPolicyDefinition -Id $policyDefinition.Id
+
```
-1. Remove the network group with Remove-AzNetworkManagerGroup.
+3. Remove the network group with Remove-AzNetworkManagerGroup.
```azurepowershell-interactive
- $removegroup = @{
- Name = 'myNetworkGroup'
- ResourceGroupName = 'myAVNMResourceGroup'
- NetworkManagerName = 'myAVNM'
- }
- Remove-AzNetworkManagerGroup @removegroup
+    Remove-AzNetworkManagerGroup -Name $networkGroup.Name -ResourceGroupName $rg.Name -NetworkManagerName $networkManager.Name
```
-1. Delete the network manager instance with Remove-AzNetworkManager.
+4. Delete the network manager instance with Remove-AzNetworkManager.
```azurepowershell-interactive
- $removenetworkmanager = @{
- Name = 'myAVNM'
- ResourceGroupName = 'myAVNMResourceGroup'
- }
- Remove-AzNetworkManager @removenetworkmanager
+    Remove-AzNetworkManager -Name $networkManager.Name -ResourceGroupName $rg.Name
```
-1. If you no longer need the resource created, delete the resource group with [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup).
+5. If you no longer need the resource created, delete the resource group with [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup).
```azurepowershell-interactive Remove-AzResourceGroup -Name 'myAVNMResourceGroup'
If you no longer need the Azure Virtual Network Manager, you'll need to make sur
## Next steps
-After you've created the Azure Virtual Network Manager, continue on to learn how to block network traffic by using the security admin configuration:
- > [!div class="nextstepaction"]
-> [Block network traffic with security admin rules](how-to-block-network-traffic-powershell.md)
+> Learn how to [Block network traffic with security admin rules](how-to-block-network-traffic-powershell.md)
virtual-network-manager How To Block Network Traffic Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-block-network-traffic-powershell.md
Title: 'How to block network traffic with Azure Virtual Network Manager (Preview) - Azure PowerShell' description: Learn how to block network traffic using security rules in Azure Virtual Network Manager with the Azure PowerShell.--++ Last updated 11/02/2021
virtual-network-manager How To Create Hub And Spoke Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-create-hub-and-spoke-powershell.md
Title: 'Create a hub and spoke topology with Azure Virtual Network Manager (Preview) - Azure PowerShell' description: Learn how to create a hub and spoke network topology with Azure Virtual Network Manager using Azure PowerShell.--++ Last updated 11/02/2021
virtual-network-manager How To Create Hub And Spoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-create-hub-and-spoke.md
Title: 'Create a hub and spoke topology with Azure Virtual Network Manager (Preview)' description: Learn how to create a hub and spoke network topology with Azure Virtual Network Manager.--++ Last updated 11/02/2021
virtual-network-manager How To Exclude Elements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-exclude-elements.md
Title: 'Excluding elements from dynamic membership in Azure Virtual Network Manager (Preview)' description: Learn how to exclude elements from dynamic membership in Azure Virtual Network Manager.--++ Last updated 11/02/2021
virtual-network-manager Tutorial Create Secured Hub And Spoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/tutorial-create-secured-hub-and-spoke.md
Title: 'Tutorial: Create a secured hub and spoke network' description: In this tutorial, you'll learn how to create a hub and spoke network with Azure Virtual Network Manager. Then you'll secure all your virtual networks with a security policy.--++ Last updated 11/02/2021
virtual-network Network Security Groups Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/network-security-groups-overview.md
# Network security groups <a name="network-security-groups"></a>
-You can use an Azure network security group to filter network traffic to and from Azure resources in an Azure virtual network. A network security group contains [security rules](#security-rules) that allow or deny inbound network traffic to, or outbound network traffic from, several types of Azure resources. For each rule, you can specify source and destination, port, and protocol.
+You can use an Azure network security group to filter network traffic between Azure resources in an Azure virtual network. A network security group contains [security rules](#security-rules) that allow or deny inbound network traffic to, or outbound network traffic from, several types of Azure resources. For each rule, you can specify source and destination, port, and protocol.
This article describes properties of a network security group rule, the [default security rules](#default-security-rules) that are applied, and the rule properties that you can modify to create an [augmented security rule](#augmented-security-rules).
A network security group contains zero, or as many rules as desired, within Azur
|Name|A unique name within the network security group.|
|Priority | A number between 100 and 4096. Rules are processed in priority order, with lower numbers processed before higher numbers, because lower numbers have higher priority. Once traffic matches a rule, processing stops. As a result, any rules that exist with lower priorities (higher numbers) that have the same attributes as rules with higher priorities aren't processed.|
|Source or destination| Any, or an individual IP address, classless inter-domain routing (CIDR) block (10.0.0.0/24, for example), service tag, or application security group. If you specify an address for an Azure resource, specify the private IP address assigned to the resource. Network security groups are processed after Azure translates a public IP address to a private IP address for inbound traffic, and before Azure translates a private IP address to a public IP address for outbound traffic. Fewer security rules are needed when you specify a range, a service tag, or application security group. The ability to specify multiple individual IP addresses and ranges (you can't specify multiple service tags or application groups) in a rule is referred to as [augmented security rules](#augmented-security-rules). Augmented security rules can only be created in network security groups created through the Resource Manager deployment model. You can't specify multiple IP addresses and IP address ranges in network security groups created through the classic deployment model.|
-|Protocol | TCP, UDP, ICMP, ESP, AH, or Any.|
+|Protocol | TCP, UDP, ICMP, ESP, AH, or Any. The ESP and AH protocols aren't currently available through the Azure portal, but they can be used in ARM templates. |
|Direction| Whether the rule applies to inbound or outbound traffic.|
|Port range |You can specify an individual port or a range of ports. For example, you could specify 80 or 10000-10005. Specifying ranges enables you to create fewer security rules. Augmented security rules can only be created in network security groups created through the Resource Manager deployment model. You can't specify multiple ports or port ranges in the same security rule in network security groups created through the classic deployment model. |
|Action | Allow or deny |
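The priority semantics described above can be sketched in a few lines of Python. This is a hypothetical illustration of "lower number wins, first match stops processing", not Azure's implementation; the rule and traffic shapes are simplified for the example.

```python
def evaluate(rules: list, traffic: dict) -> str:
    """Return the action of the first matching rule in priority order."""
    # Lower priority number = higher priority, so sort ascending.
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if (rule["port"] == traffic["port"]
                and rule["protocol"] in (traffic["protocol"], "Any")):
            return rule["action"]  # first match wins; processing stops here
    return "Deny"  # no rule matched: treat as denied in this sketch

rules = [
    {"priority": 100, "port": 443, "protocol": "TCP", "action": "Allow"},
    # Same attributes but lower priority (higher number): never reached
    # for TCP/443 traffic, exactly as the Priority row describes.
    {"priority": 200, "port": 443, "protocol": "Any", "action": "Deny"},
]
```

With these rules, TCP traffic on port 443 is allowed by the priority-100 rule, and the priority-200 deny rule is never evaluated for that traffic.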
virtual-network Tutorial Restrict Network Access To Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-restrict-network-access-to-resources.md
To test network access to a storage account, deploy a VM to each subnet.
### Create the second virtual machine
-1. Repeat steps 1-5 to create a second virtual machine. In step 3, name the virtual machine *myVmPrivate* and set *NIC network security group* to **None**. In step 4, select the **Private** subnet.
+1. Repeat steps 1-5 to create a second virtual machine. In step 3, name the virtual machine *myVmPrivate*. In step 4, select the **Private** subnet and set *NIC network security group* to **None**.
:::image type="content" source="./media/tutorial-restrict-network-access-to-resources/virtual-machine-2-networking.png" alt-text="Screenshot of create private virtual machine network settings." lightbox="./media/tutorial-restrict-network-access-to-resources/virtual-machine-2-networking-expanded.png":::
virtual-network Virtual Networks Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-overview.md
Azure resources communicate securely with each other in one of the following way
### Communicate with on-premises resources
-You can connect your on-premises computers and networks to a virtual network using any combination of the following options:
+You can connect your on-premises computers and networks to a virtual network using any of the following options:
- **Point-to-site virtual private network (VPN):** Established between a virtual network and a single computer in your network. Each computer that wants to establish connectivity with a virtual network must configure its connection. This connection type is great if you're just getting started with Azure, or for developers, because it requires little or no changes to your existing network. The communication between your computer and a virtual network is sent through an encrypted tunnel over the internet. To learn more, see [Point-to-site VPN](../vpn-gateway/point-to-site-about.md?toc=%2fazure%2fvirtual-network%2ftoc.json#). - **Site-to-site VPN:** Established between your on-premises VPN device and an Azure VPN Gateway that is deployed in a virtual network. This connection type enables any on-premises resource that you authorize to access a virtual network. The communication between your on-premises VPN device and an Azure VPN gateway is sent through an encrypted tunnel over the internet. To learn more, see [Site-to-site VPN](../vpn-gateway/design.md?toc=%2fazure%2fvirtual-network%2ftoc.json#s2smulti).
virtual-network Virtual Networks Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vs-azure-tools-storage-manage-with-storage-explorer.md
snap connect storage-explorer:password-manager-service :password-manager-service
Storage Explorer is also available as a *.tar.gz* download. If you use the *.tar.gz*, you must install dependencies manually. The following distributions of Linux support *.tar.gz* installation:
+* Ubuntu 22.04 x64
* Ubuntu 20.04 x64 * Ubuntu 18.04 x64
-* Ubuntu 16.04 x64
The *.tar.gz* installation might work on other distributions, but only these listed ones are officially supported.